diff --git "a/abs_29K_G/test_abstract_long_2405.01175v1.json" "b/abs_29K_G/test_abstract_long_2405.01175v1.json" new file mode 100644--- /dev/null +++ "b/abs_29K_G/test_abstract_long_2405.01175v1.json" @@ -0,0 +1,182 @@ +{ + "url": "http://arxiv.org/abs/2405.01175v1", + "title": "Uncertainty-aware self-training with expectation maximization basis transformation", + "abstract": "Self-training is a powerful approach to deep learning. The key process is to\nfind a pseudo-label for modeling. However, previous self-training algorithms\nsuffer from the over-confidence issue brought by the hard labels, even some\nconfidence-related regularizers cannot comprehensively catch the uncertainty.\nTherefore, we propose a new self-training framework to combine uncertainty\ninformation of both model and dataset. Specifically, we propose to use\nExpectation-Maximization (EM) to smooth the labels and comprehensively estimate\nthe uncertainty information. We further design a basis extraction network to\nestimate the initial basis from the dataset. The obtained basis with\nuncertainty can be filtered based on uncertainty information. It can then be\ntransformed into the real hard label to iteratively update the model and basis\nin the retraining process. Experiments on image classification and semantic\nsegmentation show the advantages of our methods among confidence-aware\nself-training algorithms with 1-3 percentage improvement on different datasets.", + "authors": "Zijia Wang, Wenbin Yang, Zhisong Liu, Zhen Jia", + "published": "2024-05-02", + "updated": "2024-05-02", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI" + ], + "label": "Original Paper", + "paper_cat": "Semantic AND Segmentation AND Image", + "gt": "Self-training is a powerful approach to deep learning. The key process is to\nfind a pseudo-label for modeling. However, previous self-training algorithms\nsuffer from the over-confidence issue brought by the hard labels, even some\nconfidence-related regularizers cannot comprehensively catch the uncertainty.\nTherefore, we propose a new self-training framework to combine uncertainty\ninformation of both model and dataset. Specifically, we propose to use\nExpectation-Maximization (EM) to smooth the labels and comprehensively estimate\nthe uncertainty information. We further design a basis extraction network to\nestimate the initial basis from the dataset. The obtained basis with\nuncertainty can be filtered based on uncertainty information. It can then be\ntransformed into the real hard label to iteratively update the model and basis\nin the retraining process. Experiments on image classification and semantic\nsegmentation show the advantages of our methods among confidence-aware\nself-training algorithms with 1-3 percentage improvement on different datasets.", + "main_content": "Introduction Deep neural networks have been developed for many years and achieved great outcomes. However, its superiority relies on large-scale data labeling. In some real situations, like agriculture, it is difficult to obtain labeled data. To alleviate the burden of data labeling, many methods like domain adaption Chen et al. (2018, 2017b); Hoffman et al. (2018); Kim et al. (2019); Long et al. (2017a), and self-training Busto et al. (2018); Chen et al. (2019); Inoue et al. (2018); Lee et al. (2013); Saito et al. (2017a); Zou et al. (2018) have been proposed. For example, BERT Devlin et al. (2018) and GPT Radford et al. (2018, 2019); Brown et al. 
(2020), directly leverage a large amount of unlabeled data to pretrain the model. However, they cannot be generally applied in other areas. Among these methods, self training methodsScudder (1965); He et al. (2019) show promising results and it attracts much attention. Self training is a semi-supervised learning method Chapelle et al. (2009), which iteratively generates task specific pseudo-labels using a model trained on some labeled data. It then retrains the model using the labeled data. However, there are many issues in this bootstrap process, one of them is the noise in the pseudo-labeled data. Some researchers resolve this problem by learning from noisy labels Natarajan et al. (2013); Reed et al. (2014); Sukhbaatar et al. (2014); Yu et al. (2018). It can also be optimized by sample selection Mukherjee and Awadallah (2020a) or label smoothing Zou et al. (2019a). However, none of the previous works focused on data properties. Recently, a novel 36th Conference on Neural Information Processing Systems (NeurIPS 2022). arXiv:2405.01175v1 [cs.CV] 2 May 2024 \fFigure 1: Uncertainty-aware representations. In the right part of this figure, dashed curves represent the basis distributions while the blue curve represent the uncertainty-aware representation and uncertainty-aware labels of the data. The expectation of the labels could be used as the final label and the variance could be used to evaluate the uncertainty. Figure 2: One self training round. Pseudo-label generation (a) use EM algorithm to update the Gaussian basis and the classifier, then it generates some pseudo-labels with uncertainty information while the classifier is also trained in this stage. Then in model retraining stage (b), an uncertaintyaware training strategy is used to update the whole model (CNN and classifier). knowledge distillation Hinton et al. (2015) is proposed to distill the large dataset into a small one Sucholutsky and Schonlau (2019); Wang et al. (2018).The intuition of these methods is to find the key samples, like means in the feature spaces, to capture the data properties. These means could also be referred as basis of the data. They can be used to formulate the latent representations of the data in a probabilistic way using expectation maximization algorithm Li et al. (2019); Moon (1996). Therefore, as shown in figure 1, we propose a probabilistic model to extract uncertainty for selftraining. Concretely, expectation maximization algorithm is adapted to get the probabilistic latent representations of the data and their corresponding pseudo-label distributions can be obtained. Then the samples are selected based on the variance of the (pseudo-)label distribution where distributions with lower variance represent good (pseudo-)labels. Finally, an uncertainty-aware training process is used to retrain the model using the new dataset where the expectation of distributions becomes the final pseudo-labels. Overall, our contributions in this paper are: 2 \f\u2022 Adapt Expectation Maximization algorithm to perform basis transformation on data features. We use neural networks for expectation maximization process to generate the latent probabilistic representations of the data using base transformation. These representations are low-rank while keeping the uncertainty information and deprecating the noises. \u2022 A novel regularizer is used for pseudo-label generation. 
Variance and classification loss are combined in the pseudo-label generation process to get the best pseudo-label distributions which contain comprehensive uncertainty information. \u2022 A basis generation process with basis regularizer is proposed. An attention-like module (ATT block) is introduced here to extract basis from the dataset or feature space. To make the basis more robust, we propose a basis regularizer to make all basis orthogonal, which could lower the rank of final latent representations. 2 Related work Self-training: Self-training is a wide and meaningful research area in semi-supervised learning Amini and Gallinari (2002); Yarowsky (1995); Grandvalet et al. (2005), one basic direction in this area is to train a student net using a teacher net Laine and Aila (2016); Tarvainen and Valpola (2017); Luo et al. (2018), some other works use a pseudo-label-based method for self-training Zou et al. (2018). In this paper, we choose to use pseudo-label-based method while keeping the uncertainty information in the label, an iterative training framework is proposed according to the self-training paradigm and uncertainty information to improve the network performance. Expectation-Maximization and Gaussian Mixture Model: Expectation-maximization (EM) Dempster et al. (1977) is to find solutions for latent variables models using likelihood maximization algorithm while Gaussian mixture model (GMM) Richardson and Green (1997) is also one kind of EM algorithm with specific constraints. Latent variables models with GMM could naturally capture the uncertainty information considering the data properties. In GMM, the data could be represented in the distribution form: p( \u02c6 xn) = K X k=1 znkN(xn|\u00b5k, \u03a3k), (1) where the latent representation \u02c6 xn is viewed as a linear superposition of k Gaussian basis N(xn|\u00b5k, \u03a3k) and K is the basis number, znk represents the weight of this linear composition. In the GMM, znk could be updated in the E step: znew nk = N(\u00b5new k , \u03a3k) PK j=1 N(\u00b5new j , \u03a3j) , (2) Notably, the \u03a3k in the Gaussian basis is set to be identity matrix I in this paper, so the \u03a3 update process is ignored in our algorithm. 3 Problem definition In this part, we formally define the uncertainty-aware self-training problem. Given a set of labeled samples {XL, YL} and a set of unlabeled data XU where XU and XL belong to same domain. Then the goal is to find a latent representation \u02c6 X and uncertainty-aware pseudo-labels YU by using a CNN feature extractor and a simple classifier. As shown in Figure 2, our problem could be solved by alternating the following steps Zou et al. (2019a): a) Pseudo-label generation: Given all the data, EM algorithm is used to generate the pseudo-labels with uncertainty information while the classifier is also trained in this process based on a combined loss to reduce the variance of pseudo-labels and optimize the classification accuracy for labeled data. 3 \fFigure 3: Whole training process for basis initialization net. Concretely, we train the model like classical machine learning training process and add a small module (attention block) to extract the processed weights which then become the initialized basis of EM algorithm. b) Network retraining. Data are sampled from the pseudo-labeled data based on the label variance, then the sampled data, along with the original labeled data, are used to train the whole classification network. 
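Before detailing the method, a minimal NumPy sketch of the GMM-style responsibility update in Eq. (2) may help; it assumes, as stated above, that every Σk is fixed to the identity matrix, and the array shapes and variable names are purely illustrative rather than taken from the paper's code.

```python
import numpy as np

def e_step_responsibilities(X, mu):
    """E-step of Eq. (2): recompute the mixture weights z_nk for every sample.

    X  : (N, d) feature matrix, one row per data sample
    mu : (K, d) Gaussian basis means; covariances are fixed to the identity,
         so each density only depends on the squared distance to its mean.
    Returns z of shape (N, K), each row summing to 1.
    """
    # log N(x_n | mu_k, I) up to an additive constant shared by all k
    sq_dist = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(-1)   # (N, K)
    log_density = -0.5 * sq_dist
    # normalise over the K basis components (softmax, numerically stable)
    log_density -= log_density.max(axis=1, keepdims=True)
    z = np.exp(log_density)
    return z / z.sum(axis=1, keepdims=True)

# toy usage: 5 samples, 3 basis vectors in a 4-dimensional feature space
X = np.random.randn(5, 4)
mu = np.random.randn(3, 4)
print(e_step_responsibilities(X, mu).sum(axis=1))   # each row sums to 1
```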
4 Uncertainty-aware self training To generate the pseudo-label for unlabeled data XU, we first use a base extraction net trained on labeled data to get basis for XL, then these bases could be used as the initialized \u00b5(0) of EM stage to speed up the convergence. Notably, as mentioned in related work section, the \u03a3 is set to be identity matrix and not updated in our algorithm considering a good basis should have identical variance. After the initialization, the EM algorithm is adapted to update the \u00b5 while the prediction net is simultaneously updated in the EM stage. Concretely, the details of base extraction net is shown in section 4.1, then two losses which are used in the EM stage to update the pseudo label generator parameters (classifier in figure 2 a) are demonstrated in section 4.2. After the definition of losses, the whole EM stage is described in section 4.2.1. 4.1 Basis Extraction net As shown in figure 3, we demonstrate the generalized basis initialization net. In this paper, we use classification as an example where the model trained in this stage has 3 components: \u2022 Feature extractor. In fig 3, CNN functions as the feature extractor. The weights we extracted are from this part. \u2022 Classifier. The fully connected layer could be the classifier in our setting, this part is for the original machine learning tasks like classification. \u2022 Weight extractor. An additional ATT block is added to extract the informative basis from the feature space. Clearly in training process, there are 2 tasks: classification and weights extraction. For classification, we use classical classification loss negative log likelihood loss (Lnll). Then for weight extraction part, we want our weights to be basis with low rank, so they need to be orthogonal: L2 = W \u2217W T \u2212I (3) Where W is the weight and I is the unity matrix. Therefore, the loss becomes: Ls1 = Lnll + L2 (4) 4 \fIn Attention block (ATT block), given a matrix X \u2208RN\u00d7d which contains the features of all data samples, we try to extract the inherent low-rank properties of features by basis extraction. The basis extraction, says the problem to find the most informative projection of features, can be formally expressed as min\u00b5 \r \rX \u2212\u00b5Z \r \r F s.t.\u00b5T \u00b5 = I Z = \u00b5T X (5) where \u00b5 \u2208RK\u00d7d represents the basis matrix of the latent features. Through the process, the inherent data structure can be founded. However, as an unsupervised method, the problem is reported easily suffer from the model collapse problems. Considering the important label information in classification problems. then we can modify the problem above into a semi-supervised manner as min\u00b5 \r \rX \u2212\u00b5Z \r \r F + \r \rZZT \u2212Y Y T \r \r F + \r \r\u00b5T \u00b5 \u2212I \r \r F s.tZ = \u00b5T X (6) where Y donates all the labels. We can solve the problems above with standard gradient decent methods. Then, after stage I, we generated some basis which the latent space features of data samples effectively and precisely. 4.2 Pseudo-label generation Recall that the latent representation should be transformed into the pseudo label using a function f\u03b8. Given a latent representation \u02c6 xn will obey the fallowing distribution: p( \u02c6 xn) = K X k=1 znkN(xn|\u00b5k, \u03a3k), (7) where K is the number of basis, G(\u00b5, \u03a3) is the final distribution basis representation. Then the corresponding pseudo label for sample \u02c6 xn(m) is \u02c6 yn(m) = f\u03b8( \u02c6 xn(m)). 
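Before moving on to the distribution of the pseudo-labels, here is a hedged PyTorch sketch of the stage-I basis-extraction objective in Eqs. (5)-(6) above. The paper writes the shapes loosely, so this assumes µ ∈ R^{K×d} with (ideally) orthonormal rows, codes Z = Xµᵀ ∈ R^{N×K}, and one-hot labels Y; equal weighting of the three terms and all names are illustrative assumptions.

```python
import torch

def basis_extraction_loss(X, mu, Y_onehot):
    """Sketch of Eq. (6): reconstruction + label-consistency + orthogonality terms.

    X        : (N, d) features from the CNN backbone
    mu       : (K, d) learnable basis matrix (rows should become orthonormal)
    Y_onehot : (N, C) one-hot labels of the labeled subset
    """
    Z = X @ mu.T                                    # (N, K) codes, Z = X mu^T
    recon = torch.norm(X - Z @ mu, p="fro")         # ||X - basis reconstruction||_F
    label = torch.norm(Z @ Z.T - Y_onehot @ Y_onehot.T, p="fro")
    ortho = torch.norm(mu @ mu.T - torch.eye(mu.size(0)), p="fro")
    return recon + label + ortho

# toy usage: optimise mu with plain gradient descent, as suggested in the text
N, d, K, C = 32, 128, 10, 5
X = torch.randn(N, d)
Y = torch.nn.functional.one_hot(torch.randint(0, C, (N,)), C).float()
mu = torch.nn.Parameter(torch.randn(K, d))
opt = torch.optim.SGD([mu], lr=1e-2)
for _ in range(100):
    opt.zero_grad()
    loss = basis_extraction_loss(X, mu, Y)
    loss.backward()
    opt.step()
```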
With the will know reparameter trick, distribution p(yn) can be formally expressed as p(yn) = ZZ p(yn|xn)p(xn|\u03f5)dxnd\u03f5, \u03f5 \u223cN(0, I) (8) where p(xn|\u03f5) = K X k=1 znk\u00b5k + \u03a3k\u03f5 (9) Then, we could easily compute the variance V AR( \u02c6 yn) and expectation E( \u02c6 yn) using these sampled pseudo label. For latent representations in XL which have label yn, the loss function for f\u03b8 is: LossL = E( \u02c6 yn) \u2212yn (10) For latent representations in XU which don\u2019t have label, the loss is basically the variance, therefore the final loss for pseudo label prediction model is: L = \u03bbLossL + (1 \u2212\u03bb)V AR( \u02c6 yn), (11) where \u03bb = 1 if the latent representation is from XU and vice versa. 4.2.1 Expectation-Maximization Now we can get the ideally orthogonal base vectors from weights and use them as initialized \u00b5 in the base generation block and compute the loss. Then in this section, we formally define the adapted EM process. At first, we need to update znk: 5 \fznew nk = K(xn, \u00b5k) PK j=1 K(xn, \u00b5j) , (12) where K(a, b) is a kernel function to evaluate the similarity between a and b. Then in the algorithm, the t-th Z could be formulated as: z(t) = softmax(\u03bbX(\u00b5(t\u22121)) T ), (13) where \u03bb is manually set to control Z distribution. Then in the M step (likelihood maximization), we update the \u00b5 based on the weighted summation of X to make them in one space. Then the update process in t-th iteration could be formulated as: \u00b5(t) k = z(t) nkxn PN m=1 z(t) mk (14) After T iterations, we could get the final basis \u00b5k(T), \u03a3k(T) and the prediction model \u03b8k(T). The generated pseudo label for each sample is a distribution, which can be formulated as: yn = f\u03b8(xn), (15) where f\u03b8 is a linear transformation, so distribution of yn could be easily calculated. The whole process of pseudo-label generation is summarized in algorithm 1. Algorithm 1: Pseudo-label generation Input :XL, XU, YL, f\u03b8 Output :\u00b5k(T), \u03a3k(T), \u03b8k(T) Initialize \u00b5k(0), \u03a3k(0), \u03b8(0) for t \u21901 to T do update znk(t) (eq 13) compute \u02c6 xn(t) (eq 10) compute pseudo-label yn (eq 15) compute loss function (eq 11) update \u03b8(t) using back propagation update \u00b5k(t) (eq 14) return 4.3 Network retraining Because in section 4.1, we define the problem as a classification task, so in this part we simply use classification as our final task. Considering we have the distribution for pseudo-labels, there are mainly two steps in the retraining part sample selection and model retraining. Method A\u2192W D\u2192W W\u2192D A\u2192D D\u2192A W\u2192A Mean ResNet-50 He et al. (2016) 68.4\u00b10.2 96.7\u00b10.1 99.3\u00b10.1 68.9\u00b10.2 62.5\u00b10.3 60.7\u00b10.3 76.1 DAN Long et al. (2015) 80.5\u00b10.4 97.1\u00b10.2 99.6\u00b10.1 78.6\u00b10.2 63.6\u00b10.3 62.8\u00b10.2 80.4 RTN Long et al. (2016) 84.5\u00b10.2 96.8\u00b10.1 99.4\u00b10.1 77.5\u00b10.3 66.2\u00b10.2 64.8\u00b10.3 81.6 DANN Ganin et al. (2016) 82.0\u00b10.4 96.9\u00b10.2 99.1\u00b10.1 79.7\u00b10.4 68.2\u00b10.4 67.4\u00b10.5 82.2 ADDA Tzeng et al. (2017) 86.2\u00b10.5 96.2\u00b10.3 98.4\u00b10.3 77.8\u00b10.3 69.5\u00b10.4 68.9\u00b10.5 82.9 JAN Long et al. (2017b) 85.4\u00b10.3 97.4\u00b10.2 99.8\u00b10.2 84.7\u00b10.3 68.6\u00b10.3 70.0\u00b10.4 84.3 GTA Sankaranarayanan et al. (2018) 89.5\u00b10.5 97.9\u00b10.3 99.8\u00b10.4 87.7\u00b10.5 72.8\u00b10.3 71.4\u00b10.4 86.5 MRKLD+LRENT Zou et al. 
(2019b) 89.4\u00b10.7 98.9\u00b10.4 100\u00b10.0 88.7\u00b10.8 72.6\u00b10.7 70.9\u00b10.5 86.8 Ours 92.2\u00b10.5 98.2\u00b10.3 99.6\u00b10.4 87.2\u00b10.5 72.8\u00b10.3 72.4\u00b10.4 87.1 Table 1: Comparison on Office-31 experiments 6 \fMethod Aero Bike Bus Car Horse Knife Motor Person Plant Skateboard Train Truck Mean Source Saito et al. (2017b) 55.1 53.3 61.9 59.1 80.6 17.9 79.7 31.2 81 26.5 73.5 8.5 52.4 MMD Long et al. (2015) 87.1 63 76.5 42 90.3 42.9 85.9 53.1 49.7 36.3 85.8 20.7 61.1 DANN Ganin et al. (2016) 81.9 77.7 82.8 44.3 81.2 29.5 65.1 28.6 51.9 54.6 82.8 7.8 57.4 ENT Grandvalet et al. (2005) 80.3 75.5 75.8 48.3 77.9 27.3 69.7 40.2 46.5 46.6 79.3 16 57 MCD Saito et al. (2018) 87 60.9 83.7 64 88.9 79.6 84.7 76.9 88.6 40.3 83 25.8 71.9 ADR Saito et al. (2017b) 87.8 79.5 83.7 65.3 92.3 61.8 88.9 73.2 87.8 60 85.5 32.3 74.8 SimNet-Res152Pinheiro (2018) 94.3 82.3 73.5 47.2 87.9 49.2 75.1 79.7 85.3 68.5 81.1 50.3 72.9 GTA-Res152 Sankaranarayanan et al. (2018) 77.1 MRKLD+LRENT Zou et al. (2019b) 88.0 79.2 61.0 60.0 87.5 81.4 86.3 78.8 85.6 86.6 73.9 68.8 78.1 Ours 89.1 81.7 82.1 57.7 83.2 79.7 83.9 77.2 86.2 82.7 83.8 65.9 79.4 Table 2: Comparison on VisDA17 experiments Method Backbone Road SW Build Wall Fence Pole TL TS Veg. Terrain Sky PR Rider Car Truck Bus Train Motor Bike mIoU Source 42.7 26.3 51.7 5.5 6.8 13.8 23.6 6.9 75.5 11.5 36.8 49.3 0.9 46.7 3.4 5 0 5 1.4 21.7 CyCADA Hoffman et al. (2018) DRN-26 79.1 33.1 77.9 23.4 17.3 32.1 33.3 31.8 81.5 26.7 69 62.8 14.7 74.5 20.9 25.6 6.9 18.8 20.4 39.5 Source 36.4 14.2 67.4 16.4 12 20.1 8.7 0.7 69.8 13.3 56.9 37 0.4 53.6 10.6 3.2 0.2 0.9 0 22.2 MCD Saito et al. (2018) DRN-105 90.3 31 78.5 19.7 17.3 28.6 30.9 16.1 83.7 30 69.1 58.5 19.6 81.5 23.8 30 5.7 25.7 14.3 39.7 Source 75.8 16.8 77.2 12.5 21 25.5 30.1 20.1 81.3 24.6 70.3 53.8 26.4 49.9 17.2 25.9 6.5 25.3 36 36.6 AdaptSegNet Tsai et al. (2018) DeepLabv2 86.5 36 79.9 23.4 23.3 23.9 35.2 14.8 83.4 33.3 75.6 58.5 27.6 73.7 32.5 35.4 3.9 30.1 28.1 42.4 AdvEnt Vu et al. (2019) DeepLabv2 89.4 33.1 81 26.6 26.8 27.2 33.5 24.7 83.9 36.7 78.8 58.7 30.5 84.8 38.5 44.5 1.7 31.6 32.4 45.5 Source 29.2 FCAN Zhang et al. (2018) DeepLabv2 46.6 Ours DeepLabv2 87 47.7 80.3 25.9 26.3 47.9 34.7 29 80.9 45.7 80.3 60 29.2 81.7 37.9 47.5 37.2 29.8 47.7 50.4 Table 3: Adaptation results of experiments transferring from GTA5 to Cityscapes. 4.3.1 Sample selection After pseudo-label generation process, the generated pseudo-labels are formulated in a distribution format (Gaussian form) shown in equation 8 which contains variance and mean information. Then for classification task, a class-dependent selection Mukherjee and Awadallah (2020b) could be performed to construct a dataset with hard labels DS,U = {xu,s \u2208Su,c, yu}. Here, Su,c \u2208XU is constructed based on the score rank of each sample, if the sample\u2019s pseudo-label has higher variance, then it\u2019s more likely to be discarded. For yu, one can simply use its mean as its hard pseudo label, but here we want to accurately model the uncertainty information. Therefore, we randomly sample hard labels from the pseudo-label distribution to incorporate the uncertainty information encoded in the distribution. 
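Stepping back from the result tables, the E/M updates of Algorithm 1 (Eqs. 13-14 in Section 4.2.1) can be summarized in a short NumPy sketch; the temperature λ, the iteration count and the shapes are assumptions, and the classifier update by back-propagation on Eq. (11) is deliberately omitted.

```python
import numpy as np

def adapted_em(X, mu0, T=10, lam=1.0):
    """Minimal version of the E/M updates in Algorithm 1 (Eqs. 13-14).

    X   : (N, d) latent features of all samples
    mu0 : (K, d) initial basis produced by the basis-extraction net
    lam : temperature controlling how peaked the Z distribution is
    """
    mu = mu0.copy()
    for _ in range(T):
        # E-step, Eq. (13): z = softmax(lam * X mu^T) over the K basis vectors
        logits = lam * X @ mu.T                          # (N, K)
        logits -= logits.max(axis=1, keepdims=True)
        Z = np.exp(logits)
        Z /= Z.sum(axis=1, keepdims=True)
        # M-step, Eq. (14): each basis becomes the Z-weighted mean of the features
        mu = (Z.T @ X) / Z.sum(axis=0)[:, None]          # (K, d)
    return mu, Z

# toy usage: 100 samples of dimension 16, 6 basis vectors
mu_T, Z_T = adapted_em(np.random.randn(100, 16), np.random.randn(6, 16))
```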
4.3.2 Uncertainty aware retraining After the sample selection, a retraining dataset is derived as Dr = {XL, YL} S{xu,s, yu}, then for the retraining part, the final goal is to minimize following loss: minW LL + LU V ar(y) (16) Where W is the model parameter, LL and LU represent the task loss for labeled data and unlabeled data respectively, here in this classification example, they represent same classification loss like cross entropy. V ar(y) represents the sample uncertainty, for samples x \u2208XU, variance is same to the variance in the distribution to catch the uncertainty information of teacher model. In this setting, samples with higher variance, which basically means that the previous model is not confident on this sample, have lower weights in the back propagation process of training. After the retraining, one round shown in figure 2 is completed. Then we simply repeat the whole process until the ideal results are derived. Method Backbone Road SW Build Wall* Fence* Pole* TL TS Veg. Sky PR Rider Car Bus Motor Bike mIoU mIoU* Source DRN-105 14.9 11.4 58.7 1.9 0 24.1 1.2 6 68.8 76 54.3 7.1 34.2 15 0.8 0 23.4 26.8 MCD Saito et al. (2018) 84.8 43.6 79 3.9 0.2 29.1 7.2 5.5 83.8 83.1 51 11.7 79.9 27.2 6.2 0 37.3 43.5 Source DeepLabv2 55.6 23.8 74.6 6.1 12.1 74.8 79 55.3 19.1 39.6 23.3 13.7 25 38.6 AdaptSegNetTsai et al. (2018) 84.3 42.7 77.5 4.7 7 77.9 82.5 54.3 21 72.3 32.2 18.9 32.3 46.7 Source ResNet-38 32.6 21.5 46.5 4.8 0.1 26.5 14.8 13.1 70.8 60.3 56.6 3.5 74.1 20.4 8.9 13.1 29.2 33.6 CBST Zou et al. (2019b) 53.6 23.7 75 12.5 0.3 36.4 23.5 26.3 84.8 74.7 67.2 17.5 84.5 28.4 15.2 55.8 42.5 48.4 AdvEnt Vu et al. (2019) DeepLabv2 85.6 42.2 79.7 8.7 0.4 25.9 5.4 8.1 80.4 84.1 57.9 23.8 73.3 36.4 14.2 33 41.2 48 Source DeepLabv2 64.3 21.3 73.1 2.4 1.1 31.4 7 27.7 63.1 67.6 42.2 19.9 73.1 15.3 10.5 38.9 34.9 40.3 Ours 68 29.9 76.3 10.8 1.4 33.9 22.8 29.5 77.6 78.3 60.6 28.3 81.6 23.5 18.8 39.8 42.6 48.9 Table 4: Adaptation results of experiments transferring from SYNTHIA to Cityscapes. 7 \f5 Experiment In this section, we demonstrate the advantages of proposed methods by comparing the performance of proposed methods with the SOTA confidence-aware self-training strategy on 2 tasks image classification and image segmentation. To make the results comparative, we basically follow the settings in Zou et al. (2019b) which achieves SOTA results in confidence-aware self-training domain, details will be illustrated in following sections. 5.1 Dataset and evaluation metric 5.1.1 Image classification. For domain adaption in image classification task, VisDA17 Peng et al. (2018) and Office-31 Saenko et al. (2010) are used to evaluate the algorithm performance. In VisDA17, there are 12 classes with 152, 409 virtual images for training while 55, 400 real images from MS-COCO Lin et al. (2014) are target dataset. For Office-31, 31 classes collected from Amazon(A, 2817 images), Webcam(W, 795 images) and DSLR(D, 498 images) domains are included. We strictly follow the settings in Saenko et al. (2010); Sankaranarayanan et al. (2018); Zou et al. (2019b) which evaluate the domain adaption performance on A \u2192W, D \u2192W, W \u2192D, A \u2192D, D \u2192A, W \u2192A. For evaluation, we simply use the accuracy for each class and mean accuracy across all classes as the evaluation metric. 5.1.2 Semantic segmentation For domain adaption in image segmentation tasks, 2 virtual datasets GTA5 Richter et al. (2016), SYNTHIA Ros et al. (2016) and 1 real dataset Cityscapes Cordts et al. 
(2016) are used to evaluate the performance of proposed method. Concretely, GTA5 contains 24, 966 images based on the game GTA5, SYNTHIA-RAND-CITYSCAPES (subset of SYNTHIA) has 9400 images. For the experiment setup, we also strictly follow Hoffman et al. (2018); Tsai et al. (2018); Zou et al. (2019b) which use Cityscapes as target domain and view virtual datasets (GTA5 and CITYSCAPES) as training domain. For evaluation, the Intersection over Union (IoU) is used to measure the performance of models where. 5.2 Experiment setup To make our results comparable with current SOTA confidence-aware method, we adapt the settings in Zou et al. (2019b). Besides, all the training process is performed on 4 Tesla V100 GPUs which have 32GB memory. Image Classification: ResNet101/ ResNet-50 He et al. (2016) are used as backbones, which are pretrained based on ImageNet Deng et al. (2009). Then in source domain, we fine-tune the model using SGD while the learning rate is 1 \u00d7 10\u22124, weight decay is set to be 5 \u00d7 10\u22125, momentum is 0.8 and the batch size is 32. In the self-training round, the parameters are same except for the different learning rates which are 5 \u00d7 10\u22124. Image Segmentation: In image segmentation part, we mainly use the older DeepLab v2 Chen et al. (2017a) as backbone to align with previous results. DeepLab v2 is first pretrained on ImageNet and then finetuned on source domain using SGD. Here we set learning rate as 5 \u00d7 10\u22124, weight decay is set to be 1 \u00d7 10\u22125, momentum is 0.9, the batch size is 8 while the patch size is 512 \u00d7 1024. In self-training, we basically run 3 rounds which has 4 retraining epochs. 5.3 Experiment results Comparison on image classification. As shown in table 1 and table 2, compared with previous SOTA result in confidence-aware self-training and other self-training algorithms, although our algorithm does not achieve best performance in all sub-tasks, the mean results (87.1 and 79.4 for Office-31 and VisDA17 respectively) achieves SOTA while our results (derivations and means) are obtained from 5 runs of the experiment. Comparison on image segmentation.As shown in table 3 and 4, in semantic segmentation task, our results of average IoU (mIoU) achieves SOTA among confidence-aware self-training algorithms. 8 \f6", + "additional_graph_info": { + "graph": [ + [ + "Zijia Wang", + "Zhen Jia" + ], + [ + "Zhen Jia", + "Rishiraj Saha Roy" + ], + [ + "Zhen Jia", + "Philipp Christmann" + ] + ], + "node_feat": { + "Zijia Wang": [ + { + "url": "http://arxiv.org/abs/2405.01175v1", + "title": "Uncertainty-aware self-training with expectation maximization basis transformation", + "abstract": "Self-training is a powerful approach to deep learning. The key process is to\nfind a pseudo-label for modeling. However, previous self-training algorithms\nsuffer from the over-confidence issue brought by the hard labels, even some\nconfidence-related regularizers cannot comprehensively catch the uncertainty.\nTherefore, we propose a new self-training framework to combine uncertainty\ninformation of both model and dataset. Specifically, we propose to use\nExpectation-Maximization (EM) to smooth the labels and comprehensively estimate\nthe uncertainty information. We further design a basis extraction network to\nestimate the initial basis from the dataset. The obtained basis with\nuncertainty can be filtered based on uncertainty information. 
It can then be\ntransformed into the real hard label to iteratively update the model and basis\nin the retraining process. Experiments on image classification and semantic\nsegmentation show the advantages of our methods among confidence-aware\nself-training algorithms with 1-3 percentage improvement on different datasets.", + "authors": "Zijia Wang, Wenbin Yang, Zhisong Liu, Zhen Jia", + "published": "2024-05-02", + "updated": "2024-05-02", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI" + ], + "main_content": "Introduction Deep neural networks have been developed for many years and achieved great outcomes. However, its superiority relies on large-scale data labeling. In some real situations, like agriculture, it is difficult to obtain labeled data. To alleviate the burden of data labeling, many methods like domain adaption Chen et al. (2018, 2017b); Hoffman et al. (2018); Kim et al. (2019); Long et al. (2017a), and self-training Busto et al. (2018); Chen et al. (2019); Inoue et al. (2018); Lee et al. (2013); Saito et al. (2017a); Zou et al. (2018) have been proposed. For example, BERT Devlin et al. (2018) and GPT Radford et al. (2018, 2019); Brown et al. (2020), directly leverage a large amount of unlabeled data to pretrain the model. However, they cannot be generally applied in other areas. Among these methods, self training methodsScudder (1965); He et al. (2019) show promising results and it attracts much attention. Self training is a semi-supervised learning method Chapelle et al. (2009), which iteratively generates task specific pseudo-labels using a model trained on some labeled data. It then retrains the model using the labeled data. However, there are many issues in this bootstrap process, one of them is the noise in the pseudo-labeled data. Some researchers resolve this problem by learning from noisy labels Natarajan et al. (2013); Reed et al. (2014); Sukhbaatar et al. (2014); Yu et al. (2018). It can also be optimized by sample selection Mukherjee and Awadallah (2020a) or label smoothing Zou et al. (2019a). However, none of the previous works focused on data properties. Recently, a novel 36th Conference on Neural Information Processing Systems (NeurIPS 2022). arXiv:2405.01175v1 [cs.CV] 2 May 2024 \fFigure 1: Uncertainty-aware representations. In the right part of this figure, dashed curves represent the basis distributions while the blue curve represent the uncertainty-aware representation and uncertainty-aware labels of the data. The expectation of the labels could be used as the final label and the variance could be used to evaluate the uncertainty. Figure 2: One self training round. Pseudo-label generation (a) use EM algorithm to update the Gaussian basis and the classifier, then it generates some pseudo-labels with uncertainty information while the classifier is also trained in this stage. Then in model retraining stage (b), an uncertaintyaware training strategy is used to update the whole model (CNN and classifier). knowledge distillation Hinton et al. (2015) is proposed to distill the large dataset into a small one Sucholutsky and Schonlau (2019); Wang et al. (2018).The intuition of these methods is to find the key samples, like means in the feature spaces, to capture the data properties. These means could also be referred as basis of the data. They can be used to formulate the latent representations of the data in a probabilistic way using expectation maximization algorithm Li et al. (2019); Moon (1996). 
Therefore, as shown in figure 1, we propose a probabilistic model to extract uncertainty for selftraining. Concretely, expectation maximization algorithm is adapted to get the probabilistic latent representations of the data and their corresponding pseudo-label distributions can be obtained. Then the samples are selected based on the variance of the (pseudo-)label distribution where distributions with lower variance represent good (pseudo-)labels. Finally, an uncertainty-aware training process is used to retrain the model using the new dataset where the expectation of distributions becomes the final pseudo-labels. Overall, our contributions in this paper are: 2 \f\u2022 Adapt Expectation Maximization algorithm to perform basis transformation on data features. We use neural networks for expectation maximization process to generate the latent probabilistic representations of the data using base transformation. These representations are low-rank while keeping the uncertainty information and deprecating the noises. \u2022 A novel regularizer is used for pseudo-label generation. Variance and classification loss are combined in the pseudo-label generation process to get the best pseudo-label distributions which contain comprehensive uncertainty information. \u2022 A basis generation process with basis regularizer is proposed. An attention-like module (ATT block) is introduced here to extract basis from the dataset or feature space. To make the basis more robust, we propose a basis regularizer to make all basis orthogonal, which could lower the rank of final latent representations. 2 Related work Self-training: Self-training is a wide and meaningful research area in semi-supervised learning Amini and Gallinari (2002); Yarowsky (1995); Grandvalet et al. (2005), one basic direction in this area is to train a student net using a teacher net Laine and Aila (2016); Tarvainen and Valpola (2017); Luo et al. (2018), some other works use a pseudo-label-based method for self-training Zou et al. (2018). In this paper, we choose to use pseudo-label-based method while keeping the uncertainty information in the label, an iterative training framework is proposed according to the self-training paradigm and uncertainty information to improve the network performance. Expectation-Maximization and Gaussian Mixture Model: Expectation-maximization (EM) Dempster et al. (1977) is to find solutions for latent variables models using likelihood maximization algorithm while Gaussian mixture model (GMM) Richardson and Green (1997) is also one kind of EM algorithm with specific constraints. Latent variables models with GMM could naturally capture the uncertainty information considering the data properties. In GMM, the data could be represented in the distribution form: p( \u02c6 xn) = K X k=1 znkN(xn|\u00b5k, \u03a3k), (1) where the latent representation \u02c6 xn is viewed as a linear superposition of k Gaussian basis N(xn|\u00b5k, \u03a3k) and K is the basis number, znk represents the weight of this linear composition. In the GMM, znk could be updated in the E step: znew nk = N(\u00b5new k , \u03a3k) PK j=1 N(\u00b5new j , \u03a3j) , (2) Notably, the \u03a3k in the Gaussian basis is set to be identity matrix I in this paper, so the \u03a3 update process is ignored in our algorithm. 3 Problem definition In this part, we formally define the uncertainty-aware self-training problem. Given a set of labeled samples {XL, YL} and a set of unlabeled data XU where XU and XL belong to same domain. 
Then the goal is to find a latent representation \u02c6 X and uncertainty-aware pseudo-labels YU by using a CNN feature extractor and a simple classifier. As shown in Figure 2, our problem could be solved by alternating the following steps Zou et al. (2019a): a) Pseudo-label generation: Given all the data, EM algorithm is used to generate the pseudo-labels with uncertainty information while the classifier is also trained in this process based on a combined loss to reduce the variance of pseudo-labels and optimize the classification accuracy for labeled data. 3 \fFigure 3: Whole training process for basis initialization net. Concretely, we train the model like classical machine learning training process and add a small module (attention block) to extract the processed weights which then become the initialized basis of EM algorithm. b) Network retraining. Data are sampled from the pseudo-labeled data based on the label variance, then the sampled data, along with the original labeled data, are used to train the whole classification network. 4 Uncertainty-aware self training To generate the pseudo-label for unlabeled data XU, we first use a base extraction net trained on labeled data to get basis for XL, then these bases could be used as the initialized \u00b5(0) of EM stage to speed up the convergence. Notably, as mentioned in related work section, the \u03a3 is set to be identity matrix and not updated in our algorithm considering a good basis should have identical variance. After the initialization, the EM algorithm is adapted to update the \u00b5 while the prediction net is simultaneously updated in the EM stage. Concretely, the details of base extraction net is shown in section 4.1, then two losses which are used in the EM stage to update the pseudo label generator parameters (classifier in figure 2 a) are demonstrated in section 4.2. After the definition of losses, the whole EM stage is described in section 4.2.1. 4.1 Basis Extraction net As shown in figure 3, we demonstrate the generalized basis initialization net. In this paper, we use classification as an example where the model trained in this stage has 3 components: \u2022 Feature extractor. In fig 3, CNN functions as the feature extractor. The weights we extracted are from this part. \u2022 Classifier. The fully connected layer could be the classifier in our setting, this part is for the original machine learning tasks like classification. \u2022 Weight extractor. An additional ATT block is added to extract the informative basis from the feature space. Clearly in training process, there are 2 tasks: classification and weights extraction. For classification, we use classical classification loss negative log likelihood loss (Lnll). Then for weight extraction part, we want our weights to be basis with low rank, so they need to be orthogonal: L2 = W \u2217W T \u2212I (3) Where W is the weight and I is the unity matrix. Therefore, the loss becomes: Ls1 = Lnll + L2 (4) 4 \fIn Attention block (ATT block), given a matrix X \u2208RN\u00d7d which contains the features of all data samples, we try to extract the inherent low-rank properties of features by basis extraction. The basis extraction, says the problem to find the most informative projection of features, can be formally expressed as min\u00b5 \r \rX \u2212\u00b5Z \r \r F s.t.\u00b5T \u00b5 = I Z = \u00b5T X (5) where \u00b5 \u2208RK\u00d7d represents the basis matrix of the latent features. Through the process, the inherent data structure can be founded. 
However, as an unsupervised method, the problem is reported easily suffer from the model collapse problems. Considering the important label information in classification problems. then we can modify the problem above into a semi-supervised manner as min\u00b5 \r \rX \u2212\u00b5Z \r \r F + \r \rZZT \u2212Y Y T \r \r F + \r \r\u00b5T \u00b5 \u2212I \r \r F s.tZ = \u00b5T X (6) where Y donates all the labels. We can solve the problems above with standard gradient decent methods. Then, after stage I, we generated some basis which the latent space features of data samples effectively and precisely. 4.2 Pseudo-label generation Recall that the latent representation should be transformed into the pseudo label using a function f\u03b8. Given a latent representation \u02c6 xn will obey the fallowing distribution: p( \u02c6 xn) = K X k=1 znkN(xn|\u00b5k, \u03a3k), (7) where K is the number of basis, G(\u00b5, \u03a3) is the final distribution basis representation. Then the corresponding pseudo label for sample \u02c6 xn(m) is \u02c6 yn(m) = f\u03b8( \u02c6 xn(m)). With the will know reparameter trick, distribution p(yn) can be formally expressed as p(yn) = ZZ p(yn|xn)p(xn|\u03f5)dxnd\u03f5, \u03f5 \u223cN(0, I) (8) where p(xn|\u03f5) = K X k=1 znk\u00b5k + \u03a3k\u03f5 (9) Then, we could easily compute the variance V AR( \u02c6 yn) and expectation E( \u02c6 yn) using these sampled pseudo label. For latent representations in XL which have label yn, the loss function for f\u03b8 is: LossL = E( \u02c6 yn) \u2212yn (10) For latent representations in XU which don\u2019t have label, the loss is basically the variance, therefore the final loss for pseudo label prediction model is: L = \u03bbLossL + (1 \u2212\u03bb)V AR( \u02c6 yn), (11) where \u03bb = 1 if the latent representation is from XU and vice versa. 4.2.1 Expectation-Maximization Now we can get the ideally orthogonal base vectors from weights and use them as initialized \u00b5 in the base generation block and compute the loss. Then in this section, we formally define the adapted EM process. At first, we need to update znk: 5 \fznew nk = K(xn, \u00b5k) PK j=1 K(xn, \u00b5j) , (12) where K(a, b) is a kernel function to evaluate the similarity between a and b. Then in the algorithm, the t-th Z could be formulated as: z(t) = softmax(\u03bbX(\u00b5(t\u22121)) T ), (13) where \u03bb is manually set to control Z distribution. Then in the M step (likelihood maximization), we update the \u00b5 based on the weighted summation of X to make them in one space. Then the update process in t-th iteration could be formulated as: \u00b5(t) k = z(t) nkxn PN m=1 z(t) mk (14) After T iterations, we could get the final basis \u00b5k(T), \u03a3k(T) and the prediction model \u03b8k(T). The generated pseudo label for each sample is a distribution, which can be formulated as: yn = f\u03b8(xn), (15) where f\u03b8 is a linear transformation, so distribution of yn could be easily calculated. The whole process of pseudo-label generation is summarized in algorithm 1. Algorithm 1: Pseudo-label generation Input :XL, XU, YL, f\u03b8 Output :\u00b5k(T), \u03a3k(T), \u03b8k(T) Initialize \u00b5k(0), \u03a3k(0), \u03b8(0) for t \u21901 to T do update znk(t) (eq 13) compute \u02c6 xn(t) (eq 10) compute pseudo-label yn (eq 15) compute loss function (eq 11) update \u03b8(t) using back propagation update \u00b5k(t) (eq 14) return 4.3 Network retraining Because in section 4.1, we define the problem as a classification task, so in this part we simply use classification as our final task. 
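Since retraining builds on the pseudo-label distributions, a hedged PyTorch sketch of how Eqs. (8)-(11) can be estimated by Monte-Carlo sampling is given here; the number of samples, the use of softmax probabilities for the variance, the squared-error form of Loss_L, and the convention that labeled samples use λ = 1 are illustrative assumptions, not the paper's exact implementation.

```python
import torch

def pseudo_label_loss(z_nk, mu, f_theta, y=None, n_samples=20):
    """Monte-Carlo version of Eqs. (8)-(11) for a single latent representation.

    z_nk    : (K,) mixture weights from the E-step
    mu      : (K, d) basis means; covariances are the identity (Sigma_k = I)
    f_theta : linear classifier mapping a d-dim latent vector to class scores
    y       : one-hot label if the sample is labeled, else None
    """
    d = mu.size(1)
    # reparameterization trick, Eq. (9): x = sum_k z_nk * mu_k + eps, eps ~ N(0, I)
    eps = torch.randn(n_samples, d)
    x = (z_nk @ mu).unsqueeze(0) + eps                     # (n_samples, d)
    probs = torch.softmax(f_theta(x), dim=-1)              # sampled pseudo-labels
    mean, var = probs.mean(0), probs.var(0).sum()
    if y is not None:                                      # labeled: Eq. (10), lambda = 1
        sup, lam = ((mean - y) ** 2).sum(), 1.0
    else:                                                  # unlabeled: variance only
        sup, lam = torch.tensor(0.0), 0.0
    return lam * sup + (1.0 - lam) * var                   # combined loss, Eq. (11)

# toy usage
K, d, C = 6, 16, 4
f_theta = torch.nn.Linear(d, C)
loss = pseudo_label_loss(torch.softmax(torch.randn(K), 0), torch.randn(K, d),
                         f_theta, y=torch.eye(C)[2])
loss.backward()
```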
Considering we have the distribution for pseudo-labels, there are mainly two steps in the retraining part sample selection and model retraining. Method A\u2192W D\u2192W W\u2192D A\u2192D D\u2192A W\u2192A Mean ResNet-50 He et al. (2016) 68.4\u00b10.2 96.7\u00b10.1 99.3\u00b10.1 68.9\u00b10.2 62.5\u00b10.3 60.7\u00b10.3 76.1 DAN Long et al. (2015) 80.5\u00b10.4 97.1\u00b10.2 99.6\u00b10.1 78.6\u00b10.2 63.6\u00b10.3 62.8\u00b10.2 80.4 RTN Long et al. (2016) 84.5\u00b10.2 96.8\u00b10.1 99.4\u00b10.1 77.5\u00b10.3 66.2\u00b10.2 64.8\u00b10.3 81.6 DANN Ganin et al. (2016) 82.0\u00b10.4 96.9\u00b10.2 99.1\u00b10.1 79.7\u00b10.4 68.2\u00b10.4 67.4\u00b10.5 82.2 ADDA Tzeng et al. (2017) 86.2\u00b10.5 96.2\u00b10.3 98.4\u00b10.3 77.8\u00b10.3 69.5\u00b10.4 68.9\u00b10.5 82.9 JAN Long et al. (2017b) 85.4\u00b10.3 97.4\u00b10.2 99.8\u00b10.2 84.7\u00b10.3 68.6\u00b10.3 70.0\u00b10.4 84.3 GTA Sankaranarayanan et al. (2018) 89.5\u00b10.5 97.9\u00b10.3 99.8\u00b10.4 87.7\u00b10.5 72.8\u00b10.3 71.4\u00b10.4 86.5 MRKLD+LRENT Zou et al. (2019b) 89.4\u00b10.7 98.9\u00b10.4 100\u00b10.0 88.7\u00b10.8 72.6\u00b10.7 70.9\u00b10.5 86.8 Ours 92.2\u00b10.5 98.2\u00b10.3 99.6\u00b10.4 87.2\u00b10.5 72.8\u00b10.3 72.4\u00b10.4 87.1 Table 1: Comparison on Office-31 experiments 6 \fMethod Aero Bike Bus Car Horse Knife Motor Person Plant Skateboard Train Truck Mean Source Saito et al. (2017b) 55.1 53.3 61.9 59.1 80.6 17.9 79.7 31.2 81 26.5 73.5 8.5 52.4 MMD Long et al. (2015) 87.1 63 76.5 42 90.3 42.9 85.9 53.1 49.7 36.3 85.8 20.7 61.1 DANN Ganin et al. (2016) 81.9 77.7 82.8 44.3 81.2 29.5 65.1 28.6 51.9 54.6 82.8 7.8 57.4 ENT Grandvalet et al. (2005) 80.3 75.5 75.8 48.3 77.9 27.3 69.7 40.2 46.5 46.6 79.3 16 57 MCD Saito et al. (2018) 87 60.9 83.7 64 88.9 79.6 84.7 76.9 88.6 40.3 83 25.8 71.9 ADR Saito et al. (2017b) 87.8 79.5 83.7 65.3 92.3 61.8 88.9 73.2 87.8 60 85.5 32.3 74.8 SimNet-Res152Pinheiro (2018) 94.3 82.3 73.5 47.2 87.9 49.2 75.1 79.7 85.3 68.5 81.1 50.3 72.9 GTA-Res152 Sankaranarayanan et al. (2018) 77.1 MRKLD+LRENT Zou et al. (2019b) 88.0 79.2 61.0 60.0 87.5 81.4 86.3 78.8 85.6 86.6 73.9 68.8 78.1 Ours 89.1 81.7 82.1 57.7 83.2 79.7 83.9 77.2 86.2 82.7 83.8 65.9 79.4 Table 2: Comparison on VisDA17 experiments Method Backbone Road SW Build Wall Fence Pole TL TS Veg. Terrain Sky PR Rider Car Truck Bus Train Motor Bike mIoU Source 42.7 26.3 51.7 5.5 6.8 13.8 23.6 6.9 75.5 11.5 36.8 49.3 0.9 46.7 3.4 5 0 5 1.4 21.7 CyCADA Hoffman et al. (2018) DRN-26 79.1 33.1 77.9 23.4 17.3 32.1 33.3 31.8 81.5 26.7 69 62.8 14.7 74.5 20.9 25.6 6.9 18.8 20.4 39.5 Source 36.4 14.2 67.4 16.4 12 20.1 8.7 0.7 69.8 13.3 56.9 37 0.4 53.6 10.6 3.2 0.2 0.9 0 22.2 MCD Saito et al. (2018) DRN-105 90.3 31 78.5 19.7 17.3 28.6 30.9 16.1 83.7 30 69.1 58.5 19.6 81.5 23.8 30 5.7 25.7 14.3 39.7 Source 75.8 16.8 77.2 12.5 21 25.5 30.1 20.1 81.3 24.6 70.3 53.8 26.4 49.9 17.2 25.9 6.5 25.3 36 36.6 AdaptSegNet Tsai et al. (2018) DeepLabv2 86.5 36 79.9 23.4 23.3 23.9 35.2 14.8 83.4 33.3 75.6 58.5 27.6 73.7 32.5 35.4 3.9 30.1 28.1 42.4 AdvEnt Vu et al. (2019) DeepLabv2 89.4 33.1 81 26.6 26.8 27.2 33.5 24.7 83.9 36.7 78.8 58.7 30.5 84.8 38.5 44.5 1.7 31.6 32.4 45.5 Source 29.2 FCAN Zhang et al. (2018) DeepLabv2 46.6 Ours DeepLabv2 87 47.7 80.3 25.9 26.3 47.9 34.7 29 80.9 45.7 80.3 60 29.2 81.7 37.9 47.5 37.2 29.8 47.7 50.4 Table 3: Adaptation results of experiments transferring from GTA5 to Cityscapes. 
4.3.1 Sample selection After pseudo-label generation process, the generated pseudo-labels are formulated in a distribution format (Gaussian form) shown in equation 8 which contains variance and mean information. Then for classification task, a class-dependent selection Mukherjee and Awadallah (2020b) could be performed to construct a dataset with hard labels DS,U = {xu,s \u2208Su,c, yu}. Here, Su,c \u2208XU is constructed based on the score rank of each sample, if the sample\u2019s pseudo-label has higher variance, then it\u2019s more likely to be discarded. For yu, one can simply use its mean as its hard pseudo label, but here we want to accurately model the uncertainty information. Therefore, we randomly sample hard labels from the pseudo-label distribution to incorporate the uncertainty information encoded in the distribution. 4.3.2 Uncertainty aware retraining After the sample selection, a retraining dataset is derived as Dr = {XL, YL} S{xu,s, yu}, then for the retraining part, the final goal is to minimize following loss: minW LL + LU V ar(y) (16) Where W is the model parameter, LL and LU represent the task loss for labeled data and unlabeled data respectively, here in this classification example, they represent same classification loss like cross entropy. V ar(y) represents the sample uncertainty, for samples x \u2208XU, variance is same to the variance in the distribution to catch the uncertainty information of teacher model. In this setting, samples with higher variance, which basically means that the previous model is not confident on this sample, have lower weights in the back propagation process of training. After the retraining, one round shown in figure 2 is completed. Then we simply repeat the whole process until the ideal results are derived. Method Backbone Road SW Build Wall* Fence* Pole* TL TS Veg. Sky PR Rider Car Bus Motor Bike mIoU mIoU* Source DRN-105 14.9 11.4 58.7 1.9 0 24.1 1.2 6 68.8 76 54.3 7.1 34.2 15 0.8 0 23.4 26.8 MCD Saito et al. (2018) 84.8 43.6 79 3.9 0.2 29.1 7.2 5.5 83.8 83.1 51 11.7 79.9 27.2 6.2 0 37.3 43.5 Source DeepLabv2 55.6 23.8 74.6 6.1 12.1 74.8 79 55.3 19.1 39.6 23.3 13.7 25 38.6 AdaptSegNetTsai et al. (2018) 84.3 42.7 77.5 4.7 7 77.9 82.5 54.3 21 72.3 32.2 18.9 32.3 46.7 Source ResNet-38 32.6 21.5 46.5 4.8 0.1 26.5 14.8 13.1 70.8 60.3 56.6 3.5 74.1 20.4 8.9 13.1 29.2 33.6 CBST Zou et al. (2019b) 53.6 23.7 75 12.5 0.3 36.4 23.5 26.3 84.8 74.7 67.2 17.5 84.5 28.4 15.2 55.8 42.5 48.4 AdvEnt Vu et al. (2019) DeepLabv2 85.6 42.2 79.7 8.7 0.4 25.9 5.4 8.1 80.4 84.1 57.9 23.8 73.3 36.4 14.2 33 41.2 48 Source DeepLabv2 64.3 21.3 73.1 2.4 1.1 31.4 7 27.7 63.1 67.6 42.2 19.9 73.1 15.3 10.5 38.9 34.9 40.3 Ours 68 29.9 76.3 10.8 1.4 33.9 22.8 29.5 77.6 78.3 60.6 28.3 81.6 23.5 18.8 39.8 42.6 48.9 Table 4: Adaptation results of experiments transferring from SYNTHIA to Cityscapes. 7 \f5 Experiment In this section, we demonstrate the advantages of proposed methods by comparing the performance of proposed methods with the SOTA confidence-aware self-training strategy on 2 tasks image classification and image segmentation. To make the results comparative, we basically follow the settings in Zou et al. (2019b) which achieves SOTA results in confidence-aware self-training domain, details will be illustrated in following sections. 5.1 Dataset and evaluation metric 5.1.1 Image classification. For domain adaption in image classification task, VisDA17 Peng et al. (2018) and Office-31 Saenko et al. (2010) are used to evaluate the algorithm performance. 
In VisDA17, there are 12 classes with 152, 409 virtual images for training while 55, 400 real images from MS-COCO Lin et al. (2014) are target dataset. For Office-31, 31 classes collected from Amazon(A, 2817 images), Webcam(W, 795 images) and DSLR(D, 498 images) domains are included. We strictly follow the settings in Saenko et al. (2010); Sankaranarayanan et al. (2018); Zou et al. (2019b) which evaluate the domain adaption performance on A \u2192W, D \u2192W, W \u2192D, A \u2192D, D \u2192A, W \u2192A. For evaluation, we simply use the accuracy for each class and mean accuracy across all classes as the evaluation metric. 5.1.2 Semantic segmentation For domain adaption in image segmentation tasks, 2 virtual datasets GTA5 Richter et al. (2016), SYNTHIA Ros et al. (2016) and 1 real dataset Cityscapes Cordts et al. (2016) are used to evaluate the performance of proposed method. Concretely, GTA5 contains 24, 966 images based on the game GTA5, SYNTHIA-RAND-CITYSCAPES (subset of SYNTHIA) has 9400 images. For the experiment setup, we also strictly follow Hoffman et al. (2018); Tsai et al. (2018); Zou et al. (2019b) which use Cityscapes as target domain and view virtual datasets (GTA5 and CITYSCAPES) as training domain. For evaluation, the Intersection over Union (IoU) is used to measure the performance of models where. 5.2 Experiment setup To make our results comparable with current SOTA confidence-aware method, we adapt the settings in Zou et al. (2019b). Besides, all the training process is performed on 4 Tesla V100 GPUs which have 32GB memory. Image Classification: ResNet101/ ResNet-50 He et al. (2016) are used as backbones, which are pretrained based on ImageNet Deng et al. (2009). Then in source domain, we fine-tune the model using SGD while the learning rate is 1 \u00d7 10\u22124, weight decay is set to be 5 \u00d7 10\u22125, momentum is 0.8 and the batch size is 32. In the self-training round, the parameters are same except for the different learning rates which are 5 \u00d7 10\u22124. Image Segmentation: In image segmentation part, we mainly use the older DeepLab v2 Chen et al. (2017a) as backbone to align with previous results. DeepLab v2 is first pretrained on ImageNet and then finetuned on source domain using SGD. Here we set learning rate as 5 \u00d7 10\u22124, weight decay is set to be 1 \u00d7 10\u22125, momentum is 0.9, the batch size is 8 while the patch size is 512 \u00d7 1024. In self-training, we basically run 3 rounds which has 4 retraining epochs. 5.3 Experiment results Comparison on image classification. As shown in table 1 and table 2, compared with previous SOTA result in confidence-aware self-training and other self-training algorithms, although our algorithm does not achieve best performance in all sub-tasks, the mean results (87.1 and 79.4 for Office-31 and VisDA17 respectively) achieves SOTA while our results (derivations and means) are obtained from 5 runs of the experiment. Comparison on image segmentation.As shown in table 3 and 4, in semantic segmentation task, our results of average IoU (mIoU) achieves SOTA among confidence-aware self-training algorithms. 8 \f6" + } + ], + "Zhen Jia": [ + { + "url": "http://arxiv.org/abs/2402.15400v1", + "title": "Faithful Temporal Question Answering over Heterogeneous Sources", + "abstract": "Temporal question answering (QA) involves time constraints, with phrases such\nas \"... in 2019\" or \"... before COVID\". In the former, time is an explicit\ncondition, in the latter it is implicit. 
State-of-the-art methods have\nlimitations along three dimensions. First, with neural inference, time\nconstraints are merely soft-matched, giving room to invalid or inexplicable\nanswers. Second, questions with implicit time are poorly supported. Third,\nanswers come from a single source: either a knowledge base (KB) or a text\ncorpus. We propose a temporal QA system that addresses these shortcomings.\nFirst, it enforces temporal constraints for faithful answering with tangible\nevidence. Second, it properly handles implicit questions. Third, it operates\nover heterogeneous sources, covering KB, text and web tables in a unified\nmanner. The method has three stages: (i) understanding the question and its\ntemporal conditions, (ii) retrieving evidence from all sources, and (iii)\nfaithfully answering the question. As implicit questions are sparse in prior\nbenchmarks, we introduce a principled method for generating diverse questions.\nExperiments show superior performance over a suite of baselines.", + "authors": "Zhen Jia, Philipp Christmann, Gerhard Weikum", + "published": "2024-02-23", + "updated": "2024-02-23", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.CL" + ], + "main_content": "INTRODUCTION Motivation. Question answering (QA) comprises a spectrum of settings for satisfying users\u2019 information needs, ideally giving crisp, entity-level answers to natural-language utterances [46]. Temporal QA specifically focuses on questions with temporal conditions (e.g., [24, 31, 48]), making up a substantial portion of user needs [65], This work is licensed under a Creative Commons Attribution International 4.0 License. WWW \u201924, May 13\u201317, 2024, Singapore, Singapore \u00a9 2024 Copyright held by the owner/author(s). ACM ISBN 978-x-xxxx-xxxx-x/YY/MM. https://doi.org/10.1145/nnnnnnn.nnnnnnn but poses challenges that are not properly met by universal QA systems. Consider the following example: \ud835\udc5e1: Record company of Queen in 1975? The band Queen had different record companies over the years, so it is decisive to consider the explicit temporal constraint (\u201cin 1975\u201d). Other questions with explicit time are lookups of dates, such as: \ud835\udc5e2: When was Bohemian Rhapsody recorded? Another \u2013 underexplored and most challenging \u2013 situation is when questions involve implicit temporal constraints. These can involve the need to compare different time points or intervals, even when the user input does not explicitly state it. Examples are: \ud835\udc5e3: Queen\u2019s record company when recording Bohemian Rhapsody? \ud835\udc5e4: Queen\u2019s lead singer after Freddie Mercury? For \ud835\udc5e4, the system has to find out when Mercury died or left the band, in order to compute the correct answer that Brian May (the band\u2019s guitarist) took over as lead singer. The research literature on temporal QA is substantial, including [9, 10, 16, 23\u201325, 31, 48, 58]. Most methods address all kinds of temporal questions, but are typically less geared for implicit questions. Some methods operate over curated knowledge bases (KBs) (e.g., [16, 23, 24]), while others are designed for processing text corpora such as news collections or Wikipedia full-text (e.g., [9, 35]). State-of-the-art limitations. We observe three major issues: (i) Many methods use \u201csoft-matching\u201d techniques, based on latent embeddings or language models. This may lead to invalid answers, where the non-temporal part of a question is matched, but the temporal constraint is violated. 
For example, a question about \u201cQueen\u2019s record company in 1990?\u201d may erroneously return EMI instead of the correct value Parlophone, because EMI is more prominent and was Queen\u2019s company on most albums. Even when the output is correct, this could be by the prominence of the answer alone. For example, \u201cWho was Queen\u2019s lead singer in 1975?\u201d could return the most popular Freddie Mercury without checking the time. When we vary the question into \u201c...in 2000?\u201d, many systems would still yield Freddie Mercury, although he was dead then. This indicates that the system has incomplete inference and is unable to explain its answer derivation. We call this phenomenon unfaithful QA. ii) A weak spot of temporal QA systems is the handling of implicit questions. These are infrequent in established benchmarks. Some methods [16, 23, 34] aim to transform the implicit conditions into explicit temporal constraints, based on classifying phrases starting with \u201cduring\u201d, \u201cbefore\u201d etc. However, they heavily rely 1 arXiv:2402.15400v1 [cs.IR] 23 Feb 2024 \fWWW \u201924, May 13\u201317, 2024, Singapore, Singapore Zhen Jia, Philipp Christmann, & Gerhard Weikum Figure 1: Overview of the Faith pipeline. The figure illustrates the process for answering \ud835\udc5e3 (\u201cQueen\u2019s record company when recording Bohemian Rhapsody?\u201d) and \ud835\udc5e1 (\u201cRecord company of Queen in 1975?\u201d). For answering \ud835\udc5e3, two intermediate questions \ud835\udc5e31 and \ud835\udc5e32 are generated, and run recursively through the entire temporal QA system. on hand-crafted rules which are rather limited in scope and cannot robustly handle unforeseen utterances. (iii) Prior methods run on a single information source: either a KB or a text corpus. This limits QA coverage: KBs are incomplete and lack refined detail about events, whereas text collections are harder to extract answers from and often fail on complex questions [11, 16]. QA over heterogeneous sources, including also web tables, has been addressed by [13, 38], but these methods do not support temporal conditions. Approach. To overcome these limitations, we propose Faith (FAIthful Temporal question answering over Heterogeneous sources), a temporal QA system that operates over heterogeneous sources, seamlessly combining a KB, a text corpus and web tables. Inspired by the architecture of [13], Faith consists of three main stages: (i) Temporal Question Understanding for representing the question intent into a structured frame, with specific consideration of the temporal aspects; (ii) Faithful Evidence Retrieval for identifying relevant pieces of evidence from KB, text and tables, with time-aware filtering to match the temporal conditions; (iii) Explainable Heterogeneous Answering to compute entitylevel answers and supporting evidence for explanation. A key novelty in the question understanding is that implicit constraints are resolved into explicit temporal values by generating intermediate questions and recursively calling Faith itself. For example, the implicit condition \u201cwhen recording Bohemian Rhapsody\u201d in \ud835\udc5e3 is transformed into \u201cwhen Queen recorded Bohemian Rhapsody?\u201d, and the recursive invocation of Faith returns the explicit condition August 1975 September 1975. This derived explicit condition is then used in a similar vein as the explicit condition 1975 in \ud835\udc5e1, making it easier to answer the information need. 
Note that this is not just question rewriting, but is driven by the full-fledged QA system itself over the full suite of heterogeneous sources. A second key novelty is that, in contrast to most prior works including large language models, Faith provides tangible provenance for the answer derivation. By providing users with explanatory evidence for answers, Faith is a truly faithful temporal QA system. Existing benchmarks for temporal QA focus on a single information source at hand (either a KB or a text corpus), and include only few questions with implicit constraints (so the weak performance on these hardly affects the overall results). Therefore, we devise a new method for automatically creating temporal questions with implicit constraints, with systematic controllability of different aspects, including the relative importance of different source types (text, infoboxes, KB), coverage of topical domains (sports, politics etc.), fractions of prominent vs. long-tail entities, question complexity, and more. This way, we construct a new dataset named Tiq with 10,000 questions and answers accompanied by supporting evidence. Our code and data is available at https://faith.mpi-inf.mpg.de. Contributions. The salient contributions of this work are: \u2022 the first temporal QA system that taps into heterogeneous sources, and gives faithful answers with explanatory evidence; \u2022 a mechanism that transforms implicit temporal constraints into explicit conditions, by recursively calling the QA system itself; \u2022 a principled method for automatic construction of diverse and difficult temporal questions, releasing the Tiq benchmark. 2 CONCEPTS AND NOTATION This section introduces salient concepts and notation for this work. Temporal value. A temporal value indicates a point in time or time interval. It can be a specific date (e.g., 24 November 1991), a year (e.g., 1975), or a time period (e.g., August 1975 September 1975). Temporal constraint. A temporal constraint specifies a condition about a time point or interval that has to be satisfied by the answer and its evidence. Temporal constraints consist of a temporal value, 2 \fFaithful Temporal Question Answering over Heterogeneous Sources WWW \u201924, May 13\u201317, 2024, Singapore, Singapore and a temporal signal (like before, after, overlap). An example of a (verbalized) temporal constraint is \u201cin 1970\u201d. Explicit question. An explicit question mentions a specific temporal constraint explicitly, as in \u201cRecord company of Queen in 1975?\u201d. Implicit question. An implicit question also specifies a temporal constraint, but keeps this constraint implicit without mentioning the actual temporal value: \u201cQueen\u2019s record company when recording Bohemian Rhapsody?\u201d. Answer. An answer to a question is either an entity (e.g., Brian May) or a literal such as a date (e.g., 24 August 1975), year (e.g., 1975) or number (e.g., 3). Evidence. An evidence is given with an answer as explanatory support. The evidence consists of information snippets that are retrieved from a KB, a text corpus, a table, or a Wikipedia infobox. Following [12], we consider snippets on a sentence-level: text is split into sentences, and KB-facts, table rows and infobox entries are verbalized by concatenating the individual pieces. Faithfulness. 
A system answers a question faithfully if its evidence, provided with the answer, contains: (i) the answer, (ii) all entities that appear in the question (with any surface name), (iii) all predicates that appear in the question (at least in paraphrased or implicit form), (iv) a temporal expression that satisfies the temporal constraint of the question. The first three aspects are valid in the context of any QA system; the fourth is specific to temporal QA. 3 FAITH METHOD Fig. 1 provides an overview of the system architecture, illustrated with the processing of the running examples \ud835\udc5e3 and \ud835\udc5e1. The following subsections present the three main components (understanding, retrieval, and answering), and will refer to these examples. 3.1 Temporal Question Understanding The goal of this first stage is to capture the temporal information need in a frame-like structure. Notably, this stage identifies and categorizes temporal constraints in the user input, which is later used for pruning temporally-inconsistent answer candidates. TSF. Inspired by [12] and [20] (both addressing other, non-temporal, kinds of QA), we propose to learn a Time-aware Structured Frame (TSF) for an incoming temporal question. The TSF includes both general-QA-relevant slots: \u2022 question entity, \u2022 question relation, \u2022 expected answer type, and temporal-QA-relevant slots: \u2022 temporal signal, indicating the kind of temporal relation, \u2022 temporal category, indicating the type of temporal constraint, \u2022 temporal value, the time point or interval of interest (if present). The question entity and relation are taken from the surface form of the question (i.e. not linked to KB) to allow for uniform treatment of heterogeneous sources. The expected answer type is learned from the training data, in which the KB-type of the gold answer is used. The temporal signal can be overlap (e.g., from cues like \u201cin\u201d, \u201cduring\u201d), before (e.g., from cues like \u201cbefore\u201d, \u201cprior to\u201d), or after (e.g., from cues like \u201cafter\u201d, \u201cfollows\u201d). We categorize the constraint into implicit (e.g., \ud835\udc5e3 and \ud835\udc5e4) and non-implicit (e.g., \ud835\udc5e1 and \ud835\udc5e2). The temporal value can be a year, date or time period. Both the temporal signal and value are derived by identifying and normalizing key phrases in the input question. For example, the TSF for \ud835\udc5e1 is: \u27e8question entity: \u201cQueen\u201d, question relation: \u201crecord company of\u201d, expected answer type: \u201crecord company\u201d, temporal signal: overlap, temporal category: non-implicit, temporal value: 1975 \u27e9 Note that in case the question does not specify temporal constraints (e.g., \ud835\udc5e2), the respective fields are simply kept empty. Resolving implicit questions. For the challenging case of implicit questions, such as \ud835\udc5e3 or \ud835\udc5e4, the temporal value cannot be extracted from the question directly. To resolve this problem, we devise a novel mechanism, the implicit question resolver, based on recursively invoking the temporal QA system itself. To this end, the implicit temporal constraint in the question is identified and transformed into an intermediate question. For instance, the intermediate question for \ud835\udc5e4 would be \u201cwhen Freddie Mercury lead singer of Queen?\u201d. For \ud835\udc5e3, the temporal value should be a time interval (August 1975 September 1975). 
Thus, two intermediate questions are required: (i) \ud835\udc5e31: \u201cWhen Queen recorded Bohemian Rhapsody start date?\u201d, and (ii) \ud835\udc5e32: \u201cWhen Queen recorded Bohemian Rhapsody end date?\u201d. Although these formulations are ungrammatical, the QA system can process them properly, being robust to such inputs. The intermediate questions are fed into Faith as a recursive call, to obtain the explicit temporal value for filling the TSF of the original question. The TSF for \ud835\udc5e3 thus becomes: \u27e8question entity: \u201cQueen\u201d, question relation: \u201crecorded company\u201d, expected answer type: \u201crecord company\u201d, temporal signal: overlap, temporal category: implicit, temporal value: August 1975 September 1975 \u27e9 Note the similarity to the TSF of the explicit temporal question \ud835\udc5e1. Generating intermediate questions. The intermediate questions are generated by a fine-tuned sequence-to-sequence (Seq2seq) model, specifically BART [27]. A major obstacle, though, is that no prior dataset has suitable annotations, and collecting such data at scale is prohibitive. Therefore, we generated training data using InstructGPT [39], leveraging its in-context learning [3] capabilities. We randomly select 8 implicit questions from our train set and label them manually. For each question, we give the intermediate question and the expected answer type as output. The exact prompts used are shown in Table 9 in the Appendix. The expected answer type of an intermediate question can be date or time interval. When the expected answer type is a time interval (e.g., for \ud835\udc5e3), two intermediate questions are created, appending \u201cstart date\u201d and \u201cend date\u201d to the generated intermediate question, respectively (see \ud835\udc5e31 and \ud835\udc5e32 as example). We use this technique to annotate all implicit questions in the train and dev sets, obtaining training data for fine-tuning the BART model. Note that GPT is used only for the generation of training data. It is not used at run-time to avoid its (computational, monetary, and environmental) costs and dependency on black-box models. 3 \fWWW \u201924, May 13\u201317, 2024, Singapore, Singapore Zhen Jia, Philipp Christmann, & Gerhard Weikum Constructing the TSF. We also use a fine-tuned Seq2seq model, again BART, for generating the values for the question entity, question relation, expected answer type, temporal signal, and temporal category slots of the TSF representation. The training data this TSF construction model is obtained via (i) distant supervision (for question entity and question relation) [12], (ii) KB-type look-ups (for expected answer type), and (iii) annotations in the benchmark (for temporal signal and temporal category). Further detail in Sec. A.2. The temporal values are obtained via the recursive mechanism discussed above for implicit questions, and via SUTime [6] and regular expression matching for explicit questions. Phrases like \u201ctoday\u201d or \u201ccurrent\u201d are considered as well and properly normalized. We use the creation time of the question [5], as provided in the benchmarks, as reference time. The TSF generated in this understanding stage is used for representing the temporal information need in the subsequent retrieval and answering stages, capturing its key temporal characteristics. 
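To make the structure of the TSF concrete, it can be represented as a simple frame with the slots described above; the field and function names below are illustrative, not taken from the Faith code.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TSF:
    """Time-aware Structured Frame; slots as described in Sec. 3.1."""
    question_entity: str
    question_relation: str
    expected_answer_type: str
    temporal_signal: Optional[str] = None     # overlap | before | after
    temporal_category: Optional[str] = None   # implicit | non-implicit
    temporal_value: Optional[str] = None      # year, date, or time interval

# TSF for q1 ("Record company of Queen in 1975?") as given in the text:
tsf_q1 = TSF("Queen", "record company of", "record company",
             temporal_signal="overlap", temporal_category="non-implicit",
             temporal_value="1975")

def expand_intermediate_question(intermediate: str, expected_type: str) -> list:
    """For interval-typed answers, two questions are created by appending
    'start date' and 'end date' (cf. q31 and q32 above)."""
    if expected_type == "time interval":
        return [intermediate + " start date", intermediate + " end date"]
    return [intermediate]
```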
3.2 Faithful Evidence Retrieval In this stage, we first retrieve evidence from heterogeneous sources, and then prune out information inconsistent with the temporal constraint expressed by the temporal signal and value in the TSF. Heterogeneous retrieval. This step largely follows the generalpurpose QA method of [12], and makes use of entity linking. Entity mentions in the input are identified and linked via Clocq [11]. The input here is the concatenation of the question entity, the question relation, and the expected answer type of the TSF. For the resulting linked entities, we retrieve the Wikipedia pages for extracting text, tables, and infoboxes. Further, KB-facts with the linked entities are obtained from Wikidata. All retrieved pieces of evidence are verbalized [38] into textual sentences, for uniform treatment. The KB-facts are verbalized by concatenating their individual parts; the text evidence is split into sentences; table rows are transformed by concatenating the individual \u27e8column headers, cell value\u27e9pairs; infoboxes are handled by linearizing all attribute-value pairs. Temporal pruning. Explicit temporal expressions in the retrieved pieces of evidence are identified and normalized similarly as in the understanding stage. Evidence that does not match the temporal criteria is pruned out. We address two kinds of situations: (i) the question aims for a temporal value as answer and does not have any temporal constraints (e.g., \u201cWhen ...?\u201d); (ii) the question has a temporal constraint which needs to be matched by the evidence. In the first case, all evidence that does not contain any temporal values, and is thus unable to provide the answer, is dropped. In the second case, we remove pieces of evidence that do not match the temporal constraint, to ensure that answers are faithful to the temporal intent of the question. The retrieval output is a smaller set of evidence pieces, faithfully reflecting the temporal constraints of the question. The final answer and its explanatory evidence are computed from this pool. 3.3 Explainable Heterogeneous Answering In the final stage, the answer is derived from this set of evidence pieces that is already known to satisfy the temporal conditions. Topic Entity Sampling {Alicia Keys,\u2026} Question Rephrasing {\u201cWhat album did Alicia Keys release when Norah Jones won the Grammy Award for Best New Artist?\u201d, \u2026} Pseudo-Question Construction {\u201cWhat album Alicia Keys followed up her debut with which was released, during, Norah Jones award received Grammy Award for Best New Artist follows Alicia Keys?\u201d, \u2026} Implicit questions + answer(s) {(\u201cWhat album did Alicia Keys release when Norah Jones won the Grammy Award for Best New Artist?\u201d, The Diary of Alicia Keys), \u2026} Configuration Pipeline Entity prominence Fractions of information sources Temporal scope (year range) Domain coverage Number of questions per entity Information Snippet Retrieval {\u201cAlicia Keys followed up her debut with The Diary of Alicia Keys, which was released in December 2003.\u201d, \u201cNorah Jones, award received, Grammy Award for Best New Artist, follows, Alicia Keys, point in time, 2003.\u201d, \u2026} Figure 2: Steps to create implicit questions with our proposed methodology, highlighting the key configurable parts. Since this part is not the main focus of this work, we employ a state-of-the-art answering model for general-purpose QA. 
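The temporal pruning step of the retrieval stage admits a compact sketch. The code below assumes that temporal expressions in both the question and the evidence have already been normalized to (start, end) intervals, and that each evidence object exposes an illustrative temporal_values attribute; these are assumptions for illustration, not the system's internal interfaces.

```python
def interval_matches(evidence_time, constraint, signal):
    """Check one normalized evidence interval against the question's constraint."""
    ev_start, ev_end = evidence_time
    c_start, c_end = constraint
    if signal == "overlap":
        return ev_start <= c_end and c_start <= ev_end
    if signal == "before":
        return ev_end < c_start
    if signal == "after":
        return ev_start > c_end
    return True

def temporal_prune(evidences, tsf):
    if tsf.temporal_value is None:
        # Case (i): the question asks for a temporal value ("When ...?"):
        # drop evidence that contains no temporal expression at all.
        return [ev for ev in evidences if ev.temporal_values]
    # Case (ii): keep only evidence consistent with the temporal constraint.
    return [ev for ev in evidences
            if any(interval_matches(t, tsf.temporal_value, tsf.temporal_signal)
                   for t in ev.temporal_values)]
```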
We use the answering stage of Explaignn [13] that is based on graph neural networks (GNNs), and computes a subset of supporting evidence for the predicted answer. Thus, we ensure that the answer can be traced back through the entire system including the answering stage, for end user explainability. The input query to the GNNs is the concatenation of the question entity, question relation, and expected answer type. 4 TIQ BENCHMARK Most existing benchmarks for temporal QA, like TempQuestions [22], TimeQuestions [24] or TempQA-Wd [34], have only few implicit questions (209, 1,476, and 154, respectively), falling short of evaluating one of the key challenges in temporal QA. CronQuestions [48] and TEMPREASON [53] have a larger fraction of implicit questions, but these are based on a small set of hand-crafted rules. Thus, the questions lack syntactic diversity. Further, questions in these benchmarks are always answerable using a single information source (either KB or text corpus). Therefore, we construct a new benchmark with a primary focus on challenging and diverse implicit questions. The obvious idea of using crowdsourcing is expensive and error-prone. Also, crowdworkers increasingly use LLMs as a shortcut [54]. Thus, we pursue an automated process instead. To ensure that questions are not specific to a single input source, our process considers multiple sources: Wikipedia text and infoboxes, and the Wikidata KB. 4.1 Construction Methodology Overview. An implicit question has two parts: the main question that specifies the information need disregarding time (e.g., \u201cQueen\u2019s lead singer\u201d for \ud835\udc5e4), and the implicit part that provides the temporal constraint (e.g., \u201cafter Freddie Mercury\u201d for \ud835\udc5e4). The key idea is to build each of the two parts from independent pieces of evidence, denoted as information snippets. The two snippets can come from very different sources, but need to be thematically related. This construction process operates as follows: (i) sample a set of topic entities to start with; (ii) retrieve temporal information snippets for each such topic entity from Wikipedia text, Wikipedia infoboxes, and Wikidata; 4 \fFaithful Temporal Question Answering over Heterogeneous Sources WWW \u201924, May 13\u201317, 2024, Singapore, Singapore (iii) concatenate information snippets using a suitable temporal signal and construct an interrogative sentence, a pseudo-question; (iv) rephrase the pseudo-question into a natural question using a generative model. An overview of this process is provided in Fig. 2, including an example case of constructing an implicit question. Naturally, implicit constraints are global events (e.g., the COVID pandemic), or major events for a specific entity (e.g., a prestigious award). Sampling topic entities. To obtain significant events, we start with Gregorian calendar year pages in Wikipedia (e.g., https://en. wikipedia.org/wiki/2023) that list notable events. From the pages for the years 1801 2025, we collect information snippets about such significant events. The entities in these snippets constitute the set of topic entities (href anchors are used for entity linking [17]). In our example in Fig. 2 this set includes Alicia Keys. Retrieving the grounding information snippets. 
We collect snippets about notable events in these year pages, and augment them with salient information about the topic entity from (i) the first five sentences (\u2243first passage) of the entity\u2019s Wikipedia page, (ii) the respective Wikipedia infobox, and (iii) the Wikidata facts. As candidates for the main question part, we consider all information snippets that are retrieved for a topic entity from Wikipedia text, infoboxes and Wikidata, irrespective of their salience. To avoid questions that are trivially answerable without considering the temporal condition, multiple candidate snippets are retrieved for the main question, with different temporal scopes (e.g., a band\u2019s singers from different epochs). This is implemented by measuring semantic similarity among candidates using a SentenceTransformer1 [45]. Creating a pseudo-question. Among the retrieved snippets for an entity, we identify pairs of candidate snippets that can be connected by a temporal conjunction/preposition (\u201cduring\u201d, \u201cafter\u201d and \u201cbefore\u201d). For such a pair, the temporal scopes have to be consistent with the temporal conjunction. A valid pair for the conjunction \u201cduring\u201d would be: \u201cAlicia Keys followed up her debut with The Diary of Alicia Keys, which was released in December 2003.\u201d (main question part from Wikipedia text) and \u201cNorah Jones, award received, Grammy Award for Best New Artist, follows, Alicia Keys, point in time, 2003.\u201d (implicit part from KB). A pseudo-question is created by concatenating the main part with the conjunction and the implicit part. The answer is an entity (not the topic entity) from the main part (The Diary of Alicia Keys). The answer is substituted by the prefix \u201cwhat\u201d followed by the most frequent KB-type of the answer (album in this case). The pseudo-question for the example is: \u201cWhat album Alicia Keys followed up her debut with which was released, during, Norah Jones award received Grammy Award for Best New Artist follows Alicia Keys?\u201d, which is an ungrammatical and unnatural formulation. Rephrasing to a natural question. Therefore, in the last step, we rephrase the pseudo-question to a natural formulation. We use InstructGPT [39] with 8 demonstration examples (pseudo-questions and their natural re-phrasings), to generate the final question2. The pseudo-question of the example is rephrased into the following implicit question: \u201cWhat album did Alicia Keys release when Norah Jones won the Grammy Award for Best New Artist?\u201d 1https://huggingface.co/sentence-transformers/paraphrase-MiniLM-L6-v2 2The prompt is given in Table 8 in the Appendix. Figure 3: Distribution of questions over input source combinations (source for main part ; source for implicit part). Table 1: Basic statistics for Tiq. Sources Wikipedia text, infoboxes, and Wikidata Questions 10,000 (train: 6,000, dev: 2,000, test: 2,000) Avg. question length 17.96 words Avg. no. of question entities 2.45 Unique topic entities covered 10,000 Long-tail topic entities covered 2,542 (with < 20 KB-facts) Prominent topic entities covered 2,613 (with > 500 KB-facts) 4.2 Benchmark Characteristics Topic entities. For creating Tiq (Temporal Implicit Questions) we started with the years 1801-2025 and obtained an initial set of 229,318 entities. From this set, we uniformly sampled 10,000 topic entities based on their frequency, to capture a similar amount of long-tail and more prominent entities (see Table 1 for details). 
These fractions can be configured as required. Since some entity types were over-represented in the calendar year pages (e.g., politicians or countries), we also ensured that individual entity types are not taking up more than 10% of the topic entities. In general, the topic entity set allows to control the domain coverage within the generated implicit questions, by choosing entities of the desired types. We did not specifically configure the proportions to which the individual information sources are used within the questions, since we observed a naturally diverse distribution. Fig. 3 shows the distribution among source combinations for initiating the main and implicit part. The questions are finally split into train (6,000), dev (2,000), and test sets (2,000). Table 1 shows the basic statistics, and Table 2 shows representative questions of the Tiq benchmark. Meta-data. Tiq provides implicit questions and gold answers, as strings as well as canonicalized to Wikipedia and Wikidata. The meta-data includes the information snippets grounding the question, the sources these were obtained from, the explicit temporal value expressed by the implicit constraint, the topic entity, the question entities detected in the snippets, and the temporal signal. The Tiq dataset is available at https://faith.mpi-inf.mpg.de. 5 EXPERIMENTS 5.1 Experimental Setup Benchmarks. We conduct experiments on our new Tiq benchmark and TimeQuestions [24], which has been actively used in recent work on temporal QA. For ordinal questions (e.g., \u201cwhat was the first album by Queen?\u201d) in TimeQuestions, we apply the same method as outlined in Sec. 3, without applying any temporal filtering. Metrics. We use the standard QA metrics precision at 1 (P@1), mean reciprocal rank (MRR), and hit at 5 (Hit@5) [46]. 5 \fWWW \u201924, May 13\u201317, 2024, Singapore, Singapore Zhen Jia, Philipp Christmann, & Gerhard Weikum Table 2: Representative questions from the Tiq benchmark. The sources below indicate the source that was used for populating the [main question part; implicit question part] of the implicit question. 1. Who bought the Gainesville Sun after it was owned by Cowles Media Company? 2. During Colin Harvey\u2019s senior football career, which club was he a member of while he played for the England national football team? 3. Which album released by Chris Brown topped the Billboard 200 when he was performing in Sydney? 4. What television series was Hulk Hogan starring in when he signed with World Championship Wrestling? 5. Who was Bristol Palin\u2019s partner before she participated in the fall season of Dancing with the Stars, and reached the finals, finishing in third place? The New York Times Company Everton F.C. Fortune Thunder in Paradise Levi Johnston [Text; KB] [Infobox; KB] [Text; Infobox] [Text; Text] [Infobox; Text] 6. During the onset of the COVID19 pandemic, who was the New York City head of government? 7. Who was the chief executive officer at Robert Bosch GmbH before revenue reached \u20ac78.74 billion? 8. After graduating from the Rostovon-Don College of Economics and Finance, which political party did Gyula Horn join? 9. Which national football team did Carlos Alberto Torres manage before joining Flamengo? 10. What university did Robert Lee Moore work for after Northwestern University? 
Bill de Blasio Volkmar Denner Hungarian Working People\u2019s Party Oman national football team University of Pennsylvania [KB; Text] [KB; Infobox] [Infobox; Text] [Infobox; Infobox] [KB; KB] Table 3: Main results comparing the performance of Faith against baselines on the test sets of Tiq and TimeQuestions. Benchmark \u2192 Tiq TimeQuestions Method \u2193 P@1 MRR Hit@5 P@1 MRR Hit@5 InstructGpt [39] 0.237 n/a n/a 0.224 n/a n/a Gpt-4 [37] 0.236 n/a n/a 0.306 n/a n/a Uniqorn [42] 0.236 0.255 0.277 0.331 0.409 0.538 Unik-Qa [38] 0.425 0.480 0.540 0.424 0.453 0.486 Explaignn [13] 0.446 0.584 0.765 0.525 0.587 0.673 TempoQR [31] 0.011 0.018 0.022 0.438 0.465 0.488 CronKGQA [48] 0.006 0.011 0.014 0.395 0.423 0.450 Exaqt [24] 0.232 0.378 0.587 0.565 0.599 0.664 Faith (Proposed) 0.491 0.603 0.752 0.535 0.582 0.635 Un-Faith 0.459 0.604 0.799 0.571 0.640 0.724 Baselines. We compare Faith with a suite of baselines, covering a diverse range of competitors: \u2022 Generative LLMs. We compare with InstructGpt [39] (\u201ctextdavinci-003\u201d) and Gpt-4 [37] (\u201cgpt-4\u201d) using the OpenAI API3. We tried different prompts, and found the following to perform best: \u201cPlease answer the following question by providing the crisp answer entity, date, year, or number.\u201d. For computing P@1, we check whether the generated answer string matches with the label or any alias of the gold answer. If this is the case, P@1 is 1, else 0. Other (ranking) metrics are not applicable for LLMs. \u2022 Heterogeneous QA methods. Further, we compare against a range of recent general-purpose methods for heterogeneous QA: Uniqorn [42], UniK-Qa [38], and the vanilla Explaignn [13]. \u2022 Temporal QA methods. We also compare with state-of-the-art methods for temporal QA: TempoQR (TempoQR-Hard) [31], CronKGQA [48], and Exaqt [24]. Finally, we show results for a variant of our approach, which does not prune out evidence temporally-inconsistent with the temporal constraint, i.e. drops the temporal pruning component. We term this variant Un-Faith. Configuration. Wikidata [55] is used as the KB for Faith and all baselines. We use Wikipedia text, tables and infoboxes as additional information sources for methods operating over heterogeneous sources. The BART models are initialized via Hugging Face4. We use AdamW as optimizer with a learning rate of 5\u00d710\u22125, batch 3https://platform.openai.com 4https://huggingface.co size of 10, weight decay of 0.01, 5 epochs, and 500 warm-up steps. Explaignn is run using the public code5, retaining the original settings and parameters for optimization. For Faith, we choose the candidate at rank 1 as the answer for intermediate questions in the implicit question resolver. In case too many evidences are obtained as input to the answering stage, we consider the top-100 evidences as computed by a BERT-based reranker [36]. Further detail is given in the Appendix A.4. We follow an epoch-wise evaluation strategy for each module and baseline, and take the version with the best performance on the respective dev set. All training processes and experiments are run on a single GPU (NVIDIA Quadro RTX 8000, 48 GB GDDR6). 5.2 Main Results Answering performance of Faith and baselines on TimeQuestions and on Tiq are in Table 3. Faith outperforms baselines on Tiq. The main insight from Table 3 is that Faith surpasses all baselines on the Tiq dataset for P@1, which is the most relevant metric, demonstrating the benefits of our proposed method for answering implicit temporal questions. 
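For reference, the per-question computation of the reported metrics can be sketched as follows; these are the standard definitions of P@1, MRR and Hit@5, not code specific to Faith.

```python
def rank_metrics(ranked_answers, gold_answers):
    """P@1, MRR and Hit@5 for one question, given a ranked answer list and the
    set of gold answers (labels or aliases)."""
    ranks = [i for i, a in enumerate(ranked_answers, start=1) if a in gold_answers]
    first = ranks[0] if ranks else None
    return {
        "P@1": 1.0 if first == 1 else 0.0,
        "MRR": 1.0 / first if first else 0.0,
        "Hit@5": 1.0 if first is not None and first <= 5 else 0.0,
    }
```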
Temporal QA methods operating over KBs lack the required coverage on the Tiq dataset, and perform worse than general-purpose QA methods operating over heterogeneous sources. Explaignn comes close to the performance of Faith, and even slightly improves on the Hit@5 metrics. Note, however, that Explaignn and all other baselines do not verify that temporal constraints are met during answering. Thus, the most prominent among answer candidates may simply be picked up, even if no temporal information is provided or matching. Such possibly \u201caccidental\u201d and unfaithful answers are, by design, not considered by Faith. Trade-off between faithfulness and answering performance. Results for Un-Faith illustrate the effect of this phenomenon on our approach: especially the MRR and Hit@5 results are substantially improved. Consequently, Un-Faith outperforms all competitors on TimeQuestions. However, its answers are not always faithfully grounded in evidence sources. These results emphasize the trade-off between faithfulness and answering performance. Faith shows robust performance on TimeQuestions. Faith also shows strong performance on the TimeQuestions benchmark, on which it outperforms all baselines on P@1, except for Exaqt. This indicates the robustness of Faith across different datasets. 5https://github.com/PhilippChr/EXPLAIGNN 6 \fFaithful Temporal Question Answering over Heterogeneous Sources WWW \u201924, May 13\u201317, 2024, Singapore, Singapore Table 4: Comparing the faithfulness of Faith and Un-Faith for correct answers, and how often temporal constraints are violated or ignored. Benchmark \u2192 Tiq TimeQuestions Temporally Temporally Method \u2193 Faithful Unfaithful Faithful Unfaithful Faith 0.95 0.00 0.94 0.01 Un-Faith 0.90 0.08 0.87 0.13 Existing methods for temporal QA show major performance gaps between the two benchmarks: the P@1 of the strongest method on TimeQuestions, Exaqt, substantially drops from 0.565 at P@1 to 0.232 on the Tiq benchmark. Note that all methods are trained on the specific benchmark, if applicable. LLMs fall short on temporal questions. Another key insight from Table 3 is that current LLMs are clearly not capable of answering temporal questions. InstructGpt and Gpt-4 can merely answer \u224323-30% of the questions correctly, and are constantly underperforming Faith and baselines operating over heterogeneous sources. One explanation is that reasoning with continuous variables, such as time, is a well-known weakness of LLMs [15]. 5.3 Faithfulness Evaluation Our main results in Table 3 indicate that ignoring the temporal condition of the question can yield improvements on automatic metrics (compare performance of Faith vs. Un-Faith on TimeQuestions). However, we observe that this can lead to critical failure cases of QA systems and sometimes boils down to lucky guesses of the answer based on priors (e.g., prominence of an answer candidate). Faith refrains to answer in absence of consistent evidence. If there is no temporal information associated with the evidence of candidate answers, or the temporal information does not satisfy the temporal constraint, Faith will refuse answering the question. For example, for the question \u201cWho did Lady Jane Grey marry on the 25th of May 1533?\u201d, there is no answer satisfying the temporal constraint because Lady Jane Grey did not marry anyone on the 25th of May 1533, since she was only born four years later in 1937. 
However, all of the baselines provide an answer to the question, without indicating that the temporal constraint is violated. Since questions without a temporally-consistent answer are not available at large scale, we randomly sample 500 explicit questions from TimeQuestions, and replace the temporal value with a random date (e.g., \u201c12 October 6267\u201d). None of the resulting questions has a temporally-consistent answer. As expected, the competitors still provide answers6. In contrast, Faith successfully refrained from answering for 467 of the 500 questions (93.4%). Upon investigating the failure cases, we noticed that the date recognition identifies four-digit numbers as years matching with the constraint (e.g., in the infobox entry \u201cVeysonnaz, SFOS number, 6267\u201d). Fallback to Un-Faith. Completely refraining from answering could also be sub-optimal: the user might have made a typo (e.g., \u201cMay 1533\u201d instead of \u201cMay 1553\u201d). We investigated to fall back to UnFaith in such scenarios, which could be indicated to end users with an appropriate warning. Performance on both datasets was slightly 6Except for the LLMs for which we are not able to investigate the behavior at scale, since they would often generate longer texts. Table 5: Ablation study using different source combinations as input for Faith on dev sets. Note that Faith is trained using all sources as input for all cases. Benchmark \u2192 Tiq TimeQuestions Method \u2193 P@1 MRR Hit@5 P@1 MRR Hit@5 KB 0.293 0.368 0.468 0.425 0.464 0.513 Text 0.194 0.262 0.351 0.224 0.269 0.320 Infoboxes 0.169 0.223 0.296 0.093 0.117 0.149 Tables 0.032 0.057 0.083 0.078 0.094 0.114 KB+Text 0.429 0.527 0.649 0.520 0.567 0.626 KB+Tables 0.299 0.379 0.480 0.435 0.479 0.536 KB+Infoboxes 0.384 0.488 0.634 0.443 0.487 0.543 Text+Tables 0.196 0.267 0.362 0.252 0.298 0.350 Text+Infoboxes 0.283 0.372 0.490 0.251 0.299 0.355 Tables+Infoboxes 0.179 0.244 0.331 0.143 0.174 0.208 All sources 0.497 0.610 0.756 0.538 0.583 0.639 Table 6: Ablation studies of Faith on dev sets. Benchmark \u2192 Tiq TimeQuestions Method \u2193 P@1 P@1 Faith 0.497 0.538 w/o temporal pruning 0.443 0.573 w/o implicit question resolver 0.467 0.559 w/o GNN-based answering 0.316 0.399 improved: the P@1 metric increased from 0.491 to 0.492 on Tiq and from 0.535 to 0.539 on TimeQuestions. We further investigated to fall back to Un-Faith in case Faith answered incorrectly. The P@1 metric was improved substantially on both datasets: from 0.491 to 0.622 on Tiq and from 0.535 to 0.653 on TimeQuestions. Manual analysis. Finally, we investigated the faithfulness of correct answers provided by Faith and Un-Faith, to understand how often the question is answered correctly even though the evidence is not faithful to the question. To analyze this qualitatively, we randomly selected 100 questions (from each benchmark) for which both Faith and Un-Faith answered correctly, and then manually verified the faithfulness, based on the definition in Sec. 2. Results are in Table 4. Faith provides faithful answers and evidence in 95%/94% of the time. By design, answers are faithful to the temporal constraints in the question (except for one question which specifies two different temporal constraints). In comparison, Un-Faith violates or ignores the temporal condition in 8%/13% of the cases. 
For example, to answer the question \u201cWhat movies starring Taylor Lautner in 2011?\u201d (answer: Abduction), the evidence for Faith is \u201cTaylor Lautner, Year is 2011, Title is Abduction, Role is Nathan Harper\u201d (from table), while the evidence for Un-Faith is \u201cAbduction, cast member, Taylor Lautner\u201d (from KB). Even though both pieces of evidence mention the correct answer Abduction, Un-Faith fails to satisfy the temporal constraint (\u201cin 2011\u201d) with its evidence. 5.4 In-depth Analysis Integrating heterogeneous sources is decisive. We further investigated the effect of integrating heterogeneous sources into Faith, and tested giving each individual source independently, and their pairwise combinations as input, in comparison to the default setting with \"All sources\". Results are in Table 5. Each information 7 \fWWW \u201924, May 13\u201317, 2024, Singapore, Singapore Zhen Jia, Philipp Christmann, & Gerhard Weikum Table 7: Anecdotal examples that Faith answered correctly in Tiq and TimeQuestions. Evidence shows the supporting information snippets along with their source provided in brackets. The part mentioning the predicted answer is in bold, and the detected temporal values are underlined. For the first example from the Tiq benchmark, we show the answering process of the intermediate question, which can be used by end users to verify the entire answer derivation of the system. Benchmark Tiq Question After managing FC Nantes, which football club did Antoine Raab take on next? Answer Stade Lavallois TSF \u27e8question entity: \u201cAntoine Raab, FC Nantes football\u201d, question relation: \u201cAfter managing which club did take on next\u201d, expected answer type: \u201cassociation football club\u201d, temp. signal: after, temp. category: implicit, temp. value: [1946, 1949] \u27e9 Evidence Antoine Raab, Managerial career, 1949\u20131950, Stade Lavallois. (from Infobox) Intermediate questions (i) When Antoine Raab managed FC Nantes start date? (ii) When Antoine Raab managed FC Nantes end date? Answers (to int. questions) (i) 1946, (ii) 1949 TSFs (for int. questions) (i) \u27e8question entity: \u201cFC Nantes, start, Antoine Raab\u201d, question relation: \u201cWhen managed date\u201d, expected answer type: \u201cyear\u201d, temp. signal: _; temp. category: non-implicit; temp. value: _ \u27e9 (ii) \u27e8question entity: \u201cFC Nantes, end, Antoine Raab\u201d, question relation: \u201cWhen managed date\u201d, expected answer type: \u201cyear\u201d, temp. signal: _; temp. category: non-implicit; temp. value: _ \u27e9 Evidence (for int. questions) (i, ii) Antoine Raab, Managerial career, 1946\u20131949, FC Nantes. (from Infobox) (ii) Antoine Raab, After the liberation of Nantes in 1944 Raab joined FC Nantes and played for the club until 1949. (from Text) Benchmark TimeQuestions Question What award did Thomas Keneally receive in the year 1982? Answer Booker Prize TSF \u27e8question entity: \u201cThomas Keneally\u201d, question relation: \u201cWhat award did receive in the year 1982\u201d, expected answer type: \u201cscience award\u201d, temp. signal: overlap, temp. category: non-implicit, temp. value: 1982 \u27e9 Evidence Man Booker Prize, winner, Thomas Keneally, point in time, 1982, for work, Schindler\u2019s Ark. (from KB) Thomas Keneally, Awards is Booker Prize, is Schindler\u2019s Ark, winner 1982. 
(from table) Thomas Keneally, He is best known for his non-fiction novel Schindler\u2019s Ark, the story of Oskar Schindler\u2019s rescue of Jews during the Holocaust, which won the Booker Prize in 1982. (from Text) source contributes to the performance of Faith, and integrating more information sources consistently enhances all metrics. Ablation studies. We tested variations of our pipeline on the dev sets. Table 6 shows results for Un-Faith (w/o temporal pruning), results without the implicit time resolver, and results with a Seq2seq model for answering (we used BART) instead of the GNN-based approach. Using a GNN-based answering approach plays a crucial role, and enhances not only answering performance, but also explainability. The implicit question resolver is decisive on Tiq, but slightly decreases performance on TimeQuestions. Un-Faith also shows strong performance on the dev sets. However, all modules contribute to the explainability and faithfulness of our approach. Anecdotal examples. Table 7 shows sample cases for which Faith provided the correct answer, and illustrates the answer derivation process providing traceable evidence for end users. Error analysis. To better understand failure cases, we conducted an error analysis measuring the answer presence (i.e. whether the gold answer is among answer candidates) throughout the pipeline. We identified the following error cases and list their percentage among all failure cases for Tiq and TimeQuestions, respectively: (i) the answer was not found in the initial retrieval stage (3.14/29.89), (ii) the answer is lost during temporal pruning (22.00/25.81), (iii) the answer is lost during scoring/graph shrinking (8.45/10.33), (iv) the answer is not considered among top-5 answers (15.13/12.47), (v) the answer is among top candidates but not at rank 1 (51.28/21.51). 6 RELATED WORK General-purpose QA. Question answering has extensive work using single sources like KBs (e.g., [2, 62, 64]) or text (e.g., [7, 21, 44]). Some works have shown that integrating different sources can substantially improve performance [8, 18, 47, 51, 52, 60, 61]. UnikQa [38] verbalizes snippets from a KB, text, tables and infoboxes, as input to a Fusion-in-decoder (FiD) model [21] for answer generation. Udt-QA [29] improved the verbalization technique. Explaignn [13] constructs graphs among such verbalized snippets, and applies graph neural networks for computing answers and explanatory evidence. None of these methods is geared for temporal questions. Another direction is to directly apply large language models (LLMs) for QA [3, 14, 41, 43]. However, LLMs cannot present traceable provenance for the generated outputs, falling short on faithfulness and explainability [1, 30, 33]. Also, LLMs struggle with reasoning on temporal conditions [15]. Temporal QA. Prior work that specifically targets temporal QA [9, 10, 16, 23\u201325, 28, 31, 34, 48\u201350, 57, 58, 63], can largely be divided into work using a KB (e.g., [24, 31, 34]), and work using text (e.g., [9, 35]). Methods operating over KBs, include template-based [16, 23, 34], KBembedding-based [10, 31, 48, 58], and graph-based methods [24, 50, 63]. Methods using textual inputs typically involve an extractive or generative reader [9, 35]. The three methods [24, 31, 48] represent the state-of-the-art on temporal QA. However, temporal constraints are handled solely in the latent space, without explicitly (or faithfully) pruning out temporally inconsistent answer candidates. 
Other approaches are based on handcrafted rules, and hence bound to fail for unseen question patterns (e.g., [23]). None of the existing work on temporal QA has considered incorporating heterogeneous sources. Temporal KBs. There is substantial work on temporal KBs [4, 19, 26, 32, 40, 56, 59], to assign temporal scopes to KB facts. Advances on the KB itself benefits QA, but is an orthogonal direction. 7" + }, + { + "url": "http://arxiv.org/abs/2109.08935v1", + "title": "Complex Temporal Question Answering on Knowledge Graphs", + "abstract": "Question answering over knowledge graphs (KG-QA) is a vital topic in IR.\nQuestions with temporal intent are a special class of practical importance, but\nhave not received much attention in research. This work presents EXAQT, the\nfirst end-to-end system for answering complex temporal questions that have\nmultiple entities and predicates, and associated temporal conditions. EXAQT\nanswers natural language questions over KGs in two stages, one geared towards\nhigh recall, the other towards precision at top ranks. The first step computes\nquestion-relevant compact subgraphs within the KG, and judiciously enhances\nthem with pertinent temporal facts, using Group Steiner Trees and fine-tuned\nBERT models. The second step constructs relational graph convolutional networks\n(R-GCNs) from the first step's output, and enhances the R-GCNs with time-aware\nentity embeddings and attention over temporal relations. We evaluate EXAQT on\nTimeQuestions, a large dataset of 16k temporal questions we compiled from a\nvariety of general purpose KG-QA benchmarks. Results show that EXAQT\noutperforms three state-of-the-art systems for answering complex questions over\nKGs, thereby justifying specialized treatment of temporal QA.", + "authors": "Zhen Jia, Soumajit Pramanik, Rishiraj Saha Roy, Gerhard Weikum", + "published": "2021-09-18", + "updated": "2021-09-18", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.CL" + ], + "main_content": "INTRODUCTION Motivation. Questions and queries with temporal information needs [7, 8, 14, 20, 40] represent a substantial use case in search. For factual questions, knowledge graphs (KGs) like Wikidata [75], Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s). CIKM \u201921, November 1\u20135, 2021, Virtual Event, QLD, Australia \u00a9 2021 Copyright held by the owner/author(s). ACM ISBN 978-x-xxxx-xxxx-x/YY/MM. https://doi.org/10.1145/nnnnnnn.nnnnnnn Figure 1: Wikidata excerpt showing the relevant KG zone for the question where did obama\u2019s children study when he became president? with answer Sidwell Friends School. YAGO [64], or DBpedia [10], have become the go-to resource for search engines, tapping into structured facts on entities. While question answering over KGs [1, 12, 13, 16, 26, 55, 72, 77, 79] has been a major topic, little attention has been paid to the case of temporal questions. Such questions involve explicit or implicit notions of constraining answers by associated timestamps in the KG. This spans a spectrum, starting from simpler cases such as when was obama born?, where did obama live in 2001?, and where did obama live during 9/11? 
to more complex temporal questions like: where did obama\u2019s children study when he became president? Complex questions must consider multi-hop constraints (Barack Obama \u21a6\u2192child \u21a6\u2192Malia Obama, Sasha Obama \u21a6\u2192educated at \u21a6\u2192 Sidwell Friends School), and reason on the overlap of the intersection of time points and intervals (the start of the presidency in 2009 with the study period at the school, 2009 \u2013 2016). A simplified excerpt of the relevant zone in the Wikidata KG necessary for answering the question, is shown in Fig. 1. This paper addresses these challenges that arise for complex temporal questions. Limitations of state-of-the-art. Early works on temporal QA over unstructured text sources [5, 18, 33, 53, 56, 58, 71] involve various forms of question and document parsing, but do not carry over to KGs with structured facts comprised of entities and predicates. The few works specifically geared for time-aware QA over KGs include [23, 38, 76]. [38] uses a small set of hand-crafted rules for question decomposition and temporal reasoning. This approach needs human experts for the rules and does not cope with complex questions. [23] creates a QA collection for KGs that capture events and their timelines. A key-value memory network in [76] includes time information from KGs for answering simple questions. arXiv:2109.08935v1 [cs.IR] 18 Sep 2021 \fCIKM \u201921, November 1\u20135, 2021, Virtual Event, QLD, Australia Jia et al. Approach. We present Exaqt: EXplainable Answering of complex Questions with Temporal intent, a system that does not rely on manual rules for question understanding and reasoning. Exaqt answers complex temporal questions in two steps: (i) Identifying a compact, tractable answer graph that contains all cues required for answering the question, based on densesubgraph algorithms and fine-tuned BERT models; and (ii) A relational graph convolutional network (R-GCN) [66] to infer the answer in the graph, augmented with signals about time. The two stages work as follows (partly illustrated in Fig. 1). Stage 1: Answer graph construction. Exaqt fetches all KG facts of entities mentioned in the question (Barack Obama, President of the United States: dashed outline boxes), as detected by off-theshelf NERD systems [30, 36, 44]. The resulting noisy set of facts is distilled into a tractable set by means of a fine-tuned BERT model (admitting information about the children Malia and Sasha, but not Michelle Obama). To construct a KG subgraph of all questionrelevant KG items and their interconnections from this set, Group Steiner Trees (GST) [22, 47, 61] are computed (dark orange nodes, terminals or keyword matches underlined: \u201cobama\u201d, \u201cpresident\u201d, \u201cchild\u201d, \u201ceducated at\u201d) and completed (light orange nodes). The last and decisive step at this point augments this candidate answer graph with pertinent temporal facts, to bring in cues (potentially multiple hops away from the question entities) about relevant dates, events and time-related predicates. To this end, we use an analogous BERT model for identifying question-relevant temporal facts (blue nodes: educational affiliations of Malia and Sasha and their dates). The resulting answer graph is the input of the second stage. Stage 2: Answer prediction by R-GCN. 
Inspired by the popular GRAFT-Net model [66] and related work [59, 65], we construct an R-GCN that learns entity embeddings over the answer graph and casts answer prediction into a node classification task. However, R-GCNs as used in prior works are ignorant of temporal constraints [6]. To overcome this obstacle, we augment the R-GCN with time-aware entity embeddings, attention over temporal relations, and encodings of timestamps [80], temporal signals [60], and temporal question categories [38]. In our running example, temporal attention helps Exaqt focus on educated at as a question-relevant relation (partly shaded nodes). The time-enhanced representation of Barack Obama flows through the R-GCN (thick edges) and boosts the likelihood of Sidwell Friends School as the answer (node with thick borders), which contains 2009 (in bold) among its temporal facts. By producing such concise KG snippets for each question (as colored in Fig. 1), Exaqt yields explainable evidence for its answers. Contributions. This work makes the following contributions: \u2022 We propose Exaqt, the first end-to-end system for answering complex temporal questions over large-scale knowledge graphs; \u2022 Exaqt applies fine-tuned BERT models and convolutional graph networks to solve the specific challenges of identifying relevant KG facts for complex temporal questions; \u2022 We compile and release TimeQuestions, a benchmark of about 16\ud835\udc58temporal questions (examples in Table 1); \u2022 Experiments over the full Wikidata KG show the superiority of Exaqt over three state-of-the-art complex KG-QA baselines. All resources from this project are available at https://exaqt.mpiinf.mpg.de/ and https://github.com/zhenjia2017/EXAQT. Category Question who won oscar for best actress 1986? Explicit which movie did jaco van dormael direct in 2009? what currency is used in germany 2012? who was king of france during the ninth crusade? Implicit what did thomas jefferson do before he was president? what club did cristiano ronaldo play for after manchester united? what was the first film julie andrews starred in? Ordinal what was the second position held by pierre de coubertin? who is elizabeth taylor\u2019s last husband? what year did lakers win their first championship? Temp. Ans. when was james cagney\u2019s spouse born? when was the last time the orioles won the world series? Table 1: Sample temporal questions from TimeQuestions. 2 CONCEPTS AND NOTATION We now define the salient concepts that underlie Exaqt. Knowledge graph. A knowledge graph (aka knowledge base) is a collection of facts \ud835\udc39organized as a set of triples. It can be stored as an RDF database of such triples, or equivalently as a graph with nodes and edges. Examples are Wikidata [75], YAGO [64], DBpedia [10], Freebase [17] and industrial KGs. When stored as a graph, edges are directed: subject \u21a6\u2192 predicate \u21a6\u2192object. Subjects and objects are always nodes, while predicates (aka relations) often become edge labels. Fact. A fact \ud835\udc53\u2208\ud835\udc39can either be binary, containing a subject and an object connected by a predicate, or \ud835\udc5b-ary, combining multiple items via main predicates and qualifier predicates. An example of a binary fact is , where subjects are entities (Barack Obama), and objects may be entities (Malia Obama), literals (constants such as dates in ), or types aka classes (private school in ). We use the terms predicate and relation interchangeably in this text. 
An \ud835\udc5b-ary fact combines several triples that belong together, such as (see Fig. 1). position held is the main predicate, President of the US is the main object, while the remaining data are pairs. \ud835\udc5b-ary facts are of vital importance in temporal QA, with a large fraction of temporal information in modern KGs being stored as qualifiers. One way of representing qualifiers in a KG is shown in Fig. 1, via paths from the main predicate to the qualifier predicate and on to the qualifier object. Temporal fact. We define a temporal fact \ud835\udc61\ud835\udc53\u2208\ud835\udc39as one where the main object or any of the qualifier objects is a timestamp. Examples are (binary), or, (\ud835\udc5b-ary). Temporal predicate. We define a temporal predicate as one that can have a timestamp as its direct object or one of its qualifier objects. Examples are date of birth and position held. Temporal question. A temporal question is one that contains a temporal expression or a temporal signal, or whose answer is of temporal nature [37]. Examples of temporal expressions are \u201cin the year 1998\u201d, \u201cObama\u2019s presidency\u201d, \u201cNew Year\u2019s Eve\u201d, etc. which indicate explicit or implicit temporal scopes [41]. Temporal signals [60] are \fComplex Temporal Question Answering on Knowledge Graphs CIKM \u201921, November 1\u20135, 2021, Virtual Event, QLD, Australia Figure 2: An overview of the two-stage Exaqt pipeline. markers of temporal relations (BEFORE, AFTER, OVERLAP, ...) [6] and are expressed with words like \u201cprior to, after, during, ...\u201d that indicate the need for temporal reasoning. In our models, a question \ud835\udc5eis represented as a set of keywords <\ud835\udc5e1,\ud835\udc5e2, . . .\ud835\udc5e|\ud835\udc5e|>. Temporal question categories. Temporal questions fall into four basic categories [37]: (i) containing explicit temporal expressions (\u201cin 2009\u201d), (ii) containing implicit temporal expressions (\u201cwhen Obama became president\u201d), (iii) containing temporal ordinals (\u201cfirst president\u201d), and (iv) having temporal answers (\u201cWhen did ...\u201d). Table 1 gives several examples of temporal questions. A question may belong to multiple categories. For example, what was the first film julie andrews starred in after her divorce with tony walton? contains both an implicit temporal expression and a temporal ordinal. Answer. An answer to a temporal question is a (possibly singleton) set of entities or literals, e. g., {Chicago University Lab School, Sidwell Friends School} for Where did Malia Obama study before Harvard?, or {08-2017} for When did Malia start at Harvard? Answer graph. An answer graph is a subset of the KG that contains all the necessary facts for correctly answering the question. 3 CONSTRUCTING ANSWER GRAPHS Fig. 2 is an overview of Exaqt, with two main stages: (i) answer graph construction (Sec. 3), and (ii) answer prediction (Sec. 4). 3.1 Finding question-relevant KG facts NERD for question entities. Like most QA pipelines [16, 54], we start off by running named entity recognition and disambiguation (NERD) [36, 44, 73] on the input question (where did obama\u2019s children study when he became president?). NERD systems identify spans of words in the question as mentions of entities (\u201cobama\u201d, \u201cpresident\u201d), and link these spans to KG items or Wikipedia articles (which can easily be mapped to popular KGs). 
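For illustration, the binary and n-ary facts and the temporal-fact test defined above can be represented as in the following sketch; the field names and the convention of typing timestamps as dates are simplifications, not Exaqt's internal format.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Any, List, Tuple

@dataclass
class Fact:
    """A KG fact; qualifiers are (qualifier predicate, qualifier object) pairs,
    so a binary fact simply has an empty qualifier list."""
    subject: str
    predicate: str
    obj: Any
    qualifiers: List[Tuple[str, Any]] = field(default_factory=list)

def is_temporal_fact(f: Fact) -> bool:
    """A temporal fact has a timestamp as main object or as some qualifier object."""
    return any(isinstance(o, date) for o in [f.obj] + [q[1] for q in f.qualifiers])

# n-ary example from the text:
obama_presidency = Fact("Barack Obama", "position held",
                        "President of the United States",
                        qualifiers=[("start date", date(2009, 1, 20)),
                                    ("follows", "George W. Bush")])
assert is_temporal_fact(obama_presidency)
```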
The facts of these linked entities (Barack Obama, President of the United States) provide us with a zone in the KG to start looking for the answer. NERD is a critical cog in the QA wheel: entity linking errors leave the main QA pipeline helpless with respect to answer detection. To mitigate this effect, we use two different systems, TagMe and ELQ [30, 44], to boost answer recall. Complex questions often contain multiple entity mentions, and accounting for two NERD systems, we could easily have 2 \u22124 different entities per question. The total number of associated facts can thus be several hundreds or more. To reduce this large and noisy set of facts to a few question-relevant ones, we fine-tune BERT [24] as follows. Training a classifier for question-relevant facts. For each question in our training set, we run NERD and retrieve all KG facts of the detected entities. We then use a distant supervision mechanism: out of these facts, the ones that contain the gold answer(s) are labeled as positive instances. While several complex questions may not have their answer in the facts of the question entities (multi-hop cases), the ones that do, comprise a reasonable amount of training data for our classifier for question-relevance. Note that facts with qualifiers are also retrieved for the question entities (complete facts where the question entity appears as a subject, object, or qualifier object): this increases our coverage for obtaining positive examples. For each positive instance, we randomly sample five negative instances from the facts that do not contain the answer. Sampling question-specific negative instances helps learn a more discriminative classifier, as all negative instances are guaranteed to contain at least one entity from the question (say, ). Using all facts that do not contain an answer would result in severe class imbalance, as this is much higher than the number of positive instances. We then pool together the paired positive and negative instances for all training questions. The fact in this pair is now verbalized as a natural language sentence by concatenating its constituents; qualifier statements are joined using \u201cand\u201d [50]. For example, the full fact for Obama\u2019s marriage (a negative instance) is: . This has two qualifiers, and would be verbalized as \u201cBarack Obama spouse Michelle Obama and start date 03-10-1992 and place of marriage Trinity United Church of Christ.\u201d. The questions paired with the verbalized facts, along with the binary ground-truth labels, are fed as training input to a sequence pair classification model for BERT. Applying the classifier. Following [24], the question and the fact are concatenated with the special separator token [SEP] in between, and the special classification token [CLS] is added in front of this sequence. The final hidden vector corresponding to [CLS], denoted by \ud835\udc6a\u2208R\ud835\udc3b(\ud835\udc3bis the size of the hidden state), is considered to be the accumulated representation. Weights \ud835\udc7eof a classification layer are the only parameters introduced during fine-tuning, where \ud835\udc7e\u2208R\ud835\udc3e\u00d7\ud835\udc3b, where \ud835\udc3eis the number of class labels (\ud835\udc3e= 2 here, fact is question-relevant or not). log(softmax(\ud835\udc6a\ud835\udc7e\ud835\udc7b)) is used as the classification loss function. Once the classifier is trained, given a new pair, it outputs the probability (and the label) of the fact being relevant for the question. 
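The verbalization and scoring just described can be sketched with standard Hugging Face components. The checkpoint path below is a placeholder for the fine-tuned classifier, the Fact structure is the simplified one sketched earlier, and label index 1 is assumed to denote "question-relevant"; none of this is the authors' released code.

```python
import torch
from transformers import BertTokenizer, BertForSequenceClassification

def verbalize(fact) -> str:
    """Concatenate the fact's constituents; qualifier statements are joined by 'and'."""
    parts = [f"{fact.subject} {fact.predicate} {fact.obj}"]
    parts += [f"{qpred} {qobj}" for qpred, qobj in fact.qualifiers]
    return " and ".join(parts) + "."

# Placeholder path for the fine-tuned question-relevance classifier (K = 2 labels).
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("path/to/relevance-classifier",
                                                      num_labels=2)
model.eval()

def relevance_probability(question: str, fact) -> float:
    """Probability that the verbalized fact is relevant for the question
    (BERT sequence-pair classification over the [CLS] representation)."""
    inputs = tokenizer(question, verbalize(fact), return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    # Index 1 is assumed to be the "question-relevant" class.
    return torch.softmax(logits, dim=-1)[0, 1].item()
```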
We make this prediction for all candidate facts pertinent to a question, and sort them in descending order of this question relevance likelihood. We pick the top scoring facts {\ud835\udc53\ud835\udc5e\ud835\udc5f\ud835\udc52\ud835\udc59} from here as our question-relevant set. 3.2 Computing compact subgraphs The set of facts {\ud835\udc53\ud835\udc5e\ud835\udc5f\ud835\udc52\ud835\udc59} contains question-relevant facts but is not indicative as to which are a set of coherent KG items that matter for this question, and how they are connected. To this end, we induce a graph as shown in Fig. 1, from the above set of facts where each KG item (entity, predicate, type, literal) becomes a node of its own. Edges run between components of the same fact in the direction mandated in the KG: subject \u21a6\u2192predicate \u21a6\u2192object for the main fact, and subject \u21a6\u2192predicate \u21a6\u2192qualifier predicate \u21a6\u2192qualifier object for (optional) qualifiers. Injecting connectivity. BERT selects {\ud835\udc53\ud835\udc5e\ud835\udc5f\ud835\udc52\ud835\udc59} from the facts of a number of entities as detected by our NERD systems. These entities may not be connected to each other via shared KG facts. However, a connected graph is needed so that our subsequent GST and R-GCN \fCIKM \u201921, November 1\u20135, 2021, Virtual Event, QLD, Australia Jia et al. algorithms can produce the desired effects. To inject connectivity in the graph induced from BERT facts, we compute the shortest KG path between every pair of question entities, and add these paths to our graph. In case of multiple paths of same length between two entities, they are scored for question-relevance as follows. A KG path is set of facts: a path of length one is made up of one fact (Barack Obama \u21a6\u2192position held \u21a6\u2192President of the United States), a path of length two is made up of two facts (Barack Obama \u21a6\u2192country \u21a6\u2192United States of America \u21a6\u2192office held by head of state \u21a6\u2192President of the United States), and so on. Each candidate path is verbalized as a set of facts (a period separating two facts) and encoded with BERT [39], and so is the question. These BERT encodings are stored in corresponding [CLS] tokens. We compute the cosine similarity of [CLS](question) with [CLS](path), and add the path with the highest cosine similarity to our answer graph. GST model. Computing Group Steiner Trees (GST) [47, 52, 61, 67] has been shown to be an effective mechanism in identifying queryspecific backbone structures in larger graphs, for instance, in keyword search over database graphs [4, 27]. Given a subset of nodes in the graph, called terminals, the Steiner Tree (ST) is the lowestcost tree that connects all terminals. This reduces to the minimum spanning tree problem when all nodes of the graph are terminals, and to the shortest path problem when there are only two terminals. The GST models a more complex situation where the terminals are arranged into groups or sets, and it suffices to find a Steiner Tree that connects at least one node from each group. This scenario fits our requirement perfectly, where each question keyword can match multiple nodes in the graph, and naturally induces a terminal group. Finding a tree that runs through each and every matched node is unrealistic, hence the group model. Edge costs. An integral part of the GST problem is how to define edge costs. 
Since edges emanate from KG facts, we leverage questionrelevance scores assigned by the classifier of Sec. 3.1: \ud835\udc35\ud835\udc38\ud835\udc45\ud835\udc47(\ud835\udc53\ud835\udc5e\ud835\udc5f\ud835\udc52\ud835\udc59) \u2208 [0, 1], converted to edge costs 1 \u2212\ud835\udc35\ud835\udc38\ud835\udc45\ud835\udc47(\ud835\udc53\ud835\udc5e\ud835\udc5f\ud835\udc52\ud835\udc59) \u2208[0, 1]. GST algorithm. There are good approximation algorithms for GSTs [45, 67], but QA needs high precision. Therefore, we adopted the fixed-parameter-tractable exact algorithm by Ding et al. [27]. It iteratively grows and merges smaller trees over the bigger graph to arrive at the minimal trees. Only taking the best tree can be risky in light of spurious connections potentially irrelevant to the question. Thus, we used a top-\ud835\udc58variant that is naturally supported by the dynamic programming algorithm of [27]. GST completion. As shown in Fig. 1, the GST yields a skeleton connecting the most relevant question nodes. To transform this into a coherent context for the question, we need to complete it with facts from where this skeleton was built. Nodes introduced due to this step are shown in light orange in the figure: dates about the presidency, Obama\u2019s children, and the (noisy) fact about Obama\u2019s education. In case the graph has multiple connected components (still possible as our previous connectivity insertions worked only pairwise over entities), top-\ud835\udc58GSTs are computed for each component and the union graph is used for this fact completion step. Example. We show a simplified example in Fig. 1, where the node Barack Obama matches the question keyword \u201cObama\u201d, child matches \u201cchildren\u201d, educated at matches \u201cstudy\u201d, and President of the United States matches \u201cpresident\u201d. The educated at nodes connected to Malia and Sasha do not feature here as they are not contained in the facts of Barack Obama, and do not yet feature in our answer graph. We consider exact matches, although not just in node labels but also in the set of aliases present in the KG that list common synonyms of entities, predicates and types. This helps us consider relaxed matches without relying on models like word2vec [48] or GloVe [51], that need inconvenient thresholding on similarity values as a noisy proxy for synonyms. The GST is shown using dark orange nodes with the associated question keyword matches underlined (denoting the terminal nodes). In experiments, we only consider as terminals NERD matches for entities, and keyword matches with aliases for other KG items. The GST naturally includes the internal nodes and edges necessary to connect the terminals. Note that the graph is considered undirected (equivalently, bidirectional) for the purpose of GST computation. 3.3 Augmenting subgraphs with temporal facts The final step towards the desired answer graph is to enhance it with temporal facts. Here, we add question-relevant temporal facts of entities in the completed GST. This pulls in temporal information necessary for answering questions that need evidence more than one hop away from the question entities (blue nodes in Fig. 1): (+ noise like Malia\u2019s date of birth). The rationale behind this step is to capture facts necessary for faithfully answering the question, where faithful refers to arriving at the answer not by chance but after satisfying all necessary constraints in the question. For example, the question which oscar did leonardo dicaprio win in 2016? 
can be answered without temporal reasoning, as he only won one Oscar. We wish to avoid such cases in faithful answering. To this end, we first retrieve from the KG all temporal facts of each entity in the completed GST. We then use an analogously fine-tuned BERT model for question-relevance of temporal facts. The model predicts, for each temporal fact, its likelihood of containing the answer. It is trained using temporal facts of question entities that contain the answer as positive examples, while negative examples are chosen at random from these temporal facts. To trap multi-hop temporal questions in our net, we explore 2-hop facts of question entities for ground truth answers. A larger neighborhood was not used during the first fine-tuning as the total number of facts in two hops of question entities is rather large, but the count of 2-hop temporal facts is a much more tractable number. Moreover, this is in line with our focus on complex temporal questions. Let the likelihood score for a temporal fact \ud835\udc61\ud835\udc53of an entity in the completed GST be \ud835\udc35\ud835\udc38\ud835\udc45\ud835\udc47(\ud835\udc61\ud835\udc53\ud835\udc5e\ud835\udc5f\ud835\udc52\ud835\udc59). As before, we take the top scoring {\ud835\udc61\ud835\udc53\ud835\udc5e\ud835\udc5f\ud835\udc52\ud835\udc59}, add them to the answer graph, that is then passed on to Stage 2. 4 PREDICTING ANSWERS WITH R-GCN R-GCN basics. The answer prediction method of Exaqt is inspired by the Relational Graph Convolution Network model [59], an extension of GCNs [29] tailored for handling large-scale relational data such as knowledge graphs. Typically, a GCN convolves the features (equivalently, representations or embedding vectors) of nodes belonging to a local neighborhood and propagates them to their nearest neighbors. The learned entity representations are used in node classification. Here, this classification decision is whether a node is an answer to the input question or not. \fComplex Temporal Question Answering on Knowledge Graphs CIKM \u201921, November 1\u20135, 2021, Virtual Event, QLD, Australia Figure 3: Architecture of the R-GCN model in Exaqt, that includes several signals of temporal information. In this work, we use the widely popular GRAFT-Net model [66] that adapted R-GCNs to deal with heterogeneous QA over KGs and text [15, 50]. In order to apply such a mechanism for answer prediction in our setup, we convert our answer graph from the previous step into a directed relational graph and build upon the \ud835\udc3e\ud835\udc3aonly setting of GRAFT-Net. In a relational graph, entities, literals, and types become nodes, while predicates (relations) become edge labels. Specifically, we use the KG RDF dump that contains normal SPO triples for binary facts by employing reification [35]. Reified triples can then be straightforwardly represented as a directed relational graph [66]. Exaqt introduces four major extensions over the R-GCN in GRAFT-Net to deal with the task of temporal QA: \u2022 we embed temporal facts to enrich representations of entity nodes, creating time-aware entity embeddings (TEE); \u2022 we encode temporal question categories (TC) and temporal signals (TS) to enrich question representations; \u2022 we employ time encoding (TE) to obtain the vector representations for timestamps; \u2022 we propose attention over temporal relations (ATR) to distinguish the same relation but with different timestamps as objects. 
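As a concrete illustration of the relational-graph conversion described above (entities, literals, and types as nodes; predicates as edge labels), the sketch below builds such a graph with networkx. The fact format and the handling of qualifiers via composed edge labels are assumptions for illustration, not the exact reification scheme used by Exaqt.

```python
# Sketch: facts (with optional qualifiers) -> directed relational graph
# where predicates become edge labels.
import networkx as nx

def build_relational_graph(facts):
    g = nx.MultiDiGraph()
    for f in facts:
        g.add_edge(f["subject"], f["object"], label=f["predicate"])
        # assumed treatment of qualifiers: qualifier objects attached to the
        # main subject via a composed "predicate.qualifier_predicate" label
        for qual_pred, qual_obj in f.get("qualifiers", []):
            g.add_edge(f["subject"], qual_obj, label=f"{f['predicate']}.{qual_pred}")
    return g

facts = [{
    "subject": "Barack Obama", "predicate": "position held",
    "object": "President of the United States",
    "qualifiers": [("start date", "2009-01-20"), ("end date", "2017-01-20")],
}]
rg = build_relational_graph(facts)
print(list(rg.edges(data="label")))
```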
In the following, we describe how we encode and update the node representations and perform answer prediction in our extended R-GCN architecture for handling temporal questions. Our neural architecture is shown in Fig. 3, while Table 2 summarizes notation for the salient concepts used in this phase. 4.1 Question representation 4.1.1 Initialization. To encode a temporal question, we first determine its temporal category and extract temporal signals (Sec. 2). Temporal category encoding (TCE). We adopt a noisy yet effective strategy for labeling categories for temporal questions, and leave more sophisticated (multi-label) classification as future work. We use a four-bit multi-hot (recall that a question can belong to multiple categories) vector where each bit indicates whether the question falls into that category. Our tagger works as follows: \u2022 A question is tagged with the \u201cEXPLICIT\u201d category if the annotators SUTime [21] or HeidelTime [62] detect an explicit temporal expression inside it; \u2022 A question is tagged with the \u201cIMPLICIT\u201d category if it contains any of the temporal signal words (we used the dictionary compiled by [60]), and satisfies certain part-of-speech patterns; \u2022 A question is of type \u201cTEMPORAL ANSWER\u201d if it starts with phrases like \u201cwhen ...\u201d, \u201cin which year ...\u201d, and \u201con what date ...\u201d; \u2022 A question is tagged with the \u201cORDINAL\u201d category if it contains an ordinal tag as labeled by the Stanford CoreNLP system [9], along with certain keywords and part-of-speech patterns. Temporal signal encoding (TSE). There are 13 temporal relations defined in Allen\u2019s interval algebra for temporal reasoning [6], namely: \u201cequals\u201d, \u201cbefore\u201d, \u201cmeets\u201d, \u201coverlaps\u201d, \u201cduring\u201d, \u201cstarts\u201d, and \u201cfinishes\u201d, with respective inverses for all of them except \u201cequals\u201d. We simplify these relations and adapt the strategy in [37] into 7 broad classes of temporal signals: \u2022 \u201cbefore\u201d and \u201cmeets\u201d relations are treated as \u201cBEFORE\u201d signals; \u2022 \u201cbefore-inverse\u201d and \u201cmeet-inverse\u201d relations are collapsed into \u201cAFTER\u201d signals; \u2022 \u201cstarts\u201d and \u201cfinishes\u201d relations are respectively mapped to \u201cSTART\u201d and \u201cFINISH\u201d signals; \u2022 words with ordinal tags and \u201clast\u201d are mapped to \u201cORDINAL\u201d; \u2022 all other relations are treated as \u201cOVERLAP\u201d signals; \u2022 absence of any signal word triggers the \u201cNO SIGNAL\u201d case. We map signal words to temporal signals in questions using a dictionary. We then encode these signals using a 7-bit (a question can contain multiple signals) vector, where each bit indicates the presence or absence of a particular temporal signal. Along with these temporal categories and temporal signals, we use a Long Short-Term Memory Network (LSTM) to model the \fCIKM \u201921, November 1\u20135, 2021, Virtual Event, QLD, Australia Jia et al. words in the question as a sequence (see block A in Fig. 3). 
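A minimal sketch of the category and signal encodings described above is given below. The keyword lists and plain string matching are stand-ins for the SUTime/HeidelTime annotators, CoreNLP ordinal tags, part-of-speech patterns, and the signal-word dictionary used in practice.

```python
# Sketch: 4-bit multi-hot temporal category encoding (TCE) and
# 7-bit multi-hot temporal signal encoding (TSE).
CATEGORIES = ["EXPLICIT", "IMPLICIT", "TEMPORAL_ANSWER", "ORDINAL"]
SIGNALS = ["BEFORE", "AFTER", "START", "FINISH", "ORDINAL", "OVERLAP", "NO_SIGNAL"]

SIGNAL_WORDS = {"before": "BEFORE", "prior to": "BEFORE", "after": "AFTER",
                "during": "OVERLAP", "while": "OVERLAP", "when": "OVERLAP",
                "first": "ORDINAL", "last": "ORDINAL"}

def tce(question, has_explicit_timex):
    bits = [0] * len(CATEGORIES)
    if has_explicit_timex:                              # e.g. tagged by HeidelTime
        bits[CATEGORIES.index("EXPLICIT")] = 1
    if any(w in question for w in ("before", "after", "during", "while")):
        bits[CATEGORIES.index("IMPLICIT")] = 1          # crude stand-in for POS patterns
    if question.startswith(("when", "in which year", "on what date")):
        bits[CATEGORIES.index("TEMPORAL_ANSWER")] = 1
    if any(w in question for w in ("first", "last", "second")):
        bits[CATEGORIES.index("ORDINAL")] = 1
    return bits

def tse(question):
    bits = [0] * len(SIGNALS)
    for word, signal in SIGNAL_WORDS.items():
        if word in question:
            bits[SIGNALS.index(signal)] = 1
    if not any(bits):
        bits[SIGNALS.index("NO_SIGNAL")] = 1
    return bits

q = "what was the first film julie andrews starred in after her divorce?"
print(tce(q, has_explicit_timex=False), tse(q))
```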
Overall, we represent a question $q$ with $|q|$ words as:

$h^0_q = \mathrm{FFN}(TCE(q) \oplus TSE(q) \oplus \mathrm{LSTM}(w_1, \ldots, w_{|q|}))$ (1)

Here $TCE(q)$ and $TSE(q)$ are multi-hot vectors encoding the temporal categories and temporal signals present in $q$, and $w_i$ represents the pre-trained word embedding (from Wikipedia2Vec [78]) of the $i$-th word in $q$. We concatenate ($\oplus$) the $TCE(q)$ and $TSE(q)$ vectors with the output vector from the final state of the LSTM. Finally, we pass this concatenated vector through a Feed Forward Network (FFN) and obtain the initial embedding of $q$, denoted as $h^0_q$. 4.1.2 Update. In subsequent layers, the embedding of the question gets updated with the embeddings of the entities belonging to it (i.e., the question entities obtained from NERD) as follows:

$h^l_q = \mathrm{FFN}\left(\sum_{e \in NERD(q)} h^{l-1}_e\right)$ (2)

where $NERD(q)$ contains the entities for question $q$ and $h^{l-1}_e$ denotes the embedding of an entity $e$ at layer $l-1$. 4.2 Entity representation 4.2.1 Initialization. For initializing each entity $e$ in the relational graph, we use fixed-size pre-trained embeddings $x_e$, also from Wikipedia2Vec [78]. Along with conventional skip-gram and context models, Wikipedia2Vec utilizes the Wikipedia link graph and learns entity embeddings by predicting neighboring entities in the Wikipedia graph, producing more reliable entity embeddings:

$h^0_e = x_e$ (3)

4.2.2 Update. Prior to understanding the update rule for the entities in subsequent layers, we need to introduce the following concepts: (i) Time encoding (TE); (ii) Time-aware entity embeddings (TEE); and (iii) Attention over temporal relations (ATR). Time encoding (TE). Time as an ordering sequence has an inherent similarity to positions of words in text: we thus employ a sinusoidal position encoding method [74, 80] to represent a timestamp $ts$. Here, the $k$-th position (day, month, etc.) in $ts$ will be encoded as $TE(k, j) = \sin(k / 10000^{2i/d})$ if $j = 2i$, and $TE(k, j) = \cos(k / 10000^{2i/d})$ if $j = 2i + 1$ (4), where $d$ is the dimension of the time encoding and $j$ is the (even/odd) position in the $d$-dimensional vector. Further, we represent $TE(ts)$, i.e., the time encoding of $ts$, as the summation of the encodings of each of its corresponding positions.
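A small sketch of the sinusoidal time encoding of Eq. (4) follows; splitting a "YYYY-MM-DD" timestamp into year, month, and day positions is an assumption for illustration.

```python
# Sketch of Eq. (4): each position k of a timestamp gets a d-dimensional
# sin/cos vector, and TE(ts) is the sum of the per-position encodings.
import numpy as np

def position_encoding(k, d):
    enc = np.zeros(d)
    for j in range(d):
        i = j // 2
        angle = k / (10000 ** (2 * i / d))
        enc[j] = np.sin(angle) if j % 2 == 0 else np.cos(angle)
    return enc

def time_encoding(timestamp, d=100):
    # assumed decomposition of "YYYY-MM-DD" into integer positions
    year, month, day = (int(x) for x in timestamp.split("-"))
    return sum(position_encoding(k, d) for k in (day, month, year))

te_2009 = time_encoding("2009-01-20")
te_2017 = time_encoding("2017-01-20")
print(te_2009.shape, float(np.dot(te_2009, te_2017)))
```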
This time encoding method provides a unique encoding for each timestamp and ensures sequential ordering among the timestamps [80], which is vital for reasoning over signals like before and after in temporal questions. Time-aware entity embedding (TEE). An entity $e$ present in the relational graph is associated with a number of temporal facts $tf^e_1, tf^e_2, \ldots, tf^e_n$ (Sec. 2) in our answer graph. A temporal fact $tf^e$ is said to be associated with an entity $e$ if $e$ is present in any position of the fact (subject, object, or qualifier object). We encode each $tf^e$ as the concatenation of its entity embeddings, its relation embeddings (averaged), and the time encodings of its timestamps (as shown in block B of Fig. 3). Further, we arrange the facts in $\{tf^e\}$ in chronological order and pass them through an LSTM network. Finally, the output from the final state of the LSTM is used as the time-aware entity representation of $e$, $TEE(e)$, which is vital for reasoning through the R-GCN model:

$h^0_{TEE(e)} = \mathrm{LSTM}(h^0_{tf^e_1}, h^0_{tf^e_2}, \ldots, h^0_{tf^e_n})$ (5)

In subsequent layers, the embedding of $TEE(e)$ is updated as the embeddings of its constituent entities get updated. Attention over temporal relations (ATR). In temporal QA, we need to distinguish entities associated with the same relation but having different timestamps (facts with the same temporal predicate but different objects, like several educated at facts for a person). We thus introduce the concept of temporal attention here, adapting the more general notion of attention over relations in GRAFT-Net [66]. While computing temporal attention over a relation $r$ connected with entity $e$, we concatenate the corresponding relation embedding with the time encoding of its timestamp object and compute its similarity with the question embedding at that stage:

$ATR(e, r) = \mathrm{softmax}\big((x_r \oplus TE(ts_r))^T h^{l-1}_q\big)$ (6)

where the softmax normalization is over all outgoing edges from $e$, $x_r$ is the pre-trained relation vector embedding for relation $r$ (Wikipedia2Vec embeddings averaged over each word of the KG predicate), and $TE(ts_r)$ is the time encoding of the timestamp associated with the relation $r$. For relations not connected with any timestamp, we use a random vector for $TE(ts_r)$.
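The attention of Eq. (6) can be sketched as below; tensor dimensions and the toy numbers are illustrative only.

```python
# Sketch of attention over temporal relations (Eq. 6): concatenate the relation
# embedding with the time encoding of its timestamp object, dot it with the
# current question embedding, and softmax-normalize over outgoing edges.
import torch
import torch.nn.functional as F

def atr(relation_embs, time_encs, question_emb):
    """relation_embs: [num_edges, d_r], time_encs: [num_edges, d_t],
    question_emb: [d_r + d_t] (question embedding from the previous layer)."""
    scores = torch.cat([relation_embs, time_encs], dim=-1) @ question_emb
    return F.softmax(scores, dim=0)          # one weight per outgoing edge

# toy example: three "educated at"-style edges with different timestamps
d_r, d_t, num_edges = 4, 4, 3
x_r = torch.randn(num_edges, d_r)            # pre-trained relation embeddings
te = torch.randn(num_edges, d_t)             # time encodings (random if no timestamp)
h_q = torch.randn(d_r + d_t)                 # question embedding h_q^{l-1}
weights = atr(x_r, te, h_q)
print(weights, weights.sum())                # attention weights sum to 1
```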
Putting it together. We are now in a position to specify the update rule for entity nodes, which involves a single-layer FFN over the concatenation of the following four states (see block C of Fig. 3):

$h^l_e = \mathrm{FFN}\big(\big[\, h^{l-1}_e \oplus h^{l-1}_q \oplus h^{l-1}_{TEE(e)} \oplus \sum_r \sum_{e' \in nbd_r(e)} ATR(e', r) \cdot \psi_r(h^{l-1}_{e'}) \,\big]\big)$ (7)

Here, (i) the first term corresponds to the entity's representation from the previous layer; (ii) the second term denotes the question's representation from the previous layer; (iii) the third term denotes the previous layer's time-aware entity representation $TEE(e)$; and (iv) the fourth term aggregates the states from the entity $e$'s neighbors. In the fourth term, the relation-specific neighborhood $nbd_r$ corresponds to the set of entities connected to $e$ via relation $r$, $ATR(e', r)$ is the attention over temporal relations, and $\psi_r(h^{l-1}_{e'})$ is the relation-specific transformation depending on the type and direction of an edge:

$\psi_r(h^{l-1}_{e'}) = PPR^{l-1}_{e'} \cdot \mathrm{FFN}(x_r, h^{l-1}_{e'})$ (8)

Here $PPR^{l-1}_{e'}$ is a Personalized PageRank [34] score, obtained in the same way as in GRAFT-Net [66], to control the propagation of embeddings along paths starting from the question entities. 4.3 Answer prediction The final entity representations $h^l_e$ obtained at layer $l$ are then used in a binary classification setup to select the answers.
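This classification step (formalized in Eq. (9) right below) amounts to a sigmoid layer over the final entity states, trained with binary cross-entropy; a minimal sketch with illustrative dimensions and labels:

```python
# Sketch of the answer-prediction layer: sigmoid(w^T h_e^l + b) per entity,
# trained with binary cross-entropy against the gold answer set.
import torch
import torch.nn as nn

d = 100                                        # entity embedding dimension
answer_classifier = nn.Linear(d, 1)            # weight vector w and bias b
criterion = nn.BCEWithLogitsLoss()             # sigmoid + binary cross-entropy

h_e = torch.randn(5, d)                        # final states of 5 candidate entities
gold = torch.tensor([0., 1., 0., 0., 1.])      # 1 iff the entity is a gold answer

logits = answer_classifier(h_e).squeeze(-1)    # one score per entity
loss = criterion(logits, gold)
loss.backward()

probs = torch.sigmoid(logits)                  # Pr(entity is an answer), used for ranking
print(probs.detach())
```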
For each entity $e$, we define its probability to be an answer to $q$ as:

$Pr(e \in \{a\}_q \mid RG_q, q) = \sigma(\mathbf{w}^T h^l_e + \mathbf{b})$ (9)

where $\{a\}_q$ is the set of ground-truth answers for question $q$, $RG_q$ is the relational graph built for answering $q$ from its answer graph, and $\sigma$ is the sigmoid activation function. $\mathbf{w}$ and $\mathbf{b}$ are respectively the weight and bias vectors of the classifier, which is trained using binary cross-entropy loss over these probabilities.
Table 2: Notation for concepts in the R-GCN of Exaqt.
$h^l_e$: representation of entity $e$ at layer $l$;
$h^l_q$: representation of question $q$ at layer $l$;
$TCE(q)$: temporal category encoding for question $q$;
$TSE(q)$: temporal signal encoding for question $q$;
$NERD(q)$: question entities obtained from NERD;
$x_e$, $x_r$: pre-trained entity ($e$) and relation ($r$) embeddings;
$TE(ts)$: time encoding for timestamp $ts$;
$tf^e_1, tf^e_2, \ldots$: chronologically ordered temporal facts for $e$;
$h^l_{tf^e_i}$: representation of the $i$-th temporal fact for $e$ at layer $l$;
$h^l_{TEE(e)}$: time-aware entity representation of $e$ at layer $l$;
$ATR(e, r)$: attention over temporal relation $r$ connected with $e$;
$\psi_r(h^l_e)$: relation $r$-specific transformation of $h^l_e$;
$PPR^l_e$: Personalized PageRank score for entity $e$ at layer $l$.
Table 3: Distribution of question types by source in TimeQuestions (Explicit / Implicit / Temp. Ans. / Ordinal / Total):
Free917 [19]: 44 / 4 / 76 / 11 / 135;
WebQ [13]: 315 / 77 / 283 / 113 / 788;
ComplexQ [11]: 217 / 131 / 43 / 33 / 424;
GraphQ [63]: 264 / 30 / 13 / 42 / 349;
ComplexWebQ [68]: 1356 / 224 / 595 / 315 / 2490;
ComQA [2]: 669 / 355 / 1180 / 1587 / 3791;
LC-QuAD [69]: 122 / 19 / 0 / 26 / 167;
LC-QuAD 2.0 [28]: 3534 / 636 / 3726 / 819 / 8715;
Total: 6521 / 1476 / 5916 / 2946 / 16859.
The sum 16859 exceeds the number of questions (16181) as some questions belong to multiple categories.
5 EXPERIMENTAL SETUP 5.1 Benchmark Previous collections of temporal questions, TempQuestions [37] and Event-QA [23], contain only about a thousand questions each, and are not suitable for building neural models. We leverage recent community efforts in QA benchmarking and search through eight KG-QA datasets for time-related questions. The result is a new compilation, TimeQuestions, with 16,181 questions, that we release with this paper (details in Table 3).
Since some of these previous benchmarks were over Freebase or DBpedia, we used Wikipedia links in these KGs to map them to Wikidata, the largest and most actively growing public KG today, and the one that we use in this work. Questions in each benchmark are tagged for temporal expressions using SUTime [21] and HeidelTime [62], and for signal words using a dictionary compiled by [60]. Whenever a question is found to have at least one temporal expression or signal word, it becomes a candidate temporal question. This candidate set (ca. 20\ud835\udc58 questions) was filtered for false positives by the authors. For each of these questions, the authors manually verified the correctness of the answer, and if incorrect, replaced it with the right one. Moreover, each question is manually tagged with its temporal question category (explicit, implicit, temporal answer, or ordinal) that may help in building automated classifiers for temporal questions, a sub-problem interesting in its own right. We split our benchmark in a 60 : 20 : 20 ratio for creating the training (9708 questions), development (3236) and test (3237) sets. 5.2 Baselines We use the following recent methods for complex KG-QA as baselines to compare Exaqt with. All baselines were trained and finetuned using the train and dev sets of TimeQuestions, respectively. They are the most natural choice of baselines as Exaqt is inspired by components in these methods for building its pipeline: while Uniqorn [52] showed the effectiveness of GSTs in complex KG-QA, GRAFT-Net [66] and PullNet [65] showed the value of R-GCNs for answer prediction. These techniques are designed for dealing with heterogeneous answering sources (KGs and text), and we use their KG-only variants: \u2022 Uniqorn [52]: This is a method for answering complex questions using Group Steiner Trees, and is an extension of [47]; \u2022 GRAFT-Net [66]: This was the first technique to adapt R-GCNs for QA over heterogeneous sources; \u2022 PullNet [65]: This algorithm extended the GRAFT-Net classifier to the scenario of multi-hop questions. We used a reimplementation as the code is not public. 5.3 Metrics All systems return a ranked list of answers, consisting of KG entities or literals associated with unique identifiers. We thus use the following metrics for evaluating Exaqt and the baselines, averaged over questions in the benchmark: \u2022 P@1: Precision at the top rank is one if the highest ranked answer is correct, and zero otherwise. \u2022 MRR: This is the reciprocal of the first rank where we have a correct answer. If the correct answer does not feature in the ranked list, MRR is zero. \u2022 Hit@5: This is set to one if a correct answer appears in the first five positions, and zero otherwise. 5.4 Initialization Configuration. We use the Wikidata KG dump (https://dumps. wikimedia.org/wikidatawiki/entities/) in NTriples format from April 2020, comprising 12\ud835\udc35triples and taking 2 TB when uncompressed on disk. We subsequently removed language tags, external IDs, schema labels and URLs from the dump, leaving us with about 2\ud835\udc35 triples with 340 GB disk space consumption. For BERT fine-tuning, positive and negative instances were created from the TimeQuestions train and dev sets in the ratio 1 : 5. These instances were combined and split in the ratio 80 : 20 (test set not needed), where the first split was used for training and the second for hyperparameter selection, respectively, for BERT finetuning. 
We use the BERT-base-cased model for sequence pair classification (https://bit.ly/3fRVqAG). Best parameters for fine-tuning were: accumulation = 512, number of epochs = 2, dropout = 0.3, mini-batch size = 50 and weight decay = 0.001. We use AdamW as the optimizer with a learning rate of 3\u00d710\u22125. During answer graph construction, we use top-25 question-relevant facts (|{\ud835\udc53\ud835\udc5e\ud835\udc5f\ud835\udc52\ud835\udc59}| = 25), top-25 GSTs (\ud835\udc58= 25), and top-25 temporal facts (|{\ud835\udc61\ud835\udc53\ud835\udc5e\ud835\udc5f\ud835\udc52\ud835\udc59}| = 25). \fCIKM \u201921, November 1\u20135, 2021, Virtual Event, QLD, Australia Jia et al. R-GCN model training. 100-dimensional embeddings for question words, relation (KG predicate) words and entities, are obtained from Wikipedia2Vec [78], and learned from the Wikipedia dump of March 2021. Dimensions of TCE, TSE, TE and TEE (Sec. 4) were all set to 100 as well. The last hidden states of LSTMs were used as encodings wherever applicable. This was trained on an Nvidia Quadro RTX 8000 GPU server. Hyperparameter values were tuned on the TimeQuestions dev set: number of GCN layers = 3, number of epochs = 100, mini-batch size = 25, gradient clip = 1, learning rate = 0.001, LSTM dropout = 0.3, linear dropout = 0.2, and fact dropout = 0.1. The ReLU activation function was used. 6 KEY FINDINGS Answering performance of Exaqt and baselines are in Table 4 (best value in column in bold). Main observations are as follows. Exaqt outperforms baselines. The main observation from Table 4 is the across-the-board superiority of Exaqt over the baselines. Statistically significant results for each category, baseline and metric, indicate that general-purpose complex QA systems are not able to deal with the challenging requirements of temporal QA, and that temporally augmented methods are needed. Outperforming each baseline offers individual insights, as discussed below. GSTs are not enough. GSTs are a powerful mechanism for complex QA that identify backbone skeletons in KG subsets and prune irrelevant information from noisy graphs. While this motivated the use of GSTs as a building block in Exaqt, outperforming the Uniqorn [52] method shows that non-terminals (internal nodes) in GSTs, by themselves, are not enough to answer temporal questions. Augmenting R-GCNs with time information works well. The fact that R-GCNs are a powerful model is clear from the fact that GRAFT-Net, without any explicit support for temporal QA, emerges as the strongest baseline in this challenging setup. A core contribution of our work is to extend R-GCNs with different kinds of temporal evidence. Improving over GRAFT-Net shows that our multi-pronged mechanism (with TEE, ATR, TCE, TSE, and TE) succeeds in advancing the scope of R-GCN models to questions with temporal intent. Ablation studies (Sec. 7) show that each of these \u201cprongs\u201d play active roles in the overall performance of Exaqt. Not every question is multi-hop. PullNet is a state-of-the-art system for answering multi-hop chain-join questions (where was Obama\u2019s father born?). It may appear strange that PullNet, offered as an improvement over GRAFT-Net, falls short in our setup. 
Inspecting examples makes the reason for this clear: PullNet has an assumption that all answers are located on a 2-hop circumference of the question entities (ideally, \ud835\udc47-hop, where \ud835\udc47is a variable that needs to be fixed for a benchmark: 1 is an oversimplification, while 3 is intractable for a large KG, and hence our choice of 2 for TimeQuestions). When this is not the case (for instance, the slightly tricky situation when an answer is in a qualifier of a 2-hop fact: when did obama\u2019s children start studying at sidwell friends school? or the question is simple: when was obama born?), PullNet cannot make use of this training point as it relies on shortest KG paths between question and answer entities. This uniform\ud835\udc47-hop assumption is not always practical, and does not generalize to situations beyond what PullNet was trained and evaluated on. Temporal categories vary by difficulty. We use manual groundtruth labels of question categories from our benchmark to drill down on class-wise results (the noisy tagger from Sec. 4.1.1 has \u224390% accuracy). Questions with temporal answers are clearly the easiest. Note that this includes questions starting with \u201cwhen\u201d, that many models tackle with dedicated lexical answer types [3, 12], analogous to location-type answers for \u201cwhere ...?\u201d questions. Questions with explicit temporal expressions are the next rung of the ladder: while they do require reasoning, explicit years often make this matching easier (who became president of south africa in 1989?). Questions with implicit expressions are more challenging: we believe that this is where the power of R-GCNs truly shine, as GST-based Uniqorn clearly falls short. Finally, questions with temporal ordinals seem to be beyond what implicit reasoning in graph neural networks can handle: with P@1 < 0.5, they pose the biggest research challenge. We believe that this calls for revisiting symbolic reasoning, ideally plugged into neural GCN architectures. 7 IN-DEPTH ANALYSIS NERD variants. We experimented with TagMe [30], AIDA [36], and ELQ [44], going by the most popular to the most recent choices. Effects of various choices are in Table 5. Our best configuration is TagMe + ELQ. TagMe (used without threshold on pruning entities) and ELQ (run with default parameters) nicely complement each other, since one is recall-oriented (TagMe) and the other precisionbiased (ELQ). Answer recall measures the fraction of questions for which at least one gold answer was present in the final answer graph (test set). AIDA + ELQ detects a similar number of entities per question, but is slightly worse w.r.t. answer recall. Understanding Stage 1. Traversing over the steps in the recalloriented graph construction phase of Exaqt, we try to understand where we gain (and lose) answers to temporal questions (Table 6, test set). First, we see that even two NERD systems cannot guarantee perfect answer recall (75.8%). The fall from Row 1 to 2 is expected, as one cannot compute graph algorithms efficiently over such large graphs as induced by all facts from Row 1. Adding shortest paths (Row 3), while making the answer graph more connected (before: 1.58 connected components per question, after: 1.16), also marginally helps in bringing correct answers into the graph. From Rows 4 and 5, we see that taking a union of top-\ud835\udc58(\ud835\udc58= 25) GSTs from each connected component proves worthwhile (increase from 0.613 to 0.640), and so does completing the GSTs (further rise to 0.671). 
Finally, adding temporal facts provides a critical boost, taking the answer recall at the end of Stage 1 to a respectable 72.4%. This translates to 2343 questions having answers in the graph passed on to the R-GCN (cf. 1989 answers are present in the PPR-based answer graph of GRAFT-Net), out of which 1830 are answered correctly at the end. The second column, that counts the average number of entities and literals in the answer graph (answer candidates) is highly insightful to get an idea of the graph size at each step, and its potential trade-off with respect to answer recall. Understanding Stage 2. We performed ablation studies to understand the relative influence of the individual temporal components in the precision-oriented Stage 2 of Exaqt: the R-GCN answer classifier. Table 7 shows P@1 results on the test set, where the full model achieves the best results overall and also for each category. The amount of drop from the full model (Row 1) indicates the degree of importance of a particular component. The most vital enhancement is the attention over temporal relations (ATR). All \fComplex Temporal Question Answering on Knowledge Graphs CIKM \u201921, November 1\u20135, 2021, Virtual Event, QLD, Australia Category Overall Explicit Implicit Temp. Ans. Ordinal Method P@1 MRR Hit@5 P@1 MRR Hit@5 P@1 MRR Hit@5 P@1 MRR Hit@5 P@1 MRR Hit@5 Uniqorn [52] 0.331 0.409 0.538 0.318 0.406 0.536 0.316 0.415 0.545 0.392 0.472 0.597 0.202 0.236 0.356 GRAFT-Net [66] 0.452 0.485 0.554 0.445 0.478 0.531 0.428 0.465 0.525 0.515 0.568 0.660 0.322 0.313 0.371 PullNet [65] 0.105 0.136 0.186 0.022 0.043 0.075 0.081 0.123 0.192 0.234 0.277 0.349 0.029 0.049 0.083 Exaqt 0.565* 0.599* 0.664* 0.568* 0.594* 0.636* 0.508* 0.567* 0.633* 0.623* 0.672* 0.756* 0.420* 0.432* 0.508* Statistical significance of Exaqt over the strongest baseline (GRAFT-Net), under the 2-tailed paired \ud835\udc61-test, is marked with an asterisk (*) (\ud835\udc5d< 0.05). Table 4: Performance comparison of Exaqt with three complex QA baselines over the TimeQuestions test set. NERD Recall #Question entities TagMe 0.682 2.9 ELQ 0.716 1.7 AIDA 0.541 2.8 TagMe + ELQ 0.758 3.5 AIDA + ELQ 0.729 3.5 TagMe + AIDA 0.701 4.3 Table 5: Comparing various NERD methods on the test set. Step in Exaqt pipeline Recall #Candidates All KG facts of NERD entities 0.758 2491 Facts selected by BERT 0.719 48 Shortest paths injected for connectivity 0.720 49 GSTs on largest component 0.613 13 Union of GSTs from all components 0.640 14 Completed GSTs from all components 0.671 21 Temporal facts added by BERT 0.724 67 Table 6: Understanding the recall-oriented Stage 1 of Exaqt. Category Overall Explicit Implicit Temp. Ans. Ordinal Exaqt (Full) 0.565 0.568 0.508 0.623 0.420 Exaqt TCE 0.545 0.556 0.481 0.590 0.406 Exaqt TSE 0.543 0.545 0.465 0.598 0.411 Exaqt TEE 0.556 0.564 0.475 0.614 0.413 Exaqt TE 0.553 0.556 0.495 0.613 0.398 Exaqt ATR 0.534 0.527 0.465 0.594 0.411 Table 7: Inspecting the precision-oriented Stage 2 of Exaqt. what did abraham lincoln do before he was president? who was the king of troy when the trojan war was going on? what films are nominated for the oscar for best picture in 2009? where did harriet tubman live after the civil war? when did owner bill neukom\u2019s sports team last win the world series? Table 8: Anecdotal examples that Exaqt answered correctly. other factors offer varying degrees of assistance. 
An interesting observation is that TCE, while playing a moderate role in most categories, is of the highest importance for questions with temporal answers: even knowing that a question belongs to this category helps the model. Anecdotal examples. Table 8 shows samples of test questions that are successfully processed by Exaqt but none of the baselines. 8 RELATED WORK Temporal QA in IR. Supporting temporal intent in query and document processing has been a long-standing research topic in IR [8, 14, 20, 40, 49, 60]. This includes work inside the specific use case of QA over text [5, 33, 46, 56]. Most of these efforts require significant preprocessing and markup of documents. There is also onus on questions to be formulated in specific ways so as to conform to carefully crafted parsers. These directions often fall short of realistic settings on the Web, where documents and questions are both formulated ad hoc. Moreover, such corpus markup unfortunately does not play a role in structured knowledge graphs. Notable effort in temporal QA includes work of [56], which decompose complex questions into simpler components, and recompose answer fragments into responses that satisfy the original intent. Such approaches have bottlenecks from parsing issues. Exaqt makes no assumptions on how questions are formulated. Temporal QA over KGs. Questions with temporal conditions have not received much attention in the KG-QA literature. The few works that specifically address temporal questions include [23, 38, 76]. Among these, [38] relies on hand-crafted rules with limited generalization, whereas Exaqt is automatically trained with distant supervision and covers a much wider territory of questions. [23] introduces the task of event-centric QA, which overlaps with our notion of temporal questions, and introduces a benchmark collection. [76] presents a key-value memory network to include KG information about time into a QA pipeline. The method is geared for simple questions, as present in the WebQuestions benchmark. Temporal KGs. Of late, understanding large KGs as a dynamic body of knowledge has gained attention, giving rise to the notion of temporal knowledge graphs or temporal knowledge bases [25, 70]. Here, each edge (corresponding to a fact) is associated with a temporal scope or validity [43], with current efforts mostly focusing on the topic of temporal KG completion [31, 32, 42]. A very recent approach has explored QA over such temporal KGs, along with the creation of an associated benchmark [57]. 9" + }, + { + "url": "http://arxiv.org/abs/1908.03650v4", + "title": "TEQUILA: Temporal Question Answering over Knowledge Bases", + "abstract": "Question answering over knowledge bases (KB-QA) poses challenges in handling\ncomplex questions that need to be decomposed into sub-questions. An important\ncase, addressed here, is that of temporal questions, where cues for temporal\nrelations need to be discovered and handled. We present TEQUILA, an enabler\nmethod for temporal QA that can run on top of any KB-QA engine. TEQUILA has\nfour stages. It detects if a question has temporal intent. It decomposes and\nrewrites the question into non-temporal sub-questions and temporal constraints.\nAnswers to sub-questions are then retrieved from the underlying KB-QA engine.\nFinally, TEQUILA uses constraint reasoning on temporal intervals to compute\nfinal answers to the full question. 
Comparisons against state-of-the-art\nbaselines show the viability of our method.", + "authors": "Zhen Jia, Abdalghani Abujabal, Rishiraj Saha Roy, Jannik Stroetgen, Gerhard Weikum", + "published": "2019-08-09", + "updated": "2021-01-25", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.CL" + ], + "main_content": "INTRODUCTION Motivation and Problem. Knowledge-based question answering (KB-QA) aims to answer questions over large knowledge bases (e.g., DBpedia, Wikidata, YAGO, etc.) or other structured data. KBQA systems take as input questions such as: Q1: \u201cWhich teams did Neymar play for?\u201d and translate them into structured queries, in a formal language like SPARQL or SQL, and execute the queries to retrieve answers from the KB. In doing so, KB-QA methods need to address the vocabulary mismatch between phrases in the input question and entities, types, and predicates in the KB: mapping \u2018Neymar\u2019 to the uniquely identi\ufb01ed entity, \u2018teams\u2019 to the KB type footballClub and \u2018played for\u2019 to the KB predicate memberOf. State-of-the-art KB-QA (see surveys [9, 17]) can handle simple questions like the above example very well, but struggle with complex questions that involve multiple conditions on di\ufb00erent entities and need to join the results from corresponding sub-questions. For example, the question: Q2: \u201cAfter whom did Neymar\u2019s sister choose her last name?\u201d would require a three-way join that connects Neymar, his sister Rafaella Beckran, and David Beckham. Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for pro\ufb01t or commercial advantage and that copies bear this notice and the full citation on the \ufb01rst page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior speci\ufb01c permission and/or a fee. Request permissions from permissions@acm.org. CIKM\u201918, October 2018, Turin, Italy \u00a9 2021 Association for Computing Machinery. ACM ISBN 978-x-xxxx-xxxx-x/YY/MM...$15.00 https://doi.org/10.1145/nnnnnnn.nnnnnnn An important case of complex questions are temporal information needs. Search often comes with explicit or implicit conditions about time [16]. Consider the two examples: Q3: \u201cWhich teams did Neymar play for before joining PSG?\u201d Q4: \u201cUnder which coaches did Neymar play in Barcelona?\u201d In Q3, no explicit date (e.g., August 2017) is mentioned, so a challenge is to detect its temporal nature. The phrase \u2018joining PSG\u2019 refers to an event (Neymar\u2019s transfer to that team). We could detect this, but have to properly disambiguate it to a normalized date. The temporal preposition \u2018before\u2019 is a strong cue as well, but words like \u2018before\u2019, \u2018after\u2019, etc. are also used in non-temporal contexts; Q2 is an example for this. Q4 does not seem to be time-dependent at all, when looking at its surface form. However, it is crucial for correct answers that only coaches are selected whose job periods at FC Barcelona overlap with that of Neymar. Here, detecting the temporal nature is a big challenge. A second challenge is how to decompose such questions and ensure that the execution contains an overlap test for the respective time periods. Approach and Contributions. 
The key idea of this paper is to judiciously decompose such temporal questions and rewrite the resulting sub-questions so that they can be separately evaluated by a standard KB-QA system. The answers for the full questions are then computed by combining and reasoning on the sub-question results. For example, Q3 should be decomposed and rewritten into Q3.1: \u201cWhich teams did Neymar play for?\u201d and Q3.2: \u201cWhen did Neymar join PSG?\u201d. For the results of Q3.1, we could then retrieve time scopes from the KB, and compare them with the date returned by Q3.2, using a BEFORE operator. Analogously, Q4 would require an OVERLAP comparison as a \ufb01nal step. With the exception of the work by [4], to which we experimentally compare our method, we are not aware of any KB-QA system for such composite questions. Our solution, called TEQUILA, is built on a rule-based framework that encompasses four stages of processing: (i) detecting temporal questions, (ii) decomposing questions and rewriting sub-questions, (iii) retrieving candidate answers for sub-questions, and (iv) temporal reasoning to combine and reconcile the results of the previous stage into \ufb01nal answers. For stage (iii), we leverage existing KB-QA systems (state-of-the-art systems QUINT [2] and AQQU [6] used in experiments), that are geared for answering simple questions. To the best of our knowledge, this is the \ufb01rst paper that presents a complete pipeline speci\ufb01c to temporal KB-QA. Novel contributions also include: (i) a method for decomposing complex questions, and (ii) the time-constraint-based reasoning for combining sub-question results into overall answers. All data and code are \fCIKM\u201918, October 2018, Turin, Italy Z. Jia et al. public at https://github.com/zhenjia2017/tequila, and a demo is available at https://tequila.mpi-inf.mpg.de/. 2 CONCEPTS In NLP, the markup language TimeML (www.timeml.org) is widely used for annotating temporal information in text documents. Our de\ufb01nition of temporalquestions is based on two of its concepts (tags for temporal expressions and temporal signals). Temporal expressions. TIMEX3 tags demarcate four types of temporal expressions. Dates and times refer to points in time of di\ufb00erent granularities (e.g., \u2018May 1, 2010\u2019 and \u20189 pm\u2019, respectively). They occur in fullyor under-speci\ufb01ed forms (e.g., \u2018May 1, 2010\u2019 vs. \u2018last year\u2019). Durations refer to intervals (e.g., \u2018two years\u2019), and sets to periodic events (e.g., \u2018every Monday\u2019). Going beyond TimeML, implicit expressions (e.g., \u2018the Champions League \ufb01nal\u2019) are used to capture events and their time scopes [14]. Expressions can be normalized into standard format (e.g., \u2018May 2\ud45b\ud451, 2016\u2019 into 2016-05-02). Temporal signals. SIGNAL tags mark textual elements that denote explicit temporal relations between two TimeML entities (i.e., events or temporal expressions), such as \u2018before\u2019 or \u2018during\u2019. We extend the TimeML de\ufb01nition to also include cues when an event is mentioned only implicitly, such as \u2018joining PSG\u2019. In addition, we consider ordinals like \u2018\ufb01rst\u2019, \u2018last\u2019, etc. These are frequent in questions when entities can be chronologically ordered, such as \u2018last\u2019 in \u201cNeymar\u2019s last club before joining PSG\u201d. Temporal questions. 
Based on these considerations, we can now de\ufb01ne a temporal question as any question that contains a temporal expression or a temporal signal, or whose answer type is temporal. Temporal relations. Allen [3] introduced 13 temporal relations between time intervals for temporal reasoning: EQUAL, BEFORE, MEETS, OVERLAPS, DURING, STARTS, FINISHES, and their inverses for all but EQUAL. However, for an input temporal question, it is not always straightforward to infer the proper relation. For example, in Q3 the relation should be BEFORE; but if we slightly vary Q3 to: Q5: \u201cWhich team did Neymar play for before joining PSG?\u201d, the singular form \u2018team\u2019 suggests that we are interested in the MEETS relation, that is, only the last team before the transfer. Frequent trigger words suggesting such relations are, for instance, the signals before, prior to (for BEFORE or MEETS), after, following (for AFTER), and during, while, when, in (for OVERLAP). 3 METHOD Given an input question, TEQUILA works in four stages: (i) detect if the question is temporal,(ii) decompose the question into simpler sub-questions with some form of rewriting, (iii) obtain candidate answers and dates for temporal constraints from a KB-QA system, and (iv) apply constraint-based reasoning on the candidates to produce \ufb01nal answers. Our method builds on ideas from the literature on question decomposition for general QA [2, 5, 19]. Standard NLP tasks like POS tagging, NER, and coreference resolution, are performed on the input question before passing it on to TEQUILA. Table 1: Decomposition and rewriting of questions. The constraint is the fragment after the SIGNAL word. wh\u2217is the question word (e.g., who), and \ud464\ud456are tokens in the question. Expected input: wh\u2217\ud4641 . . . \ud464\ud45bSIGNAL \ud464\ud45b+1 . . . \ud464\ud45d? Case 1: Constraint has both an entity and a relation Sub-question 1 pattern: wh\u2217\ud4641 . . . \ud464\ud45b? Sub-question 2 pattern: when \ud464\ud45b+1 . . . \ud464\ud45d? E.g.: \u201cwhere did neymar play before he joined barcelona?\u201d Sub-question 1: \u201cwhere did neymar play?\u201d Sub-question 2: \u201cwhen neymar joined barcelona?\u201d Case 2: Constraint has no entity but a relation Sub-question 1 pattern: wh\u2217\ud4641 . . . \ud464\ud45b? Sub-question 2 pattern: when sq1-entity \ud464\ud45b+1 . . . \ud464\ud45d? E.g.: \u201cwhere did neymar live before playing for clubs?\u201d Sub-question 1: \u201cwhere did neymar live?\u201d Sub-question 2: \u201cwhen neymar playing for clubs?\u201d Case 3: Constraint has no relation but an entity Sub-question 1 pattern: wh\u2217\ud4641 . . . \ud464\ud45b? Sub-question 2 pattern: when \ud464\ud45b+1 . . . \ud464\ud45d\ud4641 . . . \ud464\ud45b? E.g.: \u201cwho was the brazil team captain before neymar?\u201d Sub-question 1: \u201cwho was the brazil team captain?\u201d Sub-question 2: \u201cwhen neymar was the brazil team captain?\u201d Case 4: Constraint is an event name Sub-question 1 pattern: wh\u2217\ud4641 . . . \ud464\ud45b? Sub-question 2 pattern: when did \ud464\ud45b+1 . . . \ud464\ud45dhappen? 
E.g.: \u201cwhere did neymar play during south africa world cup?\u201d Sub-question 1: \u201cwhere did neymar play?\u201d Sub-question 2: \u201cwhen did south africa world cup happen?\u201d 3.1 Detecting temporal questions A question is identi\ufb01ed as temporal if it contains any of the following: (a) explicit or implicit temporal expressions (dates, times, events), (b) temporal signals (i.e., cue words for temporal relations), (c) ordinal words (e.g., \ufb01rst), (d) an indication that the answer type is temporal (e.g., the question starts with \u2018When\u2019). We use HeidelTime [22] to tag TIMEX3 expressions in questions. Named events are identi\ufb01ed using a dictionary curated from Freebase. Speci\ufb01cally, if the type of an entity is \u2018time.event\u2019, its surface forms are added to the event dictionary. SIGNAL words and ordinal words are detected using a small dictionary as per suggestions from Setzer [21], and a list of temporal prepositions. To spot questions whose answers are temporal, we use a small set of patterns like when, what date, in what year, and which century. 3.2 Decomposing and rewriting questions TEQUILA decomposes a composite temporal question into one or more non-temporalsub-questions (returning candidate answers), and one or more temporalsub-questions (returning temporal constraints). Results of sub-questions are combined by intersecting their answers. The constraints are applied to time scopes associated with \fTEQUILA: Temporal Qestion Answering over Knowledge Bases CIKM\u201918, October 2018, Turin, Italy results of the non-temporal sub-questions. For brevity, the following explanation focuses on the case with one non-temporal subquestion, and one temporal sub-question. We use a set of lexicosyntactic rules (Table 1) designed from \ufb01rst principles to decompose and rewrite a question into its components. Basic intuitions driving these rules are as follows: \u2022 The signal word separates the non-temporal and temporal subquestions, acting as a pivot for decomposition; \u2022 Each sub-question needs to have an entity and a relation (generally represented using verbs) to enable the underlying KBQA systems to handle sub-questions; \u2022 If the second sub-question lacks the entity or the relation, it is borrowed from the \ufb01rst sub-question; \u2022 KB-QA systems are robust to ungrammatical constructs, thus precluding the need for linguistically correct sub-questions. 3.3 Answering sub-questions Sub-questions are passed on to the underlying KB-QA system, which translates them into SPARQL queries and executes them on the KB. This produces a result set for each sub-question. Results from the non-temporal sub-question(s) are entities of the same type (e.g., football teams). These are candidate answers for the full question. With multiple sub-questions, the candidate sets are intersected. The temporal sub-questions, on the other hand, return temporal constraints such as dates, which act as constraints to \ufb01lter the nontemporal candidate set. Candidate answers need to be associated with time scopes, so that we can evaluate the temporal constraints. Retrieving time scopes. To obtain time scopes, we introduce additional KB lookups; details depend on the speci\ufb01cs of the underlying KB. Freebase, for example, often associates SPO triples with time scopes by means of compound value types (CVTs); other KBs may use \ud45b-tuples (\ud45b> 3) to attach spatio-temporal attributes to facts. 
For example, the Freebase predicate marriage is a CVT with attributes including marriage.spouse and marriage.date. When the predicate marriage.spouse is used to retrieve answers, the time scope is retrieved by looking up marriage.date in the KB. On the other hand, playing for a football club could be captured in a predicate like team.players without temporal information attached, and the job periods are represented as events in predicates like footballPlayer. team. joinedOnDate and footballPlayer. team. leftOnDate). In such cases, TEQUILA considers all kinds of temporal predicates for the candidate entity, and chooses one based on a similarity measure between the non-temporal predicate (team.players) and potentially relevant temporal predicates (footballPlayer. team. joinedOnDate, footballPlayer.award.date). The similarity measure is implemented by selecting tokens in predicate names (footballPlayer, team, etc.), contextualizing the tokens by computing word2vec embeddings for them, averaging per-token vectors to get a resultant vector for each predicate [24], and comparing the cosine distance between two predicate vectors. The best-matching temporal predicate is chosen for use. When time periods are needed (e.g., for a temporal constraint using OVERLAP), a pair of begin/end predicates is selected (e.g., footballPlayer. team. joinedOnDate and leftOnDate). Table 2: Temporal reasoning constraints. Relation Signal word(s) Constraint BEFORE \u2018before\u2019, \u2018prior to\u2019 \ud452\ud45b\ud451\ud44e\ud45b\ud460\u2264\ud44f\ud452\ud454\ud456\ud45b\ud450\ud45c\ud45b\ud460 AFTER \u2018after\u2019 \ud44f\ud452\ud454\ud456\ud45b\ud44e\ud45b\ud460\u2265\ud452\ud45b\ud451\ud450\ud45c\ud45b\ud460 OVERLAP \u2018during\u2019, \u2018while\u2019, \u2018when\u2019 \ud44f\ud452\ud454\ud456\ud45b\ud44e\ud45b\ud460\u2264\ud452\ud45b\ud451\ud450\ud45c\ud45b\ud460\u2264\ud452\ud45b\ud451\ud44e\ud45b\ud460 \u2018since\u2019, \u2018until\u2019, \u2018in\u2019 \ud44f\ud452\ud454\ud456\ud45b\ud44e\ud45b\ud460\u2264\ud44f\ud452\ud454\ud456\ud45b\ud450\ud45c\ud45b\ud460\u2264\ud452\ud45b\ud451\ud44e\ud45b\ud460 \u2018at the same time as\u2019 \ud44f\ud452\ud454\ud456\ud45b\ud450\ud45c\ud45b\ud460\u2264\ud44f\ud452\ud454\ud456\ud45b\ud44e\ud45b\ud460\u2264\ud452\ud45b\ud451\ud44e\ud45b\ud460\u2264\ud452\ud45b\ud451\ud450\ud45c\ud45b\ud460 3.4 Reasoning on temporal intervals For temporal sub-questions, the results are time points, time intervals, or sets of dates (e.g., a set of consecutive years during which someone played for a football team). We cast all these into intervals with start point \ud44f\ud452\ud454\ud456\ud45b\ud450\ud45c\ud45b\ud460and end point \ud452\ud45b\ud451\ud450\ud45c\ud45b\ud460. These form the temporal constraints against which we test the time scopes of the non-temporal candidate answers, also cast into intervals [\ud44f\ud452\ud454\ud456\ud45b\ud44e\ud45b\ud460,\ud452\ud45b\ud451\ud44e\ud45b\ud460]. The test itself depends on the temporal operator derived from the input question (e.g., BEFORE, OVERLAP, etc.) (Table 2). For questions with ordinal constraints (e.g., last), we sort the (possibly open) intervals to select the appropriate answer. 4 EXPERIMENTS 4.1 Setup We evaluate TEQUILA on the TempQuestions benchmark [13], which contains 1, 271 temporal questions labeled as questions with explicit, implicit, and ordinal constraints, and those with temporal answers. Questions are paired with their answers over Freebase. 
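The interval tests of Table 2 translate directly into code. The sketch below casts answers and constraints into [begin, end] intervals and applies the relation-specific comparisons; representing open ends with infinities is an assumption of this illustration, and the OVERLAP sub-cases that Table 2 distinguishes per signal word are merged into one disjunction.

```python
import math

# Signal-to-relation mapping following Table 2 (subset).
SIGNAL_TO_RELATION = {
    "before": "BEFORE", "prior to": "BEFORE",
    "after": "AFTER",
    "during": "OVERLAP", "while": "OVERLAP", "when": "OVERLAP",
    "since": "OVERLAP", "until": "OVERLAP", "in": "OVERLAP",
}

def to_interval(dates):
    """Cast a time point, interval, or set of dates into (begin, end); open ends
    are represented with infinities, which is an assumption of this sketch."""
    dates = sorted(dates)
    return (dates[0], dates[-1]) if dates else (-math.inf, math.inf)

def satisfies(relation, answer, constraint):
    """Interval tests of Table 2 on [begin_ans, end_ans] vs. [begin_cons, end_cons].
    Table 2 splits OVERLAP into per-signal variants; this sketch merges the
    'during/while/when' and 'since/until/in' conditions with a disjunction."""
    b_ans, e_ans = answer
    b_cons, e_cons = constraint
    if relation == "BEFORE":
        return e_ans <= b_cons
    if relation == "AFTER":
        return b_ans >= e_cons
    if relation == "OVERLAP":
        return (b_ans <= e_cons <= e_ans) or (b_ans <= b_cons <= e_ans)
    raise ValueError("unknown relation: " + relation)

# Example: a club spell 2013-2017 tested against a transfer date in 2017.
print(satisfies("BEFORE", to_interval([2013, 2017]), to_interval([2017])))   # True
print(satisfies("OVERLAP", to_interval([2013, 2017]), to_interval([2010])))  # False
```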
We use three state-of-the-art KB-QA systems as baselines: AQQU [6], QUINT [2] (code from authors for both), and Bao et al. [4] (detailed results from authors). The \ufb01rst two are geared for simple questions, while Bao et al. handle complex questions, including temporal ones. We use TEQUILA as a plug-in for the \ufb01rst two, and directly evaluate against the system of Bao et al. on 341 temporal questions from the ComplexQuestions test set [4]. For evaluating baselines, the full question was fed directly to the underlying system. We report precision, recall, and F1 scores of the retrieved answer sets w.r.t. the gold answer sets, and average them over all test questions. 4.2 Results and insights Results on TempQuestions and the 341 temporal questions in ComplexQuestions are shown in Table 3. AQQU + TEQUILA and QUINT + TEQUILA refer to the TEQUILA-enabled versions of the respective baseline systems. We make the following observations. TEQUILA enables KB-QA systems to answer composite questionswith temporalconditions. Overall and category-wise F1-scores show that TEQUILA-enabled systems signi\ufb01cantly outperform the baselines. Note that these systems neither have capabilities for handling compositional syntax nor speci\ufb01c support for temporal questions. Our decomposition and rewrite methods are crucial for compositionality, and constraint-based reasoning on answers is decisive for the temporal dimension. The improvement in F1-scores stems from a systematic boost in precision, across most categories. TEQUILA outperforms state-of-the-art baselines. Bao et al. [4] represents the state-of-the-art in KB-QA, with a generic \fCIKM\u201918, October 2018, Turin, Italy Z. Jia et al. Table 3: Detailed performance of TEQUILA-enabled systems on TempQuestions and ComplexQuestions. TempQuestions Aggregate results Explicit constraint Implicit constraint Temporal answer Ordinal constraint (1,271 questions) Prec Rec F1 Prec Rec F1 Prec Rec F1 Prec Rec F1 Prec Rec F1 AQQU [6] 24.6 48.0 27.2 27.6 60.7 31.1 12.9 34.9 14.5 26.1 33.5 27.4 28.4 57.4 32.7 AQQU+TEQUILA 36.0* 42.3 36.7* 43.8* 53.8 44.6* 29.1* 34.7 29.3* 27.3* 29.6 27.7* 38.0* 41.3 38.6* QUINT [2] 27.3 52.8 30.0 29.3 60.9 32.6 25.6 54.4 27.0 25.2 38.2 27.3 21.3 54.9 26.1 QUINT+TEQUILA 33.1* 44.6 34.0* 41.8* 51.3 42.2* 13.8 43.7 15.7 28.6* 34.5 29.4* 37.0* 42.2 37.7* ComplexQuestions Aggregate results Explicit constraint Implicit constraint Temporal answer Ordinal constraint (341 questions) Prec Rec F1 Prec Rec F1 Prec Rec F1 Prec Rec F1 Prec Rec F1 Bao et al. [4] 34.6 48.4 35.9 41.1 53.2 41.9 26.4 36.5 27.0 18.6 40.2 22.3 31.1 60.8 36.1 AQQU [6] 21.5 50.0 23.3 25.0 60.1 28.4 11.2 31.2 11.4 19.6 35.7 19.2 22.2 54.9 25.3 AQQU+TEQUILA 36.2* 45.9 37.5* 41.2* 54.7 43.5* 27.5* 32.6 27.0* 29.5* 32.1 29.9* 40.2* 45.1 40.8* QUINT [2] 22.0 50.3 24.5 24.7 54.7 27.5 18.8 47.9 19.0 16.6 37.5 20.7 20.9 51.3 26.0 QUINT+TEQUILA 29.6* 44.9 31.1* 34.6* 47.3 36.3* 12.3 42.1 13.9 33.4* 37.5 33.9* 44.9* 51.6* 45.8* Aggregate results are averaged over the four categories. The highest value in a column for each dataset is in bold. An asterisk (*) indicates statistical signi\ufb01cance of TEQUILA-enabled systems over their standalone counterparts, under the 2-tailed paired \ud461-test at \ud45d< 0.05 level. mechanism for handling constraints in questions. TEQUILA-enabled systems outperformBao et al. on the temporal slice of ComplexQuestions, showing that a tailored method for temporal information needs is worthwhile. 
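The reported numbers are precision, recall, and F1 of the retrieved answer sets against the gold answer sets, averaged over all test questions. A straightforward sketch of that evaluation loop follows; the data layout and variable names are ours, not those of the benchmark code.

```python
def prf1(retrieved, gold):
    """Set-based precision, recall and F1 for a single question."""
    retrieved, gold = set(retrieved), set(gold)
    if not retrieved or not gold:
        return 0.0, 0.0, 0.0
    tp = len(retrieved & gold)
    prec, rec = tp / len(retrieved), tp / len(gold)
    f1 = 2 * prec * rec / (prec + rec) if prec + rec > 0 else 0.0
    return prec, rec, f1

def macro_average(predictions, gold_answers):
    """Average P/R/F1 over all test questions (keyed by question id)."""
    scores = [prf1(predictions.get(qid, []), gold) for qid, gold in gold_answers.items()]
    n = len(scores)
    return tuple(sum(s[i] for s in scores) / n for i in range(3))

gold = {"q1": {"FC Barcelona"}, "q2": {"Santos FC"}}
pred = {"q1": {"FC Barcelona", "PSG"}, "q2": {"Santos FC"}}
print(macro_average(pred, gold))  # (0.75, 1.0, 0.833...)
```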
TEQUILA enabled QUINT and AQQU to answer questions like: \u201cwho is the \ufb01rst husband of julia roberts?\u201d, \u201cwhen did francesco sabatini start working on the puerta de san vicente?\u201d, and \u201cwho was governor of oregon when shanghai noon was released?\u201d. Error analysis. Analyzing cases when TEQUILA fails yields insights towards future work: (i) Decomposition and rewriting were incorrect (for example, in \u201cwhere did the pilgrims come from before landing in america?\u201d, \u2018landing\u2019 is incorrectly labeled as a noun, triggering case 3 instead of case 1 in Table 1); (ii) The correct temporal predicate was not found due to limitations of the similarity function; and (iii) The temporal constraint or the time scope to use during reasoning was wrongly identi\ufb01ed. 5 RELATED WORK QA has a long tradition in IR and NLP, including benchmarking tasks in TREC, CLEF, and SemEval. This has predominantly focused on retrieving answers from textual sources. The recent TREC CAR (complex answer retrieval) resource [10], explores multi-faceted passage answers, but information needs are still simple. In IBM Watson [11], structured data played a role, but text was the main source for answers. Question decomposition was leveraged, for example, in [11, 19, 28] for QA over text. However, re-composition and reasoning over answers works very di\ufb00erently for textual sources [19], and are not directly applicable for KB-QA. Compositional semantics of natural language sentences has been addressed by [15] from a general linguistic perspective. Although applicable to QA, existing systems support only speci\ufb01c cases of composite questions. KB-QA is a more recent trend, starting with [7, 8, 12, 23, 26]. Most methods have focused on simple questions, whose SPARQL translations contain only a single variable (and a few triple patterns for a single set of qualifying entities). For popular benchmarks like WebQuestions [7], the best performing systems use templates and grammars [1, 2, 6, 18, 28], leverage additional text [20, 25], or learn end-to-end with extensive training data [25, 27]. These methods do not cope well with complex questions. Bao et al. [4] combined rules with deep learning to address a variety of complex questions. 6" + } + ], + "Rishiraj Saha Roy": [ + { + "url": "http://arxiv.org/abs/1111.1497v4", + "title": "An IR-based Evaluation Framework for Web Search Query Segmentation", + "abstract": "This paper presents the first evaluation framework for Web search query\nsegmentation based directly on IR performance. In the past, segmentation\nstrategies were mainly validated against manual annotations. Our work shows\nthat the goodness of a segmentation algorithm as judged through evaluation\nagainst a handful of human annotated segmentations hardly reflects its\neffectiveness in an IR-based setup. In fact, state-of the-art algorithms are\nshown to perform as good as, and sometimes even better than human annotations\n-- a fact masked by previous validations. The proposed framework also provides\nus an objective understanding of the gap between the present best and the best\npossible segmentation algorithm. We draw these conclusions based on an\nextensive evaluation of six segmentation strategies, including three most\nrecent algorithms, vis-a-vis segmentations from three human annotators. 
The\nevaluation framework also gives insights about which segments should be\nnecessarily detected by an algorithm for achieving the best retrieval results.\nThe meticulously constructed dataset used in our experiments has been made\npublic for use by the research community.", + "authors": "Rishiraj Saha Roy, Niloy Ganguly, Monojit Choudhury, Srivatsan Laxman", + "published": "2011-11-07", + "updated": "2012-09-18", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "H.3.3" + ], + "main_content": "INTRODUCTION Query segmentation is the process of dividing a query into individual semantic units [3]. For example, the query Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for pro\ufb01t or commercial advantage and that copies bear this notice and the full citation on the \ufb01rst page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior speci\ufb01c permission and/or a fee. Copyright 20XX ACM X-XXXXX-XX-X/XX/XX ...$10.00. singular value decomposition online demo can be broken into singular value decomposition and online demo. All documents containing the individual terms singular, value and decomposition are not necessarily relevant for this query. Rather, one can almost always expect to \ufb01nd the segment singular value decomposition in the relevant documents. In contrast, although online demo is a segment, \ufb01nding the phrase or some variant of it may not a\ufb00ect the relevance of the document. Hence, the potential of query segmentation goes beyond the detection of multiword named entities. Rather, segmentation leads to a better understanding of the query and is crucial to the search engine for improving Information Retrieval (IR) performance. There is broad consensus in the literature that query segmentation can lead to better retrieval performance [2, 3, 7, 9, 13]. However, most automatic segmentation techniques [3, 4, 7, 9, 13, 15] have so far been evaluated only against a small set of 500 queries segmented by human annotators. Such an approach implicitly assumes that a segmentation technique that scores better against human annotations will also automatically lead to better IR performance. We challenge this approach on multiple counts. First, there has been no systematic study that establishes the quality of human segmentations in the context of IR performance. Second, grammatical structure in queries is not as well-understood as natural language sentences where human annotations have proved useful for training and testing of various Natural Language Processing (NLP) tools. This leads to considerable inter-annotator disagreement when humans segment search queries. Third, good quality human annotations for segmentation can be di\ufb03cult and expensive to obtain for a large set of test queries. Thus, there is a need for a more direct IR-based evaluation framework for assessing query segmentation algorithms. This is the central motivation of the present work. We propose an IR-based evaluation framework for query segmentation that requires only human relevance judgments (RJs) for query-URL pairs for computing the performance of a segmentation algorithm \u2013 such relevance judgments are anyway needed for training and testing of any IR engine. 
A fundamental problem in designing an IR-based evaluation framework for segmentation algorithms is to decouple the effect of segmentation accuracy from the way segmentation is used for IR. This is because a query segmentation algorithm breaks the input query into, typically, a non-overlapping sequence of words (segments), but it does not prescribe how these segments should be used during the retrieval and ranking of the documents for that query. We resolve this problem \fby providing a formal model of query expansion for a given segmentation; the various queries obtained can then be issued to any standard IR engine, which we assume to be a black box. We conduct extensive experiments within our framework to understand the performance of several state-of-the-art query segmentation schemes [7, 9, 11] and segmentations by three human annotators. Our experiments reveal several interesting facts such as: (a) Segmentation is actively useful in improving IR performance, even though submitting all segments (detected by an algorithm) in double quotes to the IR engine degrades performance; (b) All segmentation strategies, including human segmentations, are yet to reach the best achievable limits in IR performance; (c) In terms of IR metrics, some of the segmentation algorithms perform as good as the best human annotator and better than the average/worst human annotator; (d) Current match-based metrics for comparing query segmentation against human annotations are only weakly correlated with the IR-based metrics, and cannot be used as a proxy for IR performance; and (e) There is scope for improvement for the matching metrics that compare segmentations against human annotations by di\ufb00erentially penalizing the straddling, splitting and joining of reference segments. In short, the proposed evaluation framework not only provides a formal way to compare segmentation algorithms and estimate their e\ufb00ectiveness in IR, but also helps us to understand the gaps in human annotation-based evaluation. The framework also provides valuable insights regarding the segmentations that can be used for improvement of the algorithms. The rest of the paper is organized as follows. Sec. 2 introduces our evaluation framework and its design philosophy. Sec. 3 presents the dataset and the segmentation algorithms compared on our framework. Sec. 4 discusses the experimental results and insights derived from them. In Sec. 5, we discuss a few related issues, and the next section (Sec. 6) gives a brief background of past approaches to evaluate query segmentation and their limitations. We conclude by summarizing our contributions and suggesting future work in Sec. 7. 2. THE EVALUATION FRAMEWORK In this section we present a framework for the evaluation of query segmentation algorithms based on IR performance. Let q denote a search query and let sq = \u27e8sq 1, . . . , sq n\u27e9denote a segmentation of q such that a simple concatenation of the n segments equals q, i.e., we have q = (sq 1 +\u00b7 \u00b7 \u00b7+sq n), where + represents the concatenation operator. We are given a segmentation algorithm A and the task is to evaluate its performance. We require the following resources: 1. A test set Q of unquoted search queries. 2. A set U of documents (or URLs) out of which search results will be retrieved. 3. Relevance judgments r(q, u) for query-URL pairs (q, u) \u2208Q \u00d7 U. The set of all relevance judgments are collectively denoted by R. 4. An IR engine that supports quoted queries as input. 
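In this notation a segmentation is simply an ordered list of segments whose concatenation restores the query. The tiny sketch below shows that representation and the structural sanity check it implies; the function name is ours and the check says nothing about segmentation quality.

```python
def is_valid_segmentation(query, segments):
    """A segmentation s^q = <s_1, ..., s_n> is valid iff concatenating the
    segments (with single spaces) reproduces the original query."""
    return " ".join(segments) == query

query = "singular value decomposition online demo"
print(is_valid_segmentation(query, ["singular value decomposition", "online demo"]))      # True
print(is_valid_segmentation(query, ["singular", "value decomposition online", "demo"]))   # also True: validity is structural, not a quality judgment
```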
The resources needed by our evaluation framework are essentially the same as those needed for the training and testing of a standard IR engine, namely, queries, a document corpus and set of relevance judgments. Akin to the Table 1: Example of generation of quoted versions for a segmented query. Segmented query Quoted versions we are the people song lyrics we are the people \"song lyrics\" we are \"the people\" song lyrics we are | the people | song lyrics we are \"the people\" \"song lyrics\" \"we are\" the people song lyrics \"we are\" the people \"song lyrics\" \"we are\" \"the people\" song lyrics \"we are\" \"the people\" \"song lyrics\" training examples required for an IR engine, we only require relevance judgments for a small and appropriate subset of Q\u00d7U (each query needs only the documents in its own pool to be judged) [14]. It is useful to separate the evaluation of segmentation performance, from the question of how to best exploit the segments to retrieve the most relevant documents. From an IR perspective, a natural interpretation of a segment could be that it consists of words that must appear together, in the same order, in documents where the segment is deemed to match [3]. This can be referred to as ordered contiguity matching. While this can be easily enforced in modern IR engines through use of double quotes around segments, we observe that not all segments must be used this way (see [10] for related ideas and experiments in a di\ufb00erent context). Some segments may admit more general matching criteria, such as unordered or intruded contiguity (e.g., a segment a b may be allowed to match b a or a c b in the document). The case of unordered intruded matching may be restricted under linguistic dependence assumptions (e.g., a b can match a of b or b in a). Finally, some segments may even play non-matching roles (e.g., when the segment speci\ufb01es user intent, like how to and where is). Thus, there may be several di\ufb00erent ways to exploit the segments discovered by a segmentation algorithm. Even within the same query, di\ufb00erent segments may need to be treated di\ufb00erently. For instance, in the query cannot view | word files | windows 7, the \ufb01rst one might be matched using intruded ordered occurrence (cannot properly view), the second segment may be matched under a linguistic dependency model (files in word) and the last one under ordered contiguity. Intruded contiguity and linguistic dependency may be dif\ufb01cult to implement for the broad class of general Web search queries. Identifying how the various segments of a query should be ideally matched in the document is quite a challenging and unsolved research problem. On the other hand, an exhaustive expansion scheme, where every segment is expanded in every possible way, is computationally expensive and might introduce noise. Moreover, current commercial IR engines do not support any syntax to specify linguistic dependence or intruded or unordered occurrence based matching. Hence, in order to keep the evaluation framework in line with the current IR systems, we focus on ordered contiguity matching which is easily implemented through the use of double quotes around segments. However, we note that the philosophy of the framework does not change with increased sophistication in the retrieval system \u2013 only the expansion sets for the queries have to be appropriately modi\ufb01ed. 
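Ordered contiguity matching, as requested here through double quotes, simply asks whether the segment's words occur consecutively and in order in the document. A minimal token-level sketch is given below; whitespace tokenization and lowercasing are simplifying assumptions.

```python
def matches_ordered_contiguous(segment, document):
    """True iff the segment occurs as a contiguous, in-order token sequence in
    the document -- the behaviour that double quotes request from the IR engine."""
    seg_tokens = segment.lower().split()
    doc_tokens = document.lower().split()
    n, m = len(doc_tokens), len(seg_tokens)
    return any(doc_tokens[i:i + m] == seg_tokens for i in range(n - m + 1))

doc = "An online demo of singular value decomposition is available here"
print(matches_ordered_contiguous("singular value decomposition", doc))  # True
print(matches_ordered_contiguous("decomposition demo", doc))            # False
```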
We propose an evaluation framework for segmentation algorithms that generates all possible quoted versions of a \fsegmented query (see Table 1) and submits each quoted version to the IR engine. The corresponding ranked lists of retrieved documents are then assessed against relevance judgments available for the query-URL pairs. The IR quality of the best-performing quoted version is used to measure performance of the segmentation algorithm. We now formally specify our evaluation framework that computes what we call a Quoted Version Retrieval Score (QVRS) for the segmentation algorithm given the test set Q of queries, the document pool U and the relevance judgments R for queryURL pairs. Quoted query version generation Let the segmentation output by algorithm A be denoted by A(q) = sq = \u27e8sq 1, . . . , sq n\u27e9. We generate all possible quoted versions of the query q based on the segments in A(q). In particular, we de\ufb01ne A0(q) = (sq 1 + \u00b7 \u00b7 \u00b7 + sq n) with no quotes on any of the segments, A1(q) = (sq 1 + \u00b7 \u00b7 \u00b7 + \u201csq n\u201d) with quotes only around the last segment sq n, and so on. Since there are n segments in A(q), this process will generate 2n versions of the query, Ai(q), i = 0, . . . , 2n \u22121. We note that if bi = (bi1, . . . , bin) be the n-bit binary representation of i, then Ai(q) will apply quotes to the jth segment sq j i\ufb00bij = 1. We deduplicate this set, because {Ai(q) : i = 0, . . . , 2n \u22121} can contain multiple versions that essentially represent the same quoted query version (when single words are inside quotes). For example, the query versions \"harry potter\" \"game\" and \"harry potter\" game are equivalent in terms of the input semantics of an IR engine. The resulting set of unique quoted query versions is denoted QA(q). Document retrieval using IR engine For each Ai(q) \u2208QA(q) we use the IR engine to retrieve a ranked list Oi of documents out of the document pool U that matched the given quoted query version Ai(q). The number of documents retrieved in each case depends on the IR metrics we will want to use to assess the quality of retrieval. For example, to compute an IR metric at the top k positions, we would require that at least k documents be retrieved from the pool. Measuring retrieval against relevance judgments Since we have relevance judgments (R) for query-URL pairs in Q\u00d7U, we can now compute IR metrics such as normalized Discounted Cumulative Gain (nDCG), Mean Average Precision (MAP) or Mean Reciprocal Rank (MRR) to measure the quality of the retrieved ranked list Oi for query q. We use @k variants of each of these measures which are de\ufb01ned to be the usual metrics computed after examining only the top-k positions. For example, we can compute nDCG@k for query q and retrieved document-list Oi using the following formula: nDCG@k(q, Oi , R) = r(q, O1 i ) + k X j=2 r(q, Oj i ) log2 j (1) where Oj i , j = 1, . . . , k, denotes the jth document in the ranked-list Oi and r(q, Oj i ) denotes the associated relevance judgment from R. Oracle score using best quoted query version Di\ufb00erent quoted query versions Ai(q) (all derived from the same basic segmentation A(q) output by the segmentation algorithm A) retrieve di\ufb00erent ranked lists of documents Oi. As discussed earlier, automatic apriori selection of a good (or the best) quoted query version is a di\ufb03cult problem. 
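The 2^n quoted versions of a segmentation can be generated with a binary mask over segments and then deduplicated, since quoting a single-word segment is equivalent to leaving it unquoted (as in Table 1). A sketch of that generation step, assuming quoting is expressed with literal double quotes:

```python
def quoted_versions(segments):
    """Generate the deduplicated set Q_A(q) of quoted versions for a segmentation
    <s_1, ..., s_n>; bit j of the counter i decides whether s_j is quoted.
    Quotes around single-word segments are ignored, so such versions collapse."""
    n = len(segments)
    versions = set()
    for i in range(2 ** n):
        parts = []
        for j, seg in enumerate(segments):
            quote = (i >> j) & 1 and len(seg.split()) > 1
            parts.append('"%s"' % seg if quote else seg)
        versions.add(" ".join(parts))
    return sorted(versions)

for v in quoted_versions(["we are", "the people", "song lyrics"]):
    print(v)
# 8 unique versions for this three-segment example, mirroring Table 1
```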
While di\ufb00erent strategies may be used to select a quoted query version, we would like our evaluation of the segmentation algorithm A to be agnostic of the version-selection step. To this end, we select the best-performing Ai(q) from the entire set QA(q) of query versions generated and use it to de\ufb01ne our oracle score for q and A under the chosen IR metric [8]. For example, the oracle score for nDCG@k is as de\ufb01ned below: \u2126nDCG@k(q, A) = max Ai(q)\u2208QA(q) nDCG@k(q, Oi , R) (2) where Oi denotes the ranked list of documents retrieved by the IR engine when presented with Ai(q) as the input. We note that QA(q) always contains the original unsegmented version of the query. We refer to such an \u2126\u00b7(\u00b7, \u00b7) as the Oracle. This forms the basis of our evaluation framework. We note that there can also be other ways to de\ufb01ne this oracle score. For example, instead of seeking the best IR performance possible across the di\ufb00erent query versions, we could also seek the minimum performance achievable by A irrespective of what version-selection strategy is adopted. This would give us a lower bound on the performance of the segmentation algorithm. However, the main drawback of this approach is that the minimum performance is almost always achieved by the fully quoted version (where every segment is in double quotes) (see Table 7). Such a lower bound would not be useful in assessing the comparative performance of segmentation algorithms. QVRS computation Once the oracle scores are obtained for all queries in the test set Q, we can compute the average oracle score achieved by A. We refer to this as the Quoted Version Retrieval Score (QVRS) of A with respect to test set Q, document pool U and relevance judgments R. For example, using the oracle with the nDCG@k metric, we can de\ufb01ne the QVRS score as follows: QV RS(Q, A, nDCG@k) = 1 |Q| X q\u2208Q \u2126nDCG@k(q, A) (3) Similar QVRS scores can be computed using other IR metrics such as MAP@k and MRR@k. In our experiments section, we report results using nDCG@k, MAP@k, and MRR@k, for k = 5 and k = 10 as most Web users examine only the \ufb01rst \ufb01ve or ten search results. 3. DATASET AND ALGORITHMS In this section, we describe the dataset used and brie\ufb02y introduce the algorithms compared on our framework. 3.1 Test set of queries (Q) We selected a random subset of 500 queries from a slice of the query logs of Bing Australia1 containing 16.7 million queries issued over a period of one month (May 2010). We used the following criteria to \ufb01lter the logs before extracting a random sample: (1) Exclude queries with non-ASCII characters, (2) Exclude queries that occurred fewer than 5 times 1http://www.bing.com/?cc=au \fin the logs (rarer queries often contained spelling errors), and (3) Restrict query lengths to between \ufb01ve and eight words. Shorter queries rarely contain multiple multiword segments, and when they do, they are mostly named entities that can be easily detected using dictionaries. Moreover, traditional search engines usually give satisfactory results for short queries. On the other hand, queries longer than eight words (only 3.24% of all queries in our log) are usually error messages, complete NL sentences or song lyrics, that need to be addressed separately. We denote this set of 500 queries by Q, the test set of unsegmented queries needed for all our evaluation experiments. The average length of queries in Q (our dataset) is 5.29 words. 
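Given per-version IR scores, the oracle of Eq. (2) is a maximum over QA(q) and QVRS in Eq. (3) is its mean over the test set. A compact sketch follows, treating the scoring function (e.g., nDCG@k against the relevance judgments) as a black box.

```python
def oracle_score(query_versions, score_fn):
    """Omega(q, A): best achievable IR score over all quoted versions of q (Eq. 2)."""
    return max(score_fn(v) for v in query_versions)

def qvrs(test_set, segmenter, score_fn):
    """QVRS(Q, A, metric): oracle score averaged over the test set (Eq. 3).

    test_set: iterable of queries; segmenter(q) returns the deduplicated set of
    quoted versions Q_A(q); score_fn(version) returns, e.g., nDCG@k for the
    ranked list retrieved for that version, judged against R.
    """
    scores = [oracle_score(segmenter(q), score_fn) for q in test_set]
    return sum(scores) / len(scores)

# Toy usage with hypothetical per-version scores:
fake_scores = {"a b c": 0.4, '"a b" c': 0.7, 'a "b c"': 0.6}
print(oracle_score(fake_scores.keys(), fake_scores.get))  # 0.7
```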
The average query length was 4.31 words in the Bergsma and Wang 2007 Corpus2 (henceforth, BWC07) [3]. Each of these 500 queries were independently segmented by three human annotators (who issue around 20-30 search queries per day) who were asked to mark a contiguous chunk of words in a query as a segment if they thought that these words together formed a coherent semantic unit. The annotators were free to refer to other resources and Web search engines during the annotation process, especially for understanding the query and its possible context(s). We shall refer to the three sets of annotations (and also the corresponding annotators) as HA, HB and HC. It is important to mention that the queries in Q have some amount of word level overlap, even though all the queries have very distinct information needs. Thus, a document retrieved from the pool might exhibit good term level match for more than one query in Q. This makes our corpus an interesting testbed for experimenting with di\ufb00erent retrieval systems. There are existing datasets, including BWC07, that could have been used for this study. However, refer to Sec. 5.1 for an account of why building this new dataset was crucial for our research. 3.2 Document pool (U) and RJs (R) Each query in Q was segmented using all the nine segmentation strategies considered in our study (six algorithms and three humans). For every segmentation, all possible quoted versions were generated (total 4, 746) and then submitted to the Bing API3 and the top ten documents were retrieved. We then deduplicated these URLs to obtain 14, 171 unique URLs, forming U. On an average, adding the 9th strategy to a group of the remaining eight resulted in about one new quoted version for every two queries. These new versions may or may not introduce new documents to the pool. We observed that for 71.4% of the queries there is less than 50% overlap between the top ten URLs retrieved for the di\ufb00erent quoted versions. This indicates that di\ufb00erent ways of quoting the segments in a query does make a di\ufb00erence in the search results. By varying the pooling depth (ten in our case), one can roughly control the number of relevant and non-relevant documents entering the collection. For each query-URL pair, where the URL has been retrieved for at least one of the quoted versions of the query (approx. 28 per query), we obtained three independent sets of relevance judgments from human users. These users were di\ufb00erent from annotators HA, HB and HC who marked the segmentations, but having similar familiarity with search systems. For each query, the corresponding set of URLs was 2http://bit.ly/xoyT2c 3http://msdn.microsoft.com/en-us/library/dd251056.aspx Table 2: Segmentation algorithms compared on our framework. Algorithm Training data Li et al. [9] Click data, Web n-gram probabilities Hagen et al. [7] Web n-gram frequencies, Wikipedia titles Mishra et al. [11] Query logs [11] + Wiki Query logs, Wikipedia titles PMI-W [7] Web n-gram probabilities (used as baseline) PMI-Q [11] Query logs (used as baseline) shown to the users after deduplication and randomization (to prevent position bias for top results), and asked to mark whether the URL was irrelevant (score = 0), partially relevant (score = 1) or highly relevant (score = 2) to the query. We then computed the average rating for each query-URL pair (the entire set forming R), which has been used for subsequent nDCG, MAP and MRR computations. Please refer to Table 8 in Sec. 
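The pool U and the judgments R are built by collecting the top-10 results of every quoted version, deduplicating URLs, and averaging the three 0/1/2 ratings per query-URL pair. A schematic of that construction is sketched below; the retrieval call is a placeholder and the data layout is ours.

```python
def build_pool(queries_to_versions, retrieve_top_k, k=10):
    """Union of the top-k URLs over all quoted versions of all queries, deduplicated."""
    pool = set()
    for versions in queries_to_versions.values():
        for v in versions:
            pool.update(retrieve_top_k(v, k))   # placeholder for the search API call
    return pool

def average_ratings(ratings_per_annotator):
    """ratings_per_annotator: {(query, url): [r1, r2, r3]} with r in {0, 1, 2};
    returns the averaged rating used for the nDCG/MAP/MRR computations."""
    return {pair: sum(rs) / len(rs) for pair, rs in ratings_per_annotator.items()}

ratings = {("q1", "http://example.org/a"): [2, 2, 1], ("q1", "http://example.org/b"): [0, 1, 0]}
print(average_ratings(ratings))  # {('q1', '...a'): 1.666..., ('q1', '...b'): 0.333...}
```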
5.3 for inter-annotator agreement \ufb01gures and other related discussions. 3.3 Segmentation algorithms Table 2 lists the six segmentation algorithms that have been studied in this work. Li et al. [9] use the expectation maximization algorithm to arrive at the most probable segmentation, while Hagen et al. [7] show a simple frequencybased method produces a performance comparable to the state-of-the-art. The technique in Mishra et al. [11] uses only query logs for segmenting queries. In our experiments, we observed that the performance of Mishra et al. [11] can be improved if we used Wikipedia titles. We refer to this as \u201c[11] + Wiki\u201d in our experiments (see Appendix A for details). The Point-wise Mutual Information (PMI)-based algorithms are used as baselines. The thresholds for PMI-W and PMI-Q were chosen to be 8.141 and 0.156 respectively, that maximized the Seg-F (see Sec. 4.2) on our development set. 3.4 Public release of data The test set of search queries along with their manual and some of the algorithmic segmentations, the theoretical best segmentation output that can serve as an evaluation benchmark (BQVBF in Sec. 4.1), and the list of URLs whose contents serve as our document corpus is available for public use4. The relevance judgments for the query-URL pairs have also been made public which will enable the community to use this dataset for evaluation of any new segmentation algorithm. 4. EXPERIMENTS AND OBSERVATIONS In this section we present experiments, results and the key inferences made from them. 4.1 IR Experiments For the retrieval-based evaluation experiments, we use the Lucene5 text retrieval system, which is publicly available as a code library. In its default con\ufb01guration, Lucene does not perform any automatic query segmentation, which is very important for examining the e\ufb00ectiveness of segmentation algorithms in an IR-based scheme. Double quotes 4http://cse.iitkgp.ac.in/resgrp/cnerg/qa/querysegmentation.html 5http://lucene.apache.org/java/docs/index.html \fTable 3: Results of IR-based evaluation of segmentation algorithms using Lucene (mean oracle scores). Metric Unseg. [9] [7] [11] [11] + PMI-W PMI-Q HA HB HC BQVBF query Wiki nDCG@5 0.688 0.752* 0.763* 0.745 0.767* 0.691 0.766* 0.770 0.768 0.759 0.825 nDCG@10 0.701 0.756* 0.767* 0.751 0.768* 0.704 0.767* 0.770 0.768 0.763 0.832 MAP@5 0.882 0.930* 0.942* 0.930* 0.945* 0.884 0.932* 0.944 0.942 0.936 0.958 MAP@10 0.865 0.910* 0.921* 0.910* 0.923* 0.867 0.912* 0.923 0.921 0.916 0.944 MRR@5 0.538 0.632* 0.649* 0.609 0.650* 0.543 0.648* 0.656 0.648 0.632 0.711 MRR@10 0.549 0.640* 0.658* 0.619 0.658* 0.555 0.656* 0.665 0.656 0.640 0.717 The highest value in a row (excluding the BQVBF column) and those with no statistically signi\ufb01cant di\ufb00erence with the highest value are marked in boldface. The values for algorithms that perform better than or have no statistically signi\ufb01cant di\ufb00erence with the minimum of the human segmentations are marked with *. The paired t-test was performed and the null hypothesis was rejected if the p-value was less than 0.05. Table 4: Matching metrics for di\ufb00erent segmentation algorithms and human annotations with BQVBF as reference. Metric Unseg. 
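The PMI baselines place a segment boundary between adjacent words whenever their association falls below a tuned threshold. The sketch below illustrates that idea only: the exact PMI formulation and the count sources behind PMI-W and PMI-Q are not given here, so the probability estimates and the helper names are assumptions.

```python
import math

def pmi(w1, w2, unigram_prob, bigram_prob):
    """Pointwise mutual information of an adjacent word pair; the probability
    estimates (from Web n-grams or query logs) are assumed to be given."""
    p1, p2 = unigram_prob[w1], unigram_prob[w2]
    p12 = bigram_prob.get((w1, w2), 1e-12)
    return math.log2(p12 / (p1 * p2))

def pmi_segment(query, unigram_prob, bigram_prob, threshold):
    """Keep adjacent words in the same segment iff their PMI exceeds the threshold."""
    words = query.split()
    segments, current = [], [words[0]]
    for w_prev, w in zip(words, words[1:]):
        if pmi(w_prev, w, unigram_prob, bigram_prob) > threshold:
            current.append(w)
        else:
            segments.append(" ".join(current))
            current = [w]
    segments.append(" ".join(current))
    return segments

# Toy probabilities (illustrative only):
uni = {"harry": 1e-4, "potter": 1e-4, "game": 1e-3}
bi = {("harry", "potter"): 9e-5, ("potter", "game"): 1e-7}
print(pmi_segment("harry potter game", uni, bi, threshold=0.156))
# ['harry potter', 'game']
```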
[9] [7] [11] [11] + PMI-W PMI-Q HA HB HC BQVBF query Wiki Qry-Acc 0.044 0.056 0.082* 0.058 0.094* 0.046 0.104* 0.086 0.074 0.064 1.000 Seg-Prec 0.226* 0.176* 0.189* 0.206* 0.203* 0.229* 0.218* 0.176 0.166 0.178 1.000 Seg-Rec 0.325* 0.166* 0.162* 0.210* 0.174* 0.323* 0.196* 0.144 0.133 0.154 1.000 Seg-F 0.267* 0.171* 0.174* 0.208* 0.187* 0.268* 0.206* 0.158 0.148 0.165 1.000 Seg-Acc 0.470 0.624 0.661* 0.601 0.667* 0.474 0.660* 0.675 0.675 0.663 1.000 The highest value in a row (excluding the BQVBF column) and those with no statistically signi\ufb01cant di\ufb00erence with the highest value are marked in boldface. The values for algorithms that perform better than or have no statistically signi\ufb01cant di\ufb00erence with the minimum of the human segmentations are marked with *. The paired t-test was performed and the null hypothesis was rejected if the p-value was less than 0.05. can be used in a query to force Lucene to match the quoted phrase (in Lucene terms) exactly in the documents. Starting with the segmentations output by each of the six algorithms as well as the three human annotations, we generated all possible quoted query versions, which resulted in a total of 4, 746 versions for the 500 queries. In the notation of Sec. 2, this corresponds to generating QA(q) for each segmentation method A (including one for each human segmentation) and for every query q \u2208Q. These quoted versions were then passed through Lucene to retrieve documents from the pool. For each segmentation scheme, we then use the oracle described in Sec. 2 to obtain the query version yielding the best result (as determined by the IR metrics \u2013 nDCG, MAP and MRR computed according to the human relevance judgments). These oracle scores are then averaged over the query set to give us the QVRS measures. The results are summarized in Table 3. Di\ufb00erent rows represent the di\ufb00erent IR metrics that were used and columns correspond to di\ufb00erent segmentation strategies. The second column (marked \u201cUnseg. Query\u201d) refers to the original unsegmented query. This can be assumed to be generated by a trivial segmentation strategy where each word is always a separate segment. Columns 3-8 denote the six di\ufb00erent segmentation algorithms and 9-11 (marked HA, HB and HC) represent the human segmentations. The last column represents the performance of the best quoted versions (denoted by BQVBF in table) of the queries which are computed by brute force, i.e. an exhaustive search over all possible ways of quoting the parts of a query (2l\u22121 possible quoted versions for an l-word query) irrespective of any segmentation algorithm. The results are reported for two sizes of retrieved URL lists (k), namely \ufb01ve and ten. Since we needed to convert our graded relevance judgments to binary values for computing MAP@k, URLs with ratings of 1 and 2 were considered as relevant (responsible for the generally high values) and those with 0 as irrelevant. For MRR, only URLs with ratings of 2 were considered as relevant. The \ufb01rst observation we make from the results is that human as well as all algorithmic segmentation schemes consistently outperform unsegmented queries for all IR metrics. Second, we observe that the performance of some segmentation algorithms are comparable and sometime even marginally better than some of the human annotators. Finally, we observe that there is considerable scope for improving IR performance through better segmentation (all values less than BQVBF ). 
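BQVBF searches over every way of quoting the parts of a query: for an l-word query there are 2^(l-1) boundary patterns over the word gaps, each inducing one candidate quoted version. A sketch of that enumeration is shown below; how each candidate is scored is left to the IR engine and the relevance judgments.

```python
def all_quoted_versions(query):
    """Enumerate all 2^(l-1) quoted versions of an l-word query by choosing, for
    each of the l-1 word gaps, whether to place a segment boundary there; every
    resulting multi-word segment is wrapped in double quotes."""
    words = query.split()
    l = len(words)
    versions = []
    for mask in range(2 ** (l - 1)):
        segments, current = [], [words[0]]
        for gap in range(l - 1):
            if (mask >> gap) & 1:          # boundary between word[gap] and word[gap+1]
                segments.append(current)
                current = [words[gap + 1]]
            else:
                current.append(words[gap + 1])
        segments.append(current)
        versions.append(" ".join('"%s"' % " ".join(s) if len(s) > 1 else s[0] for s in segments))
    return versions

print(all_quoted_versions("the looney toons show"))  # 8 candidates for a 4-word query
```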
The inferences from these observations are stated later in this section. 4.2 Performance under traditional matching metrics In the next set of experiments we study the utility of traditional matching metrics that are used to evaluate query segmentation algorithms against a gold standard of human segmented queries (henceforth referred to as the reference segmentation). These metrics are listed below [7]: 1. Query accuracy (Qry-Acc): The fraction of queries where the output matches exactly with the reference segmentation. 2. Segment precision (Seg-Prec): The ratio of the number of segments that overlap in the output and reference segmentations to the number of output segments, averaged across all queries in the test set. \fTable 5: Performance of PMI-Q and [9] with respect to matching (mean of comparisons with HA, HB and HC as references) and IR metrics. Metric nDCG@10 MAP@10 MRR@10 Qry-Acc Seg-Prec Seg-Rec Seg-F Seg-Acc PMI-Q 0.767 0.912 0.656 0.341 0.448 0.487 0.467 0.810 [9] 0.756 0.910 0.640 0.375 0.524 0.588 0.554 0.810 The highest values in a column are marked in boldface. 3. Segment recall (Seg-Rec): The ratio of the number of segments that overlap in the output and reference segmentations to the number of reference segments, averaged across all queries in the test set. 4. Segment F-score (Seg-F): The harmonic mean of Seg-Prec and Seg-Rec. 5. Segmentation accuracy (Seg-Acc): The ratio of correctly predicted boundaries and non-boundaries in the output segmentation with respect to the reference, averaged across all queries in the test set. We computed the matching metrics for various segmentation algorithms against HA, HB and HC. According to these metrics, \u201cMishra et al. [11] + Wiki\u201d turns out to be the best algorithm which agrees with the results of IR evaluation. However, the average Kendall-Tau rank correlation coe\ufb03cient6 between the ranks of the strategies as obtained from the IR metrics (Table 3) and the matching metrics was only 0.75. This indicates that matching metrics are not perfect predictors for IR performance. In fact, we discovered some costly \ufb02aws in the relative ranking produced by matching metrics. One such case was rank inversions between Li et al. [9] and PMI-Q. The relevant results are shown in Table 5, which demonstrate that while PMI-Q consistently performs better than Li et al. [9] under IR-based measures, the opposite inference would have been drawn if we had used any of the matching metrics. In Bergsma and Wang [3], human annotators were asked to segment queries such that segments matched exactly in the relevant documents. This essentially corresponds to determining the best quoted versions for the query. Thus, it would be interesting to study how traditional matching metrics would perform if the humans actually marked the best quoted versions. In order to evaluate this, we used the matching metrics to compare the segmentation outputs by the algorithms and human annotations against BQVBF . The corresponding results are quoted in Table 4. The results show that matching metrics are very poor indicators of IR performance with respect to the BQVBF . For example, for three out of the \ufb01ve matching metrics, the unsegmented query is ranked the best. This shows that even if human annotators managed to correctly guess the best quoted versions, the matching metrics would fail to estimate the correct relative rankings of the segmentation algorithms with respect to IR performance. 
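The five matching metrics can be computed per query from the output and reference segmentations. The sketch below does so for a single query, under our reading of the definitions: segments are compared as exact strings, and boundaries are the word gaps inside the query.

```python
def boundaries(segments):
    """Word-gap positions (after the i-th word) at which a segment ends,
    excluding the final position of the query."""
    cuts, pos = set(), 0
    for seg in segments[:-1]:
        pos += len(seg.split())
        cuts.add(pos)
    return cuts

def matching_metrics(output, reference):
    out_set, ref_set = set(output), set(reference)
    overlap = len(out_set & ref_set)
    qry_acc = 1.0 if output == reference else 0.0
    seg_prec = overlap / len(out_set)
    seg_rec = overlap / len(ref_set)
    seg_f = 2 * seg_prec * seg_rec / (seg_prec + seg_rec) if seg_prec + seg_rec else 0.0
    total_gaps = sum(len(s.split()) for s in reference) - 1
    out_b, ref_b = boundaries(output), boundaries(reference)
    agree = sum(1 for gap in range(1, total_gaps + 1) if (gap in out_b) == (gap in ref_b))
    seg_acc = agree / total_gaps if total_gaps else 1.0
    return qry_acc, seg_prec, seg_rec, seg_f, seg_acc

ref = ["the looney toons show", "cartoon network"]
out = ["the looney", "toons show", "cartoon", "network"]
print(matching_metrics(out, ref))  # (0.0, 0.0, 0.0, 0.0, 0.6) -- Seg-Acc of 3/5, as in the example discussed later
```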
This fact is also borne out in the Kendall-Tau rank correlation coe\ufb03cients reported in Table 6. Another interesting observation from these experiments is that Seg-Acc emerges as the best matching metric with respect to IR performance, although its correlation coe\ufb03cient is still much below one. 6This coe\ufb03cient is 1 when there is perfect concordance between the rankings, and \u22121 if the trends are reversed. Table 6: Kendall-Tau coe\ufb03cients between IR and matching metrics with BQVBF as reference for the latter. Metric Qry-Acc Seg-Prec Seg-Rec Seg-F Seg-Acc nDCG@10 0.432 -0.854 -0.886 -0.854 0.674 MAP@10 0.322 -0.887 -0.920 -0.887 0.750 MRR@10 0.395 -0.782 -0.814 -0.782 0.598 The highest value in a row is marked in boldface. 4.3 Inferences Segmentation is helpful for IR. By de\ufb01nition, \u2126\u00b7(\u00b7, \u00b7) (i.e., the oracle) values for every IR metric for any segmentation scheme are at least as large as the corresponding values for the unsegmented query. Nevertheless, for every IR metrics, we observe signi\ufb01cant performance bene\ufb01ts for all the human and algorithmic segmentations (except for PMI-W) over the unsegmented query. This indicates that segmentation is indeed helpful for boosting IR performance. Thus, our results validate the prevailing notion and some of the earlier observations [2, 9] that segmentation can help improve IR. Human segmentations are a good proxy, but not a true gold standard. Our results indicate that human segmentations perform reasonably well in IR metrics. The best of the human annotators beats all the segmentation algorithms, on almost all the metrics. Therefore, evaluation against human annotations can indeed be considered as the second best alternative to an IR-based evaluation (though see below for criticisms of current matching metrics). However, if the objective is to improve IR performance, then human annotations cannot be considered a true gold standard. There are at least three reasons for this: First, in terms of IR metrics, some of the state-of-the-art segmentation algorithms are performing as well as human segmentations (no statistically signi\ufb01cant di\ufb00erence). Thus, further optimization of the matching metrics against human annotations is not going to improve the IR performance of the segmentation algorithms. Thus, evaluation on human annotations might become a limiting factor for the current segmentation algorithms. Second, the IR performance of the best quoted version of the queries derived through our framework is signi\ufb01cantly better than that of human annotations (last column, Table 3). This means that humans fail to predict the correct boundaries in many instances. Thus, there is scope for improvement for human annotations. Third, IR performance of at least one of the three human annotators (HC) is worse than some of the algorithms studied. In other words, while some annotators (such as HA) are good at guessing the \u201ccorrect\u201d segment boundaries that will help IR, not all annotators can do it well. Therefore, unless \fFigure 1: Distribution of multiword segments in queries across segmentation strategies. the annotators are chosen and guided properly, one cannot guarantee the quality of annotated data for query segmentation. If the queries in the test set have multiple intents, this issue becomes an even bigger concern. Matching metrics are misleading. 
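The Kendall-Tau coefficient used here compares how two metrics rank the same set of strategies; it is 1 for identical orderings and -1 for reversed ones. A small sketch over paired scores follows; ties are not handled, which is a simplifying assumption of this illustration.

```python
from itertools import combinations

def kendall_tau(scores_a, scores_b):
    """Kendall rank correlation between two score lists over the same strategies;
    assumes no ties. Returns 1.0 for identical rankings, -1.0 for reversed ones."""
    pairs = list(combinations(range(len(scores_a)), 2))
    concordant = sum(
        1 for i, j in pairs
        if (scores_a[i] - scores_a[j]) * (scores_b[i] - scores_b[j]) > 0
    )
    discordant = len(pairs) - concordant
    return (concordant - discordant) / len(pairs)

# Toy example: four strategies ranked by an IR metric vs. a matching metric.
ir_metric = [0.75, 0.70, 0.68, 0.66]
match_metric = [0.20, 0.25, 0.18, 0.15]
print(kendall_tau(ir_metric, match_metric))  # 0.666..., one swapped pair out of six
```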
As discussed earlier and demonstrated by Tables 4 and 6, the matching metrics provide unreliable ranking of the segmentation algorithms even when applied against a true gold standard, BQVBF , that by de\ufb01nition maximizes IR performance. This counter-intuitive observation can be explained in two ways. Either the matching metrics or the IR metrics (or probably both) are misleading. Given that IR metrics are well-tested and generally assumed to be acceptable, we are forced to conclude that the matching metrics do not really re\ufb02ect the quality of a segmentation with respect to a gold standard. Indeed, this can be illustrated by a simple example. Example. Let us consider the query the looney toons show cartoon network, whose best quoted version turns out to be \"the looney toons show\" \"cartoon network\". The underlying segmentation that can give rise to this and therefore can be assumed to be the reference is: Ref: the looney toons show | cartoon network The segmentations (1) the looney | toons show | cartoon | network (2) the | looney | toons show cartoon | network are equally bad if one considers the matching metrics of QryAcc, Seg-Prec, Seg-Rec and Seg-F (all values being zero) with respect to the reference segmentation. Seg-Acc values for the two segmentations are 3/5 and 1/5 respectively. However, the BQV for (1) (\"the looney\" \"toons show\" cartoon network) fetches better pages than the BQV of (2) (the looney toons show cartoon network). So the segmentation (2) provides no IR bene\ufb01t over the unsegmented query and hence performs worse than (1) on IR metrics. However, the matching metrics, except for Seg-Acc to some extent, fail to capture this di\ufb00erence between the segmentations. Distribution of multiword segments across queries gives insights about e\ufb00ectiveness of strategy. The limitation of the matching metrics can also be understood from the following analysis of the multiword segments in the queries. Fig. 1 shows the distribution of queries having a speci\ufb01c number of multiword segments (for example, 1 in the legend indicates the proportion of queries having one multiword segment) when segmented according to the various strategies. We note that for Hagen et al. [7], HB, HA and \u201cMishra et al. [11] + Wiki\u201d, almost all of the queries have two multiword segments. For HC, Li et al. [9], PMI-Q and Mishra et al. [11], the proportion of queries that have only one multiword segment increases. Finally, PMI-W has almost negligible queries with a multiword segment. BQVBF is di\ufb00erent from all of them and has a majority of queries with one multiword segment. Now given that the \ufb01rst group generally does the best in IR, followed by the second, we can say that out of the two multiword segments marked by these strategies, only one needs to be quoted. PMI-W as well as unsegmented queries are bad because these schemes cannot detect the one crucial multiword segment quoting which improves the performance. Nevertheless, these schemes do well for matching metrics against BQVBF because both have a large number of single word segments. Clearly this is not helpful for IR. Finally, Mishra et al. [11] performs poorly despite being able to identify a multiword segment in most of the cases because it is not identifying the one that is important for IR. Hence, the matching metrics are misleading due to two reasons. 
First, they do not take into account that splitting a useful segment (i.e., a segment which should be quoted to improve IR performance) is less harmful than joining two unrelated segments. Second, matching metrics are, by de\ufb01nition, agnostic to which segments are useful for IR. Therefore, they might unnecessarily penalize a segmentation for not agreeing on the segments which should not be quoted, but are present in the reference human segmentation. While the latter is an inherent problem with any evaluation against manually segmented datasets, the former can be resolved by introducing a new matching metric that di\ufb00erentially penalizes splitting and joining of segments. This is an important and interesting research problem that we would like to address in the future. However, we would like to emphasize here that with the IR system expected to grow in complexity in the future (supporting more \ufb02exible matching criteria), the need for an IR-based evaluation like ours\u2019 becomes imperative. Based on our new evaluation framework and corresponding experiments, we observe that \u201cMishra et al. [11] + Wiki\u201d has the best performance. Nevertheless, the algorithms are trained and tested on di\ufb00erent datasets, and therefore, a comparison amongst the algorithms might not be entirely fair. This is not a drawback of the framework and can be circumvented by appropriately tuning all the algorithms on similar datasets. However, the objective of the current work is not to compare segmentation algorithms; rather, it is to introduce the evaluation framework, gain insights from the experiments and highlight the drawbacks of human segmentation-based evaluation. 5. RELATED ISSUES In this section, we will brie\ufb02y discuss a few related issues that are essential for understanding certain design choices and decisions made during the course of this research. 5.1 Motivation for a new dataset TREC data has been a popular choice for conducting IRbased experiments throughout the past decade. Since there is no track speci\ufb01cally geared towards query segmentation, the queries and qrels (query-relevance sets) from the ad hoc retrieval task for the Web Track would seem the most rele\fTable 7: IR-based evaluation using Bing API. Metric Unseg. All quoted for Oracle for query [11] + Wiki [11] + Wiki nDCG@10 0.882 0.823 0.989* MAP@10 0.366 0.352 0.410* MRR@10 0.541 0.515 0.572* The highest value in a row is marked bold. Statistically signi\ufb01cant (p < 0.05 for paired t-test) improvement over the unsegmented query is marked with *. vant to our work. However, 74% of the 50 queries in the 2010 Web track ad hoc task had less than three words. Also, when these 50 queries were segmented using the six algorithms, half of the queries did not have a multiword segment. As discussed earlier, query segmentation is useful but not necessarily for all types of queries. The bene\ufb01t of segmentation may be observed only when there are multiple multiword segments in the queries. The TREC Million Query Track, last held in 2009, has a much larger set of 40, 000 queries, with a better coverage of longer queries. But since the goal of the track is to test the hypothesis that a test collection built from several incompletely judged topics is a better tool than a collection built using traditional TREC pooling, there are only about 35, 000 query-document relevance judgments for the 40, 000 queries. 
Such a sparse qrels is not suitable here \u2013 incomplete assessments, especially for documents near the top ranks, could cause crucial errors in system comparisons. Yet another option could have been to use BWC07 as Qand create the corresponding Uand R. However, this query set is known to su\ufb00er from several drawbacks [7]. A new dataset for query segmentation7 containing manual segment markups collected through crowdsourcing has been recently made publicly available (after we had completed construction of our set) by Hagen et al. [7], but it lacks query-document relevance judgments. These factors motivated us to create a new dataset suitable for our framework, which has been made publicly available (see Sec. 3.4). 5.2 Retrieval using Bing Bing is a large-scale commercial Web search engine that provides an API service. Instead of Lucene, which is too simplistic, we could have used Bing as the IR engine in our framework. However, such a choice su\ufb00ers from two drawbacks. First, Bing might already be segmenting the query with its own algorithm as a preprocessing step. Second, there is a serious replicability issue. The document pool that Bing uses, i.e. the Web, changes dynamically with documents added and removed from the pool on a regular basis. This makes it di\ufb03cult to publish a static gold standard dataset with relevance judgments for all appropriate queryURL pairs that the Bing API may retrieve even for the same set of queries. In view of this, the main results were reported in this paper using the Lucene text retrieval system. However, since we used Bing API to construct Uand corresponding R, we have the evaluation statistics using the Bing API as well. For paucity of space, in Table 7 we only present the results for nDCG@10, MRR@10 and MAP@10 for \u201cMishra et al. [11] + Wiki\u201d. The table reports results for three quoted version-selection strategies: (i) Unsegmented query only (equivalent to each word being within quotes) (ii) 7http://bit.ly/xIhSur Table 8: Inter-annotator agreement on features as observed from our experiments. Feature Pair 1 Pair 2 Pair 3 Mean Qry-Acc 0.728 0.644 0.534 0.635 Seg-Prec 0.750 0.732 0.632 0.705 Seg-Rec 0.756 0.775 0.671 0.734 Seg-F 0.753 0.753 0.651 0.719 Seg-Acc 0.911 0.914 0.872 0.899 Rel. judg. 0.962 0.959 0.969 0.963 For relevance judgments, only pairs of (0, 2) and (2, 0) were considered disagreements. All segments quoted and (iii) QVRS (oracle for \u201cMishra et al. [11] + Wiki\u201d). For all the three metrics, QVRS is statistically signi\ufb01cantly higher than results for the unsegmented query. Thus, segmentation can play an important role towards improving IR performance of the search engine. We note that the strategy of quoting all the segments is, in fact, detrimental to IR performance. This emphasizes the point that how the segments should be matched in the documents is a very important research challenge. Instead of quoting all the segments, our proposal here is to assume an oracle that will suggest which segments to quote and which are to be left unquoted for the best IR performance. Philosophically, this is a major departure from the previous ideas of using quoted segments, because re-issuing a query by quoting all the segments implies segmentation as a way to generate a fully quoted version of the query (all segments in double quotes). This de\ufb01nition severely limits the scope of segmentation, which ideally should be thought of as a step forward better query understanding. 
5.3 Inter-annotator agreement Inter-annotator agreement (IAA) is an important indicator for reliability of manually created data. Table 8 reports the pairwise IAA statistics for HA, HB and HC. Since there are no universally accepted metrics for IAA, we report the values of the \ufb01ve matching metrics when one of the annotations (say HA) is assumed to be the reference and the remaining pair (HB and HC) is evaluated against it (average reported). As is evident from the table, the values of all the metrics, except for Seg-Acc, is less than 0.78 (similar values reported in [13]), which indicates a rather low IAA. The value for Seg-Acc is close to 0.9, which to the contrary, indicates reasonably high IAA (as in [13]). The last row of Table 8 reports the IAA for the three sets of relevance judgments (therefore, the actual pairs for this column are di\ufb00erent from that of the other rows). The agreement in this case is quite high. There might be several reasons for low IAA for segmentation, such as lack of proper guidelines and/or an inherent inability of human annotators to mark the correct segments of a query. Low IAA raises serious doubts about the reliability of human annotations for query segmentation. On the other hand, high IAA for relevance judgments naturally makes these annotations much more reliable for any evaluation, and strengthens the case for our IR-based evaluation framework which only relies on relevance judgments. We note that ideally, relevance judgments should be obtained from the user who has issued the query. This has been re\fferred to as gold annotations, as opposed to silver or bronze annotations which are obtained from expert and non-expert annotators respectively who have not issued the query [1]. Gold annotations are preferable over silver or bronze ones due to relatively higher IAA. Our annotations are silver standard, though very high IAA essentially indicates that they might be as reliable as gold standard. The high IAA might be due to the unambiguous nature of the queries. 6. RELATED WORK Since its inception in 2003 [12], many algorithms have been proposed for automatic segmentation of Web queries. The approaches vary from purely supervised [3] to fully unsupervised [7, 11] machine learning techniques. They differ widely in terms of resources usage (Table 2) and the underlying algorithmic techniques (e.g., expectation maximization [13] and eigenspace similarity [15]). 6.1 Evaluation on manual annotations Despite the diversity in approaches to the task, till date there has been only one standard approach for evaluation of query segmentation algorithms, which is to compare the machine output against a set of queries segmented by humans [3, 4, 7, 9, 11, 13, 15]. The basic assumption underlying this evaluation scheme is that humans are capable of segmenting a query in a \u201ccorrect\u201d or \u201cthe best possible\u201d way, which, if exploited appropriately, will result in maximum bene\ufb01ts in IR performance. This is probably motivated by the extensive use of human judgments and annotations as the gold standard in the \ufb01eld of NLP (e.g., parts-ofspeech labeling, phrase boundary identi\ufb01cation, etc.). However, this idea has several shortcomings, as pointed out in Sec. 4.3. Among those who validate query segmentation against human-labeled data, most [3, 4, 6, 7, 9, 13, 15] report accuracies on BWC07 [3]. 
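For the relevance judgments, pairwise agreement counts only the extreme disagreements (one annotator rating 0 and the other 2) as mismatches. A small sketch of that computation is shown below; the data layout is ours.

```python
def relevance_agreement(ratings_a, ratings_b):
    """Fraction of query-URL pairs on which two annotators agree, where only the
    rating pairs (0, 2) and (2, 0) are counted as disagreements."""
    common = set(ratings_a) & set(ratings_b)
    disagreements = sum(1 for pair in common if {ratings_a[pair], ratings_b[pair]} == {0, 2})
    return 1 - disagreements / len(common)

a = {("q1", "u1"): 2, ("q1", "u2"): 0, ("q2", "u3"): 1}
b = {("q1", "u1"): 2, ("q1", "u2"): 2, ("q2", "u3"): 0}
print(round(relevance_agreement(a, b), 3))  # 0.667
```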
The popularity of the BWC07 dataset is partly because it was one of the \ufb01rst human annotated datasets created for query segmentation, and partly because it is the only publicly available dataset of its kind. While BWC07 has provided a common benchmark for comparing various query segmentation algorithms, there are several limitations of this speci\ufb01c dataset. BWC07 only contains noun phrase queries and there is a non-trivial amount of noise in the annotations. See [7] for a detailed criticism of this dataset. 6.2 IR-based evaluation There has been only a handful of studies that explore some initial ideas about IR-based evaluation [2, 7, 9] for query segmentation. Bendersky et al. [2] were the \ufb01rst to study the e\ufb00ects of segmentation from an IR perspective. They wanted to see if retrieval quality could be improved by incorporating knowledge of query chunks into an MRF-based retrieval system [10]. Their experiments on di\ufb00erent TREC collections using popular IR metrics like MAP indicate that query segmentation can indeed boost IR performance. Li et al. [9] examined the usefulness of query segmentation when built into language models for retrieval, in a Web search setting. However, none of these studies propose an objective IR-based evaluation framework for query segmentation. Their scope is limited to the demonstration of one particular strategy for exploiting segmentations for improving IR, instead of evaluating and comparing a set of algorithms. As an excursus to their main work, Hagen et al. [7] examined if submitting fully quoted queries (generated from algorithm outputs) results in fetching better pages by the search engines. They study the top \ufb01fty retrieved documents when the following versions of the queries \u2013 unsegmented, manually quoted, quoted by the technique in Bergsma and Wang [3], and by their own method \u2013 are submitted to Bing. Assuming the pages retrieved by manual quotation as relevant, it was observed that the technique in Bergsma and Wang [3] achieves the highest average recall. However, the authors also state that such an assumption need not hold good in reality and emphasized the need for an in-depth retrieval-based evaluation. We would like to emphasize here that the aim of a segmentation technique is not to come up with the best quoted version of a query. While some past works have explicitly or implicitly assumed this de\ufb01nition, there are also other works that view segmentation as a purely structural analysis of a query that identi\ufb01es chunks or sequences of words that are semantically connected as a unit [9, 11]. By quoting all the segments we would be penalizing the latter philosophy of segmentation, which is a more productive and practically useful view. There have been a few studies on detection of noun phrases from queries [5, 16]. This task is similar to query segmentation in the sense that the phrase can be considered as a single unit in the query. Zhang et al. [16] has shown that such phrase detection schemes can actually help in retrieval, and therefore, is along the lines of the philosophy of the present evaluation framework. Nevertheless, as far as we know, this is the \ufb01rst time that a formal conceptual framework for an IR-based evaluation of query segmentation has been proposed. Our study, also for the \ufb01rst time, compares the e\ufb00ectiveness of human segmentation and related matching metrics to an IR-based evaluation. 7." 
+ } + ], + "Philipp Christmann": [ + { + "url": "http://arxiv.org/abs/2306.12235v3", + "title": "CompMix: A Benchmark for Heterogeneous Question Answering", + "abstract": "Fact-centric question answering (QA) often requires access to multiple,\nheterogeneous, information sources. By jointly considering several sources like\na knowledge base (KB), a text collection, and tables from the web, QA systems\ncan enhance their answer coverage and confidence. However, existing QA\nbenchmarks are mostly constructed with a single source of knowledge in mind.\nThis limits capabilities of these benchmarks to fairly evaluate QA systems that\ncan tap into more than one information repository. To bridge this gap, we\nrelease CompMix, a crowdsourced QA benchmark which naturally demands the\nintegration of a mixture of input sources. CompMix has a total of 9,410\nquestions, and features several complex intents like joins and temporal\nconditions. Evaluation of a range of QA systems on CompMix highlights the need\nfor further research on leveraging information from heterogeneous sources.", + "authors": "Philipp Christmann, Rishiraj Saha Roy, Gerhard Weikum", + "published": "2023-06-21", + "updated": "2023-08-19", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "main_content": "Introduction Motivation. The goal in factual question answering (QA) is to derive crisp answers to information needs issued by end users Roy and Anand (2022). There has been a long line of research on fact-based QA, that can largely be divided into three main directions: (i) methods that use a large curated knowledge base (KB) like Wikidata Vrande\u02c7 ci\u00b4 c and Kr\u00f6tzsch (2014), YAGO Suchanek et al. (2007) or DBpedia Auer et al. (2007) as information source (KB-QA) Abujabal et al. (2018); Bast and Haussmann (2015); Bhutani et al. (2019); Vakulenko et al. (2019), (ii) systems that retrieve information from a text corpus (text-QA) Izacard and Grave (2021); Chen et al. (2017); Zhang et al. (2021), and (iii) works that answer questions based on a set of web tables (table-QA) Jauhar et al. (2016); Chakrabarti et al. (2020); Herzig et al. (2021). Each of these directions has its own benchmarks that are frequently used for developing, testing and comparing QA systems Dubey et al. (2019); Talmor and Berant (2018); Berant et al. (2013); Bordes et al. (2015); Yang et al. (2018); Herzig et al. (2021); Kwiatkowski et al. (2019); Yih et al. (2016). However, using only a single information source limits the answer coverage of QA systems: the individual sources are not complete, and may fail to cover the knowledge required for answering a user question. Consider, as an example, the question below: Who was fouled before the first penalty in the 2022 FIFA final? This kind of detailed information on a sports event is rarely covered in a structured information source like a KB or table, but can be found in text discussing the content of the match. On the other hand, structured sources often include information that is not present in text. Tables often store match-specific details, and would contain, for instance, the answer to the following question: arXiv:2306.12235v3 [cs.IR] 19 Aug 2023 \fArgentina\u2019s ball possession in the 2022 WC final? For some questions, answers appear in multiple sources. Such answer redundancy can also be helpful for QA systems, and boost their confidence in predicted answers. For instance, consider: In which stadium was the 2022 soccer world cup final played? 
The answer to this question occurs in a Wikipedia infobox, text content, and Wikidata. It may even be necessary to join evidence from multiple sources for answering a more complex question: Which team was behind by two goals but still won a FIFA final? The list of FIFA World Cup finals and their winners could be looked up in a KB, but the goal deficit information associated with the match timeline would either be discussed in text, or could be reasoned over statistics in tables. These observations have triggered work on heterogeneous QA Sun et al. (2018, 2019); O\u02d8 guz et al. (2022); Savenkov and Agichtein (2016); Xu et al. (2016b,a); Xiong et al. (2019): jointly harnessing multiple sources for answering factual questions Roy and Anand (2022). Limitations of state-of-the-art. There are currently three strategies of evaluating heterogeneous QA: (i) using benchmarks for single-source QA but showing that using more sources improves performance Xu et al. (2016a,b); Savenkov and Agichtein (2016); O\u02d8 guz et al. (2022); (ii) using benchmarks for single-source QA, but artificially removing parts of the \u201cmain\u201d source before augmenting the benchmark with new sources Sun et al. (2018, 2019); and (iii) using dedicated benchmarks for heterogeneous QA Talmor and Berant (2018); Chen et al. (2020). The first approach usually leads to quick saturation on benchmarks: all answers are still available only in the primary source, which is what the methods primarily target, and auxiliary sources bring in incremental gains. The second approach is inherently flawed because considering heterogeneous sources obviously improves performance, as the main source is intentionally weakened. This creates an artificial situation and does not expose the true strengths and weaknesses of methods built for heterogeneous QA. Our contribution belongs to the third approach. There are a few existing benchmarks for multi-source QA Talmor and Berant (2018); Miller et al. (2016); Zhang et al. (2018), but these either contain synthetic questions and do not reflect idiosyncrasies in formulation and intent concerning real users, or cover only a narrow spectrum of sources and domains Chen et al. (2021b, 2020); Zhu et al. (2021); Li et al. (2021); Chen et al. (2021a). A new benchmark. We make the case for a benchmark that inherently requires the usage of a mixture of information sources, as a more natural testbed for evaluating heterogeneous QA systems. To this end, we release COMPMIX (Complete questions over a Mixture of sources), a crowdsourced QA benchmark with questions that require heterogeneous sources for answering (Wikidata KB, and Wikipedia text, tables and infoboxes). The dataset has 9,410 questions created by humans from five different domains: books, movies, music, TV series and soccer. The answers are grounded to the Wikidata KB, which allows use of consistent evaluation metrics for QA systems returning either entity IDs or simple strings. Contributions. This paper presents our benchmark COMPMIX, accompanied by an in-depth analysis. We identify complex phenomena in the questions, like temporal conditions, multiple entities and relations, aggregations and comparisons. We investigate the effect of combining multiple sources on answer coverage and redundancy, and show that heterogeneous sources are truly required. Finally, we evaluate multiple recent heterogeneous QA methods on COMPMIX, and identify questions for which none of these systems gives correct answers. 
Interestingly, the results for a recent GPT model show that even a large language model (LLM) can answer only half of the questions for this realistic and challenging benchmark. The COMPMIX benchmark is publicly available at https: //qa.mpi-inf.mpg.de/compmix. 2 \fTable 1: Comparing benchmarks for heterogeneous QA. Dataset KB Text Table Info OR HQ OD HYBRIDQA Chen et al. (2020) \u2717 \u2713 \u2713 \u2717 \u2717 \u2713 \u2713 MULTIMODALQA Talmor et al. (2021) \u2717 \u2713 \u2713 \u2717 \u2713 \u2717 \u2713 OTT-QA Chen et al. (2021a) \u2717 \u2713 \u2713 \u2717 \u2713 \u2713 \u2713 MANYMODALQA Hannan et al. (2020) \u2717 \u2713 \u2713 \u2717 \u2717 \u2713 \u2713 WIKIMOVIES Miller et al. (2016) \u2713 \u2713 \u2717 \u2717 \u2713 \u2717 \u2717 TAT-QA Zhu et al. (2021) \u2717 \u2713 \u2713 \u2717 \u2717 \u2713 \u2717 FINQA Chen et al. (2021b) \u2717 \u2713 \u2713 \u2717 \u2717 \u2713 \u2717 HETPQA Shen et al. (2022) \u2717 \u2713 \u2713 \u2713 \u2717 \u2713 \u2717 COMPMIX (ours) \u2713 \u2713 \u2713 \u2713 \u2713 \u2713 \u2713 OR: Open Retrieval; HQ: Human Questions; OD: Open Domain. Table 2: Basic statistics for the COMPMIX benchmark. Domains Books, Movies, Music, TV series, Soccer Questions 9,410 (train: 4,966, dev: 1,680, test: 2,764) Avg. question length 9.19 words (min=2, median=9, max=28) Avg. no. of question entities 1.11 (min=1, median=1, max=4) Avg. answer length (text) 2.17 words (min=1, median=2, max=21) Avg. no. of answers 1.02 (min=1, median=1, max=6) Entities covered 5,413 (long-tail: 2,511, with <50 KB-facts) 2 Benchmark description 2.1 Prior benchmarks and COMPMIX rationale There are many datasets for KB-QA (like WebQuestions Berant et al. (2013), SimpleQuestions Bordes et al. (2015), and CSQA Saha et al. (2018)), text-QA (like SQuAD Rajpurkar et al. (2016), HotpotQA Yang et al. (2018), and NaturalQuestions Kwiatkowski et al. (2019)), and table-QA (like WikiTableQuestions Pasupat and Liang (2015), NQ-Tables Herzig et al. (2021), and WikiSQL Zhong et al. (2017)). However, these benchmarks were created with the intention of having a specific underlying source for answering, which already contains almost all answers to the questions. This restricts their utility as a testbed for heterogeneous QA. Thus, existing work on heterogeneous QA, being forced to rely on these benchmarks, would often remove significant chunks of information from this \u201cmain\u201d information source (\u224350% of Freebase removed for evaluating on WebQuestions in Sun et al. (2019)), and add parts of other sources to simulate a setting with heterogeneous sources. All existing benchmarks for heterogeneous QA suffer from one or more of the following issues: (i) their questions are not fully human-generated, and hence lack the diverse formulations of real users Talmor and Berant (2018); Zhang et al. (2018); Miller et al. (2016); (ii) they are restricted to small or artificial KBs, orders of magnitude smaller than large curated knowledge bases like Wikidata or DBpedia Miller et al. (2016); Zhang et al. (2018); (iii) they span only two sources, like tables and text Chen et al. (2021b); Zhu et al. (2021); Chen et al. (2021a), or text and knowledge bases Talmor and Berant (2018); Miller et al. (2016); Zhang et al. (2018); (iv) they explore only one domain like finance Chen et al. (2021b); Zhu et al. (2021), geography Li et al. (2021), or e-commerce Shen et al. (2022); and (v) their questions are only in conversational form with implicit intent, unsuitable for evaluating stand-alone QA methods Christmann et al. 
(2022b); Deng et al. (2022); Nakamura et al. (2022). COMPMIX removes these shortcomings: (i) it is crowdsourced; (ii) it includes the full KB as one of the knowledge sources; (iii) it spans four sources; (iv) it covers five domains; and (v) it contains self-contained complete questions. A succinct comparison of salient properties across benchmarks is in Table 1. 3 \fFigure 1: Answer-type frequencies per domain in COMPMIX. 2.2 COMPMIX We create COMPMIX by collating the completed (intent-explicit) versions of the potentially incomplete (intent-implicit) questions in the CONVMIX Christmann et al. (2022b) benchmark, which is a dataset for conversational QA over heterogeneous sources. These completed questions are provided directly by crowdworkers on Amazon Mechanical Turk (AMT), i.e. are created by humans. The answers to the questions were derived from four sources: either the full Wikidata KB, or the text, tables or infoboxes from all of Wikipedia. The questions span five domains: movies, tv series, music, books, and soccer (a distribution of expected answer types for each domain is in Fig. 1). Overall, the benchmark comprises 9,410 questions, split into train set (4,966), development set (1,680), and test set (2,764). Basic statistics for COMPMIX can be found in Table 2. A notable property of our dataset is the presence of a significant fraction of questions with long-tail entities (last row), a major vulnerability of LLM methods. COMPMIX includes questions, their domains, and their corresponding answers. Answers are Wikidata entity identifiers (text labels are also provided), plaintext strings, or normalized dates. This enables consistent evaluation across extractive and generative answering models. In addition, entity markup in question formulations are provided by crowdworkers. Answer sources are given, too: \u201cKB\u201d, \u201ctext\u201d, \u201ctable\u201d, or \u201cinfobox\u201d. 3 Benchmark analysis 3.1 Answer coverage One key desideratum of the benchmark is that heterogeneous sources are actually required for answering the questions inside. To verify that this is the case, we analyzed the answer coverage of each information source, which is the number of questions that a source contains the answer for. In a good benchmark for heterogeneous QA, each source should have an answer coverage far less than 100%. 4 \fAt the time of benchmark creation, Turkers were given a domain, and they picked up an entity of choice from the domain, followed by asking a natural question using this entity, and then provided an answer to the question. They also provided the source they consulted for locating their answer. For computing coverage, we first consider these source annotations by the crowdworkers. However, this measurement only captures whether a specific information source has the desired information, without any implications concerning the other sources. Therefore, we also conducted an automatic analysis of the answer coverage using a recall-oriented retriever that, given a question, tries to obtain as many relevant pieces of evidence as possible from all our sources. This retriever is implemented as in Christmann et al. (2022b, 2023), and would first disambiguate KB-entities from the question (using CLOCQ Christmann et al. (2022a), a recent system), and then retrieve KB-facts, text-sentences, table-records and infobox-entries with these disambiguated KB-entities. For each evidence, mentions of entities are linked to the KB. 
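To make this recall-oriented retrieval pass concrete, the following is a minimal sketch of the overall flow, including the coverage check that is formalized in the next sentence. The entity linker (CLOCQ) and the per-entity extractors are abstracted as injected callables; all names and data layouts are placeholders, not the actual CompMix tooling.

```python
from typing import Callable, Iterable

# Hypothetical sketch of a recall-oriented retrieval pass over heterogeneous
# sources. The entity linker and evidence extractors are injected callables,
# since only the overall flow is illustrated here.

def retrieve_evidences(
    question: str,
    link_entities: Callable[[str], list[str]],         # CLOCQ-style disambiguation (placeholder)
    evidences_for_entity: Callable[[str], list[dict]],  # KB facts + Wikipedia text/table/infobox snippets
) -> list[dict]:
    """Collect verbalized evidences from all sources for the question entities."""
    evidences: list[dict] = []
    for entity in link_entities(question):
        evidences.extend(evidences_for_entity(entity))
    # Each evidence dict is assumed to carry the KB entities mentioned in it
    # under the key "entities".
    return evidences

def answer_covered(evidences: Iterable[dict], gold_answers: set[str]) -> bool:
    """Noisy coverage proxy: is some gold answer among the linked entity mentions?"""
    mentioned = {ent for ev in evidences for ent in ev.get("entities", [])}
    return bool(mentioned & gold_answers)
```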
We measure this automated answer coverage as the number of questions for which the gold answer is among this set of mentioned entities in the pool of retrieved evidence. As with any large-scale automated analysis, this statistic is a noisy proxy, because the mere presence of an answer does not necessarily mean that the surrounding evidence is question-relevant. The results of both analyses are in Table 3. First, we see that the AMT annotators used the KB, text and infoboxes almost equally often to answer their questions (tables also consulted \u226510% of times). This proves that COMPMIX is not biased towards any specific underlying source. Second, from the automated measurement, we learn that adding an information source always improves the answer coverage. Note that this is a natural expansion, as opposed to augmentation after artificial suppression of large parts of specific sources. By including all sources, the answer coverage goes up to about 87%. Note that our recall-oriented retriever only provides a loose upper bound: the performance of an actual retriever that balances recall and precision would currently reach a lower number (cf. Sec. 4). Thus, our benchmark leaves substantial room for the development of smart heterogeneous retrievers. Overall, these measurements suggest that all four sources are naturally required for answering the questions in COMPMIX, and different sources complement each other nicely. 3.2 Answer redundancy Answer redundancy creates scope to test a heterogeneous system\u2019s ability to boost confidence in its prediction when multiple matches happen across sources. For each question, we thus measured the number of sources touched by the retrieved pieces of evidence that actually contain the gold answer. Results are in Table 4. What we can see from here is that for a substantial proportion of questions, the answer is located in two (\u224317%) or three (\u224334%), out of four, sources. A reasonable chunk even has redundancy across all sources (\u224320%). This shows that COMPMIX has ample answer redundancy to be exploited by some appropriate heterogeneous QA model. 3.3 Anecdotal examples For each of our five domains, Table 5 shows representative examples from the COMPMIX benchmark. The examples illustrate that our dataset has a wide range of questions in terms of both syntactic structure \u2013 from well-formulated fluent questions (1, 4, 5, 9) to ad hoc telegraphic queries (6, 7), as well as semantic complexity \u2013 from simple intents (6, 8) to more complex ones requiring conjunction (2), temporal understanding (3, 5), or aggregations (9). 4 Evaluation with COMPMIX Metrics. We use standard QA metrics for evaluating models on COMPMIX: (i) Precision at 1 (P@1), which is either 1 or 0 according as the top-ranked system answer is correct or not; (ii) Mean reciprocal rank (MRR), which is the reciprocal of the first rank at which a correct answer is located; and, (iii) Hit at 5 (Hit@5), which is either 1 or 0 according as the first five system responses contains a gold answer or not. A system answer is considered correct if it exactly (case-insensitive) matches a Wikidata ID (if QA system returns IDs) or the accompanying plaintext string/entity label (if QA system returns simple text). Metrics are averaged over all questions. Models. To better understand the state-of-the-art in heterogeneous QA, we evaluate several recent QA models that incorporate heterogeneous sources on COMPMIX. 
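As a concrete reference for the metrics defined above, here is a minimal sketch of how P@1, MRR and Hit@5 can be computed for a ranked answer list. It assumes the gold answers are given as a lowercased set of Wikidata IDs, labels and aliases, matching the case-insensitive exact-match protocol described above; per-question scores are then averaged over the benchmark.

```python
def precision_at_1(ranked_answers: list[str], gold: set[str]) -> float:
    """1.0 if the top-ranked answer exactly matches a gold answer (case-insensitive)."""
    return 1.0 if ranked_answers and ranked_answers[0].lower() in gold else 0.0

def mrr(ranked_answers: list[str], gold: set[str]) -> float:
    """Reciprocal of the first rank at which a gold answer appears (0.0 if absent)."""
    for rank, answer in enumerate(ranked_answers, start=1):
        if answer.lower() in gold:
            return 1.0 / rank
    return 0.0

def hit_at_k(ranked_answers: list[str], gold: set[str], k: int = 5) -> float:
    """1.0 if a gold answer occurs among the top-k answers, else 0.0."""
    return 1.0 if any(a.lower() in gold for a in ranked_answers[:k]) else 0.0

# `gold` is assumed to contain the lowercased gold entity ID, its label, and
# its KB-aliases, so that either ID-returning or string-returning systems can
# be scored consistently.
```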
We also include GPT in our 5 \fTable 3: Answer coverage across information sources. Source(s) Annotated Automated KB 0.308 0.807 Text 0.280 0.690 Tables 0.112 0.272 Infoboxes 0.299 0.545 KB+Text 0.588 0.853 KB+Tables 0.420 0.821 KB+Infoboxes 0.607 0.831 Text+Tables 0.393 0.702 Text+Infoboxes 0.580 0.734 Tables+Infoboxes 0.412 0.610 KB+Text+Tables 0.701 0.857 KB+Text+Infoboxes 0.888 0.861 KB+Tables+Infoboxes 0.720 0.841 Text+Tables+Infoboxes 0.692 0.743 All sources 1.000 0.865 Table 4: Answer redundancy across information sources. Answer found in 1 source 0.157 Answer found in 2 sources 0.168 Answer found in 3 sources 0.341 Answer found in all sources 0.199 model suite, to verify if LLMs trained on colossal web corpora are already sufficient for this task. We compare the following models: \u2022 UNIK-QA O\u02d8 guz et al. (2022) follows a retriever-reader pipeline, and verbalizes evidence from each source into text. DPR Karpukhin et al. (2020) retrieves relevant evidences from the verbalized text, and a Fusion-in-decoder (FiD) model Izacard and Grave (2021) generates the answer. Due to unavailability of end-to-end source code, we approximate UNIK-QA by replacing DPR with BM25 Robertson and Zaragoza (2009). FiD generates strings, that are mapped to a ranked list of KB items, by following Christmann et al. (2023). \u2022 CONVINSE Christmann et al. (2022b) is method for conversational QA over heterogeneous sources, but can also be applied to complete questions. It derives an intent-explicit structured representation for a question, and feeds this into a retriever-reader pipeline. \u2022 EXPLAIGNN Christmann et al. (2023) is another method for heterogeneous QA that makes use of iterative graph neural networks for deriving the answer instead of a generative reader model like FiD. \u2022 GPT-3. For evaluating GPT-3 Brown et al. (2020) (model: text-davinci-003), we use the following prompt, which performed the best among different alternatives: \u201cPlease answer the following question by providing the crisp answer entity, date, year, or numeric number. Q: \u201d. The generated answer string is then compared with the label and KB-aliases of the gold answer(s), to allow for potential synonymy (all strings lowercased). P@1 = 1 for exact matches, and zero otherwise. GPT-3 generates only a single answer, and thus metrics for ranked lists are inapplicable. Results. Findings in Table 6 reveal two key takeaways: (i) systems from the literature only reach about 45% P@1 on COMPMIX, showing substantial room for model improvement. Much higher numbers have been reported for the compared models in previous sub-optimal evaluation settings (UNIK-QA reaches 80% accuracy on WebQuestionsSP): this highlights challenges in COMPMIX; (ii) The task is far from solved for LLMs, with the P@1 reached by GPT-3 being merely 50%. We attribute this to a large number of rare and emerging entities in our benchmark (see Table 2). To put 6 \fTable 5: Representative questions from COMPMIX. Sources that can be used for answering these questions are in brackets. Books Movies Music TV series Soccer 1. What did Rayford Steele from Left Behind do as a job? 2. Which lead actress appeared in both Terms of Endearment and The Evening Star? 3. Who replaced Ozzy Osbourne in Black Sabbath the first time? 4. What TV show featured the character called Carrie Mathison? 5. Where did the Uruguay national football team play their first recorded match? 
Pilot Shirley MacLaine Ronnie James Dio Homelande Paso del Molino [KB, Text] [KB] [Text, Info] [KB, Text, Info] [Text] 6. Author of the book To Kill a Mockingbird? 7. Film in which Wallace Reid played the role of Walter Jarvis? 8. What is the singer Lemmy\u2019s birth name? 9. How many episodes of The 100 did Jason Rothenberg write? 10. Who was runner up in the 1998 World Cup? Harper Lee The Ghost Breaker Ian Fraser Kilmister 16 Brazil football team [KB, Text, Table, Info] [KB, Text] [KB, Text, Info] [KB, Text, Table] [KB, Text, Info] 11. Name the fifth book in Malory Towers series. 12. Which movie is longer, Hamlet or Gone with the Wind? 13. What year was Inna\u2019s Hot album released in the US? 14. Which season of Teen Wolf did Tyler Posey become a coproducer? 15. Which soccer player scored the most number of goals in the UEFA Euro 2004 tournament? In the Fifth at Malory Towers Hamlet 2009 5 Milan Baro\u0161 [KB, Table] [KB, Info] [Text] [Text, Info] [KB, Text, Info, Table] 16. What years were the two volumes of Little Women published? 17. What is the run time of Titanic? 18. What is the name of the second single in the album Arise? 19. What year was Matt Groening born? 20. Who was the kit manufacturer of Chelsea Football Club from 1981 to 1983? 1868, 1869 195 minutes Dead Embryonic Cells 1954 Le Coq sportif [KB, Info] [KB, Infobox] [Text, Table] [Text] [Text, Table] Table 6: Heterogeneous QA models on COMPMIX (test set). Method \u2193/ Metric \u2192 P@1 MRR Hit@5 UNIK-QA O\u02d8 guz et al. (2022) 0.440 0.467 0.494 CONVINSE Christmann et al. (2022b) 0.407 0.437 0.483 EXPLAIGNN Christmann et al. (2023) 0.442 0.518 0.617 GPT-3 Brown et al. (2020) (text-davinci-003) 0.502 \u2212 \u2212 aggregate performance in perspective, we found that for 2,764 questions (81.9%), at least one of the methods failed to produce a correct answer. On the other hand, for 759 (27.5%) none of the methods (including GPT-3) could find the correct answer. Table 7 shows one such unanswered question per domain. The second and fifth question make a perfect case for merging multiple sources, as subtle cues like \u201cadult Pi Patel\u201d or \u201ctwin brothers\u201d are likely to be mentioned in textual sources, while movie cast or club membership is more easily looked up via structured repositories. Table 7: Anecdotal questions for which none of the tested methods could derive the correct answer. What was the original title of the book Twilight? Who played as adult Pi Patel in Life of Pi movie? What album is the song Closing Time on? Who composed the theme music for the TV series Fury? Who were the twin brothers who played soccer for Manchester United? 7 \f5 Data Sharing and Ethics Licensing. The COMPMIX benchmark is licensed under a Creative Commons Attribution 4.0 International License1. Availability. The benchmark is released on our project website2, with inclusion of a leaderboard to keep track of the state-of-the-art. COMPMIX is also offered at Hugging Face for a broader audience3. The DOI of COMPMIX is https://doi.org/10.57967/hf/0707. Ethical considerations. COMPMIX collates completed questions from the CONVMIX benchmark. For collecting CONVMIX, human annotators from AMT asked factoid questions in a conversational setting. No personal or other critical data was collected or published. The COMPMIX benchmark does not contain any personal or other critical data. All questions are provided anonymously. 
The annotators for collecting the CONVMIX dataset were paid a fair compensation for their work, consistent with the German minimum wage (irrespective of their residential country). 6" + }, + { + "url": "http://arxiv.org/abs/2305.01548v2", + "title": "Explainable Conversational Question Answering over Heterogeneous Sources via Iterative Graph Neural Networks", + "abstract": "In conversational question answering, users express their information needs\nthrough a series of utterances with incomplete context. Typical ConvQA methods\nrely on a single source (a knowledge base (KB), or a text corpus, or a set of\ntables), thus being unable to benefit from increased answer coverage and\nredundancy of multiple sources. Our method EXPLAIGNN overcomes these\nlimitations by integrating information from a mixture of sources with\nuser-comprehensible explanations for answers. It constructs a heterogeneous\ngraph from entities and evidence snippets retrieved from a KB, a text corpus,\nweb tables, and infoboxes. This large graph is then iteratively reduced via\ngraph neural networks that incorporate question-level attention, until the best\nanswers and their explanations are distilled. Experiments show that EXPLAIGNN\nimproves performance over state-of-the-art baselines. A user study demonstrates\nthat derived answers are understandable by end users.", + "authors": "Philipp Christmann, Rishiraj Saha Roy, Gerhard Weikum", + "published": "2023-05-02", + "updated": "2023-07-18", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "main_content": "INTRODUCTION Motivation. In conversational question answering (ConvQA), users issue a sequence of questions, and the ConvQA system computes crisp answers [39, 45, 47]. The main challenge in ConvQA systems is that inferring answers requires understanding the current context, since incomplete, ungrammatical and informal follow-up questions make sense only when considering the conversation history so far. Existing ConvQA models mostly focused on using either (i) a curated knowledge base (KB) [22\u201324, 26, 31, 51], or (ii) a text corpus [5, 17, 38, 39, 41], or (iii) a set of web tables [18, 33] as source to compute answers. These methods are not geared for tapping into multiple sources jointly, which is often crucial as one source could compensate for gaps in others. Consider the conversation: \ud835\udc5e1: Who wrote the book Angels and Demons? \ud835\udc4e1: Dan Brown \ud835\udc5e2: the main character in his books? \ud835\udc4e2: Robert Langdon \ud835\udc5e3: who played him in the films? \ud835\udc4e3: Tom Hanks Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s). SIGIR \u201923, July 23\u201327, 2023, Taipei, Taiwan \u00a9 2023 Copyright held by the owner/author(s). ACM ISBN 978-x-xxxx-xxxx-x/YY/MM. https://doi.org/10.1145/nnnnnnn.nnnnnnn \ud835\udc5e4: to which headquarters was robert flown in the book? \ud835\udc4e4: CERN \ud835\udc5e5: how long is the novel? \ud835\udc4e5: 768 pages \ud835\udc5e6: what about the movie? 
\ud835\udc4e6: 2 h 18 min Some of these questions can be conveniently answered using a KB (\ud835\udc5e1, \ud835\udc5e3), tables (\ud835\udc5e5, \ud835\udc5e6), or infoboxes (\ud835\udc5e1, \ud835\udc5e3) as they ask about salient attributes of entities, and some via text sources (\ud835\udc5e2, \ud835\udc5e3, \ud835\udc5e4) as they are more likely to be contained in book contents and discussion. However, none of these individual sources represents the whole information required to answer all questions of this conversation. Recently, there has been preliminary work on ConvQA over a mixture of input sources [9, 10]. This improves the recall for the QA system with higher answer coverage, and the partial answer redundancy across sources helps improve precision. Limitations of state-of-the-art methods. Existing methods for ConvQA over heterogeneous sources rely on neural sequence-tosequence models to compute answers [9, 10]. However, this has two significant limitations: (i) sequence-to-sequence models are not explainable, as they only generate strings as outputs, making it infeasible for users to decide whether to trust the answer; (ii) sequence-to-sequence models require inputs to be cast into token sequences first. This loses insightful information on relationships between evidences [64]. Such inter-evidence connections can be helpful in separating relevant information from noise. Approach. We introduce Explaignn1 (EXPLAinable Conversational Question Answering over Heterogeneous Graphs via Iterative Graph Neural Networks), a flexible pipeline that can be configured for optimizing performance, efficiency, and explainability for ConvQA systems over heterogeneous sources. The proposed method operates in three stages: (i) Derivation of a self-contained structured representation (SR) of the user\u2019s information need (or intent) from the potentially incomplete input utterance and the conversational context, making the entities, relation, and expected answer type explicit. (ii) Retrieval of relevant evidences and answer candidates from heterogeneous information sources: a curated KB, a text corpus, a collection of web tables, and infoboxes. (iii) Construction of a graph from these evidences, as the basis for applying graph neural networks (GNNs). The GNNs are iteratively applied for computing the best answers and supporting evidences in a small number of steps. 1Code, data, and demo at https://explaignn.mpi-inf.mpg.de. 1 arXiv:2305.01548v2 [cs.IR] 18 Jul 2023 \fSIGIR \u201923, July 23\u201327, 2023, Taipei, Taiwan Christmann et al. Figure 1: Toy heterogeneous graph for answering \ud835\udc5e3, showing two pruning iterations. The graph is iteratively reduced by GNN inference to identify the key evidences. The subgraph surrounded by the blue dotted line is the result of the first iteration, while the green line indicates the graph after the second. From this smaller subgraph, the final answer (Tom Hanks) is inferred. A key novelty is that each iteration reduces the graph in size, and only the final iteration yields the answer and a small usercomprehensible set of explanatory evidences. Our overarching goal is to provide end-user explainability to GNN inference by iteratively reducing the graph size, so that the final answers can indeed be claimed to be causal w.r.t. the remaining evidences, hence the name Explaignn. A toy example of such GNN-based reduction is in Fig. 1. Contributions. 
We make the following salient contributions: \u2022 Proposing a new method for ConvQA over heterogeneous sources, with a focus on computing explainable answers. \u2022 Devising a mechanism for iterative application of GNN inference to reduce such graphs until the best answers and their explanatory evidences are obtained. \u2022 Developing an attention mechanism which ensures that during message passing only question-relevant information is spread over the local neighborhoods. 2 CONCEPTS AND NOTATION We now introduce salient concepts and notation, that will help understand the remainder of the paper. Table 1 contains the important notation. Question. A question \ud835\udc5easks about factoid information, like Who wrote the book Angels and Demons? (intent is explicit), or How long is the novel? (intent is implicit). Answer. An answer \ud835\udc4eto \ud835\udc5ecan be an entity (like Tom Hanks), or a literal (like 768 pages). Conversation. A conversation is a sequence of questions and answers \u27e8\ud835\udc5e1,\ud835\udc4e1,\ud835\udc5e2,\ud835\udc4e2, . . . \u27e9. The initial question \ud835\udc5e1 is complete, i.e. makes the information need explicit. The follow-up questions \ud835\udc5e\ud835\udc61 (\ud835\udc61>1) may be incomplete, building upon the ongoing conversational history, therefore leaving context information implicit. Turn. A conversation turn \ud835\udc61comprises a \u27e8\ud835\udc5e\ud835\udc61,\ud835\udc4e\ud835\udc61\u27e9pair. Knowledge base. A curated knowledge base is defined as a set of facts. Each fact consists of a subject, a predicate, an object, and an optional series of \u27e8qualifier-predicate, qualifier-object\u27e9pairs: \u27e8\ud835\udc60, \ud835\udc5d, \ud835\udc5c; \ud835\udc5e\ud835\udc5d1, \ud835\udc5e\ud835\udc5c1; \ud835\udc5e\ud835\udc5d2, \ud835\udc5e\ud835\udc5c2; . . . \u27e9. An example fact is: \u27e8Angels and Demons, cast member, Tom Hanks; character, Robert Langdon\u27e9. Text corpus. A text corpus consists of a set of text documents. Table. A table is a structured form of representing information, and is typically organized into a grid of rows and columns. Individual rows usually record information corresponding to specific entities, while columns refer to specific attributes for these entities. The rowheader, column-header, and the cell hold the entity name, attribute name, and the attribute value, respectively. Infobox. An infobox consists of several entries that are \u27e8attribute name, attribute value\u27e9pairs, and provide salient information on a certain entity. An infobox can be perceived as a special instantiation of a table, recording information on a single entity, and consisting of exactly two columns and a variable number of rows. Evidence. An evidence \ud835\udf16is a short text snippet expressing factual information, and can be retrieved from a KB, a text corpus, a table, and an infobox. To be specific, evidences are verbalized KB-facts, text-sentences, table-records, or infobox-entries. 2 \fExplainable Conversational Question Answering over Heterogeneous Sources via Iterative Graph Neural Networks SIGIR \u201923, July 23\u201327, 2023, Taipei, Taiwan Generate intentexplicit structured representation (SR) of information need \ud835\udc5e! \u27e8\ud835\udc5e!, \ud835\udc4e!, \u2026 , \ud835\udc5e\"#!, \ud835\udc4e\"#!\u27e9 Retrieve evidences from heterogeneous information sources \ud835\udc46\ud835\udc45! Heterogeneous Answering (HA) {\ud835\udf16} \ud835\udc4e\"#$% ! 
Evidence Retrieval (ER) Question Understanding (QU) GNN \u2026 GNN GNN {\ud835\udf16\"#$%}! Construct graph Conversational history and current question Predicted answer and explanatory evidences Iter. 1 Iter. (i-1) Iter. i Figure 2: An overview of the Explaignn pipeline, illustrating the three main stages of the approach. Table 1: Notation for salient concepts in Explaignn. Notation Concept \ud835\udc5e\ud835\udc61,\ud835\udc4e\ud835\udc61 Question and answer at turn \ud835\udc61 \ud835\udc46\ud835\udc45\ud835\udc61 Structured representation at turn \ud835\udc61 \ud835\udc52,\ud835\udf16 Entity, evidence \ud835\udc38, E Sets of entity nodes and evidence nodes in graph N (\ud835\udc52) Evidences in 1-hop neighborhood of entity \ud835\udc52 N (\ud835\udf16) Entities in 1-hop neighborhood of evidence \ud835\udf16 \ud835\udc86, \ud835\udf50, \ud835\udc7a\ud835\udc79 Encoding of \ud835\udc52/ \ud835\udf16/ \ud835\udc46\ud835\udc45 \ud835\udc86\ud835\udc59, \ud835\udf50\ud835\udc59 Encoding of \ud835\udc52/ \ud835\udf16after \ud835\udc59GNN layers \ud835\udc51 Encoding dimension \ud835\udefc\ud835\udc52,\ud835\udf16, \ud835\udefc\ud835\udf16,\ud835\udc52 SR-attention of \ud835\udf16(\ud835\udc52) for updating \ud835\udc52(\ud835\udf16) \ud835\udc5a\ud835\udc52,\ud835\udc5a\ud835\udf16 Aggregated messages passed to \ud835\udc52/ \ud835\udf16 \ud835\udc60\ud835\udc52,\ud835\udc60\ud835\udf16 Score for \ud835\udc52/ \ud835\udf16 \ud835\udc64\ud835\udc52,\ud835\udc64\ud835\udf16 Multi-task weight for answer / evidence score prediction L, L\ud835\udc52, L\ud835\udf16 Loss functions: total, entity relevance, evidence relevance Structured representation. The structured representation \ud835\udc46\ud835\udc45[9] for \ud835\udc5eis an intent-explicit version of the question. The SR represents the current question using four slots: (i) context entity, (ii) question entity, (iii) relation, (iv) expected answer type. This intent-explicit representation can be represented in linear form as a single string, using delimiters to separate slots (\u2018|\u2019 in our case). The SR for \ud835\udc5e3 is: \u27e8Angels and Demons | Robert Langdon | who played him in the films | human \u27e9 In this example, Angels and Demons is the context entity, Robert Langdon the question entity, who played him in the films the relation, and human the expected answer type. We consider a relaxed notion of relations, in the sense of not being tied to KB terminology that canonicalizes textual relations to predicates. This allows for softer matching in evidences, and answering questions for which the information cannot easily be represented by such predicates. 3 OVERVIEW The architecture of Explaignn (Fig. 2) follows the pipeline of Convinse [9]: (i) an intent-explicit structured representation (SR) of the current information need is generated, (ii) evidences are retrieved from heterogeneous sources, and (iii) this large set of relevant evidences is used for answering the question and providing explanatory evidences. 3.1 Question understanding We use [9] and generate a structured representation (SR) capturing the complete intent in the current question and the conversational context (the SR was loosely inspired by literature on quantity queries [15]). For generating SRs, we leverage a fine-tuned autoregressive sequence-to-sequence model (BART [27]). Preventing hallucination. We propose a novel mechanism to avoid hallucinations [32] in SRs. 
For \ud835\udc5e3 of the running example, the trained model could generate Robert de Niro as the (topically unrelated) question entity of the output SR \u27e8Dan Brown | Robert de Niro | who played him in the films | human \u27e9. This would lead the entire QA system astray. The SR is supposed to represent the information need on the surface level, and therefore expected to use the vocabulary present in the input (the conversational history and current question). This makes it possible to identify hallucinations: output words that are absent from the entire conversation so far, indicate such a situation. To fix this, we generate the top-\ud835\udc58SRs (\ud835\udc58=10 in experiments), and choose the highest-ranked SR that does not include any hallucinated words. Note that the expected answer type is an exception here: it may, by design, not be present in the input. So we remove this slot before performing the hallucination check. Answer types are often not made explicit but substantially help the QA system [47]. 3.2 Evidence retrieval KB evidences and entity disambiguations are obtained via running Clocq [7] on the SR (without delimiters). Text, table and infobox evidences are obtained by mapping the disambiguated KB-entities to Wikipedia pages, which are then parsed for extracting textsentences, table-records, and infobox-entries corresponding to the respective entities. Evidences from KB, web tables, or infoboxes, being natively in (semi-)structured form, are then verbalized [30, 35] into token sequences (as in [9]). Examples can be seen inside Fig. 1, where each evidence is tagged with its source. Use of SR slot labels. Convinse considers all entities in the SR during retrieval, regardless of the slot in which they appear. In contrast, Explaignn restricts the entities by retaining only those mentioned within the context entity or question entity slots. Evidences are then only retrieved for this restricted set of entities. This prunes noisy disambiguations in the relation and type slots. 4 HETEROGENEOUS ANSWERING We first describe heterogeneous answering graph construction (Sec. 4.1). Next, we present the proposed question-aware GNN architecture, consisting of the encoder (Sec. 4.2), the message passing procedure (Sec. 4.3), the answer candidate scoring (Sec. 4.4), and the multi-task learning mechanism for GNN training (Sec. 4.5). 4.1 Graph construction Given the evidences retrieved in the previous stage, we construct a heterogeneous answering graph that has two types of nodes: entities and evidences. The graph contains textual information as entity labels and evidence texts, as well as the connections between these 3 \fSIGIR \u201923, July 23\u201327, 2023, Taipei, Taiwan Christmann et al. two kinds of nodes. Specifically, an entity node \ud835\udc52is connected to an evidence node \ud835\udf16, if \ud835\udc52is mentioned in \ud835\udf16. There are no direct edges between pairs of entities, or pairs of evidences. An example heterogeneous graph is shown in Fig. 1. Inducing connections between retrieved evidences. Shared entities are the key elements that induce relationships between the initial plain set of retrieved evidences. So one requires entity markup on the verbalized evidences coming from the different sources, that are grounded to a KB for canonicalization. Note that during evidence verbalization, original formats are not discarded. Thus, for KB-facts, entity mappings are already known. 
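A small sketch of how the SR slots can be parsed and how the hallucination check over the top-k generated SRs could be realized is shown below. Only the slot order, the '|' delimiter, and the exclusion of the answer-type slot follow the description above; the function names and the fall-back to the top-ranked SR are our own assumptions.

```python
def parse_sr(sr: str) -> dict:
    """Split a linearized SR into its four slots (delimiter '|'); slots may be empty."""
    context_entity, question_entity, relation, answer_type = [s.strip() for s in sr.split("|")]
    return {"context_entity": context_entity, "question_entity": question_entity,
            "relation": relation, "answer_type": answer_type}

def select_sr(top_k_srs: list[str], conversation_text: str) -> str:
    """Pick the highest-ranked SR whose words all occur in the conversation so far.
    The expected answer type is excluded from the check, since it may legitimately
    introduce new vocabulary. Falling back to the top-1 SR when every candidate
    contains hallucinated words is an assumption of this sketch."""
    vocab = set(conversation_text.lower().split())
    for sr in top_k_srs:
        slots = parse_sr(sr)
        words = (slots["context_entity"] + " " + slots["question_entity"] + " "
                 + slots["relation"]).lower().split()
        if all(w in vocab for w in words):   # no hallucinated words
            return sr
    return top_k_srs[0]
```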
For text, table, and infobox evidences from Wikipedia, we link anchor texts to their referenced entity pages. These are then mapped to their corresponding KB-entities. In absence of anchor texts, named entity recognition and disambiguation systems can be used [16, 28, 58]. Dates and years that appear in evidences are detected using regular expressions, and are added as entities to the graph as well. In Fig. 1, entity mentions are underlined within evidences. 4.2 Node encodings GNNs incrementally update node encodings within local neighborhoods, leveraging message passing algorithms. However, these node encodings have to be initialized first using an encoder. Evidence encodings. For the initial encoding of the nodes, we make use of cross-encodings [46] (originally proposed for sentence pair classification tasks in [11]). The evidence text, concatenated with the SR, is fed into a pre-trained language model. By using SRspecific cross-encodings, we ensure that the node encodings capture the information relevant for the current question, as represented by the SR. The encodings obtained for the individual tokens are averaged, yielding the initial evidence encoding \ud835\udf500 \u2208R\ud835\udc51. Entity encodings. The entity encodings are derived analogously, using cross-encodings with the SR. We further append the KB-type of an entity to the entity label with a separator, before feeding the respective token sequence into the language model. Including entity types is beneficial, and often crucial, for the reasons below: (i) The cross-encoding can leverage the attention between the expected answer type in the SR and the entity type, which can be viewed as a soft-matching between the two. (ii) When there are multiple entities with the same label, the entity type may be a discriminating factor downstream for each of these entities (e.g., there are three entities of different types named \u201cRobert Langdon\u201d in Fig. 1). (iii) For long-tail entities, the entity type can add decisive informative value to the plain entity label. Analogous to evidence nodes, the encodings of individual tokens are averaged to obtain the entity encoding \ud835\udc860 \u2208R\ud835\udc51. SR encoding. The SR is also encoded via the language model, averaging the token encodings to obtain the SR encoding \ud835\udc7a\ud835\udc79\u2208R\ud835\udc51. Note that the same language model is used for the initial encodings of evidences, entities and the SR. The parameters of this language model are updated during GNN training, to ensure that the encoder adapts to the syntactic structure of the SR. 4.3 Message passing The core part of the GNN is the message passing [56, 63] procedure. In this step, information is propagated among neighboring nodes, leveraging the graph structure. Given our graph design, in each message passing step information is shared between evidences and the connected entities. Again, we aim to focus on question-relevant information [2, 13], as captured by the SR, instead of spreading general information within the graph. Therefore, we propose to weight the messages of neighboring entities using a novel attention mechanism, that re-weights the messages by their question-relevance, or equivalently, their SR-relevance. 
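Before this attention is spelled out, the graph construction and the SR-conditioned cross-encodings described above can be sketched as follows. The encoder checkpoint (DistilRoBERTa, used here for concreteness), the field names, and the helper signatures are illustrative assumptions, not the exact implementation.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Illustrative sketch: SR-conditioned cross-encodings (mean-pooled token
# embeddings) and bipartite entity-evidence graph construction.
tokenizer = AutoTokenizer.from_pretrained("distilroberta-base")
encoder = AutoModel.from_pretrained("distilroberta-base")

def cross_encode(text: str, sr: str) -> torch.Tensor:
    """Encode `text` together with the SR and mean-pool the token embeddings.
    For entity nodes, the KB type can be appended to the entity label before encoding."""
    inputs = tokenizer(text, sr, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = encoder(**inputs).last_hidden_state        # (1, seq_len, d)
    return hidden.mean(dim=1).squeeze(0)                    # (d,)

def build_graph(evidences: list[dict]) -> tuple[list[str], list[tuple[int, int]]]:
    """Entities and evidences become nodes; an edge connects an evidence to every
    entity mentioned in it (no entity-entity or evidence-evidence edges)."""
    entities = sorted({ent for ev in evidences for ent in ev["entities"]})
    ent_index = {ent: i for i, ent in enumerate(entities)}
    edges = [(j, ent_index[ent])                 # (evidence node, entity node)
             for j, ev in enumerate(evidences) for ent in ev["entities"]]
    return entities, edges
```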
This SR-attention is computed by $\alpha^l_{\epsilon,e} \in \mathbb{R}$:

$$\alpha^l_{\epsilon,e} = \operatorname{softmax}_{\mathcal{N}(\epsilon)}\left(\operatorname{lin}^l_{\alpha\epsilon}(\boldsymbol{e}^{l-1}) \cdot \boldsymbol{SR}\right) = \frac{\operatorname{lin}^l_{\alpha\epsilon}(\boldsymbol{e}^{l-1}) \cdot \boldsymbol{SR}}{\sum_{e_i \in \mathcal{N}(\epsilon)} \operatorname{lin}^l_{\alpha\epsilon}(\boldsymbol{e}^{l-1}_i) \cdot \boldsymbol{SR}} \qquad (1)$$

where we first project the entity encodings using a linear transformation ($\operatorname{lin}^l_{\alpha\epsilon}: \mathbb{R}^d \to \mathbb{R}^d$), and then multiply with the SR encoding to obtain a score. The softmax function is then applied over all entities neighboring a respective evidence ($e_i \in \mathcal{N}(\epsilon)$). Thus, an entity can obtain different SR-attention scores for each evidence, depending on the scores of other neighboring entities. The messages passed to $\epsilon$ are then aggregated, weighted by the respective SR-attention, and projected using another linear layer:

$$\boldsymbol{m}^l_{\epsilon} = \operatorname{lin}^l_{m\epsilon}\left(\sum_{e \in \mathcal{N}(\epsilon)} \alpha^l_{\epsilon,e} \cdot \boldsymbol{e}^{l-1}\right) \qquad (2)$$

where $\operatorname{lin}^l_{m\epsilon}$ is the linear layer ($\operatorname{lin}^l_{m\epsilon}: \mathbb{R}^d \to \mathbb{R}^d$). The updated evidence encoding is then given by adding the evidence encoding from the previous layer ($\boldsymbol{\epsilon}^{l-1}$), and the messages passed from the neighbors ($\boldsymbol{m}^l_{\epsilon}$), activated by a ReLU function:

$$\boldsymbol{\epsilon}^l = \operatorname{ReLU}\left(\boldsymbol{m}^l_{\epsilon} + \boldsymbol{\epsilon}^{l-1}\right) \qquad (3)$$

The intuition here is that in each evidence update, the question-relevant information held by neighboring entities is passed on to an evidence, and then incorporated in its encoding. The process for updating the entity encodings is analogous, but makes use of different linear transformation functions. The SR-attention $\alpha^l_{e,\epsilon}$ of evidences for an entity $e$ is obtained as follows:

$$\alpha^l_{e,\epsilon} = \operatorname{softmax}_{\mathcal{N}(e)}\left(\operatorname{lin}^l_{\alpha e}(\boldsymbol{\epsilon}^{l-1}) \cdot \boldsymbol{SR}\right) \qquad (4)$$

where $\operatorname{lin}^l_{\alpha e}$ ($\mathbb{R}^d \to \mathbb{R}^d$) is the linear transformation function. Here, the softmax function is applied over all evidences surrounding the respective entity (i.e. $\epsilon_i \in \mathcal{N}(e)$). Again, the messages passed to an entity $e$ are weighted by the respective SR-attention, and projected using a linear layer ($\operatorname{lin}^l_{me}: \mathbb{R}^d \to \mathbb{R}^d$):

$$\boldsymbol{m}^l_e = \operatorname{lin}^l_{me}\left(\sum_{\epsilon \in \mathcal{N}(e)} \alpha^l_{e,\epsilon} \cdot \boldsymbol{\epsilon}^{l-1}\right) \qquad (5)$$

The updated entity encoding is then given by:

$$\boldsymbol{e}^l = \operatorname{ReLU}\left(\boldsymbol{m}^l_e + \boldsymbol{e}^{l-1}\right) \qquad (6)$$

These message passing steps are repeated $L$ times, i.e. the GNN has $L$ layers.
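The following PyTorch sketch mirrors Eq. 1-6 for a single layer on the bipartite entity-evidence graph, using a dense evidence-entity adjacency matrix for brevity; batching, sparsity handling, and other implementation details of the actual system are omitted, so this is a minimal illustration rather than the paper's code.

```python
import torch
import torch.nn as nn

class SRAttentionLayer(nn.Module):
    """One message passing layer following Eq. 1-6 (illustrative sketch)."""

    def __init__(self, d: int):
        super().__init__()
        self.lin_alpha_ev = nn.Linear(d, d)   # lin^l_{alpha eps}: attention for evidence updates
        self.lin_msg_ev = nn.Linear(d, d)     # lin^l_{m eps}: message projection for evidences
        self.lin_alpha_ent = nn.Linear(d, d)  # lin^l_{alpha e}: attention for entity updates
        self.lin_msg_ent = nn.Linear(d, d)    # lin^l_{m e}: message projection for entities

    def forward(self, ent, ev, adj, sr):
        # ent: (n_ent, d), ev: (n_ev, d), sr: (d,)
        # adj: (n_ev, n_ent) float matrix; adj[j, i] = 1 if entity i occurs in evidence j.
        ent_prev, ev_prev = ent, ev
        neg_inf = torch.finfo(ent.dtype).min

        # Eq. 1: SR-attention of neighboring entities for each evidence.
        scores_e = self.lin_alpha_ev(ent_prev) @ sr                     # (n_ent,)
        att_ev = (scores_e.unsqueeze(0).expand_as(adj)
                  .masked_fill(adj == 0, neg_inf).softmax(dim=1))       # (n_ev, n_ent)
        # Eq. 2-3: aggregate weighted entity messages, update evidence encodings.
        ev = torch.relu(self.lin_msg_ev(att_ev @ ent_prev) + ev_prev)

        # Eq. 4: SR-attention of neighboring evidences for each entity.
        scores_ev = self.lin_alpha_ent(ev_prev) @ sr                    # (n_ev,)
        att_ent = (scores_ev.unsqueeze(0).expand_as(adj.t())
                   .masked_fill(adj.t() == 0, neg_inf).softmax(dim=1))  # (n_ent, n_ev)
        # Eq. 5-6: aggregate weighted evidence messages, update entity encodings.
        ent = torch.relu(self.lin_msg_ent(att_ent @ ev_prev) + ent_prev)
        return ent, ev
```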
Within these layers the question-relevant information is spread over the graph. Basically, nodes in the graph learn about their question relevance, based on the surrounding nodes and their relevance, and capture this information in their node encodings.

Figure 3: Training of and inference with iterative GNNs.

4.4 Answer score prediction
Scoring answer candidates makes use of the node encodings obtained after $L$ message passing steps. We model the answer prediction as a node classification task [21, 54, 63], by computing an answer score for each entity node with consideration of their question relevance as captured within the node encodings. The computation of the answer score $s_e$ is similar to the technique used for computing the SR-attention of an entity:

$$s_e = \operatorname{softmax}_{E}\left(\operatorname{lin}_e(\boldsymbol{e}^L) \cdot \boldsymbol{SR}\right) \qquad (7)$$

We project the entity encoding using a linear layer ($\operatorname{lin}_e: \mathbb{R}^d \to \mathbb{R}^d$), and multiply the projected encoding with the encoding of the SR. The softmax function is applied over all entity nodes (i.e. $e_i \in E$).

4.5 Multi-task learning
Our training data for the GNN consists of ⟨graph, answer⟩ pairs. The gold answer is always an entity or a small set of entities. Consequently, the positive training data is sparse: there can be hundreds of entities in the graph but only one gold answer. To better use our training data, we propose a multi-task learning (MTL) [25, 51] approach. Given a GNN, we pose two complementary node classification tasks: (i) the answer prediction, and (ii) the prediction of evidence relevance. Evidences connected to gold answers are viewed as relevant, and others as irrelevant. Our method learns to predict a relevance score $s_{\epsilon}$ for each evidence node $\epsilon$, analogous to the answer score prediction:

$$s_{\epsilon} = \operatorname{softmax}_{\epsilon \in \mathcal{E}}\left(\operatorname{lin}_{\epsilon}(\boldsymbol{\epsilon}^L) \cdot \boldsymbol{SR}\right) \qquad (8)$$

where $\mathcal{E}$ is the set of all evidence nodes (and $\operatorname{lin}_{\epsilon}: \mathbb{R}^d \to \mathbb{R}^d$). For both tasks, answer prediction and evidence prediction, we use binary cross-entropy over the predicted scores as loss functions: $\mathcal{L}_e$ and $\mathcal{L}_{\epsilon}$, respectively. The final loss used for training the GNN is then defined as a weighted sum:

$$\mathcal{L} = w_e \cdot \mathcal{L}_e + w_{\epsilon} \cdot \mathcal{L}_{\epsilon} \qquad (9)$$

where $w_e$ and $w_{\epsilon}$ are hyper-parameters to control the multi-task learning, and are chosen such that $w_e + w_{\epsilon} = 1$. The described GNN architecture can then be trained for predicting scores of answer candidates and evidences, and used for inference on the whole input graphs in one shot.

5 ITERATIVE GRAPH NEURAL NETWORKS
We now outline how we use trained GNNs for iteratively reducing the graph at inference time.
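Before doing so, the scoring heads of Sec. 4.4 and the multi-task objective of Sec. 4.5 can be summarized in a short sketch. Tensor shapes follow the layer sketch above; the default loss weights are placeholders, and the label tensors are simply 0/1 relevance indicators as described above.

```python
import torch.nn as nn
import torch.nn.functional as F

class ScoringHeads(nn.Module):
    """Answer and evidence scoring (Eq. 7-8) plus the MTL loss (Eq. 9); a sketch."""

    def __init__(self, d: int, w_e: float = 0.5, w_ev: float = 0.5):
        super().__init__()
        self.lin_ent = nn.Linear(d, d)   # lin_e
        self.lin_ev = nn.Linear(d, d)    # lin_epsilon
        self.w_e, self.w_ev = w_e, w_ev  # chosen such that w_e + w_ev = 1

    def forward(self, ent_L, ev_L, sr):
        # Eq. 7: answer scores, softmax over all entity nodes.
        s_ent = (self.lin_ent(ent_L) @ sr).softmax(dim=0)   # (n_ent,)
        # Eq. 8: relevance scores, softmax over all evidence nodes.
        s_ev = (self.lin_ev(ev_L) @ sr).softmax(dim=0)      # (n_ev,)
        return s_ent, s_ev

    def loss(self, s_ent, s_ev, ent_labels, ev_labels):
        # Eq. 9: weighted sum of binary cross-entropy losses for both tasks;
        # labels are 0/1 floats (gold answers / evidences connected to them).
        l_ent = F.binary_cross_entropy(s_ent, ent_labels)
        l_ev = F.binary_cross_entropy(s_ev, ev_labels)
        return self.w_e * l_ent + self.w_ev * l_ev
```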
Specifically, we comment on the benefits that such iterative GNNs have for robustness, explainability and efficiency. Drawbacks of a one-shot prediction. There are several drawbacks of predicting the answer from the full graph during inference: (i) Directly predicting the answer from hundreds of answer candidates is non-trivial, and node classification may struggle to manifest fine-grained differences between such candidate answers in their encodings. This can negatively impact the robustness of the method on unseen data. (ii) Further, if the answer is predicted from the whole input graph at inference time, this means that all nodes in the graph contribute towards the answer prediction. Showing the whole graph, consisting of hundreds of nodes, to explain how the answer was derived is not practical. The SR-attention scores could be an indicator as to which nodes were more relevant, but attention is not always sufficient as an explanation [20]. Hence, answer explainability would be limited. (iii) Finally, obtaining cross-encodings for hundreds of nodes (entities and evidences) can be computationally expensive, affecting the runtime efficiency of the system. A large fraction of these initial graph nodes might be rather irrelevant, which can often be identified using a more light-weight (i.e. more efficient) encoder. Iterative inference of GNNs. To overcome these drawbacks, we propose iterative GNNs: instead of predicting the answer in one shot, we iteratively apply trained GNNs of the outlined architecture during inference. The key idea is to shrink the graph after each iteration. This can be done via the evidence scores \ud835\udc60\ud835\udf16predicted by the GNNs, to identify the most relevant evidences. These evidences, and the connected entities, are used to initiate the graph given as input to the next iteration. In the final iteration, the answer is predicted from the reduced graph only. Fig. 3 illustrates training and inference with iterative GNNs. Note that these GNNs are still trained on the full graphs in the training data, and are run iteratively only at inference time. This iterative procedure is feasible as the proposed GNN architecture is inherently independent of the input graph size: the same GNN trained on 500 evidences can be applied on a graph with 100, 20, or 5 evidences. The GNNs essentially learn to spread question-relevant information within local neighborhoods, which is not only required for large graphs with hundreds of evidences, but also for smaller graphs with a handful of nodes. Further, this iterative procedure is tractable only with the flexibility of scoring both entities and evidences in the graph, using the same GNN architecture. Enhancing robustness. Within each iteration, the task complexity is decreased compared to the task complexity the original GNN was trained on. For example, the GNN was originally trained for predicting small answer sets and relevant evidences from several hundreds of nodes, but in the iterative setup, rather needs to identify the top-100 evidences during inference. Thus, the GNN model now 5 \fSIGIR \u201923, July 23\u201327, 2023, Taipei, Taiwan Christmann et al. has to be less discriminatory at inference time than during training. This can help improve robustness. Facilitating explainability. A primary benefit of the iterative mechanism is that the intermediate graphs can be used to better understand how answer prediction works via a GNN. 
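As a compact illustration of the iterative inference loop described above, the sketch below prunes the evidence set with a trained GNN in each iteration and predicts the answer on the final small graph. The assumed GNN interface (callables returning per-node scores) and the per-iteration evidence budgets are illustrative assumptions, not the system's exact configuration.

```python
def iterative_inference(evidences, sr, pruning_gnn, answering_gnn,
                        keep_per_iteration=(100, 5)):
    """Sketch of iterative GNN inference. Each pruning iteration keeps only the
    top-scoring evidences; the final iteration predicts the answer from the
    small remaining graph. Both GNN callables are assumed to take
    (evidences, SR) and return (entities, entity_scores, evidence_scores)."""
    for k in keep_per_iteration:                                   # pruning iterations
        _, _, evidence_scores = pruning_gnn(evidences, sr)         # s_epsilon per evidence
        ranked = sorted(zip(evidences, evidence_scores),
                        key=lambda pair: pair[1], reverse=True)
        evidences = [ev for ev, _ in ranked[:k]]                   # shrink the graph
    entities, entity_scores, _ = answering_gnn(evidences, sr)      # final answering iteration
    answer = max(zip(entities, entity_scores), key=lambda pair: pair[1])[0]
    return answer, evidences      # predicted answer + small explanatory evidence set
```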
Enhancing robustness. Within each iteration, the task complexity is decreased compared to the task complexity the original GNN was trained on. For example, the GNN was originally trained for predicting small answer sets and relevant evidences from several hundreds of nodes, but in the iterative setup it rather needs to identify the top-100 evidences during inference. Thus, the GNN model has to be less discriminatory at inference time than during training. This can help improve robustness.

Facilitating explainability. A primary benefit of the iterative mechanism is that the intermediate graphs can be used to better understand how answer prediction works via a GNN. Showing all the information contained in the original input graph with hundreds of entities and evidences to the user is not practical: we can iteratively derive a small set of evidences (say five), from which the answer is predicted. These evidences can then be presented to the end user, enhancing user explainability.

Improving efficiency. To facilitate the runtime efficiency of the answering process, we refrain from encoding entities via cross-encodings in the shrinking (or pruning) iterations. Instead, we initiate the entity encodings using a sum of the surrounding evidences, weighted by their question-relevance (i.e., their SR-attention):

$\boldsymbol{e}^{0} = \sum_{\epsilon \in N(e)} \alpha_{e,\epsilon} \cdot \boldsymbol{\epsilon}^{0}$   (10)

where the SR-attention $\alpha_{e,\epsilon}$ is computed as in Eq. 1, employing a different linear projection. This can be perceived as obtaining alternating encodings of entities: the initial evidence encodings are used to initialize entity encodings (inspired by [3]). The message passing would then proceed as outlined in Sec. 4.3.

Instantiation. We train several one-shot GNNs of the architecture outlined above, using different weights on the answer prediction and evidence relevance prediction tasks ($w_e$ and $w_\epsilon$, respectively) in the MTL setup. Further, we train GNNs using either cross-encodings or alternating encodings for entities. Training is conducted on the full input graphs present in the training set. We then simply instantiate all pruning iterations with the GNN that obtains the best evidence prediction performance on the graphs in the development (dev) set. Similarly, we use the trained GNN that obtained the best answering performance on the dev set to initiate the final answering iteration. Finally, the answer predicted by the system is given by:

$a_{pred} = \arg\max_{e \in E} s_e$   (11)

This is shown to the end user, together with the explanatory evidences $\{\epsilon_{pred}\}$ (see outputs in Fig. 2), that are simply the set of evidences used in the answer prediction step.
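Eq. (10) amounts to attention-weighted pooling of the surrounding evidence encodings. A rough PyTorch sketch follows; the tensor names and the incidence-matrix interface are ours, not the authors' code.

import torch
import torch.nn.functional as F

def init_entity_encodings(ev_enc, sr_enc, incidence, lin_alpha):
    # ev_enc: (num_evidences, d) initial evidence encodings (eps^0)
    # sr_enc: (d,) encoding of the structured representation (SR)
    # incidence: (num_entities, num_evidences) 0/1 entity-evidence adjacency
    # lin_alpha: torch.nn.Linear(d, d) projection used for the SR-attention
    relevance = lin_alpha(ev_enc) @ sr_enc                      # question-relevance per evidence
    logits = relevance.repeat(incidence.shape[0], 1)            # (num_entities, num_evidences)
    logits = logits.masked_fill(incidence == 0, float("-inf"))  # attend only to neighboring evidences
    alpha = torch.nan_to_num(F.softmax(logits, dim=1))          # entities without evidences get zeros
    return alpha @ ev_enc                                       # Eq. (10): e^0 = sum_eps alpha * eps^0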
6 RESULTS AND INSIGHTS
6.1 Experimental setup
Dataset. We train and evaluate Explaignn on the ConvMix [9] benchmark, which was designed for ConvQA over heterogeneous information sources. The dataset has 16,000 questions (train: 8,400 questions, dev: 2,800 questions, test: 4,800 questions), within 3,000 conversations of five (2,800) or ten turns (200, only used for testing).

Metrics. For assessing the answer performance, we use precision at 1 (P@1), as suggested for the ConvMix benchmark. To investigate the ranking capabilities of different methods in more detail, we also measure the mean reciprocal rank (MRR), and hit at 5 (Hit@5). The answer presence (Ans. pres.) is the fraction of questions for which a gold answer is present in a given set of evidences.

Baselines. We compare Explaignn with the state-of-the-art method on the ConvMix dataset, Convinse [9]. Convinse leverages a Fusion-in-Decoder (FiD) [19] model for obtaining the top answer, which is designed to generate a single answer string. In [9], the ranked entity answers are then derived by collecting the top-$k$ answer candidates with the highest surface-form match, with respect to Levenshtein distance, to the generated answer string. This procedure is somewhat limiting when measuring metrics beyond the first rank (i.e., MRR or Hit@5). Therefore, for fair comparison, we enhanced the FiD model to directly generate top-$k$ answer strings, and then consider the answer candidate with the highest surface-form match for each such generated string (top-$k$ FiD). We further compare with baselines proposed in [9] using question completion and resolution, and use the values reported in [9] for consistency. Note that these methods transform a conversational question into a self-sufficient form, and still need to be coupled with retrieval and answer generation modules.

Configurations. The QU and ER stages were initialized using the Convinse [9] code and data: we used Wikidata as the KB, and made use of the same version (31 January 2022) as earlier work. The Wikipedia evidences were taken from http://qa.mpi-inf.mpg.de/convinse/convmix_data/wikipedia.zip whenever applicable, and retrieved on-the-fly otherwise. This ensures that results are comparable with the results provided in earlier work. ConvMix has ≈3% yes/no questions; these are out of scope for Explaignn. To be able to report numbers on the full benchmark, we follow previous work [9], detect such questions as starting with an auxiliary verb, and answer "yes" to these. The GNNs were implemented from scratch via PyTorch. We used DistilRoBERTa (https://huggingface.co/distilroberta-base) as encoder, which we found to perform slightly better than DistilBERT [48] (the distillation procedure is the same). DistilRoBERTa has an embedding dimension of $d$=768. The GNNs were trained for 5 epochs. We chose an epoch-wise evaluation strategy, and kept the model that achieved the best performance on the dev set. We found three-layer GNNs ($L$=3) most effective. With our graph schema of alternating entities and evidences, using an odd number of layers allows information from evidences to reach relevant entities in their immediate (one hop) and slightly more distant neighborhoods (three or five hops, say). AdamW was used as optimizer, with a learning rate of $10^{-5}$, a batch size of 1, and a weight decay of 0.01. We also used the dev set for choosing the number of GNN iterations, and the MTL weights for pruning and answering. The number of iterations was set to $i$=3. The GNN that maintains the highest answer presence among the top-5 evidences was chosen for instantiating the pruning iterations (alternating encodings for entities, $w_e$=0.3, $w_\epsilon$=0.7), and the GNN obtaining the highest P@1 was chosen for the final answer prediction (cross-encodings for entities, $w_e$=0.5, $w_\epsilon$=0.5). In case there are more than 500 evidences, we retain only the top-500 obtained via BM25 scoring as input to the answering stage. A single GPU (NVIDIA Quadro RTX 8000, 48 GB GDDR6) was used to train and evaluate the models.
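For concreteness, the ranking metrics used throughout the evaluation can be computed per question roughly as follows; this is a simplified sketch in which answer matching is reduced to set membership, rather than the full answer grounding used in the actual evaluation.

def gold_rank(ranked_answers, gold_answers):
    # ranked_answers: candidate answers, best first; gold_answers: set of correct answers
    for rank, answer in enumerate(ranked_answers, start=1):
        if answer in gold_answers:
            return rank
    return None

def precision_at_1(ranked_answers, gold_answers):
    return 1.0 if gold_rank(ranked_answers, gold_answers) == 1 else 0.0

def mrr(ranked_answers, gold_answers):
    rank = gold_rank(ranked_answers, gold_answers)
    return 1.0 / rank if rank else 0.0

def hit_at_5(ranked_answers, gold_answers):
    rank = gold_rank(ranked_answers, gold_answers)
    return 1.0 if rank is not None and rank <= 5 else 0.0

def answer_presence(evidences, gold_answers):
    # 1.0 if any retrieved evidence mentions a gold answer, else 0.0 (averaged over questions)
    return 1.0 if any(gold in evidence for evidence in evidences for gold in gold_answers) else 0.0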
6.2 Key findings
This section presents the main experimental results on the ConvMix test set. All provided metrics are averaged over all questions in the dataset. The best method for each column is shown in bold. Statistical significance over the best baseline is indicated by an asterisk (*), and is measured via paired $t$-tests for MRR, and McNemar's test for binary variables (P@1 or Hit@5), with $p < 0.05$ in both cases.

Table 2: Comparison of answering performance on the ConvMix [9] test set, using gold answers $\{a_{gold}\}$ in the history.
Method                               | P@1    | MRR    | Hit@5
Q. Resolution [59] + BM25 + FiD [19] | 0.282  | 0.289  | 0.297
Q. Rewriting [44] + BM25 + FiD [19]  | 0.271  | 0.278  | 0.285
Convinse [9] (original)              | 0.342  | 0.365  | 0.386
Convinse [9] (top-k FiD)             | 0.343  | 0.378  | 0.431
Explaignn (proposed)                 | 0.406* | 0.471* | 0.561*

Table 3: Comparison of answering performance on the ConvMix test set, using predicted answers $\{a_{pred}\}$ in the history.
Method                               | P@1    | MRR    | Hit@5
Q. Resolution [59] + BM25 + FiD [19] | 0.243  | 0.250  | 0.257
Q. Rewriting [44] + BM25 + FiD [19]  | 0.221  | 0.227  | 0.235
Convinse [9] (original)              | 0.278  | 0.286  | 0.294
Convinse [9] (top-k FiD)             | 0.279  | 0.308  | 0.351
Explaignn (proposed)                 | 0.339* | 0.398* | 0.477*

Explaignn improves the answering performance. The main results in Table 2 demonstrate the performance benefits of Explaignn over the baselines. Explaignn significantly outperforms the best baseline on all metrics, illustrating the success of using iterative graph neural networks in ConvQA. As is clear from the method descriptions in Table 2, all baselines crucially rely on the generative reader model of FiD. FiD can ingest multiple evidences to produce the answer, but fails to capture their relationships explicitly; modeling such relationships is unique to our graph-based pipeline. Our adaptation of using top-$k$ FiD instead of the default top-1 improved the ranking capabilities (MRR, Hit@5) of the strongest baseline Convinse. However, Explaignn still substantially improved over Convinse with top-$k$ FiD.

Explaignn is robust to wrong predictions in earlier turns. Unlike many existing works, we also evaluated the methods in a realistic scenario, in which the predicted answers $a_{pred}$ are used as (noisy) input for the conversational history, instead of the standard yet impractical choice of inserting gold answers from the benchmark. Results are shown in Table 3 (cf. Table 2, which shows an evaluation with gold answers). While the performance of all methods drops in this more challenging setting, the trends are very similar, indicating that Explaignn can successfully overcome failures (i.e., incorrect answer predictions) in earlier turns. Explaignn again outperforms all baselines significantly, including a P@1 jump from 0.279 for the strongest baseline to 0.339.
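The significance protocol above (paired t-tests for MRR, McNemar's test for binary metrics) can be reproduced from per-question results along the following lines; the paper does not prescribe a particular implementation, so the SciPy/statsmodels calls here are our assumption.

from scipy.stats import ttest_rel
from statsmodels.stats.contingency_tables import mcnemar

def significance_vs_baseline(mrr_system, mrr_baseline, hits_system, hits_baseline):
    # mrr_*: per-question MRR values (paired lists); hits_*: per-question 0/1 outcomes (P@1 or Hit@5)
    _, p_mrr = ttest_rel(mrr_system, mrr_baseline)            # paired t-test for MRR
    both = sum(1 for s, b in zip(hits_system, hits_baseline) if s and b)
    only_sys = sum(1 for s, b in zip(hits_system, hits_baseline) if s and not b)
    only_base = sum(1 for s, b in zip(hits_system, hits_baseline) if b and not s)
    neither = sum(1 for s, b in zip(hits_system, hits_baseline) if not s and not b)
    table = [[both, only_sys], [only_base, neither]]          # paired 2x2 contingency table
    p_binary = mcnemar(table, exact=False).pvalue             # McNemar's test for binary metrics
    return p_mrr, p_binary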
Heterogeneous sources improve performance. Analysis of source combinations is in Table 4. The first takeaway is that the answering performance was the best when the full spectrum of sources was used. Explaignn can make use of the enhanced answer presence in this case, and answer more questions correctly. Next, the results indicate that adding an information source is always beneficial: the performance of combinations of two sources is in all cases better than for the two sources individually.

Table 4: Effect of varying source combinations at inference time (test set). Explaignn is still trained on all sources.
Sources           | P@1   | MRR   | Hit@5 | Ans. pres.
KB                | 0.363 | 0.427 | 0.511 | 0.617
Text              | 0.233 | 0.300 | 0.380 | 0.530
Tables            | 0.064 | 0.084 | 0.108 | 0.155
Infoboxes         | 0.256 | 0.302 | 0.362 | 0.409
KB+Text           | 0.399 | 0.464 | 0.549 | 0.672
KB+Tables         | 0.363 | 0.429 | 0.515 | 0.629
KB+Infoboxes      | 0.376 | 0.443 | 0.532 | 0.640
Text+Tables       | 0.235 | 0.305 | 0.392 | 0.540
Text+Infoboxes    | 0.309 | 0.369 | 0.445 | 0.572
Tables+Infoboxes  | 0.263 | 0.312 | 0.374 | 0.453
All sources       | 0.406 | 0.471 | 0.561 | 0.683

Table 5: Effect of varying the multi-task learning weights when training the one-shot GNN modules (on the dev set).
Configuration         | P@1   | MRR   | Hit@5 | Ans. pres. | HA runtime
Explaignn (i=1: 500→a_pred; cross-encodings for entities)
w_e=1.0, w_ε=0.0      | 0.440 | 0.501 | 0.578 | 0.229      | 1,018 ms
w_e=0.7, w_ε=0.3      | 0.439 | 0.499 | 0.573 | 0.573      | 1,029 ms
w_e=0.5, w_ε=0.5      | 0.442 | 0.502 | 0.581 | 0.583      | 1,017 ms
w_e=0.3, w_ε=0.7      | 0.431 | 0.495 | 0.572 | 0.586      | 1,013 ms
w_e=0.0, w_ε=1.0      | 0.033 | 0.041 | 0.044 | 0.579      | 1,008 ms
Explaignn (i=1: 500→a_pred; alternating encodings for entities)
w_e=1.0, w_ε=0.0      | 0.417 | 0.485 | 0.573 | 0.508      | 443 ms
w_e=0.7, w_ε=0.3      | 0.410 | 0.470 | 0.545 | 0.568      | 444 ms
w_e=0.5, w_ε=0.5      | 0.404 | 0.472 | 0.555 | 0.569      | 447 ms
w_e=0.3, w_ε=0.7      | 0.405 | 0.472 | 0.552 | 0.589      | 442 ms
w_e=0.0, w_ε=1.0      | 0.117 | 0.169 | 0.221 | 0.581      | 449 ms

6.3 In-depth analysis
Multi-task learning enables flexibility. A systematic analysis of the effect of different multi-task learning weights is shown in Table 5. This analysis is conducted on the dev set, for choosing the best GNN for pruning and answering, respectively. Entities are either encoded via cross-encodings, or alternating encodings (see Eq. 10). The results indicate the runtime benefits of using alternating encodings for entities. Further, when optimized for evidence relevance prediction ($w_\epsilon$ > 0.5), this variant can maintain a high answer presence (measured within the top-5 evidences), indicating that light-weight encoders are indeed sufficient for the pruning iterations. Further, we found putting equal weights on the answer and evidence relevance prediction to be beneficial for answering.

Iterative GNNs do not compromise runtimes. Table 6 reports results of varying the number of iterations $i \in \{1, 2, 3, 4\}$, and the graph size in terms of the number of evidences the answer is predicted from ($|\mathcal{E}| \in \{500, 100, 50, 20, 5\}$). For each row, the reduction in graph size in terms of the number of evidences considered was kept roughly constant for consistency. By our smart use of alternating encodings of entities in the pruning iterations, runtimes remain immune to the number of pruning iterations (times for $i$=4 are not necessarily higher than those for $i$=3, and so on). Rather, the runtime is primarily influenced by the size of the graph (in terms of the number of evidences) given to the final answer prediction
Table 6: Varying the no.
of iterations, pruning factors, and evidences the answer is predicted from during inference (dev set). Method \u2193 P@1 MRR Hit@5 Ans. pres. and no. of evidences after pruning iteration HA runtime Explaignn (\ud835\udc56=1: 500\u2192\ud835\udc4e\ud835\udc5d\ud835\udc5f\ud835\udc52\ud835\udc51) 0.442 0.502 0.581 \u2212 \u2212 \u2212 1,017 ms Explaignn (\ud835\udc56=2: 500\u2192100\u2192\ud835\udc4e\ud835\udc5d\ud835\udc5f\ud835\udc52\ud835\udc51) 0.441 0.504 0.587 \u2212 \u2212 0.687 | 100 744 ms Explaignn (\ud835\udc56=2: 500\u219250\u2192\ud835\udc4e\ud835\udc5d\ud835\udc5f\ud835\udc52\ud835\udc51) 0.440 0.504 0.588 \u2212 \u2212 0.675 | 50 591 ms Explaignn (\ud835\udc56=2: 500\u219220\u2192\ud835\udc4e\ud835\udc5d\ud835\udc5f\ud835\udc52\ud835\udc51) 0.438 0.504 0.591 \u2212 \u2212 0.655 | 20 515 ms Explaignn (\ud835\udc56=2: 500\u21925\u2192\ud835\udc4e\ud835\udc5d\ud835\udc5f\ud835\udc52\ud835\udc51) 0.422 0.480 0.560 \u2212 \u2212 0.589 | 5 459 ms Explaignn (\ud835\udc56=3: 500\u2192200\u2192100\u2192\ud835\udc4e\ud835\udc5d\ud835\udc5f\ud835\udc52\ud835\udc51) 0.441 0.504 0.585 \u2212 0.694 | 200 0.685 | 100 995 ms Explaignn (\ud835\udc56=3: 500\u2192150\u219250\u2192\ud835\udc4e\ud835\udc5d\ud835\udc5f\ud835\udc52\ud835\udc51) 0.441 0.505 0.586 \u2212 0.693 | 150 0.678 | 50 741 ms Explaignn (\ud835\udc56=3: 500\u2192100\u219220\u2192\ud835\udc4e\ud835\udc5d\ud835\udc5f\ud835\udc52\ud835\udc51; proposed) 0.442 0.505 0.589 \u2212 0.687 | 100 0.654 | 20 601 ms Explaignn (\ud835\udc56=3: 500\u219250\u21925\u2192\ud835\udc4e\ud835\udc5d\ud835\udc5f\ud835\udc52\ud835\udc51) 0.419 0.475 0.556 \u2212 0.675 | 50 0.579 | 5 511 ms Explaignn (\ud835\udc56=4: 500\u2192300\u2192150\u2192100\u2192\ud835\udc4e\ud835\udc5d\ud835\udc5f\ud835\udc52\ud835\udc51) 0.441 0.504 0.587 0.696 | 300 0.691 | 150 0.686 | 100 1,232 ms Explaignn (\ud835\udc56=4: 500\u2192200\u2192100\u219250\u2192\ud835\udc4e\ud835\udc5d\ud835\udc5f\ud835\udc52\ud835\udc51) 0.440 0.504 0.585 0.694 | 200 0.685 | 100 0.677 | 50 945 ms Explaignn (\ud835\udc56=4: 500\u2192200\u219250\u219220\u2192\ud835\udc4e\ud835\udc5d\ud835\udc5f\ud835\udc52\ud835\udc51) 0.436 0.500 0.584 0.694 | 200 0.677 | 50 0.652 | 20 769 ms Explaignn (\ud835\udc56=4: 500\u2192100\u219220\u21925\u2192\ud835\udc4e\ud835\udc5d\ud835\udc5f\ud835\udc52\ud835\udc51) 0.422 0.476 0.553 0.687 | 100 0.654 | 20 0.575 | 5 577 ms step (compare runtimes within each iteration group). Recall that this graph size for answer prediction can impact explainability to end users, if it is not small enough. Notably, performance remains reasonably stable in most cases. A key takeaway from these results is that the trained GNN models generalize well to graphs of different sizes. Concretely, while all of these models are trained on graphs established from 500 evidences, they can be applied to score nodes in graphs of variable sizes. Explaignn can be applied zero-shot. For testing the generalizability of Explaignn, we applied the pipeline trained on the ConvMix dataset out-of-the-box, without any training or finetuning, on the ConvQuestions [8] dataset. ConvQuestions is a competitive benchmark for ConvQA methods operating over KBs. We test the same Explaignn pipeline in two different modes: (i) using only facts from the KB, and (ii) using evidences from all information sources. Table 7 shows the results (\u2020 and \u2021 indicate statistical significance over the leaderboard toppers Krr [24] and Praline [22], respectively). 
Table 7: Out-of-the-box Explaignn, without further training or fine-tuning, on the ConvQuestions [8] benchmark.
Method                           | P@1    | MRR    | Hit@5
Convex [8]                       | 0.184  | 0.200  | 0.219
Focal Entity Model [26]          | 0.248  | 0.248  | 0.248
Oat [31]                         | 0.166  | 0.175  | -
Oat [31] (w/ gold seed entities) | 0.250  | 0.260  | -
Conqer [23]                      | 0.240  | 0.279  | 0.329
Praline [22]                     | 0.294  | 0.373  | 0.464
Krr [24] (w/ gold seed entities) | 0.397  | 0.397  | 0.397
Explaignn (KB-only)              | 0.330‡ | 0.399‡ | 0.480†‡
Explaignn                        | 0.363‡ | 0.447†‡ | 0.546†‡

In the KB-only setting, Explaignn obtains state-of-the-art performance, reaching the highest MRR score. Also, we found that integrating heterogeneous sources can improve the answer performance substantially for ConvQuestions, even though this benchmark was created with a KB in mind.

Iterative GNNs improve robustness. In Sec. 4, we argued that iterative GNNs enhance the pipeline's robustness over a single GNN applied on the full graph in one shot. While the performance of the one-shot GNN on the ConvMix dev set is comparable (Table 6), we found that it cannot generalize as well to a different dataset. When applied on ConvQuestions, the respective performance of the one-shot GNN is significantly lower than for Explaignn, in both KB-only (P@1: 0.330 versus 0.281) and heterogeneous (P@1: 0.363 versus 0.318) settings.

SR-attention, cross-encoding and entity types are crucial. Table 8 shows the results of our ablation study († indicates a significant performance drop). We found that each of these mechanisms helps the pipeline to improve QA performance. The most decisive factor is the SR-attention (Sec. 4.3), which ensures that only the question-relevant information is spread within the local neighborhoods: without this component, the performance drops substantially (P@1 from 0.442 to 0.062). Similarly, the cross-encodings (Sec. 4.2) initialize the nodes with question-relevant encodings. Also notable is the crucial role of entity types (Sec. 4.2), which help suppress irrelevant answer candidates with mismatched answer types.

Table 8: Ablation study of the Explaignn pipeline.
Method                           | P@1    | MRR    | Hit@5
Explaignn                        | 0.442  | 0.505  | 0.589
w/o SR-attention                 | 0.062† | 0.137† | 0.178†
w/o cross-encoder                | 0.352† | 0.431† | 0.529†
w/o entity type                  | 0.420† | 0.492† | 0.584
w/o hallucination prevention     | 0.436  | 0.502  | 0.588
w/o use of SR slots in retrieval | 0.427† | 0.493† | 0.578†

Error analysis. We identified three key sources of error: (i) the answer is not present in the initial graph (53.9% of error cases), which can be mitigated by improving the QU and ER stages of the pipeline, (ii) the answer is dropped when shrinking the graph (8.1%), and (iii) the answer is present in the final graph but not identified as the correct answer (38.0%). The graph shrinking procedure is responsible for only a few errors (2.2% in the first iteration, 5.9% in the second), demonstrating the viability of our iterative approach.

7 USER STUDY ON EXPLAINABILITY
We evaluate the explainability of our pipeline by a user study on Amazon Mechanical Turk (AMT). The use case scrutinized here is that a user has an information need, obtains the answer predicted by the system, and is unsure whether to trust the provided answer.
Thus, the key objective of the explanations in this work is to help the user decide whether to trust the answer. If the user is able to make this decision easily, the explanations serve their purpose.

Table 9: Representative examples from our user study, illustrating typical outputs of Explaignn. In both cases, Explaignn obtained the correct answer, and the users were certain about their decision. (Each example shows the conversation history, the current question, the provided answer, the system interpretation, and the supporting evidences presented to the user.)
User study design. For a predicted answer to a conversational question, the user is given the conversation history, the current question, the SR, and the explaining evidences. Examples of the input presented to the user are shown in Table 9 and Table 10 (we used five explanatory evidences). The user then has to decide whether the provided answer is correct. We randomly sample 1,200 instances on which we measure user accuracy. One of our main considerations during sampling was that one half (600) was correctly answered and the other half (600) incorrectly answered by Explaignn.

User study interface. For each instance, we then ask Turkers the following questions: (i) "Do you think that the provided answer is correct?" (User correctness), (ii) "Are you certain about your decision?" (User certainty), and (iii) "Why are you certain about the decision?" or "Why are you uncertain about the decision?". The first two questions can be answered by either "Yes" or "No". Depending on the answer to the second question, the third question asks for reasons for their (un)certainty. The user can select multiple provided options. If the user is certain, these options are good explanation, prior knowledge, and common sense. If the user is uncertain, then she must choose between bad explanation and question/conversation unclear. The idea is to remove confounding cases in which users make a decision regardless of the provided explanation, since we cannot infer much from these. Note that Turkers were not allowed to access external sources like web search.

Quality control. For quality control, we restrict participation in our user study to Master Turkers with an approval rate of ≥95%. Further, we added honeypot questions, for which the answer and provided information are clearly irrelevant with respect to the question (domain and answer type mismatch). We excluded submissions from workers who gave incorrect responses to the honeypots.
Table 10: Representative examples from our user study, illustrating cases in which Explaignn obtained an incorrect answer. In the first case, the user could certainly tell that the answer is incorrect from the provided evidences. In the second case, the provided evidences were not sufficient for deciding the correctness of the answer (user was uncertain). (The examples include questions on the TV series Mom and on the footballer Rivaldo; as in Table 9, each shows the conversation, the provided answer, the system interpretation, and the supporting evidences.)

Table 11: Confusion matrix for the user study, showing the probabilities of user correctness against user certainty.
               | User correct | User incorrect | Total
User certain   | 0.632        | 0.166          | 0.798
User uncertain | 0.129        | 0.073          | 0.202
Total          | 0.761        | 0.239          |

Explaignn provides explainable answers. Findings are presented as a confusion matrix in Table 11 (values computed after removing confounding assessments). In the 771 of the 1,200 observations that remain, we found that the user can accurately decide the correctness of the system answer (76.1%) and is certain about their assessment (79.8%) most of the time. This proves that our explanations are indeed useful to end users. If the user is certain about their assessment, then we observed that the accuracy in deciding the correctness was higher: P(User correct | User certain) = 0.792.

Anecdotal examples. Table 9 and Table 10 show example outputs of Explaignn, and the corresponding results from the user study. For the examples in Table 9, the Explaignn answer was correct. Based on the explanation (system interpretation and supporting evidences), the users could certainly tell the correctness of the answer. For the examples in Table 10, the answer provided by Explaignn was incorrect. In the first case, the user was able to identify this incorrectness with certainty from the explanation (the first and second supporting evidences mention that Allison Janney played Bonnie Plunkett, and not Christy Plunkett). The user could then try to reformulate [23] the question to obtain the correct answer. In the second case, the provided information was not sufficient, since there is no information on the club Rivaldo played for in 1999. So the user was uncertain about the answer correctness. Note that even in this scenario the end user would understand that the provided answer needs to be verified. This is in contrast to incorrect answers generated by large language models, which often look perfect on the surface, giving the user a false sense of correctness. The guidelines, code, and the results for the user study are publicly available4.
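As a quick check, the reported percentages follow directly from the joint probabilities in Table 11:

$P(\text{User correct}) = 0.632 + 0.129 = 0.761$ (76.1%),
$P(\text{User certain}) = 0.632 + 0.166 = 0.798$ (79.8%), and
$P(\text{User correct} \mid \text{User certain}) = 0.632 / 0.798 \approx 0.792$.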
4https://qa.mpi-inf.mpg.de/explaignn/explaignn_user_study.zip 10 \fExplainable Conversational Question Answering over Heterogeneous Sources via Iterative Graph Neural Networks SIGIR \u201923, July 23\u201327, 2023, Taipei, Taiwan 8 RELATED WORK Conversational question answering. There has been extensive research on ConvQA [45, 47] in recent years, which can largely be divided into methods using a KB [24, 26, 31], methods using a text corpus [38\u201340], and methods integrating tables [18, 33]. In ConvQA over KBs, the (potentially completed) question is often mapped to logical forms that are run over the KB to obtain the answer [14, 24, 31, 36, 51]. A different type of approach is to search for the answer in the local neighborhoods of an ongoing context graph, which captures the relevant entities of the conversation [8, 23, 26]. Early ConvQA systems over textual sources assumed the relevant information (i.e. text passage or document) to be given [6, 17, 45], and modeled the problem as a machine reading comprehension (MRC) task [43]. This assumption was challenged in [39], which proposed ORConvQA introducing a retrieval stage. Recent works follow similar ideas, and mostly rely on question rewriting [44, 57] or question resolution [59], and then employ a MRC model. In related work on ConvQA over tables, the answer is either derived via logical forms [18], or via pointer-networks operating on graph-encodings of the tables [33]. All of these methods rely on a single information source for answering questions, inherently limiting their answer coverage. Recently, there has been preliminary work on ConvQA using a mixture of the sources [9, 10]. The method proposed in [10] appends incoming questions to the conversational history, and then generates a program to derive the answer from a table using a sequence-tosequence model. In [9], evidences from heterogeneous sources are concatenated and fed into a sequence-to-sequence model to generate the answer. Both methods heavily rely on sequence-to-sequence models where the generated outputs are not explainable, and may even be hallucinated. QA over heterogeneous sources. In addition to work on ConvQA over heterogeneous sources, there is a long line of work on answering one-off questions using such mixtures of sources [4, 12, 37, 49, 53, 54, 60, 61]. More recently, UniK-QA [35] proposed the verbalization of evidences, and then applied FiD [19] for the answering task. UDT-QA [30] improved over UniK-QA by implementing more sophisticated mechanisms for evidence verbalization, and use T5 [42] to generate the answer. Similarly, Shen et al. [52] propose a dataset and method for answering questions on products from heterogeneous sources, leveraging BART [27]. These approaches, being sequence-to-sequence models at their core, face similar problems as mentioned before. HeteroQA [13] explore heterogeneity in the context of community QA where retrieval sources could be posts, comments, or even other questions. Explainable QA. Existing work is mostly on single-turn methods operating over a single source, with template [1, 50] and graphbased derivation sequences [21, 29, 37] as mechanisms for ensuring explainability. Works on text-QA provide end users with actual passages and reasoning paths used for answering [34, 62]. Posthoc explainability for QA over KBs and text is investigated in [55]. These methods cannot be easily adapted to a conversational setting with incomplete follow-up questions. Explainability in GNNs. 
Explainability for GNNs is an active field of research [65], devising general techniques to identify important features for the nodes in the graphs, or provide model-level explanations. Such approaches are mostly designed for developers, and therefore hardly applicable in our scenario. Through our iterative model, we propose a QA-specific method for deriving explanations of GNN predictions that can be understood by average web users. 9" + }, + { + "url": "http://arxiv.org/abs/2204.11677v2", + "title": "Conversational Question Answering on Heterogeneous Sources", + "abstract": "Conversational question answering (ConvQA) tackles sequential information\nneeds where contexts in follow-up questions are left implicit. Current ConvQA\nsystems operate over homogeneous sources of information: either a knowledge\nbase (KB), or a text corpus, or a collection of tables. This paper addresses\nthe novel issue of jointly tapping into all of these together, this way\nboosting answer coverage and confidence. We present CONVINSE, an end-to-end\npipeline for ConvQA over heterogeneous sources, operating in three stages: i)\nlearning an explicit structured representation of an incoming question and its\nconversational context, ii) harnessing this frame-like representation to\nuniformly capture relevant evidences from KB, text, and tables, and iii)\nrunning a fusion-in-decoder model to generate the answer. We construct and\nrelease the first benchmark, ConvMix, for ConvQA over heterogeneous sources,\ncomprising 3000 real-user conversations with 16000 questions, along with entity\nannotations, completed question utterances, and question paraphrases.\nExperiments demonstrate the viability and advantages of our method, compared to\nstate-of-the-art baselines.", + "authors": "Philipp Christmann, Rishiraj Saha Roy, Gerhard Weikum", + "published": "2022-04-25", + "updated": "2023-06-30", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.CL" + ], + "main_content": "INTRODUCTION Motivation. Conversational question answering (ConvQA) [6, 33, 38, 41] is is a popular mode of communication with digital personal assistants like Alexa, Cortana, Siri, or the Google Assistant, that are ubiquitous in today\u2019s devices. In ConvQA, users pose questions to the system sequentially, over multiple turns. In conversations between two humans, follow-up questions usually contain implicit Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org. SIGIR \u201922, July 11\u201315, 2022, Madrid, Spain \u00a9 2023 Association for Computing Machinery. ACM ISBN 978-x-xxxx-xxxx-x/YY/MM...$15.00 https://doi.org/10.1145/nnnnnnn.nnnnnnn Table 1: Question understanding approaches for ConvQA. Original \ud835\udc5e3 Release date of first season? Question resolution Release date of first season? in GoT Question rewriting What was the release date of the first season of GoT? Convinse SR \u27e8GoT | first season | release date | date \u27e9 context. 
The ConvQA system is expected to resolve such implicit information from the conversational history. Consider, for example, a typical ConvQA session on factual knowledge below: \ud835\udc5e0: Who played Jaime Lannister in GoT? \ud835\udc4e0: Nikolaj Coster-Waldau \ud835\udc5e1: What about the dwarf? \ud835\udc4e1: Peter Dinklage \ud835\udc5e2: When was he born? \ud835\udc4e2: 11 June 1969 \ud835\udc5e3: Release date of first season? \ud835\udc4e3: 17 April 2011 \ud835\udc5e4: Duration of an episode? \ud835\udc4e4: 50-82 minutes State-of-the-art works on ConvQA make use of single information sources: either curated knowledge bases (KB) [8, 10, 19, 21, 23, 27, 30, 42, 45], or unstructured text collections [5, 14, 32, 34, 35], or Web tables [15, 28], but only one of these. Questions \ud835\udc5e0 and \ud835\udc5e2 can be answered more conveniently using KBs like Wikidata [54], YAGO [47], or DBpedia [1], that store factual world knowledge in compact RDF triples. However, answering \ud835\udc5e1, \ud835\udc5e3 or \ud835\udc5e4 via a KB requires complex reasoning efforts. For \ud835\udc5e1, even with named entity disambiguation (NED) in conversations [18, 44], it is unlikely that the correct KB entity (Tyrion Lannister) can be inferred, which means that the resulting answer search space [7] will not have the answer. For \ud835\udc5e3, answering via a KB requires a two-step lookup involving the first season, and then the corresponding release date. For \ud835\udc5e4, a KB might have the details for each individual episode, but collecting this information and aggregating for the final answer can be quite cumbersome. In contrast, the answers to these three questions are much more easily spotted in content of text documents, or Web tables. In addition, there are obviously many information needs where answers are present only in text form, as KBs and Web tables have inherently limited coverage. An example would be a question like: What did Brienne call Jaime? (\u201cKingslayer\u201d). A smart ConvQA system should, therefore, be able to tap into more than one kind of knowledge repository, to improve answer recall and to boost answer confidence by leveraging multiple kinds of evidence across sources. Limitations of state-of-the-art. Existing research on ConvQA has considered solely one kind of information source for deriving answers. Further, when specializing on a given source, methods often adopt source-specific design choices that do not generalize 1 arXiv:2204.11677v2 [cs.IR] 30 Jun 2023 \fSIGIR \u201922, July 11\u201315, 2022, Madrid, Spain P. Christmann et al. well [8, 10, 14]. For example, representations of the conversational context, like KB subgraphs or text passages, are often specifically modeled for the knowledge repository at hand, making these heterogeneous sources apparently incompatible. Methods for question rewriting [9, 37, 52] and question resolution [22, 53] convert short user utterances into full-fledged questions where the intent is made completely explicit. However, this adds major complexity and may lose valuable cues from the conversation flow. Further, these methods face evidence retrieval problems arising from long and potentially verbose questions [11]. There has been substantial work on single-question QA over heterogeneous sources [3, 12, 29, 48\u201350, 55, 58], with complete questions as input. Among these, only O\u011fuz et al. [29] and Ma et al. [26] try to deal with KBs, text, and tables. 
Their approach is designed for simple questions, though, and cannot easily be extended to the challenging ConvQA setting [41]. Finally, none of the prior works on ConvQA produce human-interpretable structures that could assist end users in case of erroneous system responses. Approach. To overcome these limitations, we propose Convinse (CONVQA with INtermediate Representations on Heterogeneous Sources for Explainability), an end-to-end framework for conversational QA on a mixture of sources. Convinse consists of three main stages: i) question understanding (QU), ii) evidence retrieval and scoring (ERS), and iii) heterogeneous answering (HA). The first stage, QU, is our primary contribution in this work. It addresses the challenges of incomplete user utterances introduced by the conversational setting. We derive an intent-explicit structured representation (SR) that captures the complete information need. Table 1 shows such an SR for \ud835\udc5e3 of our running example. SRs are frame-like structures for a question that contain designated slots for open-vocabulary lexical representations of entities in the conversational context (marked gray) and the current question (red), relational predicates (blue), and expected answer types (cyan). SRs can be viewed as concise gists of user intents, intended to be in a form independent of any specific answering source. They are selfcontained interpretable representations of the user\u2019s information need, and are inferred using fine-tuned transformer models trained on data generated by distant supervision from plain sequences of QA pairs. We further propose a conversational flow graph (CFG), which can be inferred from the SR, and enhances the explainability of the derivation process. The second stage, ERS, exploits recent developments in entitybased retrieval [7] to judiciously retrieve question-relevant evidences (KB-facts, text-sentences, table-records, or infobox-entries) from each information source. These heterogeneous evidences are verbalized [17, 29, 31] on-the-fly and run through a scoring model. The top-\ud835\udc58pieces of evidence are passed to the answering stage. The third and final stage, HA, consists of a fusion-in-decoder (FiD) model [16], that is state-of-the-art in the retrieve-and-read paradigm for open-domain QA. FiD acts as a \u201cgenerative reader\u201d, creating a crisp answer from the top-\ud835\udc58evidences, that is returned to the end user. Benchmark. Another novel contribution is the construction of ConvMix, the first benchmark for ConvQA over heterogeneous sources. ConvMix is a crowdsourced dataset that contains questions with answers emanating from the Wikidata KB, the full text of Wikipedia articles, and the collection of Wikipedia tables and infoboxes. ConvMix contains 2800 conversations with five turns (14k utterances), and 200 conversations with ten turns (2k utterances), their gold answers and respective knowledge sources for answering. Conversations are accompanied by metadata like entity annotations, completed questions, and paraphrases. The collected dataset ConvMix, and all our code and data for Convinse can be accessed at https://convinse.mpi-inf.mpg.de. Contributions. Our salient contributions are the following: \u2022 The paper proposes Convinse, the first end-to-end method for ConvQA over heterogeneous sources. \u2022 It introduces structured representations to capture user intents in a structured and explainable manner, a key element for seamless answering over a mixture of heterogeneous sources. 
\u2022 It presents distant supervision mechanisms to automatically annotate conversations with structured representations. \u2022 It provides ConvMix, the first benchmark for ConvQA over heterogeneous sources. 2 CONCEPTS AND NOTATION Question. A natural language question \ud835\udc5eis a sequence of words expressing an interrogative intent. A question can be complete (i.e., self-contained/full-fledged/intent-explicit), or incomplete (i.e., context-dependent/partial/intent-implicit). Incomplete questions require context from previous questions and answers in the conversation to be answered correctly. Answer. An answer \ud835\udc4eto \ud835\udc5eis a crisp phrase (or a list) that satisfies the intent in \ud835\udc5e. In a heterogeneous scenario, the answer phrase \ud835\udc4e can be an entity or literal (constant) coming out of the KB or a table or infobox, or any span of short text from the document corpus. Conversation. A conversation \ud835\udc36consists of a sequence of questions (\ud835\udc5e0,\ud835\udc5e1, . . .) and corresponding answers (\ud835\udc4e0,\ud835\udc4e1, . . .) (see Sec. 1 for an example). The first question \ud835\udc5e0 in\ud835\udc36is complete, while followup questions are usually incomplete. Turn. A turn in \ud835\udc36consists of a specific \u27e8\ud835\udc5e\ud835\udc56,\ud835\udc4e\ud835\udc56\u27e9pair. For example, the second turn refers to \u27e8\ud835\udc5e1,\ud835\udc4e1\u27e9. Knowledge base. A knowledge base is a set of facts, where each fact is a (SPO) triple, optionally augmented by pairs which specify additional information for the main triple (e.g. ). Subjects are entities (Game of Thrones), while objects can be entities, types (human) or literals (constants such as numbers with or without units, dates like 11 June 1969, etc.). Predicates (cast member) denote relationships. Text collection. A text collection is a set of documents, where each document consists of a sequence of sentences. Table. A table is a structured relational construct consisting of cells organized into rows and columns, with optional row and column headers. Cell values are typically entities or literals, while headers are often predicates. Infobox. An infobox is a list of salient attribute-value pairs about an entity. A Wikipedia infobox appears on the top right corner of the entity\u2019s Wikipedia page. Infobox entries resemble KB-facts, but they are not necessarily clean in terms of entity linkage (e.g., 2 \fConversational Question Answering on Heterogeneous Sources SIGIR \u201922, July 11\u201315, 2022, Madrid, Spain Question Understanding (QU) Heterogeneous Answering (HA) Evidence Retrieval and Scoring (ERS) Text-snippets Table-records Infobox-entries KB-facts KB-entities KB-predicates Pre-trained Language Models Conversation History Structured Representation Current Question Top-e Evidences Answer Figure 1: An overview of the Convinse system. a birthplace could be given as a string with city, country or other regional variations). Evidence. An evidence is a unit of retrieval that can come from any of the heterogeneous sources above: a KB-fact, a text-sentence, a table-record (row), or an infobox-entry. Evidences form the sources of answer candidates. Answering evidence. An answering evidence is an evidence which contains at least one correct answer \ud835\udc4e. It can either mention the answer as a string, or include an answer entity (in case of KB-facts). 3 THE CONVINSE METHOD Fig. 1 shows an overview of the Convinse architecture. 
The following subsections discuss the three steps: question understanding, evidence retrieval and scoring, and heterogeneous answering. 3.1 Question understanding (QU) Follow-up questions in a conversation (\ud835\udc5e1,\ud835\udc5e2, . . .) usually contain implicit intent. A key challenge in ConvQA, therefore, is to understand the flow of the conversation, towards deriving intent-explicit structured representations (SR) of the user\u2019s information need in \ud835\udc5e\ud835\udc56. Instead of trying to generate a full-fledged question, we rather aim to capture the semantic structure, using a syntax-agnostic approach. This can be perceived as a \u201clogical form\u201d for heterogeneous QA, where no explicit grounding or canonicalization is possible. The representation is purely on the question-level, and thus agnostic to the information sources that are used during the answering process. However, it can readily be matched with different kinds of evidences, which often take the form of keyword phrases (e.g. text snippets or verbalized table records). The SR is loosely inspired by work on quantity queries [13]. Specifically, an SR is a 4-tuple holding a slot for each of: \u2022 Context entities (depicted in gray in Table 1), \u2022 Question entities (in red), \u2022 Question predicates (in blue), and, \u2022 Expected answer types (in cyan). Context and question entities. As an example, consider the gold SR for \ud835\udc5e1 of the running example: \u27e8GoT | the dwarf | who played | human \u27e9. The context entity (GoT in this case) is an entity mention from the conversational context. The question entity is the entity mention targeted in the current question (e.g. the dwarf). Here, the context entity makes the question entity explicit, indicating that the question is on the dwarf in Game of Thrones. Inferring the question entity may need to take the history into account (e.g., for \ud835\udc5e3 in Table 1). The context entity and question entity can consist of multiple such mentions. This is required for questions such as \u201cWhere did Dany and Jon first meet?\u201d, with the gold SR being \u27e8GoT | Dany and Jon Snow | first meet | location \u27e9. Question predicates. The question predicate is the counterpart to the relation or attribute of interest in a logical form. However, it is merely a surface phrase, without any normalization or mapping to a KB. This way, it is easy to match it against any kind of information source. For example, the question predicates who played or first meet can be matched with evidences from KB or text-snippets from documents or table headers, alike. Answer types. Expected answer types assist the answering model in detecting and eliminating spurious answer candidates [41, 59]. In general, multiple types could be inferred here. The question predicate first meet alone could imply the answer type to be either \u201cdate\u201d or \u201clocation\u201d. Stopwords like \u201cwhere\u201d are often disregarded by downstream QA models [8]; in contrast, the SR answer type retains this information and would infer only the correct \u201clocation\u201d. Further, this type can help in identifying the expected granularity of the answer. For the question \u201cWhen is his birthdate?\u201d, one would expect a complete date with day, month and year as the answer, but for \u201cWhen did they win their last world cup?\u201d the corresponding year would be enough and actually desired. 
\u201cDate\u201d and \u201cyear\u201d would respectively populate the fourth slot in these cases. Specific slots in the SR can be left blank. For \ud835\udc5e4, the question entity GoT is already explicit, and thus no context entity is required: \u27e8_ | GoT | duration of an episode | number \u27e9. The SR generation is implemented by fine-tuning a pre-trained sequence generation model. We tried BART [25] and T5 [36] in preliminary experiments, and found BART to perform better. BART is particularly effective when information is copied and manipulated from the input to generate the output autoregressively [25], which is exactly the setting here. The conversation history and the current question concatenated with a delimiter constitute the input, and the SR is the output. When encoding the history and the current question, the model considers cross-attention between turns, identifying relevant parts from the conversation history. SRs and explainability. One of our primary goals in Convinse was to produce intermediate representations for end users as we proceed through the QA pipeline. Concretely, understanding the flow within the conversation is an essential problem in ConvQA [8, 14]. While the SR itself is human-readable, when presented only with the SR (or some rewritten/resolved question), certain decisions of the ConvQA system might not be immediately obvious to a real user. Here, we propose an intuitive mechanism to infer and present the conversational flow to a user: given the generated SR, we identify the source turn for each word, using exact match in the history, and consider such source turns as relevant for the current question. If there is no source turn, we consider the question at hand as self-sufficient. A conversational flow graph (CFG) is established as follows: questions and answers are nodes, and an edge connects a question to its relevant history. Due to the potential dependence of a turn on multiple preceding ones, the CFG for a conversation may not strictly be a tree, but rather a directed acyclic graph (DAG). The CFG can be presented to the interested end user together with 3 \fSIGIR \u201922, July 11\u201315, 2022, Madrid, Spain P. Christmann et al. Who played Jaime Lannister in GoT? Nikolaj Coster-Waldau Peter Dinklage 17 April 2011 What about the dwarf? When was he born? Release date of first season? 50-82 minutes Duration of an episode? 11 June 1969 0 Conversation Flow Graph (CFG) at \ud835\udc5e4 Structured Representation (SR) for \ud835\udc5e4 \u27e8_ | GoT | duration of an episode | number \u27e9 1 2 3 4 Figure 2: Conversation flow graph for our running example. Figure 3: An overview of the ERS stage. the SR, as depicted for \ud835\udc5e4 in Fig. 2, either for gaining confidence in final answers, or for scrutinizing error cases. 3.2 Evidence retrieval and scoring (ERS) Evidence retrieval. At this stage, the goal is to retrieve relevant evidences given the generated SR. We convert retrieved evidences on-the-fly to verbalized NL forms [29]. This is done for harnessing the state-of-the-art fusion-in-decoder (FiD) model downstream in our pipeline\u2019s final step (Sec. 3.3): FiD generates answers, given the question and a number of NL sentences as input. Table 2 shows example evidences from different sources for the context entity Game of Thrones. 
For evidences from the KB, we make use of Clocq [7] (available publicly at https://clocq.mpi-inf.mpg.de/), which is a recent method for retrieving relevant KB-facts and providing top-\ud835\udc58disambiguations (mappings to KB-items) for relevant cue words (entity, predicate, and type mentions), given an explicit user question. Since questions are treated as keyword queries, the SR can directly be fed into Clocq (removing the separator \u2018|\u2019 during retrieval). KB-facts are verbalized [17, 29, 31], separating the individual constituents of a fact by a comma. These mappings between KB-item mentions in verbalized facts and KB-item IDs are retained in-memory to help us later during evaluation (Sec. 5.3). For text-corpus evidences, we take an entity-centric view of Wikipedia: for each of the disambiguated entities from Clocq, we retrieve the corresponding Wikipedia page. For example, for the SR \u27e8GoT | Jaime Lannister | who played | human \u27e9, we would consider the Wikipedia pages for Jaime Lannister, Game of Thrones (TV Series), Game of Thrones (A Song of Ice and Fire), and so on (Clocq allows for multiple disambiguations per question token). The text within the page is split into single sentences. We extract tables and infoboxes from the retrieved Wikipedia pages. Every table row is transformed to text individually, concatenating cell values with the respective column headers with an \u201cis\u201d in between, and separating such \u27e8cell, header\u27e9pairs with a comma [29]. For each infobox attribute (similar to predicates in a KB), we concatenate the corresponding lines, having again, a comma as separator. The page title is prepended to all evidences from Wikipedia as additional context. In addition, we exploit anchor links in evidences to other Wikipedia pages to map the corresponding entity mentions (anchor texts) to KB-items. Wikipedia links are transformed to Wikidata IDs using a dictionary. Similarly, dates and years in evidences are identified and normalized to the standard KB format using simple patterns. We keep such \u27e8entity mention, KB-item\u27e9pairs for all evidences in memory, as this helps us later in grounding answers to the KB during evaluation (Sec. 5.3). For example, for the text evidence in Table 2, \u201cTyrion\u201d might link to the Wikipedia page of Tyrion Lannister: so we add the pair \u27e8\u201cTyrion\u201d, Tyrion Lannister\u27e9as metadata to the corresponding evidence. Evidence scoring. The set of evidences compiled by the previous stage can be quite large (several thousands), which can affect the efficiency and effectiveness of the answering phase. Therefore, we first reduce this set, keeping only the most relevant information. Since all evidences are verbalized at this stage, each individual evidence can be treated as a document, with the SR as the query. We then use the standard IR model BM25 [39] for retrieving the top-\ud835\udc52relevant pieces of information. 3.3 Heterogeneous answering (HA) Given the top-\ud835\udc52relevant evidences per question, we make use of a state-of-the-art fusion-in-decoder [16] (FiD) model. Different from the typical span prediction in the retrieve-and-read paradigm [2], FiD generates the answer, following a sequence-to-sequence approach. FiD first pairs every evidence with the question, which is the SR in our case, and encodes these pairs, leveraging crossattention to identify relevant information. 
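As a concrete illustration of this pairing step, the snippet below assembles one answering instance from the SR and the top-e verbalized evidences, in the JSON layout used by the publicly released FiD code; treat the exact field names and the toy evidences as assumptions rather than a specification.

```python
# Hedged sketch of FiD input preparation: one instance = SR + verbalized evidences.
import json

def build_fid_instance(sr, verbalized_evidences, gold_answers=None):
    return {
        "question": sr,                      # the SR plays the role of the question
        "answers": gold_answers or [],       # only needed for training instances
        "ctxs": [
            {"title": "", "text": evidence}  # one context per verbalized evidence
            for evidence in verbalized_evidences
        ],
    }

evidences = [
    "Game of Thrones, cast member, Peter Dinklage, character role, Tyrion Lannister",
    "Game of Thrones, The third and youngest Lannister sibling is the dwarf Tyrion (Peter Dinklage).",
]
instance = build_fid_instance("GoT | the dwarf | who played | human",
                              evidences, gold_answers=["Peter Dinklage"])
print(json.dumps(instance, indent=2))
```

Each such (SR, evidence) pair is what the FiD encoder processes before decoding.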
The concatenation of these encodings is then fed into a decoder, which generates the answer autoregressively. 3.4 Distantly supervised labeling Intuition. To train the QU phase of Convinse, we need a collection of \u27e8history, question\u27e9-pairs, with the corresponding gold SRs. Annotating conversations with such information is tricky with crowdworkers, and expensive too (c.f. Sec. 4). Even the annotation of conversations with completed questions is harder and much more expensive than collecting plain sequences of question-answer pairs. Therefore, we devise a mechanism to automatically generate the gold SRs from pure conversations. Our technique is based on the following intuition: if a piece of information (e.g. an entity or relation phrase) in previous turns is essential for the understanding of the current incomplete question and this information has been left implicit by the user, then it should be included in a completed version of the question. Consider this example: 4 \fConversational Question Answering on Heterogeneous Sources SIGIR \u201922, July 11\u201315, 2022, Madrid, Spain \ud835\udc5e0: Who played Jaime Lannister in GoT? \ud835\udc4e0: Nikolaj Coster-Waldau \ud835\udc5e1: What about the dwarf? It is unlikely that proper evidences (on Game of Thrones, given \ud835\udc5e0), can be found for the incomplete question \ud835\udc5e1. However, once \u201cGoT\u201d is added to \ud835\udc5e1 (e.g., \u201cWhat about the dwarf in GoT?\u201d), answering evidences (defined in Sec. 2) can be retrieved. This suggests that the phrase \u201cGoT\u201d should feature in the SR for \ud835\udc5e1. Implementation. Based on this idea, we create data for training the QU model as follows: starting with the complete question \ud835\udc5e0, we retrieve all evidences (Sec. 3.2). Since our retrieval is entity-centric, we can identify entity mentions which bring in answering evidences: for each evidence, the retriever returns the text span from the input question that the evidence was retrieved for. Such entity mentions are considered relevant for the respective conversation turn. For the incomplete follow-up questions, we iteratively add such relevant entity mentions from previous turns, and then retrieve evidences for the current question at hand. When adding the entity mention results in answering evidences being retrieved, we consider the entity as relevant for the current turn. Similarly, entity mentions in the current turn are identified as relevant, if answering evidences are retrieved for them. The gold SR is then constructed heuristically: i) if there are relevant entities from the current turn, then these feature in the question entity slot, and relevant entities (if any) from previous turns become context entities in the SR, and, ii) if there are only relevant entities from previous turns, then these become question entities. The remaining words in the current question (except for stopwords) fill up the question predicate slot. The expected answer type is directly looked up from the KB, using the gold answer. Since the KB may have several types for the same entity, we take the most frequent one to avoid very specific types. For example, Tyrion Lannister has types fictional human (more frequent) and GoT character (less frequent) in Wikidata: so we only consider fictional human in our SR. 3.5 Training the Convinse framework We train the QU model first, using data generated in the manner discussed in Sec. 3.4. 
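For concreteness, the distant-supervision procedure of Sec. 3.4 can be sketched as follows. The helpers retrieve_evidences (entity-centric retrieval, returning evidences grouped by the question span they were retrieved for) and contains_answer are hypothetical stand-ins, and the stopword list is purely illustrative.

```python
# Hedged sketch of the distant-supervision heuristic for gold SR construction.
STOPWORDS = {"the", "a", "an", "of", "in", "is", "was", "what", "who", "when", "where", "did"}

def relevant_mentions(question, gold_answers, retrieve_evidences, contains_answer):
    """Entity mentions in `question` that bring in answering evidences."""
    relevant = []
    for mention, evidences in retrieve_evidences(question).items():
        if any(contains_answer(ev, gold_answers) for ev in evidences):
            relevant.append(mention)
    return relevant

def build_gold_sr(current_question, history_mentions, current_mentions, answer_type):
    """Heuristically fill the four SR slots from relevant entity mentions."""
    if current_mentions:
        context, question_entities = history_mentions, current_mentions
    else:
        context, question_entities = [], history_mentions
    covered = set(" ".join(context + question_entities).lower().split())
    predicate = " ".join(word for word in current_question.lower().split()
                         if word not in STOPWORDS and word not in covered)
    slots = [" ".join(context) or "_", " ".join(question_entities) or "_",
             predicate or "_", answer_type or "_"]
    return " | ".join(slots)
```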
After training, we directly apply the QU model on the whole data (including training data), to generate SRs. In the remainder of the approach, we utilize only the SRs generated by our model. When training the FiD model, we skip instances for which the top-\ud835\udc52evidences do not have the answer, since the model would not be able to find the answer within the evidences, and could hallucinate at inference time [40]. This way, the model is taught to predict the answer from the input. Since we would like to treat evidences from different sources in a unified way, only one model is trained on the top-\ud835\udc52evidences after retrieval from all sources. To demonstrate the robustness of Convinse, this same model is subsequently used for all combinations of input sources, including inputs from single sources. 4 THE CONVMIX BENCHMARK Limitations of existing ConvQA benchmarks. Notable efforts at ConvQA benchmarking like QuAC [6] (text), CoQA [38] (text), Table 2: Verbalized evidences from different input sources. KB Game of Thrones, cast member, Nikolaj CosterWaldau, character role, Jaime Lannister Text Game of Thrones, The third and youngest Lannister sibling is the dwarf Tyrion (Peter Dinklage) (...). Table Game of Thrones, Season is Season 1, (...), First aired is April 17, 2011 (...). Infobox Game of Thrones, Running time, 50\u201382 minutes Table 3: Basic statistics for the ConvMix benchmark. Title Generate 5 conversations for question answering Description Choose entities of your choice from five domains, generate questions about them, and find answers from Wikidata and Wikipedia Participants 32 unique Master Turkers Time allotted (HIT) 4 hours maximum Time taken (HIT) 1.5 hours on average Payment per HIT 15 USD Domains Books, Movies, Music, TV series, Soccer Conversations 3000 Questions 16000 Question length 8.78 words (initial), 5.19 (follow-ups), 5.87 (all) Answer size 1.02 entities/strings on average Entities covered 5418 (long-tail: 2511, with <50 KB-facts) Heterogeneity 2626 conversations (>1 source used by Turker) SQA [15] (tables), ConvQuestions [8] (KB), and CSQA [42] (KB) assume a single answering source. Rather than the easier option of artificially augmenting any of these with heterogeneous inputs, we believe that it is much more natural and worthwhile to build a new resource from scratch by users browsing through a mixture of sources, as they would do in a typical information seeking session on the Web. Initiating conversations. How a user initiates a conversation is a key conceptual challenge that needs to be overcome in creating good ConvQA benchmarks. One could, for example, provide users with passages or documents, and ask them to create a sequence of questions from there [6, 38]. Alternatively, one could also provide annotators with some conversation from a benchmark so far, and request their continuation in some fashion [21]. Large-scale synthetic benchmarks would try to automate this as far as possible using rules and templates [42]. In keeping with our philosophy of natural conversations, we asked users to start with an entity of their choice (instead of spoonfeeding them with one, which could be counterproductive if the user has no interest or knowledge about the provided entity). Real conversations between humans, or several search sessions, often start when users have queries about such seed or topical entities. 
With the first question initiating the conversation, we collected four follow-up questions (total of five turns) that build upon the ongoing conversational context. Quality control. The study was conducted on the popular crowdsourcing platform Amazon Mechanical Turk (AMT), where we allowed only Master Workers to participate, for quality assurance. We also blocked single and sets of annotators who demonstrated evidence of excessive repetition or collusion in their annotations. 5 \fSIGIR \u201922, July 11\u201315, 2022, Madrid, Spain P. Christmann et al. Since the task is non-trivial for an average Turker (requires understanding of factoid questions, and familiarity of knowledge sources like Wikidata and Wikipedia, along with entities and literal answers), we also included quite a few internal checks and warnings that could prompt users for unintentional mistakes before task submission. Guidelines are given as text, but we also provided a video illustrating the conversation generation process by some examples. Workers with notably diverse and interesting conversations were awarded with a 5 USD bonus. Interestingly, several Turkers providing high-quality conversations seemingly found the task engaging (we provided a free-text feedback box), and submitted more than 20 HITs. The authors conducted semi-automatic post-processing, validation and cleaning of the benchmark. Several issues were also resolved by meticulous manual inspection. For example, we ran Clocq [7] on the initial questions, and manually inspected cases for which no answering evidences were found, to identify and rectify cases in which the initial questions themselves were unanswerable. Such cases are specifically problematic, because the whole conversation might become unanswerable. Ensuring heterogeneity. Last but not the least, ensuring answer coverage over heterogeneous sources was a key concern. Here, we again kept it natural, and encouraged users not to forcibly stick to any particular source during their conversation. Interestingly, out of 3000 conversations, only 374 used exactly one source. A majority (1280) touched three sources, 572 touched four, while 774 used two inputs. Finally, note that this is only the source that the annotator used during her search process: it is quite possible that the answer can be located in other information sources (see the field [\u00b7] below answers in Table 4), thereby enabling future benchmark users to exploit answer redundancy. Collecting longer conversations. We initially collected 2800 conversations with five turns (referred to as ConvMix-5T). However, there can also be cases in which users wish to dive deep into a specific topic, or other curiosities arise as the conversation continues. In such situations, conversations can easily go beyond five turns, making the understanding of the conversational flow even more challenging for the ConvQA system. Therefore, we collected 200 additional conversations with ten turns (denoted ConvMix-10T), to test the generalizability of ConvQA systems over longer conversations. On manual investigation, we found that there are naturally more topic drifts within these conversations. These 2k (200 \u00d7 10) questions are only used as an additional test set to serve as a robustness check for pre-trained models. Thus, our complete benchmark ConvMix (3000 conversations in total) is made up of subsets ConvMix-5T (2800 conversations, 5 turns each) and ConvMix-10T (200 conversations, 10 turns each). Collected fields. 
We collected the following annotations from crowdworkers: i) conversational questions, ii) intent-explicit versions of follow-up questions, iii) gold answers as plain texts and Wikidata URLs, iv) question entities, v) question paraphrases, and vi) sources used for answer retrieval. We believe that this additional metadata will make our resource useful beyond QA (in question rewriting and paraphrasing, for example). Most questions had exactly one correct answer, with the maximum being six. Table 3 summarizes notable properties of our study and benchmark, while Table 4 reports interesting representative examples. Note that HIT specific entries, like the payment per HIT, are given for the collection of conversations with five turns. The respective numbers were doubled for the collection of conversations with ten turns. 5 EXPERIMENTAL SETUP We conduct all experiments on the ConvMix benchmark. We split the part of ConvMix with five turns, ConvMix-5T, into train, development and test sets with the ratio 60:20:20. ConvMix-10T is used only as a separate test set. 5.1 Heterogeneous sources Convinse and all baselines run on the same data collections. As our knowledge base, we take the 31 January 2022 complete NTriples dump1 of Wikidata, one of the largest and best curated KBs today. It consists of about 17B triples, consuming about 2 TB disk space. We access the KB via the recently proposed Clocq [7] interface, that reduces the memory overhead and efficiently returns KB-facts for queried entities. The text collection is chosen to be the English Wikipedia (April 2022). The benchmark-relevant subset of Wikipedia is comprised of the pages of entities detected via Clocq. Documents are split into sentences using spaCy2. All tables and infoboxes originating from the retrieved Wikipedia pages together constitute the respective answering sources. We parse Wikipedia tables using WikiTables3, and concatenate entries in the obtained JSON-dictionary for verbalization. This procedure also includes conversions for tables with nested structure. Infoboxes are detected using Beautiful Soup4. 5.2 Baselines There are no prior works for ConvQA over heterogeneous sources. Thus, to compare the proposed Convinse pipeline with alternative choices, we adapt state-of-the-art question understanding (in this case, rewriting and resolution) methods from the IR and NLP literature. These serve as competitors for our SR generation phase. We then provide these baselines with exactly the same ERS and HA phases that Convinse has, to complete end-to-end QA pipelines. Prepending history turns. Adding turns from the history to the beginning of the current question is still considered a simple yet tough-to-beat baseline in almost all ConvQA tasks [8, 21, 34, 52], and so we investigate the same here as well. Specifically, we consider four variants: i) add only the initial turn \u27e8\ud835\udc5e0,\ud835\udc4e0\u27e9, as it often establishes the topic of the conversation (Prepend init); ii) add only the previous turn \u27e8\ud835\udc5e\ud835\udc56\u22121,\ud835\udc4e\ud835\udc56\u22121\u27e9, as it sets immediate context for the current information need (Prepend prev); iii) add both initial and previous turns (Prepend init+prev); and iv) add all turns {\u27e8\ud835\udc5e\ud835\udc61,\ud835\udc4e\ud835\udc61\u27e9}\ud835\udc56\u22121 \ud835\udc61=0 (Prepend all). Question rewriting. We choose a very recent T5-based rewriting model [37]. The method is trained on the Canard question rewriting benchmark [9]. 
The model is fine-tuned on ConvMix, using the \u27e8full history, current question\u27e9-pairs as input, and the respective completed questions (available in the benchmark) as the gold label. 1https://dumps.wikimedia.org/wikidatawiki/entities/ 2https://spacy.io/api/sentencizer 3https://github.com/bcicen/wikitables 4https://beautiful-soup-4.readthedocs.io/en/latest/ 6 \fConversational Question Answering on Heterogeneous Sources SIGIR \u201922, July 11\u201315, 2022, Madrid, Spain Table 4: Representative conversations in ConvMix. The types of sources which can be used for answering are given in brackets. Turn Books Movies Music TV series Soccer \ud835\udc5e0 Who wrote SlaughterhouseFive? Who played Ron in the Harry Potter movies? What was the last album recorded by the Beatles? Who is the actor of Rick Grimes in The Walking Dead? Which national team does Kylian Mbapp\u00e9 play soccer for? \ud835\udc4e0 Kurt Vonnegut Rupert Grint Let It Be Andrew Lincoln France football team [KB, Text, Info] [KB, Text] [KB, Text, Table] [KB, Text, Table] [KB, Text, Info, Table] \ud835\udc5e1 Which war is discussed in the book? Who played Dumbledore? Where was their last paying concert held? What about Daryl Dixon? How many goals did he score for his home country in 2018? \ud835\udc4e1 World War II R. Harris, M. Gambon Candlestick Park Norman Reedus 9 [KB, Text] [Text, Table] [Text] [KB, Text, Table] [Table] \ud835\udc5e2 What year was it\u2019s first film adaptation released? What\u2019s the run time for all the movies combined? What year did they break up? did he also play in Saturday night live? place of his birth? \ud835\udc4e2 1972 1179 minutes 1970 Yes Paris [KB, Text, Table, Info] [KB, Info] [KB, Text, Info] [Text] [KB, Text, Info] \ud835\udc5e3 Who directed it? Who was the production designer for the films? Who was their manager? whom did he play? award he got in 2017?\" \ud835\udc4e3 George Roy Hill Stuart Craig Brian Epstein Daryl Dixon Golden Boy [KB, Text, Table, Info] [KB, Text, Table] [KB, Text] [Text] [KB, Table] \ud835\udc5e4 What was the final film that he made? Which movie did he win an award for working on in 1980? What was their nickname? production company of the series? Who is the award conferred by? \ud835\udc4e4 Funny Farm The Elephant Man Fab Four NBC Studios Tuttosport [KB, Text, Table] [Text] [KB, Text] [KB, Text, Info] [KB, Text, Info] Question resolution. We use QuReTeC [53] as a question resolution baseline, treating context disambiguation as a term classification problem. A BERT-encoder is augmented with a term classification head, and predicts for each history term whether the word should be added to the current question. The same distant supervision strategy (Sec. 3.4) as used by Convinse is employed for generating annotated data for QuReTeC (trained on ConvMix). 5.3 Metrics Measuring retrieval effectiveness. To evaluate retrieval quality, we use answer presence as our metric. It is a binary measure of whether one of the gold answers is present in the top-\ud835\udc52evidences (\ud835\udc52= 100 in all experiments). Measuring answering effectiveness. We use one of the standard metrics for factoid QA [41], precision@1 (P@1), since FiD generates a unique answer. FiD generates plain strings as answers: evaluation for such strings with exact match for computing P@1 can often be problematic [46], since the correct answer could be expressed in different ways (e.g. 
{\u201cEddard Stark\u201d, \u201cNed Stark\u201d}, or {\u201c11 June 1969\u201d, \u201c11-06-1969\u201d, \u201cJune 11, 1969\u201d}). Therefore, we try to normalize the answer to the KB, whenever possible, to allow for a fair comparison across systems. We search through \u27e8entity mention, KB-item\u27e9pairs coming from the evidence retrieval phase (Sec. 3.2). If there is a perfect match between the entity mention and the predicted answer string, we return the corresponding KBitem as the answer. If there is no such perfect match, we compute the Levenshtein distance [24] between the predicted answer and entity mentions from \u27e8entity mention, KB-item\u27e9pairs. The KB-item for the entity mention with the smallest edit distance is used as the answer in such cases. Note that such KB-items may also be normalized strings, dates, years, or numbers. These normalized KB-items are compared to the gold answers in the benchmark for the computation of P@1. 5.4 Configurations Convinse uses a fine-tuned BART-base model for generating structured representations. The default hyperparameters from the Hugging Face library were used5. The maximum sequence length was set to 20, and early stopping was enabled. Three epochs were used during training, with a batch size of ten. 500 warmup steps with a weight decay of 0.01 turned out to be the most effective. Clocq, that was used for evidence retrieval inside Convinse and all baselines, has two parameters: \ud835\udc58(number of disambiguations to consider for each question word), and \ud835\udc5d(a pruning threshold). In this paper, we set \ud835\udc58= Auto (Clocq dynamically sets the number of disambiguations), and \ud835\udc5d= 1000, as these performed the best on our dev set. We used a Python implementation of BM25, with default parameters6. Code for FiD is publicly available7. FiD was trained on ConvMix for its use in this work. The number of input passages (\ud835\udc52in this paper) was retained at 100 as in the original work [16]. The maximum length of an answer was set to 10 words. A learning rate of 5\u00d710\u22125 really proved effective, with a weight decay of 0.01 and AdamW as the optimizer. All systems were trained on the ConvMix train set, and all hyperparameters were tuned on the dev set. All code was scripted using Python, making use of the popular PyTorch library8. 5https://huggingface.co/facebook/bart-base 6https://pypi.org/project/rank-bm25/ 7https://github.com/facebookresearch/FiD 8https://pytorch.org 7 \fSIGIR \u201922, July 11\u201315, 2022, Madrid, Spain P. Christmann et al. Whenever a neural model was used, code was run on a GPU (single GPU, NVIDIA Quadro RTX 8000, 48 GB GDDR6). 6 RESULTS AND INSIGHTS We run Convinse and the baselines on the test set of ConvMix (combined test sets of -5T and -10T), and report results in Tables 5 and 6. All metrics are micro-averaged over each conversational question that the systems handle, i.e. we measure the performance at a question-level. Throughout this section, best performing variants in columns are marked in bold. An asterisk (*) denotes statistical significance of Convinse over the nearest baseline. The McNemar\u2019s test was performed for binary variables like P@1, and the paired \ud835\udc61-test otherwise, with \ud835\udc5d< 0.05. All results are reported on the test set, except the ablation study, that was naturally conducted on the dev set. If not stated otherwise, we make use of gold answers for the previous turns in the conversation. 
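As a side note on the evaluation protocol of Sec. 5.3, the normalization of FiD's answer strings to KB items can be sketched as follows; the mention_to_item dictionary stands for the ⟨entity mention, KB-item⟩ pairs kept in memory during evidence retrieval, and all function names are ours.

```python
# Hedged sketch of answer normalization for P@1: exact match first,
# then smallest edit distance over the retained mention->KB-item pairs.
def edit_distance(a, b):
    """Plain dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = curr
    return prev[-1]

def normalize_answer(predicted, mention_to_item):
    """Map a FiD answer string to a KB item."""
    predicted = predicted.strip().lower()
    for mention, item in mention_to_item.items():
        if mention.lower() == predicted:
            return item
    return min(mention_to_item.items(),
               key=lambda pair: edit_distance(pair[0].lower(), predicted))[1]

def precision_at_1(predicted, gold_items, mention_to_item):
    return float(normalize_answer(predicted, mention_to_item) in gold_items)
```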
For example, for answering \ud835\udc5e4 we assume gold answers \ud835\udc4e0-\ud835\udc4e3 to be known. 6.1 Key findings Convinse is viable for heterogeneous QA. The first and foremost takeaway is that our proposed pipeline is a viable approach for handling incomplete questions in conversations, given heterogeneous input sources. Convinse and most of the baselines consistently reach 40-50% on answer presence in their evidence pools after QU and ERS, with Convinse leading with 54% (see Table 5). These numbers set the upper bounds for end-to-end QA after the answering phase, which are in the ballpark of 25-34% (see Table 6). It is noteworthy that even the basic prepending baselines, despite generating fairly verbose question formulations (15 \u221226 words, Table 9) have very good answer presence in the top-100 evidences. This is largely due to the Clocq entity-based retrieval module, which turned to be quite robust even for long queries. Subsequently, BM25 scoring serves as a necessary filter, to prune the evidence sets. Sizes of these evidence sets varied from 2.3k evidences (Convinse) to 7.5k (Prepend all), which would have posed efficiency challenges to the final answering stage had the BM25 filter not been applied. Convinse outperforms baselines. We observe that the SRs in Convinse are significantly more effective than question rewriting/resolution and the prepend baselines. SRs provide the right balance between conciseness and coverage. Conciseness (SRs are just 6-7 words on average, Table 9) helps effective retrieval with the IR model at the ERS stage, while expressive representation of the conversational context is crucial, too. The expressivity in the SRs helps achieve the highest answer presence after QU and ERS (0.542 in Table 5), outperforming all baselines by substantial margins. This advantage is carried forward to the answering stage and results in the best end-to-end QA performance (0.342 in Table 6). The simple prepend baselines sometimes come surprisingly close, though, and perform better than the sophisticated rewriting or resolution methods. Nevertheless, it is noteworthy that SRs and question rewriting/resolution have clear advantages in terms of interpretability. When a user wonders what went wrong upon receiving an incorrect (if not outright embarrassing) answer, the system could show the CFG, SRs or the inferred complete questions as informative explanations \u2013 helping the user to better understand and cope with the system\u2019s limitations and strengths. Altogether, the absolute P@1 numbers still leave huge room for improvement. This shows that ConvMix is indeed a challenging benchmark, with difficult user inputs where inferring the intent is hard, for example: Which war is discussed in the book? or What was the final film he made? (see Table 4 for more examples). Combining heterogeneous sources helps. Another across-theboard observation is that combining knowledge sources is a strong asset for ConvQA. Consider the values in the \u201cAll\u201d columns of Tables 5 and 6. These numbers are systematically and substantially higher than those in the columns for individual source types and even for pair-wise combinations. To inspect whether these gains are not only from enhanced coverage, but also leverage redundancy of information across source types, we measured the average P@1 for all cases where the questions had 1, 2, 3, or 4 evidences (among the top-\ud835\udc52) containing the gold answer in the output of the ERS stage. 
The P@1 improves steadily as the answer can be found several times, i.e. as information becomes redundant, being 0.428 for one, 0.658 for two, 0.713 for three, and 0.763 across instances with four answering evidences. Convinse excels in realistic setup with predicted answers. The experiments so far are conducted assuming the gold answers for the previous turns in the conversation to be given. However, in a realistic setup, the ConvQA system would not know these answers. Therefore, we conducted an additional experiment, in which we used the predicted answers by the system for the previous turns, when generating the outputs of the QU phase. The results of this experiment are shown in Table 7. Convinse outperforms all methods significantly on conversations of length both five and ten, even though it has never seen such 10-length conversations during training. Another observation is that the performance for the \u201cPrepend all\u201d baseline drops for ConvMix-10T, which might be due to the much longer QU outputs generated as the conversation continues longer. In general, the performance of all methods drops a bit (about 0.057 P@1 on average, c.f. Tables 6 and 7) when changing from gold answers to predicted answers. 6.2 In-depth analysis Convinse is stable over turns. One striking finding when drilling down into the results, is that the performance of Convinse stays fairly stable as the conversation continues \u2013 see Table 8. In contrast, as one would naturally expect, the baselines exhibit systematic degradation from turn to turn, as it becomes harder to capture implicit cues about the user intents with progressing conversation (this was the main reason for collecting ConvMix-10T). This is most pronounced for the \u201cPrepend all\u201d model. Note, that different FiD models were trained for each method individually, which results in different performances even for the first turn. For most methods, there is a significant performance drop for the last three turns, indicating that further investigation of generalization to longer conversations might be worthwhile. SRs are compact. SRs are indeed succinct representations of user intents, as indicated by Table 9 on question lengths. Also, for interpretability, SRs are an easy-to-inspect gist that are comprehensible to a human user and at the same time being amenable for use with 8 \fConversational Question Answering on Heterogeneous Sources SIGIR \u201922, July 11\u201315, 2022, Madrid, Spain Table 5: Comparison of answer presence within top-100 retrieved evidences after QU + ER on the ConvMix test set. QU + ERS Method KB Text Table Info KB+Text KB+Table KB+Info Text+Table Text+Info Table+Info All Prepend init + BM25 0.380 0.298 0.120 0.331 0.415 0.386 0.406 0.297 0.329 0.331 0.419 Prepend prev + BM25 0.342 0.284 0.095 0.295 0.382 0.347 0.372 0.284 0.317 0.306 0.392 Prepend init+prev + BM25 0.440 0.366 0.137 0.420 0.486 0.443 0.479 0.359 0.407 0.409 0.495 Prepend all + BM25 0.431 0.367 0.148 0.430 0.476 0.437 0.468 0.361 0.411 0.419 0.482 Q. Resolution [53] + BM25 + FiD 0.414 0.311 0.115 0.329 0.445 0.419 0.437 0.312 0.356 0.341 0.453 Q. Rewriting [37] + BM25 + FiD 0.434 0.315 0.114 0.347 0.460 0.435 0.461 0.319 0.362 0.336 0.465 Convinse (Proposed) 0.475* 0.352 0.117 0.369 0.528* 0.486* 0.507* 0.353 0.408 0.381 0.542* Table 6: Comparison of end-to-end (QU + ERS + HA) answering performance (P@1) on the ConvMix test set. 
QU + ERS + HA Method KB Text Table Info KB+Text KB+Table KB+Info Text+Table Text+Info Table+Info All Prepend init + BM25 + FiD 0.211 0.174 0.065 0.200 0.246 0.211 0.240 0.174 0.203 0.195 0.254 Prepend prev + BM25 + FiD 0.179 0.190 0.052 0.212 0.238 0.184 0.233 0.185 0.224 0.211 0.257 Prepend init+prev + BM25 + FiD 0.234 0.233 0.074 0.276 0.290 0.238 0.292 0.229 0.274 0.272 0.312 Prepend all + BM25 + FiD 0.230 0.234 0.074 0.282 0.290 0.238 0.282 0.224 0.267 0.265 0.300 Q. Resolution [53] + BM25 + FiD 0.222 0.190 0.063 0.219 0.261 0.227 0.257 0.185 0.241 0.221 0.282 Q. Rewriting [37] + BM25 + FiD 0.216 0.183 0.062 0.219 0.252 0.221 0.261 0.187 0.227 0.223 0.271 Convinse (Proposed) 0.251* 0.220 0.062 0.258 0.317* 0.257* 0.310* 0.220 0.276 0.253 0.342* Table 7: P@1 of ConvQA systems when using the predicted answers for the previous turns in the ongoing conversation. Method ConvMix-5T ConvMix-10T ConvMix (+ BM25 + FiD) (2800 questions) (2000 questions) (4800 questions) Prepend init 0.276 0.178 0.235 Prepend prev 0.190 0.123 0.162 Prepend init+prev 0.277 0.195 0.243 Prepend all 0.284 0.168 0.236 Q. Resolution [53] 0.283 0.188 0.243 Q. Rewriting [37] 0.258 0.168 0.221 Convinse 0.321* 0.217* 0.278* Table 8: P@1 of ConvQA systems over turns. ConvMix-5T ConvMix-10T Method (+ BM25 + FiD) 1 2\u20135 1 2\u20134 5\u20137 8\u201310 Prepend init 0.388 0.271 0.320 0.235 0.185 0.128 Prepend prev 0.371 0.243 0.325 0.243 0.232 0.218 Prepend init+prev 0.370 0.315 0.335 0.310 0.308 0.245 Prepend all 0.375 0.323 0.305 0.278 0.270 0.195 Q. Resolution [53] 0.391 0.300 0.315 0.232 0.213 0.220 Q. Rewriting [37] 0.366 0.280 0.295 0.220 0.245 0.215 Convinse 0.395 0.368* 0.320 0.298 0.338* 0.252 Table 9: Average QU output length in words on test sets. QU Method ConvMix-5T ConvMix-10T ConvMix Original 5.73 5.34 5.57 Prepend init 14.39 14.84 14.58 Prepend prev 12.23 12.17 12.21 Prepend init+prev 18.73 20.61 19.52 Prepend all 23.03 40.70 30.39 Q. Resolution [53] 8.24 8.54 8.36 Q. Rewriting [37] 9.07 9.04 9.06 Convinse 6.43* 6.53* 6.48* most standard IR models. Notably, for the more sophisticated models, the output length of the QU phase is almost stable on the two test sets with five turns and ten turns. Convinse is stable over domains. Zooming into the results over the five thematic domains in ConvMix, we find that performance is relatively stable (working best for the movies domain) \u2013 see Table 10 columns. The same holds when contrasting this with distributions over source types (Table 10 rows). Infoboxes consistently provide knowledge easily harnessed, while tables turn out to be the trickiest to handle, with proper verbalizations being a likely issue [29, 51]. Slots in SR are vital. A systematic ablation study shows that each of the SR slots plays an important role \u2013 see Table 11. We blanked out the contents of the respective slots during retrieval, and proceeded with these weaker SRs to the answering phase. Question entities clearly are the most pivotal; answer types do not help much at retrieval time, but justify their importance during answering (a shared insight w.r.t. expected answer types in many QA models [41]). We also examined the effect of our proposed ordering of the SR slots (e.g. predicates first). As expected, there is hardly an effect during ERS (both Clocq and BM25 are word-order-agnostic), but ordering proves beneficial when generating answers from evidences using sequence-aware models like FiD. Error analysis. 
Convinse cannot answer the question correctly 65.8% of the time, arising from three error cases: i) the evidence retriever cannot retrieve any answering evidence (42.4%), which can be due to the QU phase missing important cues in the conversation, failures within the evidence retriever, or the information sources not containing necessary information for answering, ii) the evidence retriever retrieves at least one answering evidence but none of these is among the top-\ud835\udc52after ERS (27.2%), calling for more informed evidence-scoring mechanisms; or iii) FiD fails to detect the correct answer (30.4%), which indicates that more sophisticated reasoning over the set of evidences might be required. Anecdotal results. For a better understanding of how our SRs look like when contrasted with rewritten and resolved questions, we provide a few representative examples in Table 12. 7 RELATED WORK Conversational question answering. Some methods for ConvQA over text use a hard history selection to obtain a question-relevant 9 \fSIGIR \u201922, July 11\u201315, 2022, Madrid, Spain P. Christmann et al. Table 10: Domainand source-wise results (Convinse P@1). Source Books Movies Music TV Series Soccer KB 0.255 0.273 0.219 0.245 0.264 Text 0.226 0.229 0.244 0.234 0.165 Table 0.021 0.106 0.052 0.058 0.073 Info 0.282 0.272 0.226 0.265 0.243 All 0.329 0.357 0.353 0.338 0.333 Table 11: Ablation study of the Convinse SR. Method Answer presence (ERS) P@1 (HA) Convinse 0.559 0.371 w/o Context entity slot 0.546 0.362 w/o Question entity slot 0.078 0.054 w/o Predicate slot 0.421 0.176 w/o Answer type slot 0.572 0.350 w/o Ordering 0.560 0.361 Table 12: Cases where only Convinse answered correctly. Domain Books Original country of origin? Q. Res. country of origin? expanse leviathan Q. Rew. What country was it taken from? SR \u27e8Leviathan Wakes | Expanse | country origin | sovereign state \u27e9 Domain Movies Original What actor portrayed Magneto? Q. Res. What actor portrayed Magneto? x-men movie Q. Rew. Which actor played the character Magneto in the X-Men movie? SR \u27e8X-Men | Magneto | actor portrayed | human \u27e9 Domain Music Original Her date of birth? Q. Res. Her date of birth? shakira Q. Rew. When was Shakira born? SR \u27e8Waka Waka (This Time for Africa) | Shakira | date birth | date \u27e9 Domain TV Series Original What TV series did he first appear on? Q. Res. What TV series did he first appear on? tv show appeared the shows george Q. Rew. What TV series did George Grizzard first appear on? SR \u27e8_ | George Grizzard | tv series first appear | television series \u27e9 Domain Soccer Original Who won? Q. Res. Who won? shakira Q. Rew. Who won the FIFA FIFA World Cup in France? SR \u27e8_ | 2010 FIFA World Cup | won | national association football team \u27e9 subset of the conversation history [20, 32, 34]. However, the more prevalent approach is to construct a question-aware encoding with attention on the conversational history (soft history selection) [5, 14, 35]. This is then used by a neural reader for predicting the answer in the given passage. While this works well in ConvQA over text, the whole pipeline does not easily generalize to the heterogeneous setting, since this would require carefully designed mechanisms for representing inputs from different sources in a shared latent space. It may seem that hard history selection can be generalized to the heterogeneous setting more easily, but we found that such an approach hurts end-to-end answering performance in preliminary experiments. 
We employ attention over conversation history in the QU phase, which can be seen as a soft history selection. In ConvQA over KBs, the conversational flow is typically captured in a context graph, either explicity [8] using iterative expansion, or implicitly [10, 21, 27] by maintaining a set of context entities and predicates. Conversational KB history can be incorporated using encodings [10, 27, 45] or heuristic subsets of the history [21], similar to ConvQA over text. All existing methods consider only one information source for answering conversations, which limits their answer coverage. Moreover, such works often adopt sourcespecific modeling of the conversational flow, like KB subgraphs, and cannot easily be extended to capture the conversational flow in a heterogeneous setting. Question completion. One line of work that suggests a more general approach to the ConvQA problem aims to create a selfcontained question from the incomplete utterance that can be answered by standalone QA systems [9, 37, 52, 53, 57]. Such approaches can either take the form of question rewriting, which generates a complete question from scratch [37, 52], or question resolution, which adds relevant terms from the conversational history to the question [53]. Question rewriting can entail unnecessary complexity since recent QA systems may not require the question to have perfect syntax or grammar [16, 17]. Question resolution [53] operates on the surface level, without aiming for grammatically correct questions, creating a completed question similar to a keyword query. However, a potential downside is that there is no structure in the completed form. We show in experiments (Sec. 6.1) that both, completed forms via question rewriting, and question resolution, perform worse than our generated structured representations, which can be perceived as logical forms for matching with heterogeneous sources. Heterogeneous answering. There has been work on combining text and KBs [31, 43, 48, 49, 55, 56] for conventional QA with single self-contained questions. O\u011fuz et al. [29] and Ma et al. [26] integrate tables as an additional type of information source. There has also been work on combining text and tables [3, 4, 58], or text, tables and images [12, 50] for answering full-fledged questions. However, these approaches cannot easily be extended to a conversational setting with incomplete follow-up questions. 8" + }, + { + "url": "http://arxiv.org/abs/2108.08597v9", + "title": "Beyond NED: Fast and Effective Search Space Reduction for Complex Question Answering over Knowledge Bases", + "abstract": "Answering complex questions over knowledge bases (KB-QA) faces huge input\ndata with billions of facts, involving millions of entities and thousands of\npredicates. For efficiency, QA systems first reduce the answer search space by\nidentifying a set of facts that is likely to contain all answers and relevant\ncues. The most common technique for doing this is to apply named entity\ndisambiguation (NED) systems to the question, and retrieve KB facts for the\ndisambiguated entities. This work presents CLOCQ, an efficient method that\nprunes irrelevant parts of the search space using KB-aware signals. CLOCQ uses\na top-k query processor over score-ordered lists of KB items that combine\nsignals about lexical matching, relevance to the question, coherence among\ncandidate items, and connectivity in the KB graph. 
Experiments with two recent\nQA benchmarks for complex questions demonstrate the superiority of CLOCQ over\nstate-of-the-art baselines with respect to answer presence, size of the search\nspace, and runtimes.", + "authors": "Philipp Christmann, Rishiraj Saha Roy, Gerhard Weikum", + "published": "2021-08-19", + "updated": "2022-04-04", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.CL" + ], + "main_content": "INTRODUCTION Motivation. Large knowledge bases (KBs) like Wikidata [55], DBpedia [4], YAGO [48], and Freebase [10] are ideal sources for answering factual questions that have crisp entity lists as answers [1, 6, 8, 27, 42, 58, 61]. Such KBs are comprised of facts, structured as triples, often augmented with qualifier predicates and objects for context [20, 24, 30, 37]. Answering complex questions with multiple entities and predicates is one of the most actively researched topics in QA over knowledge bases (KB-QA) today [9, 23, 40, 44, 53], and this is the setting for this paper. Systems answering such complex questions either build explicit structured queries [9, 13, 33, 44] or perform approximate graph search [32, 49, 53] to arrive at the answer. To this end, systems learn models for mapping question words to KB items, where the huge size of the KB poses a stiff challenge. Concretely, whole KBs are often more than 2 Terabytes in size: this makes the development of QA systems over them a rather daunting task. Most KB-QA systems thus prune the search space for candidate answers using Named Entity Disambiguation (NED). Limitations of the state-of-the-art. NED methods [15, 19, 25, 31, 47, 54] map mentions in questions (single words or short phrases) to KB entities, and the QA system subsequently uses only the facts containing these entities as its search space for locating the answer(s). However, general-purpose NED tools have major limitations in this context: i) they are not tailored for downstream use by KB-QA systems; ii) they usually disambiguate only named entities and disregard words and phrases that denote general concepts, types or predicates; and iii) they typically output merely the top-1 entity per mention, missing out on further candidates that can serve as relevant cues. Even methods designed for short input texts, like Tagme [19] and Elq [31], have such limitations. Approach. To address these concerns, we propose Clocq (Contracting answer spaces with scored Lists and top-k Operators for Complex QA, pronounced “Clock”), a time- and space-efficient method that operates over all KB items to produce top-k candidates for entities, types, concepts and predicates. Consider the complex question on the FIFA World Cup 2018: Who scored in the 2018 final between France and Croatia?
Most systems for complex KB-QA tackle the answering process in two phases [44]. First, they disambiguate question tokens to entities in the KB. These entities establish a reduced search space for the QA system, that can either be an explicit set of facts containing these KB entities [32, 49, 50, 53], or involve implicit grounding to a small zone in the KB via structured queries containing these entities [9, 29, 58, 61]. Second, depending upon the approach in the first phase, KB-QA systems either search for the answer in the retrieved facts, or build a complex query in SPARQL-like syntax that would return the answer when executed. Clocq tries to improve the effectiveness and the efficiency of the first phase above. Therefore, the output of Clocq is a small set of disambiguated KB items and facts containing these items, and this is fed into the second phase. Answer presence in the KB subspace inherently sets an upper bound to the performance of the downstream KB-QA system, making fast and effective search space reduction a vital step in the QA pipeline. Method. Clocq first builds inverted lists of KB items per question word with term matching scores based on TF-IDF. Top-ranked items from these lists, up to a certain depth, are then scored and 1 arXiv:2108.08597v9 [cs.IR] 4 Apr 2022 \fWSDM\u201922, February 2022, Phoenix, Arizona P. Christmann et al. Table 1: Notation for salient concepts in Clocq. Notation Concept \ud835\udc3e Knowledge base \ud835\udc65 KB item \u27e8\ud835\udc60, \ud835\udc5d,\ud835\udc5c,\ud835\udc5e\ud835\udc5d1,\ud835\udc5e\ud835\udc5c1, . . .\ud835\udc5e\ud835\udc5c\ud835\udc5f\u27e9 Fact in \ud835\udc3e \ud835\udc41\ud835\udc39(\ud835\udc65) 1-hop neighborhood of \ud835\udc65(set of facts) \ud835\udc41\ud835\udc3c(\ud835\udc65) 1-hop neighbors of \ud835\udc65(set of KB items) \ud835\udc5e= \u27e8\ud835\udc5e1 . . .\ud835\udc5e\ud835\udc5a\u27e9 Question, keywords/phrases in question \ud835\udc60 Scoring signal \ud835\udc59\ud835\udc56\ud835\udc60 Score-ordered KB-item list for \ud835\udc5e\ud835\udc56and \ud835\udc60 S(\ud835\udc5e) Search space of facts for question \ud835\udc5e ranked by a combination of global signals, like semantic coherence between items and connectivity in the KB graph, and local signals like relatedness to the question and term-matching score. These scoring signals are computed at question time: this is made feasible with Clocq\u2019s novel KB representation and storage model, that substantially speeds up lookups with respect to existing solutions. The threshold algorithm (TA) [17] is applied for extracting the top\ud835\udc58candidates for each question term separately. Since it may not always be obvious how to choose \ud835\udc58for every term, we also have an entropy-based mechanism for making this choice automatically. The union of the per-term top-\ud835\udc58items forms a pool of relevant KB items, and their KB facts is the output of Clocq that would be passed on to the answering phase of a KB-QA system. Experiments with two recent KB-QA benchmarks and a suite of NED-based competitors [15, 19, 25, 31, 54] show the benefits of Clocq: it obtains the highest answer presence in the retained subset of the KB, with tractable search space size and sub-second runtimes. We show a proof-of-concept of Clocq\u2019s impact on KB-QA by feeding the output of Clocq into the popular QA system GRAFT-Net [50], and obtain significant boosts in answering performance. Contributions. 
We make the following salient contributions: \u2022 identifying answer search space reduction as a critical task in KB-QA pipelines; \u2022 proposing the Clocq method for computing answer-containing KB subsets with scored lists and the threshold algorithm; \u2022 conducting extensive experiments that show the superiority of Clocq over a number of baselines using NED; \u2022 devising a novel KB indexing scheme that is shown to notably improve runtimes for all methods, including baselines; \u2022 releasing the complete Clocq code1 along with a Web API that any QA system developer can use for quickly exploring algorithms over much smaller KB subsets. 2 CONCEPTS AND NOTATION We now introduce concepts necessary for understanding Clocq. Knowledge base. A knowledge base \ud835\udc3eis a compilation of facts. Fact. A fact is a triple, that is optionally augmented by pairs which specify context information for the main triple. For example, <2018 FIFA World Cup Final, participating team, France national football team; location, Luzhniki Stadium; point in time, 15 July 2018> is such a fact, where the first three items 1https://clocq.mpi-inf.mpg.de 2018 FIFA World Cup Final Moscow location Paul Pogba goal scored by for team France football team 1-hop Graph-based definition of KB distance Fact-based definition of KB distance (Proposed) KB-distance(France football team, 2018 FIFA World Cup Final) = 3 KB-distance(France football team, Moscow ) = 5 2-hop 3-hop 4-hop 5-hop In 1 hop of \u201cFrance football team\u201d In 2 hops of \u201cFrance football team\u201d 2018 FIFA World Cup Final Moscow location Paul Pogba for team France football team KB-distance(France football team, 2018 FIFA World Cup Final) = 1 KB-distance(France football team, Moscow ) = 2 goal scored by Figure 1: Fact-based definition of KB neighborhoods. constitute the main triple, and the last four make up two qualifier predicate-qualifier object tuples. Subjects of facts are entities, while objects are entities, types or literals. Predicates denote relationships between the other categories. KB item. We refer to entities (2018 FIFA World Cup Final), predicates (participating team), types (footballer) and literals (15 July 2018) as KB items \ud835\udc65. 1-hop neighborhood. This is defined as \ud835\udc41\ud835\udc39(\ud835\udc65) of a KB item \ud835\udc65and is given by all facts in which \ud835\udc65occurs. The set of KB items \ud835\udc41\ud835\udc3c(\ud835\udc65) in the 1-hop neighborhood of \ud835\udc65is termed as its 1-hop neighbors. Question. A question \ud835\udc5eis specified by a sequence of keywords \u27e8\ud835\udc5e1,\ud835\udc5e2, . . .\ud835\udc5e\ud835\udc5a\u27e9, where stopwords are not considered. For our running example Who scored in the 2018 final between France and Croatia?, we would have\ud835\udc5e= . Without loss of generality, \ud835\udc5e\ud835\udc56may also be a phrase (\u201c2018 final\u201d). Answer. An answer {\ud835\udc4e} to \ud835\udc5eis a small set of KB entities or literals that satisfy the intent in \ud835\udc5e({Paul Pogba, Ivan Perisic, . . . }). Score-ordered list. These are lists {\ud835\udc59} that hold KB items {\ud835\udc65}, sorted in descending order of some relevance score. Depending upon the situation, we can have one list \ud835\udc59\ud835\udc56per question term \ud835\udc5e\ud835\udc56, or one list \ud835\udc59\ud835\udc56\ud835\udc60per score \ud835\udc60per \ud835\udc5e\ud835\udc56. Search space. 
A search space S(q) for a question q is a set of facts S(q) ⊆ K, that is expected to contain each {a}. For example, {⟨2018 FIFA World Cup Final, goal scored by, Paul Pogba; for team, France football team⟩, ⟨2018 FIFA World Cup Final, goal scored by, Ivan Perisic; for team, Croatia football team⟩, . . .} comprise a search space for the running example question, where the answers are shown in bold. 3 KB REPRESENTATION AND STORAGE One of the recurrent requirements in QA and specifically in answer search space reduction is to retrieve the facts of a given KB item (like entities returned by an NED system). Existing KBs are stored as collections of RDF triples. One can query these triple stores in SPARQL-like languages: however, the functionality of fact-retrieval is not built-in, and getting all facts of a single item may often entail issuing a substantial volume of queries (explained later). The consequence is that the total time taken for this step can often be too high, and this is detrimental to any system that relies on these retrieval results. As a result, we devise our own KB representation and storage, that are detailed in this section. [Figure 2: Illustrating the workflow of Clocq for our running example. Input of Clocq: Question + All KB facts; Output of Clocq: Disambiguated question-relevant KB items + Only KB facts with these items. Panels: term match lists per question term, score-ordered lists for the match, rel, conn and coh signals, and the resulting top-k KB items.] Concerns with a triple-based KB view. In standard triple stores, facts containing qualifiers are stored in a reified form. Qualifiers are conceptually modeled as pairs that are appended to the main triple. However, this is not amenable to store in a uniform triple store. Reification is a technical trick that stitches the main triple with its qualifiers using fact-specific identifiers, also referred to as dummy nodes, and at the same time achieves a "triplified" format for all nuggets of information. For example, the single fact <2018 FIFA World Cup Final, participating team, France football team; location, Luzhniki Stadium; point in time, 15 July 2018> in reified form would be represented as a set of four triples: <2018 FIFA World Cup Final, participating team, fact-id>, <fact-id, participating team, France football team>, <fact-id, location, Luzhniki Stadium>, <fact-id, point in time, 15 July 2018>. Joining reified triples into their original facts is more amenable to downstream use. However, such an aggregation requires the execution of thousands of structured queries over the KB (equivalently, matching a large number of triple patterns). For example, querying for the triples of France football team with this item in the object position will only match the second reified triple above; the whole fact needs to be reconstituted using sequential lookups. Moreover, this needs to be done for every reified fact that the KB item is a part of, which are often several thousands, and additional lookups are also necessary to get facts with the item as a subject. A fact-based view as a solution. This motivates us to adopt a fact-based view of the KB, that we instantiate as follows. We start with a standard RDF triple dump. We aggregate all reified triples by their fact-id upfront, remove the respective dummy nodes, and postprocess them to the form shown in Table 1 (third row).
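A minimal sketch of this aggregation step is given below. It assumes that dummy nodes can be recognized syntactically and that the pair repeating the main predicate carries the main object; both are illustrative assumptions about the dump layout rather than a description of the exact Wikidata format.

```python
# Hedged sketch: turning reified (s, p, o) triples into flat facts (s, p, o, qp1, qo1, ...).
from collections import defaultdict

def is_fact_id(item):
    """Hypothetical test for reification dummy nodes."""
    return isinstance(item, str) and item.startswith("fact-")

def aggregate_reified_triples(reified_triples):
    main, attached = {}, defaultdict(list)
    for s, p, o in reified_triples:
        if is_fact_id(o):                 # <entity, predicate, fact-id>
            main[o] = (s, p)
        elif is_fact_id(s):               # <fact-id, predicate, value>
            attached[s].append((p, o))
    facts = []
    for fact_id, (subj, pred) in main.items():
        pairs = attached[fact_id]
        obj = next((o for p, o in pairs if p == pred), None)          # main object
        qualifiers = [x for p, o in pairs if p != pred for x in (p, o)]
        facts.append(tuple([subj, pred, obj] + qualifiers))
    return facts
```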
Two different indexes are then established: one stores the 1-hop neighborhood of every item (x ↦ NF(x)), and the other stores the set of 1-hop neighbors of each KB item (x ↦ NI(x)). Instead of using the alpha-numeric strings that are typical of most raw dumps, KB items are integer-encoded [18, 51]. To reduce the memory footprint, both indexes use appropriate pointers inside their representations. The final set of facts obtained this way is referred to as our KB K. With a fact-based indexing, at runtime, the 1-hop neighborhood of an item can simply be looked up, eliminating the need for expensive querying or joining. Further, the index of 1-hop neighbors allows for very fast computation of KB distances: two KB items x_1, x_2 are within 1-hop distance if x_1 ∈ NI(x_2), and within 2-hop distance if NI(x_1) ∩ NI(x_2) ≠ ∅ (via set-overlap tests). This proves decisive for connectivity checks later on (Sec. 4.2).

Additional benefits of a fact-based view. When a postprocessed fact is directly modeled as a graph (Figure 1 top), traditional distance conventions in graphs would imply that even KB items that are part of the same fact may be at a distance as high as three (France football team and 2018 FIFA World Cup Final). Distances to KB items in connected facts may be even higher, like five (France football team and Moscow). 1-hop and 2-hop neighborhoods are vital intuitions of close proximity in KB-QA, and such arbitrary distance conventions are far from ideal. In a fact-centric view, France football team and 2018 FIFA World Cup Final are now at a distance of 1, while France football team and Moscow are 2 hops apart (Figure 1 bottom): this is more practical in terms of several KB-related applications. Our approach lifts qualifiers to first-class citizens, thereby enhancing the expressiveness of the QA method within limited neighborhoods. The concept of a KB neighborhood in the literature is primarily entity-centric. An ideal representation should enable definitions that apply uniformly to entities, predicates, types and literals. Predicates are often modeled as edge labels, which precludes a seamless notion of neighborhood. A fact-based neighborhood can easily be envisioned for all types of KB items.

4 THE CLOCQ METHOD

We now explain the complete Clocq workflow (illustrated in Fig. 2).

4.1 Retrieving candidate KB items per term

Creating term match lists. Consider our running example question: Who scored in the 2018 final between France and Croatia? As our goal is to disambiguate keywords or phrases in the question ("scored", "2018 final", "France", "Croatia") to items in a KB, we first collect candidates from the KB using a standard lexical matching score (like TF-IDF or BM25) for each question term q_1 . . . q_m (m = 4 in our example; stopwords are dropped). Here q_i is analogous to a search query, while each item x in the KB resembles a document in a corpus. This "document" is created by concatenating the item label with textual aliases and descriptions available in most KBs [10, 55].
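As a concrete illustration of the fact-based storage from Sec. 3, here is a simplified sketch of the two indexes x ↦ NF(x) and x ↦ NI(x) and of the set-overlap distance test; the real implementation integer-encodes items and uses compact pointer-based structures, which this toy version omits.

from collections import defaultdict

class FactBasedKB:
    """Toy fact-centric index: NF(x) = facts containing x, NI(x) = 1-hop neighbors of x."""
    def __init__(self, facts):
        # each fact is a tuple of KB items (subject, predicate, object, qualifiers...)
        self.nf = defaultdict(list)   # item -> facts it occurs in
        self.ni = defaultdict(set)    # item -> items co-occurring with it in some fact
        for fact in facts:
            items = set(fact)
            for x in items:
                self.nf[x].append(fact)
                self.ni[x].update(items - {x})

    def neighborhood(self, x):
        """NF(x): all facts of x, retrieved by a single lookup (no querying or joining)."""
        return self.nf[x]

    def kb_distance(self, x1, x2):
        """1 if the items share a fact, 2 if their neighbor sets overlap, inf otherwise."""
        if x1 == x2:
            return 0
        if x1 in self.ni[x2]:
            return 1
        if self.ni[x1] & self.ni[x2]:
            return 2
        return float("inf")

kb = FactBasedKB([
    ("2018 FIFA World Cup Final", "participating team", "France football team",
     "location", "Luzhniki Stadium"),
    ("Luzhniki Stadium", "located in", "Moscow"),
])
print(kb.kb_distance("France football team", "2018 FIFA World Cup Final"))  # 1
print(kb.kb_distance("France football team", "Moscow"))                     # 2

This mirrors the fact-centric distance convention later reused by the connectivity signal in Sec. 4.2.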
This lexical matching results in m ranked lists ⟨l_1 = {x_11, x_12, ...}; l_2 = {x_21, x_22, ...}; ... l_m = {x_m1, x_m2, ...}⟩ of KB items x_ij, one list l_i for each q_i, scored by the degree of match between question tokens and KB items. A ranked lexical match list (ideal disambiguation in bold) for "scored" could look like: l_1 = ⟨1: score (music), 2: no. of goals scored, 3: goal scored by, 4: film score, ...⟩, while that for "Croatia" could be: l_4 = ⟨1: Croatia (state), 2: 589 Croatia (asteroid), ..., 15: Croatia football team, ..., 19: Croatia basketball team, ...⟩. Note that the best matching KB item x*_i for q_i can sometimes be very deep in the individual list l_i (Croatia football team is at rank 15 in l_4). Next, each list l_i is traversed up to a depth d to fetch the top-d items (computational cost O(m·d)), which are the per-term question-relevant KB candidates for the next phase of Clocq. The goal is to find combinations of KB items ⟨x_i⟩_{i=1}^{m} that best match the question, since these items have a high likelihood of having the answer within their facts ∪_{i=1}^{m} NF(x_i). For instance, an ideal combination for us would be: {goal scored by, 2018 FIFA WC final, France football team, Croatia football team}. These combinations come from the Cartesian product of items in the m lists, and would have d^m possibilities if each combination were explicitly enumerated and scored. This is cost-prohibitive: since we are only interested in some top-k combinations, as opposed to a full or even extended partial ordering, a more efficient way of doing this is to apply top-k algorithms [3, 28, 34]. These prevent complete scans and return the top-k best combinations efficiently.

4.2 Computing relevance signals for each item

To go beyond shallow lexical matching, our proposal is to construct multiple lists per question token, each reflecting a different relevance signal, and to apply top-k algorithms on these lists to obtain the disambiguation of each question token individually. Unlike prior works on NED that are restricted to individual named entities [19, 25, 31], Clocq includes mentions of types, predicates and general concepts in the input question and maps them to KB items. A candidate KB item combination that fits well with the intent of the question is expected to have high semantic coherence and high graph connectivity (these can be viewed as proximity in latent and symbolic spaces) within its constituents, as well as to match the question well at the global and term levels. These motivate our four indicators of relevance for each item x_ij in list l_i below (the cost of this scoring is O(m^2·d^2); while this looks expensive, it is still fast with a parallelized implementation).
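Before these four signals can be computed, the per-term candidate lists l_i of Sec. 4.1 have to be in place. The snippet below is a toy sketch of that retrieval step using the rank_bm25 package over a handful of hand-written item "documents"; the item texts here are illustrative assumptions, and the actual system builds these lists with Elasticsearch over the full KB vocabulary (cf. Sec. 5).

from rank_bm25 import BM25Okapi

# Toy KB item "documents": label concatenated with aliases and description.
kb_item_docs = {
    "goal scored by": "goal scored by goalscorer scorer of a goal",
    "no. of goals scored": "number of goals scored scoring statistic",
    "score (music)": "score music notated form of a musical composition",
    "Croatia football team": "Croatia national football team soccer",
    "Croatia (state)": "Croatia country state in Southeast Europe",
}
labels = list(kb_item_docs)
bm25 = BM25Okapi([doc.lower().split() for doc in kb_item_docs.values()])

def term_match_list(q_i, d=3):
    """Ranked list l_i: the top-d KB items for one question term, by lexical match."""
    scores = bm25.get_scores(q_i.lower().split())
    ranked = sorted(zip(labels, scores), key=lambda pair: pair[1], reverse=True)
    return ranked[:d]

# One list per (stopword-free) question term; in Clocq, the depth is d = 20.
for q_i in ["scored", "Croatia"]:
    print(q_i, "->", term_match_list(q_i))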
Coherence. Clocq targets a joint disambiguation of question-relevant KB items. It thus considers semantic coherence and graph connectivity, which are inherently defined for KB item pairs rather than single items. Therefore, we need a technique to convert these signals into item-level scores. The first signal, semantic coherence, is transformed into an item-level score using the max operator. More precisely, the coherence score of an item x_ij is defined in Eq. 1 as the maximum item-item similarity (averaged over pairs of lists) this item can contribute to the combination, where pairwise similarity is obtained as the cosine between the embedding vectors of two KB items (min-max normalized from [−1, +1] to [0, 1]):

coh(x_{ij}) = \frac{1}{m-1} \sum_{k \neq i} \max_{l} \, cosine(\vec{x}_{ij}, \vec{x}_{kl})   (1)

Connectivity. This is the second context-level signal in Clocq, and captures a very different form of proximity. We assign items within 1 hop of each other a distance of 1 (recall the KB-distance computations from Sec. 3), those within 2 hops a distance of 2, and ∞ otherwise (most KB items are within 3 hops of each other, and thus distance > 2 hops ceases to be a discriminating factor). We define connectivity scores as the inverse of this KB distance, thereby obtaining 1, 0.5, and 0, respectively, for 1-, 2-, and > 2-hop neighbors. Connectivity as a context-level signal is converted to an item-level score analogously, using max aggregation over pairs. We thus define the connectivity (∈ [0, 1]) of x_ij in Eq. 2:

conn(x_{ij}) = \frac{1}{m-1} \sum_{k \neq i} \max_{l} \, conn(x_{ij}, x_{kl})   (2)

Question relatedness. We estimate the semantic relatedness of the KB item x_ij to the overall input question q by averaging pairwise cosine similarities (same min-max normalization as for coherence) between the embeddings of the item and each term q_k in Eq. 3. To avoid confounding this estimate with the question term for which x_ij was retrieved, we exclude this term from the average and define semantic relatedness as:

rel(x_{ij}) = avg_{q_k \neq q_i} \, cosine(\vec{x}_{ij}, \vec{q}_{k})   (3)

Term match. This score is intended to take into account the original degree of lexical term match (via TF-IDF, BM25, or similar) for which x_ij was admitted into l_i. However, such TF-IDF-like weights are often unbounded and may have a disproportionate influence when aggregated with the other signals, which all lie in [0, 1]. Thus, we simply take the reciprocal rank of x_ij in l_i as the match score (Eq. 4), to have it in the same [0, 1] interval:

match(x_{ij}) = 1 / rank(x_{ij}, l_i)   (4)
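The four scores of Eqs. 1-4 can be sketched as follows; the functions assume precomputed embedding vectors for KB items and question terms (e.g. Wikipedia2Vec, as used in Sec. 5) and a KB-distance function as in Sec. 3, and they are an illustrative reformulation rather than the authors' code.

import numpy as np

def cos01(u, v):
    """Cosine similarity, min-max normalized from [-1, 1] to [0, 1]."""
    c = float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))
    return (c + 1.0) / 2.0

def coherence(x, lists, i, emb):
    """Eq. 1: mean over the other lists of the best item-item similarity."""
    return float(np.mean([max(cos01(emb[x], emb[y]) for y in lst)
                          for k, lst in enumerate(lists) if k != i]))

def connectivity(x, lists, i, kb_distance):
    """Eq. 2: same aggregation, with 1 / KB-distance (1, 0.5 or 0) as the pair score."""
    def conn_pair(a, b):
        d = kb_distance(a, b)
        if d > 2:            # includes infinity: > 2 hops is not discriminating
            return 0.0
        return 1.0 if d <= 1 else 0.5
    return float(np.mean([max(conn_pair(x, y) for y in lst)
                          for k, lst in enumerate(lists) if k != i]))

def relatedness(x, terms, i, emb):
    """Eq. 3: mean similarity to the other question terms (own term excluded)."""
    return float(np.mean([cos01(emb[x], emb[t]) for k, t in enumerate(terms) if k != i]))

def term_match(rank):
    """Eq. 4: reciprocal rank of the item in its lexical match list (1-based)."""
    return 1.0 / rank

Here, lists holds the top-d candidates per question term, i is the index of the term for which x was retrieved, and the weighted sum of these four values yields the aggregate used in Sec. 4.3.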
4.3 Finding top-k across sorted lists

We now sort each of these 4·m lists in descending score order. Note that for each q_i, all lists l_is hold the same items (those in the original l_i). Fig. 2 shows the lists l_is in the center. Top-k algorithms operating over such multiple score-ordered lists, where each list holds the same set of items, require a monotonic aggregation function over the item scores in each list [3, 7, 11, 17]. Here, we use a linear combination of the four relevance scores as this aggregate:

aggScore(x_{ij}) = h_{coh} \cdot coh(x_{ij}) + h_{conn} \cdot conn(x_{ij}) + h_{rel} \cdot rel(x_{ij}) + h_{match} \cdot match(x_{ij})

where the hyperparameters are tuned on a dev set, and h_{coh} + h_{conn} + h_{rel} + h_{match} = 1. Since each score lies in [0, 1], we also have aggScore(·) ∈ [0, 1].

Threshold algorithm. We use the threshold algorithm (TA) over these score-ordered lists with early pruning [17]. TA is run over each set of 4 sorted lists ⟨l_i1, l_i2, l_i3, l_i4⟩, corresponding to one question term q_i, to obtain the top-k best KB items {x*_i}_k per q_i, as follows: we perform a sorted access (SA) in parallel on each of the four sorted lists for each q_i. For each item x_ij seen with SA, we fetch all its scores coh(x_ij), conn(x_ij), rel(x_ij) and match(x_ij) by random access (RA). We compute aggScore(x_ij), and if x_ij is one of the top-k scoring items so far, we remember this. For each list l_is, let the score of the last item seen under SA be denoted \hat{l}_{is}. Given that the lists l_is are sorted, this score \hat{l}_{is} is the maximum value that could be observed in the unknown part of the list. We define the threshold δ as the aggregate of these maximum scores, i.e.

\delta = h_{coh} \cdot \hat{l}_{i1} + h_{conn} \cdot \hat{l}_{i2} + h_{rel} \cdot \hat{l}_{i3} + h_{match} \cdot \hat{l}_{i4}

When k items have been seen whose aggregate is at least δ, TA is terminated and the top-k KB items are returned. Once we have these items {x*_i}_k, we take the union ∪_{i=1...m} {x*_i}_k to create our final combination of KB items. The KB facts of the items in this final list comprise ∪_{i=1...m} {NF(x*_i)}_k, which is the output search space S of Clocq and is passed on to the downstream QA system.
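A compact sketch of the threshold algorithm as applied here, over the four score-ordered lists of a single question term; the dict-based list layout is a simplification, but the sorted-access/random-access pattern and the early-termination test follow the description above.

import heapq

def threshold_algorithm(score_lists, weights, k):
    """score_lists: four dicts item -> score (coh, conn, rel, match), all in [0, 1];
    weights: (h_coh, h_conn, h_rel, h_match), summing to 1. Returns the top-k items."""
    sorted_lists = [sorted(sl.items(), key=lambda p: p[1], reverse=True)
                    for sl in score_lists]
    top, seen, depth = [], set(), 0            # top is a min-heap of (aggScore, item)
    while depth < max(len(sl) for sl in sorted_lists):
        for sl in sorted_lists:                # sorted access, in parallel over the lists
            if depth >= len(sl) or sl[depth][0] in seen:
                continue
            item = sl[depth][0]
            seen.add(item)
            # random access: fetch all four scores of this item and aggregate them
            agg = sum(w * score_lists[j].get(item, 0.0) for j, w in enumerate(weights))
            heapq.heappush(top, (agg, item))
            if len(top) > k:
                heapq.heappop(top)             # keep only the k best seen so far
        # threshold: best aggregate any unseen item could still achieve
        delta = sum((w * sorted_lists[j][depth][1]) if depth < len(sorted_lists[j]) else 0.0
                    for j, w in enumerate(weights))
        if len(top) == k and top[0][0] >= delta:
            break                              # k items already reach the threshold
        depth += 1
    return [item for _, item in sorted(top, reverse=True)]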
Figure 3: Pruning the search space with parameter p (here p = 1k), for x = France football team. The facts of x are grouped by the role of x: as subject (63 facts, e.g. ⟨France football team, captain, Hugo Lloris⟩), as object (1,044 facts, e.g. ⟨FIFA WC 2018, winner, France football team⟩), and as qualifier-object (147 facts, e.g. ⟨Antoine Griezmann, participant in, 2018 FIFA WC, #matches played, 7, for team, France football team⟩). Since the object and qualifier-object facts together exceed p, only the subject facts are retained.

Figure 4: Auto-k setting for the running example: "2018 final" is assigned k = 4, "France" and "Croatia" k = 3, and the relatively unambiguous "scored" k = 2.

4.4 Pruning facts of highly frequent KB items

To avoid including all facts of extremely frequent KB items in our search space S (U.K. brings in millions of entities), we use a pruning threshold p as follows. An entity x can appear in a fact as the subject, object or qualifier object, where usually the first role is the most salient.
Whenever the last two total more than \ud835\udc5d, we take only the subject facts of \ud835\udc65(and all facts otherwise): this is a proxy for keeping only salient facts in S. For disambiguated predicates \ud835\udc65, \ud835\udc5d directly acts as a frequency threshold. Thus, parameter \ud835\udc5dessentially controls the amount of potentially noisier facts that goes into S. Fig. 3 illustrates how the parameter \ud835\udc5dhelps to prune the search space for France football team (\ud835\udc5dis set to 1k). 4.5 Automatic setting of \ud835\udc58and \ud835\udc5d The choice of \ud835\udc58and \ud835\udc5dmight not always be obvious, and in the methodology described above, it is set globally to the same value for all question words. Therefore, we propose a simple but effective mechanism to automatically choose\ud835\udc58and\ud835\udc5d, dynamically depending upon the specific question word. Choosing \ud835\udc8c. Intuitively, one would like to increase \ud835\udc58for ambiguous question words. For example, \u201cFrance\u201d can refer to many KB items. By increasing\ud835\udc58one can account for potential disambiguation errors. On the other hand, \u201cPaul Pogba\u201d is not as ambiguous and hence \ud835\udc58=1 should suffice. The ambiguity of a question word is closely connected to that of uncertainty or randomness. The more uncertainty there is in predicting what a word refers to, the more ambiguous it is. This makes entropy a suitable measure of ambiguity. More specifically, each question word is linked to \ud835\udc51KB items. These items form the sample space of size \ud835\udc51for the probability distribution. The numbers of KB facts of these items form a frequency distribution that can be normalized to obtain the required probability distribution. We compute the entropy of this probability distribution as the ambiguity score of a question word, and denote it as \ud835\udc52\ud835\udc5b\ud835\udc61(\ud835\udc5e\ud835\udc56). Incidentally, by definition, 0 \u2264\ud835\udc52\ud835\udc5b\ud835\udc61(\ud835\udc5e\ud835\udc56) \u2264log2 \ud835\udc51. Practical choices of \ud835\udc58and \ud835\udc51does not exceed 5 and 50 respectively, and hence \ud835\udc58and log2 \ud835\udc51are in the same ballpark (log2 50=5.6). This motivates us to make the simple choice of directly setting \ud835\udc58as \ud835\udc52\ud835\udc5b\ud835\udc61(\ud835\udc5e\ud835\udc56). Specifically, we use \ud835\udc58= \u230a\ud835\udc52\ud835\udc5b\ud835\udc61(\ud835\udc5e\ud835\udc56)\u230b+ 1 to avoid the situation of \ud835\udc58=0. Fig. 4 shows a possible \u201cAuto-\ud835\udc58\u201d (automatic choice of \ud835\udc58) setting for our running example. \u201c2018 final\u201d is highly ambiguous, and thus \ud835\udc58is set to a relatively high value. \u201cFrance\u201d and \u201cCroatia\u201d can also refer to various different concepts. The word \u201cscored\u201d is relatively unambiguous. Choosing \ud835\udc91. We identify a logical connection between \ud835\udc58and \ud835\udc5d: the less uncertainty there is in the disambiguation of a question word (i.e. the lower the \ud835\udc58), the more facts one wants to include in S for this word. On the contrary, for highly ambiguous question words, less facts should be admitted for avoiding a higher amount of noise. Therefore, we set \ud835\udc5dautomatically, by having \ud835\udc5d=\ud835\udc53(\ud835\udc58). 
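As an illustration, here is a possible (unofficial) sketch of the role-based pruning with threshold p and of the entropy-based choice of k, reusing the toy fact-based index from Sec. 3; the way fact roles are detected here (subject = first position of the fact tuple) is a simplifying assumption.

import math

def prune_facts(item, kb, p=1000):
    """Keep all facts of `item`, unless its object + qualifier-object facts exceed p;
    in that case keep only the (more salient) facts with `item` in the subject position."""
    facts = kb.neighborhood(item)
    subj = [f for f in facts if f[0] == item]
    rest = [f for f in facts if f[0] != item]   # object or qualifier-object role
    return facts if len(rest) <= p else subj

def auto_k(candidate_items, kb, d=20):
    """Ambiguity of a question term as the entropy of the normalized fact-frequency
    distribution over its top-d candidates; k = floor(entropy) + 1."""
    freqs = [len(kb.neighborhood(x)) for x in candidate_items[:d]]
    total = sum(freqs)
    probs = [f / total for f in freqs if f > 0] if total else []
    entropy = -sum(pr * math.log2(pr) for pr in probs)
    return int(entropy) + 1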
For example, we could set \ud835\udc5d=10(5\u2212\ud835\udc58), such that \ud835\udc5dis set to a high value (\ud835\udc5d=104) for \ud835\udc58=1, but for a highly ambiguous word for which \ud835\udc58=5, only subject facts are considered (\ud835\udc5d=1). We experiment with different variations of the function \ud835\udc53that meet the desired criterion above. 5 EXPERIMENTAL SETUP Benchmarks. We use two recent QA benchmarks: LC-QuAD 2.0 [14] and ConvQuestions [12]. To make our case, we sampled 10k of the more complex questions from LC-QuAD 2.0 (LC-QuAD2.0-CQ in Table 2 with 2k dev, 8k test; no training required in Clocq). Complexity is loosely identified by the presence of multiple entities, as detected with Tagme [19], and/or predicates where main verbs were used as a proxy, detected with Stanza [39]. ConvQuestions was built for incomplete utterances in conversational QA, but also has well-formed complete questions that exhibit several complex phenomena. For ConvQuestions, we considered full questions from the benchmark (ConvQuestions-FQ in Table 2; 338 dev, 1231 test). Metrics. We use three metrics: i) answer presence, the percentage of times the correct answer is found in the reduced search space; ii) size of the search space |S|, measured by the number of entities and literals, that would be answer candidates to be considered by the downstream QA engine; and iii) runtime, summed over all steps that happen at answering time and measured in seconds. Baselines. We compare Clocq with a variety of NED baselines [15, 19, 25, 31, 54]. To provide baselines with competitive advantage w.r.t. efficient retrieval, we use the state-of-the-art HDT RDF [18] for KB storage and indexing. An example baseline would be Tagme+Hdt. For convenience, we omit the Hdt when referring to baselines in text. NED systems that link to Wikipedia are mapped to Wikidata using Wikipedia URLs that are also present in Wikidata. Baselines 5 \fWSDM\u201922, February 2022, Phoenix, Arizona P. Christmann et al. Table 2: Performance of Clocq w.r.t. baselines. Statistical significance of Clocq\u2019s answer presence over Tagme and Elq, the strongest baselines, is marked with \u2020 and * respectively (McNemar\u2019s test as answer presence is a binary variable, with p < 0.05). Benchmark LC-QuAD2.0-CQ [14] ConvQuestions-FQ [12] Metric \u2192 Answer presence Search space size Runtime Answer presence Search space size Runtime Method \u2193 (Percentage) (No. of KB items) (Seconds) (Percentage) (No. of KB items) (Seconds) Tagme [19]+Hdt [18] 76.8 2.9k 1.14 69.1 1.8k 1.43 Aida [25]+Hdt [18] 60.5 2.2k 0.75 44.4 2.2k 1.19 Earl [15]+Hdt [18] (\ud835\udc58=1) 53.8 1.1k 2.50 44.6 1.1k 2.49 Earl [15]+Hdt [18] (\ud835\udc58=5) 65.9 2.2k 2.50 53.4 2.0k 2.49 Rel [54]+Hdt [18] 55.8 0.7k 0.72 45.6 0.4k 0.61 Elq [31]+Hdt [18] 76.7 1.1k 0.62 77.5 0.6k 0.47 Clocq (Default: \ud835\udc58=Auto, \ud835\udc5d=1k) 82.6\u2020* 1.5k 0.50 84.7\u2020* 1.3k 0.42 Clocq (\ud835\udc58=1, \ud835\udc5d=10k) 80.0\u2020* 3.9k 0.48 78.4\u2020 2.3k 0.39 Clocq (\ud835\udc58=5, \ud835\udc5d=100) 80.9\u2020* 0.6k 0.49 84.2\u2020* 0.6k 0.40 are either run on our data with original code when available, or through APIs. Internal confidence thresholds were set to zero (no cut-off) in configurable baselines like Tagme and Aida to allow for as many disambiguations (linkings) as possible, to help boost answer presence. Otherwise, default configurations were retained. KB cleaning. We perform all experiments over Wikidata. 
The original Wikidata dump contains a large set of facts that are not needed for common question answering use cases. For example, Wikidata contains labels and descriptions in several languages, it provides meta-information for internal concepts (e.g. for predicates), and references for facts, URLs, images, or geographical coordinates. Furthermore, identifiers for external sites such as Spotify ID, IMDb ID, or Facebook ID are stored. As an initial effort, we pruned all facts containing such information from an N-triples Wikidata dump downloaded on 24 April 2020, such that the size on disk decreased from 1, 990 GB to 450 GB2. Initialization. After applying our KB index (Sec. 3), the size decreased to 18 GB on disk. Note that we applied the same pruning strategy and underlying Wikidata dump when using Hdt for retrieval, i.e. \ud835\udc41\ud835\udc39(\ud835\udc65) is exactly the same for the Clocq KB interface and Hdt. For baselines, we uniformly set \ud835\udc5d=10k to boost their answer presence. To build term matching lists of question terms against KB items, we used Elasticsearch [21]. We use Wikipedia2Vec [59] to compute embeddings for terms and KB items wherever needed. Questions were segmented into phrases like \u201cHarry Potter\u201d and \u201ctheme music\u201d using named entity recognition [19]. The depth of the term-matching lists was set to \ud835\udc51=20, and hyperparameters were tuned via dev sets to \u210e\ud835\udc50\ud835\udc5c\u210e=0.1, \u210e\ud835\udc50\ud835\udc5c\ud835\udc5b\ud835\udc5b=0.3, \u210e\ud835\udc5f\ud835\udc52\ud835\udc59=0.2, \u210e\ud835\udc5a\ud835\udc4e\ud835\udc61\ud835\udc50\u210e=0.4 for both benchmarks. The default setting for Clocq is an automatically chosen \ud835\udc58and \ud835\udc5d=1k (Sec. 4.5). Since \ud835\udc51=20, we have \ud835\udc58\u2208[1, 5]. This default configuration is implied when writing just \u201cClocq\u201d. 6 RESULTS AND INSIGHTS 6.1 Key findings Our main results on search space reduction are in Table 2. As a reference point, the a-priori answer search space consists of all entities and literals in the whole KB \ud835\udc3e, a total of about 152M items. Clocq keeps more answers in its search space. Clocq outperforms the best baseline on answer presence for both benchmarks: 2https://github.com/PhilippChr/wikidata-core-for-QA by 5.8% for LC-QuAD, and by 7.2% for ConvQuestions, pushing the upper bound for performance of QA systems. Clocq is able to keep 82.6% (LC-QuAD) and 84.7% (ConvQuestions) answers in its search space, which is statistically significant for all pairwise comparisons with Elq and Tagme, the strongest baselines for this task. Importantly, Clocq achieves this in sub-second runtimes, slightly faster than Elq, the fastest baseline. While Clocq (default) performs best, we note that Clocq (\ud835\udc58=1) achieves an answer presence that is substantially better than that of all baselines as well, showing the effectiveness of KB-aware signals for this task. Top-\ud835\udc8cresults add value over top-1. The true power of Clocq comes from the flexibility of top-\ud835\udc58outputs, coupled with the pruning threshold \ud835\udc5d. Fig. 5 shows variation in answer presence, search space size and runtime with \ud835\udc58and \ud835\udc5don the dev sets. We see that by increasing \ud835\udc58from 1 to 10, Clocq achieves very good answer presence (going above 80%, Fig. 5a and Fig. 5d), while keeping a tight threshold on items admitted into the search space (columns 1 and 2, \ud835\udc5d=100 or 1k). 
Here, the search space stays fairly small, in the order of a few thousand KB items (Fig. 5b and Fig. 5e). If, on the other hand, a QA system requires very high recall, Clocq can achieve this by increasing \ud835\udc5d(columns 3 and 4 in Fig. 5a/5d): answer presence is well above 90% and 80%, respectively. The price is a much larger search space. Another observation is that due to the use of our efficient top-\ud835\udc58architecture and novel KB index, the timings are fairly stable when increasing \ud835\udc58and \ud835\udc5d. For change in \ud835\udc5d, we did not observe any increase in runtimes, and for \ud835\udc58, the increase is \u22640.04 seconds. We added one top-\ud835\udc58variant with a good trade-off on the dev-set (\ud835\udc58=5, \ud835\udc5d=100) to Table 2. This significantly outperforms all baselines w.r.t. answer presence and runtime, with a very small search space size of only about 800 items (last row). Among our baselines, Earl [15] can produce top-\ud835\udc58disambiguations: using \ud835\udc58=5 for Earl (fourth row) also increases its answer presence, but this is far below that of Clocq. We identify a trade-off between answer presence and search space size as a major consideration for QA. The best setting for \ud835\udc58 and \ud835\udc5dhighly depends on the QA system operating on the contracted search space. In general, for improving the answer presence, we recommend increasing \ud835\udc58rather than \ud835\udc5d. Even though increasing both \ud835\udc58and \ud835\udc5dcannot decrease the answer presence, the additional facts admitted into S could still distract the QA system and lead to longer runtimes. Therefore, the choice of \ud835\udc58and \ud835\udc5ddepends on 6 \fBeyond NED: Fast and Effective Search Space Reduction for Complex Question Answering over Knowledge Bases WSDM\u201922, February 2022, Phoenix, Arizona (a) LC-QuAD: Answer presence. (b) LC-QuAD: Search space size. (c) LC-QuAD: Runtimes. (d) ConvQuestions: Answer presence. (e) ConvQuestions: Search space size. (f) ConvQuestions: Runtimes. Figure 5: Varying Clocq parameters on the LC-QuAD and ConvQuestions dev set. the maximum search space size and potential disambiguations per mention (manifested as \ud835\udc58) a specific QA system can handle. Impact on KB-QA. While answer presence is an important measure creating an upper bound for the QA system, the key goal of this work is to enhance the performance on the downstream QA task. To study these effects, we feed the outputs of Clocq and the baselines into the popular KB-QA system GRAFT-Net [50] and ran the two benchmark suites. We report the standard QA metrics [44] Precision at 1 (P@1), Mean reciprocal rank (MRR) and Hit at 5 (Hit@5). Results are in Table 4. For LC-QuAD, the configuration with Clocq significantly outperforms the two strongest baselines on all metrics. For ConvQuestions, Clocq has the best performance on MRR and Hit@5, and is only slightly behind Elq on P@1. These results show the benefits of Clocq for downstream QA. Clocq generates the search space faster: the average runtimes per query are 0.49 s for Clocq, 0.60 s for Elq+Hdt, and 1.18 s for Tagme+Hdt. 6.2 In-depth analysis Clocq identifies relevant concepts and types. For many questions, Clocq identifies not just additional entities but also concepts and types that are missed by baselines. Since \ud835\udc58>1 trivially adds more KB items, we set \ud835\udc58=1 for fair comparison in this analysis. 
For example, in What was the name of the theme music for the television series Mash?, Elq disambiguates only \u201cMash\u201d (incorrectly), to the 1970 film. Clocq, on the other hand, finds: \u201cname\u201d \u21a6\u2192 personal name, \u201ctheme music\u201d \u21a6\u2192theme music, \u201ctelevision series\u201d \u21a6\u2192 television series, and \u201cMash\u201d \u21a6\u2192M*A*S*H (the TV series, correct). On average, Clocq finds 4.68 KB items per question (LC-QuAD), while Elq, Aida, and Tagme find 1.82, 2.65 and 3.75, respectively. We verified that these additionally disambiguated types and concepts help: when removed from Clocq\u2019s output, answer presence drops from 78.3% to 65.5% (LC-QuAD dev). Note that standalone NED evaluation is out of scope here, because QA benchmarks have no ground-truth for KB item disambiguation. Representative examples. Representative examples of success cases for Clocq are in Table 3. In the first example, the inherent focus of Clocq on related concepts leads to some incorrect disambiguations: given the football context, \u201ccity\u201d is disambiguated to a set of football clubs. Further, \u201cmain\u201d appears to be the German river. Despite this noise Clocq, is able to correctly detect D\u00fcsseldorf within its disambiguations. Interestingly, the correct answer Fortuna D\u00fcsseldorf is also found, taking the football context into account. In the second example, \u201cson of the brother\u201d is not correctly disambiguated in top-1 results, but leveraging the Auto-\ud835\udc58mechanism, the method can make up for this error and add the correct KB item (nephew). Similarly, for the next question Clocq identifies \u201cAll We Know\u201d as a highly ambiguous phrase, and returns top-5 disambiguations. The baselines Tagme and Elq fail to get the correct entity on the top rank, which means that the question is a lost cause. In the last example, KB connectivity helps Clocq: both baselines identify a book, while the question word \u201cscreenwriter\u201d gives a clear hint that the question is about a movie. Clocq disambiguates screenwriter and leverages KB connectivity to disambiguate the correct movie (screenwriter and Crazy Rich Asians (film) are connected in 1-hop). Ablation studies. Clocq includes four signals in its architecture: this naturally calls for ablations (Table 5, dev sets). Answer presence on ConvQuestions dropped for each single signal that is removed, showing that all four matter (* = significant drop from full configuration). On LC-QuAD, trends are similar, just that removing relevance led to a slightly increased answer presence. While removing a single component has only small influence, dropping the pair of local and global signals (like \ud835\udc5a\ud835\udc4e\ud835\udc61\ud835\udc50\u210e+ \ud835\udc5f\ud835\udc52\ud835\udc59, or \ud835\udc50\ud835\udc5c\u210e+ \ud835\udc50\ud835\udc5c\ud835\udc5b\ud835\udc5b) often results in noticeable loss. However, such choices may indeed need 7 \fWSDM\u201922, February 2022, Phoenix, Arizona P. Christmann et al. Table 3: Anecdotal examples from test sets of considered benchmarks for which only Clocq had an answer in the search space (green phrases denote correct mappings). Robust w.r.t. wrong mappings and redundant question words How is the main soccer club of the german city D\u00fcsseldorf called? 
(ConvQuestions) Clocq: \u201cmain\u201d\u21a6\u2192\u27e8Frankfurt (Main), Main (river), Offenbach am Main\u27e9; \u201csoccer\u201d\u21a6\u2192\u27e8football, Football team\u27e9; \u201cclub\u201d\u21a6\u2192\u27e8Nightclub, Torino F.C.\u27e9; \u201cgerman\u201d\u21a6\u2192\u27e8German, German Empire\u27e9; \u201ccity\u201d\u21a6\u2192\u27e8Manchester City F.C., Birmingham City F.C., Stoke City F.C., Cardiff City F.C.\u27e9; \u201cD\u00fcsseldorf\u201d\u21a6\u2192\u27e8D\u00fcsseldorf, Fortuna D\u00fcsseldorf\u27e9; Tagme: \u201cmain\u201d\u21a6\u2192\u27e8Main (river)\u27e9; \u201csoccer\u201d\u21a6\u2192\u27e8football\u27e9; \u201cclub\u201d\u21a6\u2192\u27e8sports club\u27e9; \u201cgerman\u201d\u21a6\u2192\u27e8Germany\u27e9; \u201ccity\u201d\u21a6\u2192\u27e8City of London\u27e9; Elq: \u201cgerman\u201d\u21a6\u2192\u27e8Germany\u27e9; \u201csoccer\u201d\u21a6\u2192\u27e8football\u27e9; Automatic top-\ud835\udc58for all question words can cover for errors Who is the son of the brother of Queenie Padilla? (LC-QuAD) Clocq: \u201cson\u201d\u21a6\u2192\u27e8Son en Breugel, nephew, Mae Hong Son, Porto do Son\u27e9; \u201cbrother\u201d\u21a6\u2192\u27e8sibling\u27e9; \u201cQueenie Padilla\u201d\u21a6\u2192\u27e8Queenie Padilla\u27e9; Tagme: \u201cWho\u201d\u21a6\u2192\u27e8World Health Organization\u27e9; \u201cbrother\u201d\u21a6\u2192\u27e8Brother\u27e9; Elq: \u201cPadilla\u201d\u21a6\u2192\u27e8Zsa Zsa Padilla\u27e9 Auto-\ud835\udc58mechanism can identify highly ambiguous words Who is the composer of All We Know? (LC-QuAD) Clocq: \u201ccomposer\u201d\u21a6\u2192\u27e8composer, film score composer\u27e9; \u201cAll We Know\u201d\u21a6\u2192\u27e8All We Know (Paramore), For All We Know (album), All We Know (Chainsmokers), For All We Know (Carpenters), For All We Know (1934 song)\u27e9; Tagme: \u201cWho\u201d\u21a6\u2192\u27e8The Who\u27e9; \u201ccomposer\u201d\u21a6\u2192\u27e8composer\u27e9; \u201cAll We Know\u201d\u21a6\u2192\u27e8For All We Know (Carpenters)\u27e9; Elq: \u201cAll We Know\u201d\u21a6\u2192\u27e8All We Know (Chainsmokers)\u27e9; KB connectivity is a vital indicator for understanding context Who was the screenwriter for Crazy Rich Asians? (ConvQuestions) Clocq: \u201cscreenwriter\u201d\u21a6\u2192\u27e8screenwriter\u27e9; \u201cCrazy Rich Asians\u201d\u21a6\u2192\u27e8Crazy Rich Asians (film)\u27e9; Tagme: \u201cCrazy Rich Asians\u201d\u21a6\u2192\u27e8Crazy Rich Asians (book)\u27e9; Elq: \u201cCrazy Rich Asians\u201d\u21a6\u2192\u27e8Crazy Rich Asians (book)\u27e9; Table 4: Impact of Clocq on KB-QA. Benchmark LC-QuAD2.0-CQ ConvQuestions-FQ QA system \u2192 GRAFT-Net [50] GRAFT-Net [50] Search space \u2193 P@1 MRR Hit@5 P@1 MRR Hit@5 Clocq 0.197* 0.268* 0.350* 0.207 0.264 0.337 Elq+Hdt 0.168 0.224 0.288 0.213 0.256 0.313 Tagme+Hdt 0.171 0.225 0.291 0.167 0.204 0.237 Table 5: Ablation study of configurations in Clocq. Benchmark LC-QuAD2.0-CQ ConvQuestions-FQ Method \u2193 Ans. pres. |S| Time Ans. pres. 
|S| Time Clocq 0.803 1.5k 0.47 s 0.760 1.1k 0.34 s w/o \ud835\udc5a\ud835\udc4e\ud835\udc61\ud835\udc50\u210e 0.726* 1.3k 0.46 s 0.607* 0.9k 0.30 s w/o \ud835\udc5f\ud835\udc52\ud835\udc59 0.806 1.5k 0.47 s 0.746* 1.2k 0.32 s w/o \ud835\udc50\ud835\udc5c\ud835\udc5b\ud835\udc5b 0.790* 1.5k 0.41 s 0.746* 1.2k 0.26 s w/o \ud835\udc50\ud835\udc5c\u210e 0.802 1.5k 0.40 s 0.750 1.1k 0.24 s w/o \ud835\udc5a\ud835\udc4e\ud835\udc61\ud835\udc50\u210e+ \ud835\udc5f\ud835\udc52\ud835\udc59 0.733* 1.3k 0.47 s 0.618* 0.9k 0.30 s w/o \ud835\udc50\ud835\udc5c\u210e+ \ud835\udc50\ud835\udc5c\ud835\udc5b\ud835\udc5b 0.791* 1.5k 0.34 s 0.743* 1.2k 0.22 s Table 6: Effect of choosing \ud835\udc58and \ud835\udc5ddynamically per term. Benchmark LC-QuAD2.0-CQ ConvQuestions-FQ Clocq \u2193 Ans. pres. |S| Ans. pres. |S| \ud835\udc58=Auto, \ud835\udc5d=1k (default) 0.803 1.5k 0.760 1.0k \ud835\udc58=Auto, \ud835\udc5d=10k 0.837 7.8k 0.778 4.6k \ud835\udc58=Auto, \ud835\udc5d=105\u2212\ud835\udc58 0.776 1.5k 0.728 0.9k \ud835\udc58=Auto, \ud835\udc5d=105\u22120.5 \u00a4 \ud835\udc58 0.835 6.3k 0.778 4.3k \ud835\udc58=Auto, \ud835\udc5d=104\u22120.5 \u00a4 \ud835\udc58 0.798 1.0k 0.734 0.8k \ud835\udc58=3, p=1k 0.809 1.7k 0.757 1.3k \ud835\udc58=5, p=100 0.797 0.6k 0.763 0.5k to be made when runtime is of utmost importance, since computing \ud835\udc50\ud835\udc5c\u210eand \ud835\udc50\ud835\udc5c\ud835\udc5b\ud835\udc5bare the most time-consuming steps in Clocq. Error analysis. Clocq misses the answer in S just about 20% of the time (both benchmarks), arising from two error cases: i) the answer is missing in the computed set of facts, as the depth\ud835\udc51term matching does not retrieve the relevant items (LC-QuAD 44.8%, ConvQuestions 46.7%); and ii) the answer is in the candidate space, but the top-\ud835\udc58algorithm fails to return one or more relevant items (LC-QuAD 55.2%, ConvQuestions 53.3%). Both cases could be countered by increasing \ud835\udc51or the range of \ud835\udc58, at the cost of increased end-to-end runtimes. Automatic choices for \ud835\udc91. Table 6 shows results of various choices. As discussed in Sec. 4.5, \ud835\udc5dcan be set as \ud835\udc53(\ud835\udc58). We tried \ud835\udc5d=105\u2212\ud835\udc58first, and found that \ud835\udc5dis falls off too drastically. Therefore, we compared with smoother versions \ud835\udc5d=105\u22120.5\ud835\udc58and \ud835\udc5d=104\u22120.5\ud835\udc58. Again, there is a trade-off between answer presence and search space size: having \ud835\udc5d=105\u22120.5\ud835\udc58gives the best answer presence, but \ud835\udc5d=104\u22120.5\ud835\udc58has a much smaller |S|. The runtime was almost the same across all variants. Overall, we found a static setting of \ud835\udc5dto perform slightly better with respect to the trade-off. IR-based extension. An intuitive extension or alternative is to fetch a larger subset of the KB, verbalize these facts [2, 36, 38], and use a standard IR pipeline to retrieve the most relevant facts for 8 \fBeyond NED: Fast and Effective Search Space Reduction for Complex Question Answering over Knowledge Bases WSDM\u201922, February 2022, Phoenix, Arizona Table 7: Effect of BM25 on verbalized KB facts. Benchmark LC-QuAD2.0-CQ ConvQuestions-FQ Method \u2193 Ans. pres. |S| Ans. pres. 
|S| Clocq (\ud835\udc58=1, \ud835\udc5d=10k) 0.782 3.7k 0.704 1.9k + BM25 (top-100) 0.625 0.1k 0.509 0.1k + BM25 (top-1000) 0.726 0.8k 0.630 0.7k Clocq (\ud835\udc58=3, \ud835\udc5d=10k) 0.846 9.7k 0.775 5.9k + BM25 (top-100) 0.614 0.1k 0.462 0.1k + BM25 (top-1000) 0.742 1.0k 0.648 0.9k Clocq (\ud835\udc58=5, \ud835\udc5d=10k) 0.872 15.1k 0.796 10.3k + BM25 (top-100) 0.605 0.1k 0.456 0.1k + BM25 (top-1000) 0.747 1.0k 0.648 0.9k use by the QA system. We implemented such a variant, treating the question as a query and the verbalized facts as the set of documents. BM25 [43] is used for scoring fact-relevance, and returns the top-100 or top-1000 facts. We used the rank_bm25 module3 and set \ud835\udc581=1.5 and \ud835\udc4f=0.75. Results on dev sets are shown in Table 7. Different variants of Clocq are used for retrieving the KB subset, where the focus is on larger initial S to measure the impact of BM25 (therefore the choice of a large \ud835\udc5dof 10k). Answer presence for top-1000 facts is comparable to the initial answer presence; but a significant drop was observed when taking only the top-100 facts. This indicates that the basic bag-of-words model in BM25 matching falls short for complex questions. However, an IR-based filter is a viable choice when the number of facts that can be \u201cconsumed\u201d is budgeted. 6.3 Effect of fact-based KB indexing Fact-centric KB storage is a foundation for Clocq: we now analyze its effect on runtimes for search space reduction. Our comparison points are the available Wikidata SPARQL endpoint4 (QueryService) and triple pattern queries issued to the Hdt [18] KB interface. We subtracted network latencies when measuring runtimes. Basic functionalities. Our first experiment was on the two basic functionalities required for KB-QA: retrieving all facts of a given KB item (neighborhood), and measuring the distance between two given KB items (KB-distance). For baselines, we optimized the amount of required queries and implemented the distance checks as for Clocq (Sec. 3). We took 1 million random KB items for the neighborhood lookups, and 1 million random KB item pairs for the connectivity checks. Average runtimes (per KB item/KB item pair) are shown in Table 8. We found that Hdt has a better performance than the Wikidata QueryService, making use of its efficient implementation via bit-streams. However, Clocq can improve neighborhood lookups by a factor of 10 and 103 over Hdt and QueryService, respectively. When measuring KB-distances, the effect becomes even larger: Clocq is 103 and 104 times faster than Hdt and the QueryService, respectively. The memory consumption for the Clocq KB index is slightly higher than that for Hdt, but this is still much lower than what loading the raw KB dump into memory would consume. 3https://github.com/dorianbrown/rank_bm25 4https://query.wikidata.org/bigdata/namespace/wdq/sparql?format=json Table 8: Comparison of KB interfaces w.r.t. functionalities. KB interface QueryService Hdt [18] Clocq RAM consumed \u2212 220\ud835\udc3a\ud835\udc35 340\ud835\udc3a\ud835\udc35 Neighborhood 1.48 \u00d7 10\u22122\ud835\udc60 6.73 \u00d7 10\u22124\ud835\udc60 4.98 \u00d7 10\u22125\ud835\udc60 KB-distance 2.46 \u00d7 10\u22122\ud835\udc60 5.43 \u00d7 10\u22123\ud835\udc60 3.23 \u00d7 10\u22126\ud835\udc60 Table 9: Timing KB interfaces for search space reduction. KB interface QueryService Hdt [18] Clocq Clocq \u2212 971 s 0.54 s Elq [31] 0.89 s 0.62 s 0.12 s TagMe [19] 19 s 1.25 s 0.52 s Effect on search space reduction. 
We now compare runtimes with these KB interfaces for search space reduction on the LCQuAD dev set. While Clocq makes use of the neighborhood and KB-distance functions, only the neighborhood function is necessary in Elq and TagMe. We observe similar trends as before: runtimes of Clocq are much better when using the Clocq KB index. The QueryService script did not terminate within a reasonable amount of time. Interestingly, these trends also hold for Elq and TagMe: when using the Clocq KB index for search space reduction, the runtime is significantly reduced. This shows that our fact-based KB index is valuable beyond this specific use in Clocq. Gains in runtime are due to the fact-centric KB index, which is specifically designed for providing efficient KB access for QA functionalities. KB interface baselines may provide very fast KB access for general-purpose querying, but fall short for the more specific requirements in QA. 7 DISCUSSION Disambiguating not only entities, but also general concepts, types, or predicates when establishing the search space, is generally beneficial for QA systems. This is something that is done by Clocq but is beyond NED systems. The detected trade-off between answer presence and search space size is an important factor: increasing |S| improves answer presence but also injects noise, whereas a smaller search space could potentially be cleaner and easier to explore by the QA system. This trade-off is closely connected to the choice of \ud835\udc58and the amount of facts for a specific KB item that is admitted into S, that is controlled by our other parameter \ud835\udc5d. Among the static settings for \ud835\udc58, we found \ud835\udc58=5 to perform best on the considered benchmarks. For other types of questions (e.g. simpler questions or list questions), the appropriate setting may have to be reconsidered. The degree of ambiguity of question words is a key factor: we found a dynamic setting of \ud835\udc58(per question word) to perform the best among our variants. The answer presence obtained by Clocq lies in the range of 80 to 90 percent. This seems to indicate that downstream KB-QA methods cannot achieve a perfect answering performance. But on a practical note, there is no QA system yet which gets anywhere near 100% performance on realistic benchmarks. While state-ofthe-art methods on some simpler benchmarks have reported a performance of 60 \u221280%, the datasets of complex questions used in our experiments are much more demanding. In fact, we observe a substantial gap between the answer presence in the search space and the actual performance of a state-of-the-art QA system. 9 \fWSDM\u201922, February 2022, Phoenix, Arizona P. Christmann et al. 8 RELATED WORK KB interfaces. Optimizing KBs for executing SPARQL queries is a well-studied problem [16, 22, 35, 51, 56]. Urbani and Jacobs [52] recently proposed Trident for enabling different kinds of workloads (e.g. SPARQL, graph analytics) on large KBs. HDT [18] encodes triples using bitmaps. It constructs two individual integer-streams holding predicates and objects, adjacent to some given subject, and two additional bit-streams for encoding connections between these predicates and objects. Due to multiple indexes, triple pattern queries can be answered very efficiently using HDT. These works focus on optimizing queries on triple stores. However, the problem of retrieving the complete facts of a KB item including qualifier information is a typical task in KB-QA, but is not targeted. Named entity disambiguation. 
In named entity disambiguation (NED), the goal is to map entity mentions to the corresponding real-life concept: pages in Wikipedia or entries in curated KBs like Wikidata. Tagme [19] leverages Wikipedia anchors to detect entity mentions, looks up possible mappings, and scores these with regard to a collective agreement implemented by a voting scheme. In Aida [25], a mention-entity graph is established, and the mentions are disambiguated jointly by approximating the densest subgraph. More recently, van Hulst et al. [54] proposed a framework Rel for end-to-end entity linking, building on state-of-the-art neural components. Elq [31] jointly performs mention detection and disambiguation leveraging a BERT-based bi-encoder. These methods are optimized for computing the top-1 entity per mention, and mostly return only the top-ranked entity in the disambiguation. Top-1 NED is prone to errors that can propagate through the answering pipeline [46, 61]. Early work in S-Mart [60] applied statistical models of regression trees on a set of (mention, entity)-pairs and corresponding features. Unlike most other works, S-Mart returned top-\ud835\udc58disambiguations per mention. However, since it is proprietary, their code was not available for comparison. Search space reduction. Methods in complex KB-QA mostly follow one of two approaches [44]: i) disambiguating entities, predicates, and types over the whole KB [13, 26, 46, 53], for e.g., by leveraging question-word specific index lists [46, 53] for subsequent semantic parsing; and, ii) applying NED as an initial step to focus the remaining computation on a restricted search space [5, 6, 9, 33, 41, 45, 49, 57, 61]. The focus here is on improving the second line of work: instead of performing top-1 or top-\ud835\udc58NED, we disambiguate all question cue words and compute the top-\ud835\udc58results per question token. This leads to a search space that is more likely to contain the relevant KB items and the correct answer(s). Earl [15] takes an approach of disambiguating both entity and predicate mentions. We generalize this direction by disambiguating all keywords in the question. 9" + }, + { + "url": "http://arxiv.org/abs/1910.03262v3", + "title": "Look before you Hop: Conversational Question Answering over Knowledge Graphs Using Judicious Context Expansion", + "abstract": "Fact-centric information needs are rarely one-shot; users typically ask\nfollow-up questions to explore a topic. In such a conversational setting, the\nuser's inputs are often incomplete, with entities or predicates left out, and\nungrammatical phrases. This poses a huge challenge to question answering (QA)\nsystems that typically rely on cues in full-fledged interrogative sentences. As\na solution, we develop CONVEX: an unsupervised method that can answer\nincomplete questions over a knowledge graph (KG) by maintaining conversation\ncontext using entities and predicates seen so far and automatically inferring\nmissing or ambiguous pieces for follow-up questions. The core of our method is\na graph exploration algorithm that judiciously expands a frontier to find\ncandidate answers for the current question. To evaluate CONVEX, we release\nConvQuestions, a crowdsourced benchmark with 11,200 distinct conversations from\nfive different domains. 
We show that CONVEX: (i) adds conversational support to\nany stand-alone QA system, and (ii) outperforms state-of-the-art baselines and\nquestion completion strategies.", + "authors": "Philipp Christmann, Rishiraj Saha Roy, Abdalghani Abujabal, Jyotsna Singh, Gerhard Weikum", + "published": "2019-10-08", + "updated": "2019-11-05", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.CL" + ], + "main_content": "INTRODUCTION 1.1 Motivation Obtaining direct answers to fact-centric questions is supported by large knowledge graphs (KGs) such as Wikidata or industrial KGs (at Google, Microsoft, Baidu, Amazon, etc.), consisting of semantically organized entities, attributes, and relations in the form of subject-predicate-object (SPO) triples. This task of question answering over KGs (KG-QA) has been intensively researched [2, 4, 5, 7, 15, 35, 36, 38]. However, users\u2019 information needs are Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org. CIKM\u201919, October 2019, Beijing, China \u00a9 2019 Association for Computing Machinery. ACM ISBN 978-x-xxxx-xxxx-x/YY/MM...$15.00 https://doi.org/10.1145/nnnnnnn.nnnnnnn not always expressed in well-formed and self-contained questions for one-shot processing. Quite often, users issue a series of followup questions to explore a topic [12, 30], analogous to search sessions [28]. A major challenge in such conversational QA settings is that follow-up questions are often incomplete, with entities or predicates not spelled out, and use of ungrammatical phrases. So a large part of the context is unspecified, assuming that the systems implicitly understand the user\u2019s intent from previous interactions. Consider the following conversation as a running example. A user asks questions (or utterances) qi and the system has to generate answers ai: q0: Which actor voiced the Unicorn in The Last Unicorn? a0: Mia Farrow q1: And Alan Arkin was behind . . .? a1: Schmendrick q2: Who did the score? a2: Jimmy Webb q3: So who performed the songs? a3: America q4: Genre of this band\u2019s music? a4 : Folk rock, Soft rock q5: By the way, who was the director? a5: Jules Bass Such conversations are characterized by a well-formed and complete initial question (q0) with incomplete follow-ups (q1 \u2212q5), an initial and often central entity of interest (\u201cThe Last Unicorn\u201d), slight shifts in focus (inquiry of the band America\u2019s genre in q4), informal styles (q1,q5), and a running context comprised of entities and predicates in all preceding questions and answers (not just immediate precedents). Limitations of state-of-the-art KG-QA. State-of-the-art systems [2, 7, 15, 21, 35] expect well-formed input questions (like q0), complete with cue words for entities (\u201cUnicorn\u201d), predicates (\u201cvoiced\u201d), and types (\u201cactor\u201d), and map them to corresponding KGitems. A SPARQL query (or an equivalent logical expression) is generated to retrieve answers. 
For example, a Wikidata query for q0 would be: SELECT ?x WHERE {TheLastUnicorn voiceActor ?x . ?x characterRole TheUnicorn}. In our conversational setup, such methods completely fall apart due to the incompleteness of followup questions, and the ad-hoc ways in which they are phrased. The alternative approach of question completion [17] aims to create syntactically correct full-fledged interrogative sentences from the user\u2019s inputs, closing the gaps by learning from supervision 1 arXiv:1910.03262v3 [cs.IR] 5 Nov 2019 \fpairs, while being agnostic to the underlying KG. However, this paradigm is bound to be limited and would fail for ad-hoc styles of user inputs or when training data is too sparse. 1.2 Approach and Contributions Our proposed approach, Convex (CONVersational KG-QA with context EXpansion) overcomes these limitations, based on the following key ideas. The initial question is used to identify a small subgraph of the KG for retrieving answers, similar to what prior methods for unsupervised KG-QA use [7]. For incomplete and ungrammatical follow-up questions, we capture context in the form of a subgraph as well, and we dynamically maintain it as the conversation proceeds. This way, relevant entities and predicates from previous turns are kept in the gradually expanding context. However, we need to be careful about growing the subgraph too much as the conversation branches and broadens in scope. As nodes in a KG have many 1-hop neighbors and a huge number of 2-hop neighbors, there is a high risk of combinatorial explosion, and a huge subgraph would no longer focus on the topic of interest. Convex copes with this critical issue by judiciously expanding the context subgraph, using a combination of look-ahead, weighting, and pruning techniques. Hence the \u201clook before you hop\u201d in the paper title. Specifically, Convex works as follows. Answers to the first question are obtained by any standard KG-QA system (we use the stateof-the-art system QAnswer [7] and other variants in our experiments over Wikidata). Entities in the initial question q0, the answer a0, and their connections initialize a context subgraph X 1 (Xt for turn t) for the conversation in the KG. When a follow-up question q1 arrives, all nodes (entities, predicates, or types) in the KG-neighborhood of X 1 are deemed as candidates that will be used to expand the current graph. Brute force addition of all neighbors to Xt will quickly lead to an explosion in its size after a few turns (hugely exacerbated if popular entities are added, e.g. Germany and Barcelona have \u22431.6M and \u224340k neighbor entities in Wikidata). Thus, we opt for prudent expansion as follows. Each neighbor is scored based on its similarity to the question, its distance to important nodes in X 1, the conversation turn t, and KG priors. This information is stored in in respective sorted lists with these neighbors as elements. A small number of top-scoring neighbors of X 1 in a turn, termed \u201cfrontier nodes\u201d ({F1}), are identified by aggregating information across these queues. Next, all KG triples (SPO facts) for these frontiers only, are added to X 1, to produce an expanded context X 1 +. These {F1} are the most relevant nodes w.r.t the current question q1, and hence are expected to contain the answer a1 in their close proximity. Each entity in X 1 + is thus scored by its distance to each frontier node F1 i and other important nodes in X 1, and the topranked entity (possibly multiple, in case of ties) is returned as a1. 
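To make one such turn concrete, here is a self-contained toy sketch built around the running example; the frontier score below (lexical overlap with the question plus a small context-proximity bonus) is only a stand-in for the similarity, distance, turn and KG-prior signals named above, and the answer ranking is reduced to adjacency with the frontier, so this is an illustration rather than the Convex scoring itself.

from collections import defaultdict

# Toy KG: each fact is a tuple of items (entities and predicates alike become nodes).
FACTS = [
    ("The Last Unicorn", "voice actor", "Mia Farrow", "character role", "The Unicorn"),
    ("The Last Unicorn", "voice actor", "Alan Arkin", "character role", "Schmendrick"),
    ("The Last Unicorn", "composer", "Jimmy Webb"),
    ("The Last Unicorn", "director", "Jules Bass"),
]
NEIGHBORS = defaultdict(set)
for fact in FACTS:
    for x in fact:
        NEIGHBORS[x].update(set(fact) - {x})

def convex_turn(question, context, r=1):
    """One simplified turn: score neighbors, pick r frontier nodes, expand, rank answers."""
    q_tokens = set(question.lower().split())
    candidates = set().union(*(NEIGHBORS[n] for n in context)) - context

    def frontier_score(node):
        sim = len(q_tokens & set(node.lower().split()))         # match with the question
        prox = sum(1 for n in context if node in NEIGHBORS[n])   # closeness to the context
        return sim + 0.1 * prox

    frontiers = sorted(candidates, key=frontier_score, reverse=True)[:r]
    expanded = set(context)
    for f in frontiers:                        # add the facts of the frontier nodes only
        for fact in FACTS:
            if f in fact:
                expanded.update(fact)
    answers = [n for n in expanded - context - set(frontiers)
               if any(n in NEIGHBORS[f] for f in frontiers)]
    return frontiers, answers

# Context after q0/a0: the question entities, the answer, and their connections.
context = {"The Last Unicorn", "voice actor", "Mia Farrow", "The Unicorn"}
print(convex_turn("by the way who was the director", context))
# -> frontier ['director'], answer candidates ['Jules Bass'] (a5 in the example)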
This process is then iterated for each turn in the conversation with question qt , producing Xt , Xt +, {Ft }, and ultimately at at each step. Benchmark. We compiled the first realistic benchmark, termed ConvQuestions, for conversational KG-QA. It contains about 11k conversations which can be evaluated over Wikidata. They are compiled from the inputs of crowdworkers on Amazon Mechanical Turk, with conversations from five domains: \u201cBooks\u201d, \u201cMovies\u201d, \u201cSoccer\u201d, \u201cMusic\u201d, and \u201cTV Series\u201d. The questions feature a variety of complex question phenomena like comparisons, aggregations, Notation Concept K, E, P, C, L Knowledge graph, entity, predicate, class, literal S, P,O Subject, predicate, object N, E Nodes and edges in graph C,t Conversation, turn qt,at Question and answer at turn t Xt,Xt + Initial and expanded context graphs at turn t N(Xt ) k-hop neighborhood of nodes in Xt Ft = {Ft 1 . . . Ft r } Frontier nodes at turn t E(qt ) Entities mapped to by words in qt Table 1: Notation for key concepts in Convex. compositionality, and temporal reasoning. Answers are grounded in Wikidata entities to enable fair comparison across diverse methods. Contributions. The main contributions of this work are: \u2022 We devise Convex, an unsupervised method for addressing conversational question answering over knowledge graphs. \u2022 We release ConvQuestions, the first realistic benchmark to evaluate conversational KG-QA. \u2022 We present extensive experiments, showing how Convex enables any stand-alone system with conversational support. \u2022 An online demo and all code, data and results is available at http://qa.mpi-inf.mpg.de/convex/. 2 CONCEPTS AND NOTATION We first introduce concepts that will assist in an easier explanation for the Convex method, and corresponding notations. An example workflow instantiating these concepts is shown in Fig. 1, and Table 1 provides a ready reference. Knowledge graph. A knowledge graph, or a knowledge base, K is a set of subject-predicate-object (SPO) RDF triples, each representing a real-world fact, where S is of type entity E (like The Last Unicorn), P is a predicate (like director), and O is another entity, a class C (like animated feature film), or a literal L (like 19 November 1982). All E, P, C, and L in K are canonicalized. Most modern KGs support n-ary facts like movie-cast information (with more than two E and more than one P) via reification with intermediate nodes [32]. In Wikidata, such information is represented via optional qualifiers with the main fact (TheLastUnicorn voiceActor MiaFarrow . characterRole TheUnicorn). Compound Value Types (CVTs) were the Freebase analogue. Tapping into qualifier information is a challenge for SPARQL queries, but is easily accessible in a graph-based method like Convex. Convex stores the KG as a graph K = (N, E), with a set of nodes N and a set of edges E, instead of a database-like RDF triple store. Each E, P, C, and L is assigned a unique node in K, with two nodes n1,n2 \u2208N having an edge e \u2208E between them if there is a triple \u27e8n1,n2, \u00b7\u27e9\u2208K or \u27e8\u00b7,n1,n2\u27e9\u2208K. While it is more standard practice to treat each P as an edge label, we represent every item in K as a node, because it facilitates computing standard graph measures downstream. Examples of sample n and e are shown in the (sub-)graphs in Fig. 1. E and P nodes are in rectangles with sharp and rounded corners, respectively. For simplicity, C and L nodes are not shown. 
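A minimal sketch of the graph representation of K just described, where every entity, literal, and predicate occurrence becomes its own node and qualifier information hangs off the specific statement it belongs to. It assumes triples and qualifier pairs are already available as Python tuples; the helper names are ours, not the paper's code.

```python
# Sketch of the KG-as-graph representation described above: every item becomes a node,
# and each *occurrence* of a predicate gets its own node ("pred#i"), so qualifiers can
# attach to the exact statement they describe. Names here are illustrative.
import itertools
import networkx as nx

_counter = itertools.count()

def add_fact(G: nx.Graph, s: str, p: str, o: str, qualifiers=()):
    """Add one (S, P, O) fact plus optional (qualifier predicate, value) pairs."""
    p_node = f"{p}#{next(_counter)}"          # fresh node per predicate occurrence
    G.add_node(p_node, kind="predicate", label=p)
    for n in (s, o):
        G.add_node(n, kind="entity_or_literal", label=n)
    G.add_edge(s, p_node)
    G.add_edge(p_node, o)
    for qp, qv in qualifiers:                 # qualifier info attaches to this statement only
        qp_node = f"{qp}#{next(_counter)}"
        G.add_node(qp_node, kind="predicate", label=qp)
        G.add_node(qv, kind="entity_or_literal", label=qv)
        G.add_edge(p_node, qp_node)
        G.add_edge(qp_node, qv)
    return p_node

K = nx.Graph()
add_fact(K, "TheLastUnicorn", "voice actor", "MiaFarrow",
         qualifiers=[("character role", "TheUnicorn")])
add_fact(K, "TheLastUnicorn", "voice actor", "AlanArkin",
         qualifiers=[("character role", "Schmendrick")])
# Two separate "voice actor#i" nodes now exist, so the two cast facts cannot be confused.
```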
An important thing to note is that each instance of some P retains an individual existence in the graph to prevent false inferences (e.g. two voice actor nodes in the figure). As a 2 \fFigure 1: A typical conversation illustrating perfect (but simplified) context expansion and answering at every turn. simple example, if we merge the node for married from two triples \u27e8E1,married, E2\u27e9and \u27e8E3,married, E4\u27e9, then we may accidentally infer that E1 is married to E4 during answering. Conversation. A conversationC withT turns is made up of a sequence of questionsQ = {qt } and corresponding answers A = {at }, where t = 0, 1, . . .T, such that C = \u27e8(q0,a0), (q1,a1) . . . (qT ,aT )\u27e9. Fig. 1 (left side) shows a typical C that Convex handles, with T = 5 (six turns). Usually, q0 is well-formed, and all other qt are ad hoc. Question. Each qt is a sequence of words qt i , such that qt = \u27e8qt 1 . . .qt |qt |\u27e9, where |qt | is the number of words in qt . During answering, each word inqt potentially maps to one or more items in K (qt i 7\u2192E \u222aP \u222aC \u222aL). However, since conversations revolve around entities of interest, we fixate on the mapped entities, and refer to them as E(qt ). E.g., \u201cAlan\u201d in q1 7\u2192{Alan Arkin}, and \u201cscore\u201d in q2 7\u2192{composer, soundtrack, The Last Unicorn Soundtrack}; so E(q1) = {Alan Arkin}, and E(q2) = {The Last Unicorn Soundtrack}. Answer. Each answer at to question qt is a (possibly multiple, single, or null-valued) set of entities or literals in K, i.e. at \u2208E \u222aL (questions asking for predicates or classes are usually not realistic). Each at is shaded in light blue in Fig. 1. Context subgraph. In the Convex model, every turn t in C is associated with a context Xt , that is a subgraph grounded or anchored in a localized zone in K. Each Xt subgraph consists of: (i) the previous question entities in C, {E(q1) \u222a. . . E(qt\u22121)}, (ii) previous answer entities in C: {a1 \u222a. . . at\u22121}, (iii) intermediate nodes and edges connecting the above in K. All E nodes corresponding to turns 1, . . . , (t \u22121), are shaded in light green. Frontier nodes. At every turn t, nodes in the k-hop neighborhood of Xt , N(Xt ), define something like a border to which we may need to expand Xt for answering the next qt (current nodes in Xt are subsumed in N(Xt )). The number of hops k is small in practice, owing to the fact that typical users do not suddenly make large topic jumps during a specific C. Even then, since expanding Xt to include every n \u2208N(Xt ) results in an exponential growth rate for its size that we wish to avoid, we first select the best (topr) nodes in N(Xt ). These optimal expansion points in N(Xt ) are referred to as frontier nodes, a ranked set Ft = {Ft 1, . . . Ft r }, and are the most relevant nodes with respect to the current question qt and the current context Xt , as ranked by some frontier score (defined later). This entails that only those triples (along with qualifiers) (analogously, the resultant nodes and edges) that connect these Ft i to the Xt are added to the context. The top-1 frontier node Ft 1 at every t is shown in orange in the figure (multiple in case of ties). Expanded context. Once all triples in K corresponding to frontier nodes Ft are added to Xt , we obtain an expanded context graph Xt +. All nodes in Xt + are candidate answers at , that are scored appropriately. Fig. 1 shows expanded contexts {Xt +} for every t in our example conversation. 
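To tie the concepts of this section together, here is a small, assumption-laden sketch of the per-conversation state Convex has to maintain: the context subgraph Xt, and the question/answer entities recorded with the turn at which they appeared (the turn numbers are reused later as proximity weights). Class and function names are ours.

```python
# Sketch of the per-conversation state: the context subgraph X^t and the Q/A entities with
# the turn at which they appeared. Turn numbers are reused later as weights, with E(q0)
# treated as if it appeared at the latest turn (see the context proximity score below).
from dataclasses import dataclass, field
import networkx as nx

@dataclass
class ConversationState:
    kg: nx.Graph                                           # the full KG graph K
    context: nx.Graph = field(default_factory=nx.Graph)    # X^t, grows over turns
    qa_turn: dict = field(default_factory=dict)            # node -> turn it appeared as E(q^j) or a^j

    def initialize(self, q0_entities, a0_entities):
        """X^1 = question/answer entities of turn 0 plus their KG interconnections."""
        seeds = list(q0_entities) + list(a0_entities)
        for n in seeds:
            self.context.add_node(n)
            self.qa_turn[n] = 0
        for u in seeds:                                    # keep short KG paths between the seeds
            for v in seeds:
                if u != v and nx.has_path(self.kg, u, v):
                    nx.add_path(self.context, nx.shortest_path(self.kg, u, v))

    def record_turn(self, t, question_entities, answer_entities):
        for n in list(question_entities) + list(answer_entities):
            self.qa_turn[n] = t
```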
Corresponding Xt can be visualized by removing facts with the orange frontiers. Notably, Xt + = Xt+1. 3 THE CONVEX ALGORITHM We now describe the Convex conversation handler method, that can be envisaged as a seamless plug-in enabling stand-alone KG-QA systems to answer incomplete follow up questions with possibly ungrammatical and informal formulations. Convex thus requires an underlying KG, a standard QA system that can answer wellformulated questions, and the conversational utterances as input. On receiving an input question at a given turn, our method proceeds in two stages: (i) expand the context graph, and (ii) rank the answer candidates in the expanded graph. We discuss these steps next. 3.1 Context expansion The initial question q0 is answered by the KG-QA system that Convex augments, and say, that it produces answer(s) a0. Since entities in the original question are of prime importance in a conversation, we use any off-the-shelf named entity recognition and disambiguation (NERD) system like TagMe [11] or AIDA [14] to identify entities E(q0). Such E(q0), a0, and the KG connections between them initialize the context subgraph X 1. 3 \fNow, when the first question q1 arrives, we need to look for answer(s) a1 in the vicinity of X 1. The main premise of this work is not to treat every node in such neighborhood of X 1, and more generally, Xt , as an answer candidate. This is because, over turns, expanding the context, by any means, is inevitable: users can freely drift away and revisit the initial entities of interest over the full course of the conversation. Under this postulate, the total number of such context nodes can easily go to the order of millions, aggravated by the presence of popular entities, especially countries (UK, Russia) or cities (Munich, Barcelona) in the KG around prominent entities of discussion (Harry Potter, Christiano Ronaldo). The logical course of action, then, is to perform this expansion in a somewhat austere fashion, which we propose to do as follows. We wish to identify some key nodes in the k-hop neighborhood of X 1, that will prove the most worthwhile if included into X 1 (along with their connections to X 1) w.r.t. answering q1. We call these optimal expansion points frontier nodes. From here on, we outline frontier identification at a general conversation turn t, where t = 0, 1, . . .T. Frontiers are marked by three signals: (i) relevance to the words in qt ; (ii) relevance to the current context Xt ; and (iii) KG priors. We now explain these individual factors. Relevance to question. The question words {qt i } provide a direct clue to the relevant nodes in the neighborhood. However, there is often a vocabulary mismatch between what users specify in their questions and the KG terminology, as typical users are unaware of the KG schema. For example, let us consider q3 = Who did the score?. This indicates the sought information is about the score of the movie but unfortunately the KG does not use this term. So, we define the matching similarity score of a neighbor n with a question word using cosine similarity between word2vec [23] embeddings of the node label and the word. Stopwords like and, of, to, etc. are excluded from this similarity. For multiword phrases, we use an averaging of the word vectors [37]. The cosine similarity is originally in [\u22121, +1]: it is scaled to [0, 1] using min-max normalization for comparability to the later measures. 
So we have: match(n,qt i |n \u2208N(Xt )) = cosnorm(w2v(label(n)),w2v(qt i )) (1) We then take the maximum of these word-wise scores to define the matching score of a candidate frontier to the question as a whole: match(n,qt |n \u2208N(Xt )) = max i match(n,qt i ) (2) Relevance to context. Nevertheless, such soft lexical matching with embeddings is hardly enough. Let us now consider the word \u201cgenre\u201d in q3 = Genre of this band\u2019s music?. Looking at the toy example in Fig. 2, we see that even with an exact match, there are five genre-s lurking in the vicinity at X 4 (there are several more in reality), where the one connected to America is the intended fit. We thus define the relevance of a node n to the current context Xt as the total graph distance dK (in number of hops in K) of n to the nodes in Xt . Note that we are interested in the relevance score being directly proportional to this quantity, and hence consider proximity, the reciprocal of distance (1/dK ), as the measure instead. For the aggregation over nodes x in Xt , we prefer \u00cd(1/dK(n,x)) over 1/\u00cd(dK(n,x)) as the latter is more sensitive to outliers. Next, not all nodes in Xt are valuable for the answering process. For anchoring a conversation C in a KG, entities that have appeared in questions or as answers in turns 0, . . . , (t \u22121), are what specifically Figure 2: An illustration of the ambiguity in frontier node selection for a specific question word (\u201cgenre\u201d in q4 = Genre of this band\u2019s music?), and how effective scoring can potentially pick the best candidate in a noisy context graph X 4. matter. Thus, it suffices to consider only \u00d0t\u22121 j=0 E(qj) and \u00d0t\u22121 j=0 aj for computing the above proximities. We encode this factor using an indicator function 1QA(x|x \u2208Xt ) that equals 1 if x \u2208\u00d0t\u22121 j=0[E(qj)\u222a aj], and zero otherwise. Contributions of such Q/A nodes x \u2208Xt (E(qj) or aj) should be weighted according to the turn in which they appeared in their respective roles, denoted by turn(x)). This is when such nodes had the \u201cspotlight\u201d, in a sense; so recent turns have higher weights than older ones. In addition, since the entity in the first question E(q0) may always be important as the theme of the conversation (The Last Unicorn), turn(E(q0)) is set to the maximum value (t \u22121) instead of zero. We thus define the context proximity score for neighbor n, normalized by the number of Q/A nodes in the context, as: prox(n,Xt |n \u2208N(Xt )) = \u00cd x \u2208X t turn(x) \u00b7 1QA(x) \u00b7 1/dK(n,x) \u00cd x \u2208X t 1QA(x) (3) KG priors. Finally, KG nodes have inherent salience (or prominence) values that reflect their likelihoods of being queried about in users\u2019 questions. For example, Harry Potter has higher salience as opposed to some obscure book like Harry Black, and the same can be said to hold about the author predicate compared to has edition for books. Ideally, such salience should be quantified using large-scale query logs from real users that commercial Web search engines possess. In absence of such resources, we use a more intrisic proxy for salience: the frequency of the concept in the KG. The raw frequency is normalized by corresponding maximum values for entities, predicates, classes, and literals, to give f reqnorm. Thus, we have the KG prior for a node n as: prior(n,K) = f reqnorm(n,K) (4) Aggregation using Fagin\u2019s Threshold Algorithm. 
We now have three independent signals from the question, context, and the KG regarding the likelihood of a node being a frontier at a given turn. We use the Fagin\u2019s Threshold Algorithm (FTA) [10] to aggregate items in the three sorted lists L = {Lmatch, Lprox, Lprior }, that are created when candidates are scored by each of these signals. FTA is chosen as it is an optimal algorithm with correctness and performance guarantees for rank aggregation. In FTA, we perform 4 \fAlgorithm 1: Convex (K,T,q0,a0, E(q0), \u27e8q1, . . .qT \u27e9,r) t = 1; initialize X 1 = E(q0) \u222aa0 \u222aK(E(q0),a0); while t \u2264T do for n \u2208N(Xt ) do compute match(n,qt ) [Eq. 2]; compute prox(n,Xt ) [Eq. 3]; compute prior(n,K) [Eq. 4]; insert scores into sorted lists L = {Lmatch, Lprox, Lprior }; end find {Ft i }i=1 r = Fagin\u2019s-Threshold-Algorithm(L,r) [Eq. 5]; assign E(qt ) = E(Ft ); expand Xt + = Xt \u222af acts(Ft ); for a \u2208Xt + do compute ans\u2212score(a) [Eq. 6]; end find at = arg maxX t + ans\u2212score(a); Xt+1 \u2190Xt +; t \u2190t + 1; end Result: \u27e8a1, . . . aT \u27e9 sorted access in parallel to each of the three sorted lists Li. As each candidate frontier node n is seen under sorted access, we retrieve each of its individual scores by random access. We then compute a frontier score for n as a simple linear combination of the outputs from Eqs. 2, 3, and 4 using frontier hyperparameters hF 1 , hF 2 , and hF 3 , where \u00cd i hF i = 1: f rontier(n,qt |n \u2208N(Xt )) = hF 1 \u00b7 match(n,qt ) + hF 2 \u00b7 prox(n,Xt ) + hF 3 \u00b7 prior(n,K) (5) In general, FTA requires a monotonic score aggregation function f rontier() such that f rontier(s1,s2,s3) \u2264f rontier(s\u2032 1,s\u2032 2,s\u2032 3) whenever si \u2264s\u2032 i, \u2200i, where the component scores of f rontier(n) are denoted as si\u2019s (Eq. 5) in corresponding lists Li. Once the above is done, as nodes are accessed from Li, if this is one of the top r answers so far, we remember it. Here, we assume a buffer of bounded size. For each Li, let \u02c6 si be the score of the last node seen under sorted access. We define the threshold value thresh to be f rontier( \u02c6 s1, \u02c6 s2, \u02c6 s3). When r nodes have been seen whose frontier score is at least thresh, then we stop and return the top r nodes as the final frontiers. Thus, at the end of this step, we have a set of r frontier nodes {Ft i }r i=1 for turn t. If any of these frontiers are entities, they are used to populate E(qt ). We add the triples (along with qualifiers if any) that connect the Ft i to the current context Xt to produce the expanded graph Xt + (the step containing f acts(Ft ) in Algorithm 1). 3.2 Answer ranking Our task now is to look for answers at in Xt +. Since frontiers are the most relevant nodes in Xt + w.r.t question qt , it is expected that at will appear in their close proximity. Attribute Value Title Generate question-answer conversations on popular entities Description Generate conversations in different domains (books, movies, music, soccer, and TV series) on popular entities of your choice. You need to ask natural questions and provide the corresponding answers via Web search. Total participants 70 Time allotted per HIT 6 hours Time taken per HIT 3 hours Payment per HIT 25 Euros Table 2: Basic details of the AMT HIT (five conversations). 
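Before turning to answer ranking, the frontier scoring described above (Eqs. 1-5) can be sketched as follows. The embedding lookup and label function are stand-ins for word2vec/spaCy vectors and KG labels, and the hyperparameter defaults echo the tuned ranges reported in the experiments; for brevity this version scores all candidates with a plain scan and sorts, whereas the paper aggregates three sorted lists with Fagin's Threshold Algorithm for efficiency.

```python
# Sketch of frontier scoring (Eqs. 1-5). emb() and label() are assumed stand-ins for a
# word2vec lookup and the KG label function. The plain scan-and-sort below returns the
# same top-r nodes that the paper obtains via Fagin's Threshold Algorithm.
import numpy as np
import networkx as nx

def cos_norm(u, v):
    """Cosine similarity rescaled from [-1, 1] to [0, 1]."""
    c = float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9))
    return (c + 1.0) / 2.0

def match_score(node, question_words, emb, label):
    """Eq. 2: best word-wise similarity between the node label and the question words."""
    n_vec = emb(label(node))
    return max(cos_norm(n_vec, emb(w)) for w in question_words)

def prox_score(node, kg, qa_turn):
    """Eq. 3: turn-weighted proximity of a candidate to previous Q/A entities."""
    num, den = 0.0, 0.0
    for x, turn in qa_turn.items():
        if nx.has_path(kg, node, x):
            num += turn * (1.0 / max(nx.shortest_path_length(kg, node, x), 1))
        den += 1.0
    return num / den if den else 0.0

def prior_score(node, degree_cache, max_degree):
    """Eq. 4: KG prior, here approximated by normalized node frequency (degree)."""
    return degree_cache[node] / max_degree

def top_r_frontiers(candidates, question_words, kg, qa_turn, emb, label,
                    degree_cache, max_degree, r=3, hF=(0.55, 0.35, 0.10)):
    """Eq. 5: linear combination of the three signals; return the r best-scoring nodes."""
    scored = []
    for n in candidates:
        s = (hF[0] * match_score(n, question_words, emb, label)
             + hF[1] * prox_score(n, kg, qa_turn)
             + hF[2] * prior_score(n, degree_cache, max_degree))
        scored.append((s, n))
    return sorted(scored, reverse=True)[:r]
```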
However, labels of frontier nodes only reflect what was explicit in qt , the unspecified or implicit part of the context in qt usually refers to a previous question or answer entity (\u00d0t\u22121 j=0[E(qj) \u222aaj]). Thus, we should consider closeness to these context entities as well. Note that just as before, frontiers and Q/A entities both come with corresponding weights: frontier scores and turn id\u2019s, respectively. Thus, while considering proximities is key here, using weighted versions is a more informed choice. We thus score every node a \u2208Xt + by its weighted proximity, using Eqs. 3 and 5, as follows (again, we invert distance to use a measure directly proportional to the candidacy of an answer node): at = arg max a\u2208X t + [hA 1 \u00b7 \u00cdr i=1[f rontier(Ft i ) \u00b7 (1/dK(a, Ft i ))] r + hA 2 \u00b7 \u00cd x \u2208X t turn(x) \u00b7 1QA(x) \u00b7 1/dK(a,x) \u00cd x \u2208X t 1QA(x) ] (6) Contributions by proximities to frontier and Q/A nodes (each normalized appropriately) are again combined linearly with answer hyperparameters hA 1 and hA 2 , where hA 1 + hA 2 = 1. Thus, the final answer score also lies in [0, 1]. Finally, the top scoring at (possibly multiple, in case of ties) node(s) is returned as the answer to qt . The Convex method is outlined in Algorithm 1. As mentioned before, a0 and E(q0) are obtained by passing q0 through a standalone KG-QA system, and a NERD algorithm, respectively. K(\u00b7) returns all KG triples that contain the arguments of this function, and the generalized E(\u00b7) returns the set of entities from its arguments. Note that this algorithm illustrates the workings of Convex in a static setting when all qt (t = 0 . . .T) are given upfront; in a real setting, each qt is issued interactively with a user in the loop. 4 THE CONVQUESTIONS BENCHMARK 4.1 Benchmark creation Limitations of current choices. Popular benchmarks for KGQA like WebQuestions [5], SimpleQuestions [6], WikiMovies [24], ComplexWebQuestions [34], and ComQA [3], are all designed for one-shot answering with well-formulated questions. The CSQA dataset [30] takes preliminary steps towards the sequential KG-QA paradigm, but it is extremely artificial: initial and follow-up questions are generated semi-automatically via templates, and sequential utterances are only simulated by stitching questions with shared entities or relations in a thread, without a logical flow. QBLink [9], 5 \fTurn Books Movies Soccer Music TV series q0 When was the first book of the book series The Dwarves published? Who played the joker in The Dark Knight? Which European team did Diego Costa represent in the year 2018? Led Zeppelin had how many band members? Who is the actor of James Gordon in Gotham? a0 2003 Heath Ledger Atl\u00e9tico Madrid 4 Ben McKenzie q1 What is the name of the second book? When did he die? Did they win the Super Cup the previous year? Which was released first: Houses of the Holy or Physical Graffiti? What about Bullock? a1 The War of the Dwarves 22 January 2008 No Houses of the Holy Donal Logue q2 Who is the author? Batman actor? Which club was the winner? Is the rain song and immigrant song there? Creator? a2 Markus Heitz Christian Bale Real Madrid C.F. No Bruno Heller q3 In which city was he born? Director? Which English club did Costa play for before returning to Atl\u00e9tico Madrid? Who wrote those songs? Married to in 2017? a3 Homburg Christopher Nolan Chelsea F.C. Jimmy Page Miranda Cowley q4 When was he born? Sequel name? 
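The answer scoring of Eq. 6, combining weighted proximity to the frontier nodes and to the turn-weighted Q/A entities, might look roughly as below; the names and the hyperparameter defaults are ours, chosen near the tuned values reported later for hA.

```python
# Sketch of answer ranking (Eq. 6): every node in the expanded context X^t_+ is scored by
# its proximity to the score-weighted frontier nodes and to the turn-weighted Q/A entities.
import networkx as nx

def answer_score(a, expanded_context, frontiers, qa_turn, hA=(0.85, 0.15)):
    """frontiers: list of (frontier_score, node); qa_turn: node -> turn weight."""
    # proximity to frontier nodes, weighted by their frontier scores
    f_part = 0.0
    for f_score, f_node in frontiers:
        if a != f_node and nx.has_path(expanded_context, a, f_node):
            d = nx.shortest_path_length(expanded_context, a, f_node)
            f_part += f_score * (1.0 / max(d, 1))
    f_part /= max(len(frontiers), 1)

    # proximity to previous question/answer entities, weighted by the turn they appeared in
    num, den = 0.0, 0.0
    for x, turn in qa_turn.items():
        if x in expanded_context and a != x and nx.has_path(expanded_context, a, x):
            num += turn * (1.0 / max(nx.shortest_path_length(expanded_context, a, x), 1))
        den += 1.0
    qa_part = num / den if den else 0.0

    return hA[0] * f_part + hA[1] * qa_part

def best_answer(expanded_context, frontiers, qa_turn):
    """Return the top-scoring candidate node(s) in X^t_+ as the answer a^t (ties kept)."""
    scores = {a: answer_score(a, expanded_context, frontiers, qa_turn)
              for a in expanded_context.nodes}
    top = max(scores.values())
    return [a for a, s in scores.items() if s == top]
```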
Which stadium is this club\u2019s home ground? Name of his previous band? Wedding date first wife? a4 10 October 1971 The Dark Knight Rises Stamford Bridge The Yardbirds 19 June 1993 Table 3: Representative conversations in ConvQuestions from each domain, highlighting the stiff challenges they pose. CoQA [27], ans ShARC [29] are recent resources for sequential QA over text. The SQA resource [16], derived from WikiTableQuestions [25], is aimed at driving conversational QA over (relatively small) Web tables. Conceptual challenges. In light of such limitations, we overcome several conceptual challenges to build the first realistic benchmark for conversational KG-QA, anchored in Wikidata. The key questions included, among others: Should we choose q0 from existing benchmarks and ask humans to create only follow-ups? Should the answers already come from some KG-QA system, observing which, users create follow-ups? Should we allocate templates to crowdworkers to systematically generate questions that miss either entities, predicates, and types? Can we interleave questions by different workers to create a large number of conversations? Can we permute the order of follow-ups to generate an even larger volume? If there are multiple correct at , and qt+1 in the benchmark involves a different at than what the system returns at run-time, how can we evaluate such a dynamic workflow? How can we built a KG-QA resource that is faithful to the setup but is not overly limited to the information the KG contains today? Creating ConvQuestions. With insights from a meticulous in-house pilot study with ten students over two weeks, we posed the conversation generation task on Amazon Mechanical Turk (AMT) in the most natural setup: Each crowdworker was asked to build a conversation by asking five sequential questions starting from any seed entity of his/her choice, as this is an intuitive mental model that humans may have when satisfying their real information needs via their search assistants. A system-in-the-loop is hardly ideal: this creates comparison across methods challenging, is limited by the shortcomings of the chosen system, and most crucially, there exist no such systems today with satisfactory performance. In a single AMT Human Intelligence Task (HIT), Turkers had to create one conversation each from five domains: \u201cBooks\u201d, \u201cMovies\u201d, \u201cSoccer\u201d, \u201cMusic\u201d, and \u201cTV Series\u201d (other potential choices were \u201cPolitics\u201d, but we found that it quickly becomes subjective, and \u201cFinance\u201d, but that is best handled by relational databases and not curated KGs). Each conversation was to have five turns, including q0. To keep conversations as natural as possible, we neither interleaved questions from multiple Turkers, nor permuted orders of questions within a conversation. For quality, only AMT Master Workers (who have consistently high performances: see https://www.mturk.com/ help#what_are_masters), were allowed to participate. We registered 70 participants, and this resulted in 350 initial conversations, 70 from each domain. Along with questions, Turkers were asked to provide textual surface forms and Wikidata links of the seed entities and the answers (via Web search), along with paraphrases of each question. The paraphrases provided us with two versions of the same question, and hence a means of augmenting the core data with several interesting variations that can simultaneuosly boost and test the robustness of KG-QA systems [8]. 
Since paraphrases of questions (any qt ) are always semantically equivalent and interchangeable, each conversation with five turns thus resulted in 25 = 32 distinct conversations (note that this does not entail shuffling sequences of the utterances). Thereby, in total, we obtained 350 \u00d7 32 = 11, 200 such conversations, that we release with this paper. If the answers were dates or literals like measurable quantities with units, Turkers were asked to follow the Wikidata formats for the same. They were provided with minimal syntactic guidelines to remain natural in their questions. They were shown judiciously selected examples so as not to ask opinionated questions (like best film by this actor?), or other non-factoid questions (causal, procedural, etc.). The authors invested substantial manual effort for quality control and spam prevention, by verifying both answers of random utterances, and alignments between provided texts and Wikidata URLs. Each question was allowed to have at most three answers, but single-answer questions were encouraged to preclude the possibility of non-deterministic workflows during evaluation. To allow for ConvQuestions being relevant for a few years into the future, we encouraged users to ask complex questions involving 6 \fjoins, comparisons, aggregations, temporal information needs, and so on. Given the complexity arising from incomplete cues, these additional facets pose an even greater challenge for future KG-QA systems. So as not to restrict questions to only those predicates that are present in Wikidata today, relations connecting question and answer entities are sometimes missing in the KG but can be located in sources like Wikipedia, allowing scope for both future growth of the KG, and experimentation with text plus KG combinations. 4.2 Benchmark analysis Basic details of our AMT HIT are provided in Table 2 for reference. Question entities and expected answers had a balanced distribution among human (actors, authors, artists) and non-human types (books, movies, stadiums). Detailed distributions are omitted due to lack of space. Illustrative examples of challenging questions from ConvQuestions are in Table 3. We see manifestations of: incomplete cues (TV Series; q3), ordinal questions (Books; q1), comparatives (Music; q1), indirections (Soccer; q4), anaphora (Music; q3), existentials (Soccer;q2), temporal reasoning (Soccer;q3), among other challenges. The average lengths of the first and follow-up questions were 9.07 and 6.20 words, respectively. Finally, we present the key quantifier for the difficulty in our benchmark: the average KG distance of answers from the original seed entity is 2.30 hops, while the highest goes up to as high as five KG hops. Thus, an approach that remains fixated on a specific entity is doomed to fail: context expansion is the key to success on ConvQuestions. 5 EXPERIMENTAL SETUP 5.1 Baselines and Metrics Stand-alone systems. We use the state-of-the-art system QAnswer [7], and also Platypus [35], as our stand-alone KG-QA systems, that serve as baselines, and which we enhance with Convex. At the time of writing (May 2019), these are the only two systems that have running prototypes over Wikidata. To make Convex a self-sufficient system, we also implement a na\u00efve version of answering the first question as follows. Entities are detected in q0 using the TagMe NERD system [11], and mapped to their Wikidata IDs via Wikipedia links provided by TagMe. 
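The paraphrase-based expansion (2^5 = 32 variants per seed conversation, 350 x 32 = 11,200 in total) is easy to reproduce; a short sketch, assuming each turn comes with exactly one paraphrase:

```python
# Sketch of the paraphrase expansion: with an original and a paraphrase for each of the
# five turns, every seed conversation yields 2^5 = 32 variants, and 350 seeds give 11,200.
from itertools import product

def expand(conversation):
    """conversation: list of five (question, paraphrase, answer) triples."""
    variants = []
    for choice in product((0, 1), repeat=len(conversation)):   # 2^5 = 32 combinations
        variants.append([(turn[c], turn[2]) for turn, c in zip(conversation, choice)])
    return variants

seed = [(f"q{t} original", f"q{t} paraphrase", f"a{t}") for t in range(5)]
assert len(expand(seed)) == 32
print(350 * len(expand(seed)))   # 11200
```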
Embeddings were obtained by averaging word2vec vectors of the non-entity words in q0, and their cosine similarities were computed around each of the predicates around the detected entities E(q0). Finally, the best (E, P) pair was found (as a joint disambiguation), and the returned answer was the subject or object in the triple according as the triple structure was \u27e8\u00b7, P, E\u27e9or \u27e8E, P, \u00b7\u27e9. Due to the complexity in even the first question q0 in ConvQuestions, all of the above systems achieve a very poor performance for a0 on the benchmark. This limits the value that Convex can help these systems achieve, as E(q0) and a0 together initialize X 1. To decouple the effect of the original QA system, we experiment with an Oracle strategy, where we use E(q0) and a0 provided by the human annotator (Turker who created the conversation). Conversation models. As intuitive alternative strategies to Convex for handling conversations, we explore two variants: (i) the star-join, and (ii) the chain-join models. The naming is inspired by DB terminology, where a star query looks for a join on several attributes around a single variable (of the form SELECT ?x WHERE {?x att1 val1 . ?x att2 val2 . ?x att3 val3}), while a chain SQL searches for a multi-variable join via indirections (SELECT ?x WHERE {?x att1 ?y . ?y att2 ?z . ?z att3 val1}). For conversations, this entails the following: in the star model, the entity in q0 is always assumed to be the entity in all subsequent utterances (like The Last Unicorn). The best predicate is disambiguated via a search around such E(q0) using similarities of word2vec embeddings of Wikidata phrases and non-entity words in q0. The corresponding missing argument from the triple is returned as the answer. In the chain model of a conversation, the previous answer at\u22121 is always taken as the reference entity at turn t, instead of E(q0). Predicate selection and answer detection are done analogously as in the star model. No frontiers. We also investigated whether the idea of a frontier node in itself was necessary, by defining an alternative configuration where we optimize an answer-scoring objective directly. The same three signals of question matching, context proximity, and KG priors were aggregated (Eqs. 2, 3, and 4), and the Fagin\u2019s Threshold Algorithm was again applied for obtaining the top-r list. However, these top-r returned nodes are now directly the answers. The process used translates to a branch-and-bound strategy for iteratively exploring the neighborhood of the initial context (E(q0), a0, and their interconnections) as follows, without explicitly materializing a context subgraph. The 2-hop neighborhood (2-hop as we now directly score for an answer, without finding a frontier first) of each node in the context at a given turn is scored on its likelihood of being an answer, in a breadth-first manner. The first computed score defines a lower bound on the node being a potential answer, that is updated as better candidates are found. If a node\u2019s answer score is lower than the lower bound so far, it is not expanded further (its neighborhood is not explored anymore). We keep exploring the 2-hop neighborhood of the context iteratively until we do not find any node in K better than the current best answer. End-to-end neural model. We compared our results with D2A (Dialog-to-Action) [12], the state-of-the-art end-to-end neural model for conversational KG-QA. 
Since Convex is an unsupervised method that does not rely on training data, we used the D2A model pretrained on the large CSQA benchmark [30]. D2A manages dialogue memory using a generative model based on a flexible grammar. Question completion. An interesting question to ask at this stage is whether an attempt towards completing the follow-up utterances is worthwhile. While a direct adaptation of a method like [17] is infeasible due to absence of training pairs and the need for rewriting as opposed to plain completion, we investigate certain reasonable alternatives: (i) when qt is concatenated with keywords (all nouns and verbs) from q0; (ii) when qt is concatenated with a0; (iii) when qt is concatenated with keywords from qt\u22121; and, (iv) with ai\u22121. These variants are then passed through the stand-alone KG-QA system. Fortunately, the state-of-the-art system QAnswer is totally syntax-agnostic, and searches the KG with all question cue words to formulate an optimal SPARQL query whose components best cover the mapped KG items. This syntax-independent approach was vital as it would be futile to massage the \u201ccompleted\u201d questions above into grammatically correct forms. Platypus, on the other hand, is totally dependent on an accurate dependency parse of the input utterance, and hence is unsuitable for plugging in these question completion strategies. 7 \fMetrics. Since most questions in ConvQuestions had exactly one or at most a few correct answers, we used the standard metrics of Precision at the top rank (P@1), Mean Reciprocal Rank (MRR), and Hit@5 metrics. The last measures the fraction of times a correct answer was retrieved within the top-5 positions. 5.2 Configuration Dataset. We evaluate Convex and other baselines on ConvQuestions. A random 20% of the 11k conversations was held out for tuning model parameters, and the remaining 80% was used for testing. Care was taken that this development set was generated from a separate set of seed conversations (70 out of the original 350) so as to preclude possibilities of \u201cleakage\u201d on to the test set. Initialization. We use Wikidata (www.wikidata.org) as our underlying KG, and use the complete RDF dump in NTriples format from 15 April 2019 (http://bit.ly/2QhsSDC, \u22431.3 TB uncompressed). Identifier triples like those containing predicates like Freebase ID, IMDb ID, etc. were excluded. We used indexing with HDT (www.rdfhdt.org/) that enables much faster lookups. The Python library NetworkX (https://networkx.github.io/) was used for graph processing. TagMe was used for NERD, and word2vec embeddings were obtained via the spaCy package. Stanford CoreNLP [22] was used for POS tagging to extract nouns and verbs for question completion. The ideal number of frontier nodes, r, was found to be three by tuning on the dev set. 6 RESULTS AND INSIGHTS 6.1 Key findings Table 4 lists main results, where all configurations are run on the follow-up utterances in the ConvQuestions test (8, 960 conversations; 35, 840 questions). An asterisk (*) indicates statistical significance of Convex-enabled systems over the strongest baseline in the group, under the 2-tailed paired t-test at p < 0.05 level. We make the following key observations. Convex enables stand-alone systems. The state-of-the-art QAnswer [7] scores only about 0.011 \u22120.064 (since it produces sets and not ranked lists, all metric values are identical) on its own on the incomplete utterances, which it is clearly not capable of addressing. 
When Convex is applied, its performance jumps significantly to 0.172 \u22120.264 ) (MRR) across the domains. We have exactly the same trends with the Platypus system. The naive strategy with direct entity and predicate linking performs hopelessly in absence of explicit cues, but with Convex we again see noticeable improvements, brought in by a relevant context graph and its iterative expansion. In the Oracle method, a0 is known, and hence a row by itself is not meaningful. However, contrasting Oracle+Convex with other \u201c+Convex\u201d methods, we see that there is significant room for improvement that can be achieved by answering q0 correctly. Star and chain models of conversations fall short. For every configuration, we see the across-the-board superiority of Convex-boosted methods over starand chain-models (often over 100% gains). This clearly indicates that while these are intuitive ways of modeling human conversation (as seen in the often respectable values that these achieve), they are insufficient and oversimplified. Evidently, real humans rather prefer the middle path: sometimes hovering around the initial entity, sometimes drifting in a chain of answers. A core component of Convex that we can attribute this pattern to, is the turn-based weighting of answer and context proximity that prefers entities in the first and the last turns. \u201cQAnswer + Star\u201d and \u201cPlatypus + Star\u201d achieve the same values as they both operate around the same q0 entity detected by TagMe. Convex generalizes across domains. In Table 4, we also note that the performance of Convex stretches across all five domains (even though the nature of questions in each of these domains have their own peculiarities), showing the potential of of our unsupervised approach in new domains with little training resources, or to deal with cold starts in enterprise applications. While we did tune hyperparameters individually for each domain, there were surprisingly little variation across them (hF 1 \u22430.5 \u22120.6,hF 2 \u2243 0.3 \u22120.4,hF 3 \u22430.1,hA 1 \u22430.8 \u22120.9,hA 2 \u22430.1 \u22120.2). Frontiers help. We applied our frontier-less approach over the oracle annotations for q0, and in the row marked \u201cOracle + No frontiers\u201d in Table 4, we find that this results in degraded performance. We thus claim that locating frontiers is an essential step before answer detection. The primary reason behind this is that answers only have low direct matching similarity to the question, making a 2-stage approach worthwhile. Also, exploring a 2-hop neighborhood was generally found to suffice: nodes further away from the initial context rarely manage to \u201cwin\u201d, due to the proximity score component quickly falling off as KG-hops increase. Pre-trained models do not suffice. D2A produces a single answer for every utterance, which is why the three metrics are equal. From the D2A row in Table 4, we observe that pre-trained neural models do not work well off-the-shelf on ConvQuestions (when compared to the Convex-enabled QAnswer row, for example). This is mostly due to the restrictive patterns in the CSQA dataset, owing to its semi-synthetic mode of creation. A direct comparison, though, is not fair, as Convex is an enabler method for a stand-alone KGQA system, while D2A is an end-to-end model. Nevertheless, the main classes of errors come from: (i) a predicate necessary in ConvQuestions that is absent in CSQA (D2A cannot answer temporal questions like In what year was Ender\u2019s game written? 
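The evaluation metrics used above (P@1, MRR, Hit@5) can be computed as in the following sketch, macro-averaging over questions and assuming the gold answers for each question are available as a set of Wikidata IDs:

```python
# Sketch of the evaluation metrics: P@1, MRR, and Hit@5 over a ranked answer list per question.
def p_at_1(ranked, gold):
    return 1.0 if ranked and ranked[0] in gold else 0.0

def mrr(ranked, gold):
    for i, a in enumerate(ranked, start=1):
        if a in gold:
            return 1.0 / i
    return 0.0

def hit_at_5(ranked, gold):
    return 1.0 if any(a in gold for a in ranked[:5]) else 0.0

def evaluate(runs):
    """runs: list of (ranked answer list, gold answer set) pairs; returns macro averages."""
    n = len(runs)
    return {
        "P@1":   sum(p_at_1(r, g)   for r, g in runs) / n,
        "MRR":   sum(mrr(r, g)      for r, g in runs) / n,
        "Hit@5": sum(hit_at_5(r, g) for r, g in runs) / n,
    }
```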
as such relations are absent in CSQA); (ii) D2A cannot generate 2-hop KG triple patterns; (iii) D2A cannot resolve long-term co-references in questions (pronouns only come from the last turn in CSQA, but not in ConvQuestions); (iv) In CSQA, co-references are almost always indicated as \u201cit\u201d or \u201cthat one\u201d. But since ConvQuestions is completely user-generated, we have more challenging cases with \u201cthis book\u201d, \u201cthe author\u201d, \u201cthat year\u201d, and so on. Convex outperforms question completion methods. Comparison with question completion methods are presented in Table 5. Clear trends show that while these strategies generally perform better than stand-alone systems (contrasting QAnswer with Table 4, for, say, Movies, we see 0.050 \u22120.109 vs. 0.032 on MRR previously), use of Convex results in higher improvement (0.264 MRR on Movies). This implies that question completion is hardly worthwhile in this setup when the KG structure already reveals a great deal about the underlying user intents left implicit in follow-up utterances. 6.2 Analysis Convex maintains its performance over turns. One of the most promising results of this zoomed-in analysis is that the MRR 8 \fDomain Movies TV Series Music Books Soccer Method P@1 MRR Hit@5 P@1 MRR Hit@5 P@1 MRR Hit@5 P@1 MRR Hit@5 P@1 MRR Hit@5 QAnswer [7] 0.032 0.032 0.032 0.064 0.064 0.064 0.020 0.020 0.020 0.011 0.011 0.011 0.020 0.020 0.020 QAnswer + Convex 0.222* 0.264* 0.311* 0.136* 0.172* 0.214* 0.168 0.197* 0.232* 0.177 0.213* 0.252* 0.179* 0.221* 0.265* QAnswer + Star 0.201 0.201 0.201 0.132 0.132 0.132 0.183 0.183 0.183 0.199 0.199 0.199 0.170 0.170 0.170 QAnswer + Chain 0.077 0.077 0.077 0.028 0.028 0.028 0.056 0.056 0.056 0.034 0.034 0.034 0.044 0.044 0.044 Platypus [35] 0.000 0.000 0.000 0.000 0.000 0.000 0.005 0.005 0.005 0.002 0.002 0.002 0.004 0.004 0.004 Platypus + Convex 0.218* 0.255* 0.295* 0.124 0.153* 0.189* 0.167 0.197* 0.233* 0.180 0.216* 0.256* 0.179* 0.222* 0.269* Platypus + Star 0.201 0.201 0.201 0.132 0.132 0.132 0.183 0.183 0.183 0.199 0.199 0.199 0.170 0.170 0.170 Platypus + Chain 0.047 0.047 0.047 0.000 0.000 0.000 0.028 0.028 0.028 0.028 0.028 0.028 0.015 0.015 0.015 Naive 0.016 0.016 0.016 0.020 0.020 0.020 0.021 0.021 0.021 0.007 0.007 0.007 0.016 0.016 0.016 Naive + Convex 0.212* 0.252* 0.296* 0.121 0.149* 0.185* 0.164 0.194* 0.229* 0.176 0.210* 0.248* 0.161* 0.201* 0.245* Naive + Star 0.205 0.205 0.205 0.129 0.129 0.129 0.185 0.185 0.185 0.205 0.205 0.205 0.154 0.154 0.154 Naive + Chain 0.059 0.059 0.059 0.014 0.014 0.014 0.039 0.039 0.039 0.051 0.051 0.051 0.031 0.031 0.031 Oracle + Convex 0.259* 0.305* 0.355* 0.178 0.218* 0.269* 0.190 0.237 0.293* 0.198 0.246* 0.303* 0.188* 0.234* 0.284* Oracle + Star 0.257 0.257 0.257 0.194 0.194 0.194 0.241 0.241 0.241 0.241 0.241 0.241 0.179 0.179 0.179 Oracle + Chain 0.094 0.094 0.094 0.031 0.031 0.031 0.040 0.040 0.040 0.053 0.053 0.053 0.016 0.016 0.016 Oracle + No frontiers 0.124 0.153 0.191 0.073 0.094 0.125 0.116 0.144 0.185 0.103 0.137 0.199 0.087 0.122 0.166 D2A [12] 0.090 0.090 0.090 0.067 0.067 0.067 0.072 0.072 0.072 0.121 0.121 0.121 0.107 0.107 0.107 The highest value in a group (metric-domain-system triple) is in bold. QAnswer and Platypus return only a top-1 answer and not ranked lists, and hence have the same P@1, MRR, and Hit@5 values. Table 4: Our main results on follow-up utterances in ConvQuestions showing how Convex enables KG-QA enables for conversations, and its comparison with baselines. 
Method Movies TV Music Books Soccer QAnswer + Convex 0.264* 0.172* 0.197* 0.213* 0.221* QAnswer + q0 keywords 0.071 0.052 0.084 0.039 0.075 QAnswer + a0 0.077 0.054 0.048 0.096 0.045 QAnswer + qi\u22121 keywords 0.050 0.045 0.045 0.025 0.046 QAnswer + ai\u22121 0.109 0.079 0.093 0.064 0.070 Table 5: Comparison with question completion strategies (MRR). The highest value in a column is in bold. Turn Movies TV Music Books Soccer 1 0.375 0.393 0.080 0.446 0.357 2 0.375 0.250 0.214 0.281 0.188 3 0.161 0.205 0.124 0.435 0.304 4 0.325 0.214 0.044 0.375 0.137 Table 6: Performance of Convex over turns (MRR). for Convex (measured via its combination with the Oracle, to decouple the effect of the QA system) does not diminish over turns. This shows particular robustness of our graph-based method: while we may produce several wrong results during the session of the conversation, we are not bogged down by any single mistake, as the context graph retains several scored candidates within itself, guarding against \u201cnear misses\u201d. This is in stark contrast to the chain model, where it is exclusively dependent on at\u22121. Error analysis. Convex has two main steps in its pipeline: context expansion, and answer ranking. Analogously, there are two main cases of error: when the answer is not pulled in when Xt is expanded at the frontiers (incorrect frontier scoring), or when the answer is there in Xt + but is not ranked at the top. These numbers are shown in Table 7. We find that there is significant scope for improvement for frontier expansion, as 80 \u221290% errors lie in this bag. It is however, heartening to see that no particular turn is singularly affected. This calls for more informed frontier scoring than our current strategy. Answer ranking can be improved with Scenario Turn 1 Turn 2 Turn 3 Turn 4 Ans. not in expanded graph 87.1 79.8 89.2 89.6 Ans. in expanded graph but not in top-1 12.9 20.2 10.8 10.4 Table 7: Error analysis (percentages of total errors). Utterance: What was the name of the director? (Movies, Turn 4) Intent: Who was the director of the movie Breakfast at Tiffany\u2019s? Utterance: What about Mr Morningstar? (TV Series, Turn 2) Intent: Which actor plays the role of Mr Morningstar in the TV series Lucifer? Utterance: What record label put out the album? (Music, Turn 3) Intent: What is the name of the record label of the album Cosmic Thing? Utterance: written in country? (Books, Turn 4) Intent: In which country was the book \u201cThe Body in the Library\u201d by Agatha Christie written? Utterance: Who won the World Cup that year? (Soccer, Turn 4) Intent: Which national team won the 2010 FIFA World Cup? Table 8: Representative examples where Oracle + Convex produced the best answer at the top-1, but neither Oracle + Star, nor Oracle + Chain could. better ways of aggregating the two proximity signals. Table 8 lists anecdotal examples of success cases with Convex. 7 RELATED WORK Question answering over KGs. Starting with early approaches in 2012-\u201913 [5, 36, 38], based on parsing questions via handcoded templates and grammars, KG-QA already has a rich body of literature. While templates continued to be a strong line of work due to its focus on interpretability and generalizability [1, 2, 4, 7, 35], a parallel thread has focused on neural methods driven by performance gains [15, 20, 31]. Newer trends include shifts towards more complex questions [19, 21, 34], and fusion of knowledge graphs and text [31, 33]. 
However, none of these approaches can deal with incomplete questions in a conversational setting. Conversational question answering. Saha et al. [30] introduce the paradigm of sequential question answering over KGs, and create a large benchmark CSQA for the task, along with a baseline 9 \fwith memory networks. Guo et al. [12] propose D2A, an end-toend technique for conversational KG-QA , that introduces dialog memory management for inferring the logical form of current utterances. While our goal is rather to build a conversation enabler method, we still compare with, and outperform the CSQA-trained D2A model on ConvQuestions. Question completion approaches [17, 26, 28] target this setting by attempting to create full-fledged interrogatives from partial utterances while being independent of the answering resource, but suffer in situations without training pairs and with ad hoc styles. Nevertheless, we try to compare with this line of thought, and show that such completion may not be necessary if the underlying KG can be properly exploited. Iyyer et al. [16] initiate the direction of sequential QA over tables using dynamic neural semantic parsing trained via weakly supervised reward-guided search, and evaluate by decomposing a previous benchmark of complex questions [25] to create sequential utterances. However, such table-cell search methods cannot scale to real-world, large-scale curated KGs. QBLink [9], CoQA [27], and ShARC [29] are recent benchmarks aimed at driving conversational QA over text, and the allied paradigm in text comprehension on interactive QA [18]. Hixon et al. [13] try to learn concept knowledge graphs from conversational dialogues over science questions, but such KGs are fundamentally different from curated ones like Wikidata with millions of facts. 8" + } + ] + }, + "edge_feat": {} + } +} \ No newline at end of file