diff --git "a/abs_29K_G/test_abstract_long_2405.02816v1.json" "b/abs_29K_G/test_abstract_long_2405.02816v1.json" new file mode 100644--- /dev/null +++ "b/abs_29K_G/test_abstract_long_2405.02816v1.json" @@ -0,0 +1,1283 @@ +{ + "url": "http://arxiv.org/abs/2405.02816v1", + "title": "Stochastic RAG: End-to-End Retrieval-Augmented Generation through Expected Utility Maximization", + "abstract": "This paper introduces Stochastic RAG--a novel approach for end-to-end\noptimization of retrieval-augmented generation (RAG) models that relaxes the\nsimplifying assumptions of marginalization and document independence, made in\nmost prior work. Stochastic RAG casts the retrieval process in RAG as a\nstochastic sampling without replacement process. Through this formulation, we\nemploy straight-through Gumbel-top-k that provides a differentiable\napproximation for sampling without replacement and enables effective end-to-end\noptimization for RAG. We conduct extensive experiments on seven diverse\ndatasets on a wide range of tasks, from open-domain question answering to fact\nverification to slot-filling for relation extraction and to dialogue systems.\nBy applying this optimization method to a recent and effective RAG model, we\nadvance state-of-the-art results on six out of seven datasets.", + "authors": "Hamed Zamani, Michael Bendersky", + "published": "2024-05-05", + "updated": "2024-05-05", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.IR", + "cs.LG" + ], + "label": "Original Paper", + "paper_cat": "Retrieval AND Augmented AND Generation AND RAG", + "gt": "This paper introduces Stochastic RAG--a novel approach for end-to-end\noptimization of retrieval-augmented generation (RAG) models that relaxes the\nsimplifying assumptions of marginalization and document independence, made in\nmost prior work. Stochastic RAG casts the retrieval process in RAG as a\nstochastic sampling without replacement process. Through this formulation, we\nemploy straight-through Gumbel-top-k that provides a differentiable\napproximation for sampling without replacement and enables effective end-to-end\noptimization for RAG. We conduct extensive experiments on seven diverse\ndatasets on a wide range of tasks, from open-domain question answering to fact\nverification to slot-filling for relation extraction and to dialogue systems.\nBy applying this optimization method to a recent and effective RAG model, we\nadvance state-of-the-art results on six out of seven datasets.", + "main_content": "INTRODUCTION Most machine learning systems, including large generative models, are self-contained systems, with both knowledge and reasoning encoded in model parameters. However, these models do not work effectively for tasks that require knowledge grounding [46], especially in case of non-stationary data where new information is actively being produced [47, 52]. As suggested by Zamani et al. [52], this issue can be addressed when machine learning systems Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s). SIGIR \u201924, July 14\u201318, 2024, Washington, DC, USA \u00a9 2024 Copyright held by the owner/author(s). ACM ISBN 979-8-4007-0431-4/24/07. 
https://doi.org/10.1145/3626772.3657923 are being enhanced with the capability of retrieving stored content. For example, in retrieval-augmented generation (RAG), as a special case of retrieval-enhanced machine learning (REML) [52], systems consume the responses provided by one or more retrieval models for the purpose of (text) generation [21, 22]. RAG models demonstrate substantial promise across various applications, including open-domain question answering [16, 21, 53], fact verification [44], dialogue systems [5, 42, 48], and personalized generation [36, 37]. Many prior studies on RAG use off-the-shelf retrieval models. For instance, Nakano et al. [25] used APIs from a commercial search engine for text generation. Glass et al. [9], on the other hand, used a term matching retrieval model. Neural ranking models trained based on human annotated data have also been used in the literature [12, 21]. There also exist methods that only optimize the retrieval model and keep the language model parameters frozen [40]. A research direction in this area argues that optimizing retrieval models in RAG should depend on the downstream language model that consumes the retrieval results. This is also motivated by the findings presented by Salemi and Zamani [38] on evaluating retrieval quality in RAG systems. There exist solutions based on knowledge distillation [13] or end-to-end optimization based on some simplifying assumptions [35]. One of these assumptions is marginalization via top \ud835\udc58approximation [10, 21]. In more details, they first retrieve the top \ud835\udc58documents using off-the-shelf retrieval models, e.g., BM25 [34], and optimize retrieval models by re-scoring them, i.e., re-ranking, and feeding the documents to the downstream language model one-by-one independently [21]. This is far from reality as RAG models often consume multiple documents. This paper introduces Expected Utility Maximization for RAG\u2013a novel framework for end-to-end RAG optimization by relaxing these simplifying assumptions. This approach takes a utility function, which can be any arbitrary evaluation metric for the downstream generation task, such as exact match, BLEU [26], and ROUGE [23]. A major challenge in end-to-end optimization of RAG systems is that ranking and top \ud835\udc58selection is a non-differentiable process. Hence, this prevents us from using gradient descent-based methods for optimization. We address this issue by casting retrieval as a sampling without replacement process from the retrieval score distribution, which is approximated using the straight-through Gumbel-top-k approach. This stochastic approach\u2014called Stochastic RAG\u2014adds a Gumbel noise to the unnormalized retrieval scores and uses softmax to approximate argmax [17, 18]. Stochastic RAG can be applied to any RAG application. We evaluate our models using seven datasets from a wide range of applications, ranging from open-domain question answering to fact verification to slot-filling for relation extraction as well as dialogue systems. We apply our optimization method to FiD-Light [12], which arXiv:2405.02816v1 [cs.CL] 5 May 2024 \fSIGIR \u201924, July 14\u201318, 2024, Washington, DC, USA Hamed Zamani and Michael Bendersky is the best performing system on six out of these seven datasets, according to the knowledge-intensive language tasks (KILT) leaderboard as of Feb. 1, 2024.1 Our results demonstrate significant improvements on all these datasets. 
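Before the formal treatment in the next section, the following is a minimal sketch (not the authors' implementation) of the straight-through Gumbel-top-k operation described above, assuming PyTorch; the function name, the temperature argument tau, and the clamping constants are illustrative choices.

```python
import torch

def straight_through_gumbel_topk(scores, k, tau=1.0):
    """Differentiable approximation of sampling k documents without replacement.

    scores: unnormalized retrieval scores for all candidate documents, shape [num_docs].
    Returns a {0,1} selection vector whose gradient flows through the softmax relaxation.
    """
    # Gumbel(0, 1) noise via inverse transform sampling: -log(-log(U)), U ~ Uniform(0, 1).
    u = torch.rand_like(scores).clamp(1e-10, 1.0 - 1e-10)
    gumbel = -torch.log(-torch.log(u))
    perturbed = (scores + gumbel) / tau

    # Soft relaxation used for gradients in the backward pass.
    soft = torch.softmax(perturbed, dim=-1)

    # Hard top-k selection used in the forward pass.
    topk_idx = torch.topk(perturbed, k, dim=-1).indices
    hard = torch.zeros_like(soft).scatter(-1, topk_idx, 1.0)

    # Straight-through estimator: forward pass sees `hard`, backward pass sees `soft`.
    return hard + soft - soft.detach()
```

In a full training loop, the selected documents would be fed to the downstream generator, and gradients from the generation loss would reach the retrieval scores through the soft relaxation.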
2 EXPECTED UTILITY MAXIMIZATION FOR STOCHASTIC RAG Each RAG system consists of two main components: a text generation model \ud835\udc3a\ud835\udf03parameterized by \ud835\udf03and a retrieval model \ud835\udc45\ud835\udf19 parameterized by \ud835\udf19that retrieves documents from a large document collection\ud835\udc36. The text generation model consumes the retrieval results returned by the retrieval model. End-to-end optimization of RAG systems is challenging. This is mainly because retrieving top \ud835\udc58documents and feeding them to the generation model is not a differentiable process [52], thus one cannot simply employ gradientbased optimization algorithms for end-to-end optimization of these models. In this section, we introduce stochastic expected utility maximization for end-to-end optimization of retrieval-augmented models. Let \ud835\udc47= {(\ud835\udc651,\ud835\udc661), (\ud835\udc652,\ud835\udc662), \u00b7 \u00b7 \u00b7 , (\ud835\udc65\ud835\udc5b,\ud835\udc66\ud835\udc5b)} be a training set containing \ud835\udc5bpairs of \ud835\udc65\ud835\udc56(an input text) and \ud835\udc66\ud835\udc56(the ground truth output text). Let\ud835\udc48denote a utility function that takes the output generated by the RAG system \u02c6 \ud835\udc66and the ground truth output \ud835\udc66and generates a scalar value. The utility function can be any arbitrary metric, including but is not limited to, exact match, term overlap F1, BLEU, and ROUGE. We assume (1) the higher the utility value, the better, (2) the utility function is bounded within the [0, 1] range, and (3) \ud835\udc48(\ud835\udc66,\ud835\udc66) = 1. We define RAG Expected Utility as follows: RAG Expected Utility = 1 \ud835\udc5b \u2211\ufe01 (\ud835\udc65,\ud835\udc66)\u2208\ud835\udc47 \u2211\ufe01 \u02c6 \ud835\udc66\u2208Y \ud835\udc48(\ud835\udc66, \u02c6 \ud835\udc66)\ud835\udc5d( \u02c6 \ud835\udc66|\ud835\udc65;\ud835\udc3a\ud835\udf03, \ud835\udc45\ud835\udf19) (1) where Y the output space, i.e., all possible output texts. In some models, the output space is limited, for instance in fact verification, the output space is often binary: the given candidate fact is often true or false. In other situations, such as free-form text generation, the output space is unlimited. To make sure that expected utility calculation is tractable, we would need to approximate the above equation by sampling from the unlimited space Y. We will explain how such samples can be obtained at the end of this section. The probability of generating any given output \u02c6 \ud835\udc66in a RAG system can be modeled as: \ud835\udc5d( \u02c6 \ud835\udc66|\ud835\udc65;\ud835\udc3a\ud835\udf03, \ud835\udc45\ud835\udf19) = \u2211\ufe01 d\u2208\ud835\udf0b\ud835\udc58(\ud835\udc36) \ud835\udc5d( \u02c6 \ud835\udc66, d|\ud835\udc65;\ud835\udc3a\ud835\udf03, \ud835\udc45\ud835\udf19) = \u2211\ufe01 d\u2208\ud835\udf0b\ud835\udc58(\ud835\udc36) \ud835\udc5d( \u02c6 \ud835\udc66|\ud835\udc65, d;\ud835\udc3a\ud835\udf03)\ud835\udc5d(d|\ud835\udc65;\ud835\udc3a\ud835\udf03, \ud835\udc45\ud835\udf19) = \u2211\ufe01 d\u2208\ud835\udf0b\ud835\udc58(\ud835\udc36) \ud835\udc5d( \u02c6 \ud835\udc66|\ud835\udc65, d;\ud835\udc3a\ud835\udf03)\ud835\udc5d(d|\ud835\udc65;\ud835\udc45\ud835\udf19) (2) where \ud835\udf0b\ud835\udc58(\ud835\udc36) denotes all permutations of \ud835\udc58documents being selected from the retrieval collection \ud835\udc36. 
The first step in the above equation is obtained using the law of total probability, the second step is obtained using the chain rule, and the third step is obtained due to the fact that the probability of a result list d being retrieved is independent of the text generation model \ud835\udc3a\ud835\udf03. 1https://eval.ai/web/challenges/challenge-page/689/leaderboard. Note that considering all permutations in \ud835\udf0b\ud835\udc58(\ud835\udc36) is expensive and impractical for large collections, thus we can compute an approximation of this equation. We do such approximation through a stochastic process. We rewrite Equation (2) as follows: \ud835\udc5d( \u02c6 \ud835\udc66|\ud835\udc65;\ud835\udc3a\ud835\udf03, \ud835\udc45\ud835\udf19) = Ed\u223c\ud835\udc5d(d|\ud835\udc65;\ud835\udc45\ud835\udf19) [\ud835\udc5d( \u02c6 \ud835\udc66|\ud835\udc65, d;\ud835\udc3a\ud835\udf03)] (3) where |d| = \ud835\udc58. Inspired by the seq2seq models [43], we compute \ud835\udc5d( \u02c6 \ud835\udc66|\ud835\udc65, d;\ud835\udc3a\ud835\udf03)\u2014the component in Equation (2)\u2014as follows: \ud835\udc5d( \u02c6 \ud835\udc66|\ud835\udc65, d;\ud835\udc3a\ud835\udf03) = | \u02c6 \ud835\udc66| \u00d6 \ud835\udc56=1 \ud835\udc5d( \u02c6 \ud835\udc66\ud835\udc56| \u02c6 \ud835\udc66<\ud835\udc56,\ud835\udc65, d;\ud835\udc3a\ud835\udf03) = exp \u00a9 \u00ad \u00ab | \u02c6 \ud835\udc66| \u2211\ufe01 \ud835\udc56=1 log\ud835\udc5d( \u02c6 \ud835\udc66\ud835\udc56| \u02c6 \ud835\udc66<\ud835\udc56,\ud835\udc65, d;\ud835\udc3a\ud835\udf03)\u00aa \u00ae \u00ac (4) where \u02c6 \ud835\udc66\ud835\udc56denotes the \ud835\udc56th token in \u02c6 \ud835\udc66and \u02c6 \ud835\udc66<\ud835\udc56denotes all tokens \u02c6 \ud835\udc661, \u02c6 \ud835\udc662, \u00b7 \u00b7 \u00b7 , \u02c6 \ud835\udc66\ud835\udc56\u22121. The next step is to estimate \ud835\udc5d(d|\ud835\udc65;\ud835\udc45\ud835\udf19) in Equation (3), which represents the probability of retrieving the result list d in response to input \ud835\udc65using the retrieval model \ud835\udc45\ud835\udf19. Most retrieval models score each query-document pair independently and then sort them with respect to their relevance score in descending order. Therefore, the probability of a document list being produced by \ud835\udc45\ud835\udf19can be modeled as a sampling without replacement process. In other words, assume that the retrieval model \ud835\udc45\ud835\udf19produces a retrieval score \ud835\udc60\ud835\udf19 \ud835\udc65\ud835\udc51\u2208R for any document \ud835\udc51\u2208\ud835\udc36. Sampling without replacement probability of a document list is then computed as: \ud835\udc5d(d|\ud835\udc65;\ud835\udc45\ud835\udf19) = |d| \u00d6 \ud835\udc56=1 \ud835\udc5d(\ud835\udc51\ud835\udc56|\ud835\udc65;\ud835\udc45\ud835\udf19) 1 \u2212\u00cd\ud835\udc56\u22121 \ud835\udc57=1 \ud835\udc5d(\ud835\udc51\ud835\udc57|\ud835\udc65;\ud835\udc45\ud835\udf19) (5) where document-level probabilities \ud835\udc5d(\ud835\udc51\ud835\udc56|\ud835\udc65;\ud835\udc45\ud835\udf19) can be computed using the softmax operation: \ud835\udc5d(\ud835\udc51\ud835\udc56|\ud835\udc65;\ud835\udc45\ud835\udf19) = exp (\ud835\udc60\ud835\udf19 \ud835\udc65\ud835\udc51\ud835\udc56) \u00cd \ud835\udc51\u2208\ud835\udc36exp (\ud835\udc60\ud835\udf19 \ud835\udc65\ud835\udc51) (6) This iterative process of document sampling is non-differentiable, and thus cannot be simply used in gradient descent-based optimization approaches. To address both of these problems, Kool et al. 
[17, 18] recently introduced Ancestral Gumbel-Top-\ud835\udc58sampling. This approach creates a tree over all items in the sampling set and extends the Gumbel-Softmax sampling approach [24] to sampling without replacement. According to [17], independently perturbing each individual document score with Gumbel noise and picking the top \ud835\udc58documents with the largest perturbed values will generate a valid sample from the Plackett-Luce distribution. Gumbel perturbation itself can be done efficiently by simply drawing a sample \ud835\udc48\u223cUniform(0, 1), as Gumbel(0, \ud835\udefd) \u223c\u2212\ud835\udefdlog(\u2212log(\ud835\udc48)) [24]. \u02dc \ud835\udc5d(\ud835\udc51\ud835\udc56|\ud835\udf19,\ud835\udf03) = exp(\ud835\udc60\ud835\udf19 \ud835\udc65\ud835\udc51\ud835\udc56+ \ud835\udc3a\ud835\udc51\ud835\udc56) \u00cd \ud835\udc51\u2208\ud835\udc36exp(\ud835\udc60\ud835\udf19 \ud835\udc65\ud835\udc51+ \ud835\udc3a\ud835\udc51) (7) where \ud835\udc3a\ud835\udc51denotes the gumbel noise added for scoring document \ud835\udc51. We use straight-through gumbel-top-k, in which the top \ud835\udc58elements are selected from the above distribution using the arg max operation in the forward path, however, the softmax distribution is \fStochastic RAG: End-to-End Retrieval-Augmented Generation through Expected Utility Maximization SIGIR \u201924, July 14\u201318, 2024, Washington, DC, USA used in the backward path for computing the gradients. For more information on straight-through gumbel-softmax, refer to [14, 28]. Gumbel-top-k has been used in IR systems too. For instance, Zamani et al. [51] used the gumbel-top-k trick to optimize re-ranking models conditioned on the first stage retrieval models. Selecting Y. In Equation (1), Y denotes the output space, which can be unlimited for free-form text generation tasks, hence computationally intractable. In such cases, we need to estimate RAG Expected Utility by sampling from the output space. A uniformly random sample can give us an unbiased estimation, however, most random samples are completely unrelated to the input, so they can be easily discriminated from the ground truth output. Inspired by work on hard negative sampling for training ranking models [31, 49], at every \ud835\udc41= 10, 000 training steps, we run the RAG model that is being trained on the training inputs that will be used in the next \ud835\udc41steps and use beam search to return 100 most probable outputs. We randomly sample \ud835\udc5a= 10 of these outputs to form Y. We then made sure that for every pair (\ud835\udc65,\ud835\udc66) in the training set for the next \ud835\udc41steps,\ud835\udc66is included in Y, otherwise we randomly replace one of the sampled outputs in Y with \ud835\udc66. The reason for doing this is to make sure that our sample contains the ground truth output, ensuring that the model learns to produce higher probability for the ground truth output. Preparing Y for the next \ud835\udc41training steps would also enable us to pre-compute utility values\ud835\udc48(\ud835\udc66, \u02c6 \ud835\udc66) : \u2200\u02c6 \ud835\udc66\u2208Y, ensuring an efficient optimization process for RAG Expected Utility Maximization (see Equation (1)). 3 EXPERIMENTS 3.1 Data We use the Natural Questions (NQ) [19], TriviaQA [15], HotpotQA [50], FEVER [45], T-REx [7], zsRE [20], and Wizard of Wikipedia (WoW) [6] datasets from the KILT [29] benchmark. 
Due to the unavailability of ground truth labels for test set, our experiments are conducted on the publicly accessible validation sets. As the retrieval corpus, we employ the Wikipedia dump provided with the KILT benchmark2 and adhere to the preprocessing steps outlined by Karpukhin et al. [16], where each document is segmented into passages, each constrained to a maximum length of 100 words. The concatenation of the article title and passage text is used as a document. Note that the KILT benchmark furnishes document-level relevance labels (called Provenance) for its datasets, and these are employed for evaluating retrieval performance. In line with our preprocessing method outlined in this paper, we define all passages within a positive document as positive passages for our evaluation. For evaluating our models, we follow the standard KILT evaluation setup [29] by focusing on KILT-score metrics. KILT-scores combine R-Precision (\ud835\udc45\ud835\udc43) obtained by the retrieval results and the quality of the generated output text that is evaluated using any arbitrary metric \ud835\udc40(such as EM, Accuracy, or F1). For a query set \ud835\udc44, KILT-scores are computed as follows: KILT-M = 1 |\ud835\udc44| \u2211\ufe01 \ud835\udc5e\u2208\ud835\udc44 {\ud835\udc45\ud835\udc43(p, d) == 1} \u2217\ud835\udc40(\ud835\udc66, \u02c6 \ud835\udc66) (8) 2Retrieval corpus: https://dl.fbaipublicfiles.com/ur/wikipedia_split/psgs_w100.tsv.gz where d is the retrieval results produced by the retrieval model, p is the provenance label set provided by KILT, \ud835\udc66is the ground truth output, and \u02c6 \ud835\udc66is the generated text. Note that there is only one provenance label per query in most KILT datasets. FEVER and HotPotQA are the only exceptions. 12% of queries are associated with more than one supporting document in FEVER and all queries in HotPotQA (which focuses on multi-hop question answering) are associated with two documents. KILT-scores only evaluates the generated text if R-Precision is 1. This means that it does not solely focus on the quality of the generated text, but also makes sure that relevant supporting documents are provided. We adopt the metrics recommended by the KILT benchmark, namely Exact Match (KILTEM) for NQ, TriviaQA, and HotpotQA, Accuracy (KILT-AC) for FEVER, and F1-score (KILT-F1) for the WoW dataset. 3.2 Experimental Setup We apply the proposed optimization framework to a state-of-the-art RAG model on the KILT benchmark (i.e., FiD-Light, according to the KILT leaderboard) [29]. Therefore, we follow the experimental setup of Hofst\u00e4tter et al. [12] for FiD-Light. That means we used multi-task relevance sampled training set from the authors earlier work in [11] and trained a dense retrieval model, which is pretrained on the MSMARCO passage retrieval data [2]. Given that the datasets in our experiments focuses on relatively short-text generation tasks, and since all passages are less than or equal to 100 tokens, we set the input token limit for both query and passage combined at 384 tokens and for the output at 64 tokens. For training, we use a batch size of 128 with up to 40 retrieved passages, and a learning rate of 10\u22123 with the Adafactor optimizer [39]. We trained our models for 50,000 steps. We cut the learning rate by half for the large language models (i.e., T5-XL). During decoding, we use beam search with a beam size of 4. All our experiments are based on the T5X framework [33] on TPUs using T5v1.1 as the language model backbone [32]. 
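For reference, a KILT-score as defined in Equation (8) can be computed along the following lines. This is a sketch only; the dictionary keys and the r_precision/metric callables are illustrative and are not part of the official KILT tooling.

```python
def kilt_score(examples, r_precision, metric):
    """Sketch of KILT-M (Eq. 8): the generation metric only counts when R-Precision is 1.

    examples: iterable of dicts with illustrative keys 'provenance', 'retrieved',
              'reference', and 'generated'.
    r_precision: callable returning R-Precision of the retrieved list w.r.t. provenance.
    metric: callable M(y, y_hat), e.g., exact match, accuracy, or F1.
    """
    total = 0.0
    count = 0
    for ex in examples:
        rp_ok = 1.0 if r_precision(ex["provenance"], ex["retrieved"]) == 1.0 else 0.0
        total += rp_ok * metric(ex["reference"], ex["generated"])
        count += 1
    return total / max(count, 1)
```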
For each dataset, we use the official KILT-score metric as the utility function for optimization (Equation (1)). 3.3 Results To evaluate the effectiveness of the RAG Expected Utility Maximization framework, we compare our model with the best performing entries in the KILT leaderboard (as of February 1, 2024) according to the official KILT-score metrics. These methods use a wide range of techniques to address these issues including dense retrieval methods followed by BART or T5 for generation, generative retrieval models, retrieval and reranking models, pre-trained large language models without augmentation, etc. These methods and their corresponding references are listed in Table 1. For the sake of space, we do not list their underlying methods here. The performance of these methods is obtained from the KILT leaderboard. We use FiD-Light as the main baseline in this paper, as it produces state-of-the-art results on six out of seven datasets and the proposed optimization method is applied to FiD-Light. FiD-Light is a simple extension of the Fusion-in-Decoder architecture that generates the document identifier of relevant documents in addition to the output text and uses then at inference for re-ranking the input result list. According to the results presented in Table 1, employing stochastic expected \fSIGIR \u201924, July 14\u201318, 2024, Washington, DC, USA Hamed Zamani and Michael Bendersky Table 1: Comparing our models with top performing entries in the KILT leaderboard according to KILT-scores, as of February 1, 2024. The results are reported on the blind KILT test sets. Model Open Domain QA Fact Slot Filling Dialog NQ HotpotQA TriviaQA FEVER T-REx zsRE WOW KILT-EM KILT-EM KILT-EM KILT-AC KILT-AC KILT-AC KILT-F1 RAG [21] 32.7 3.2 38.1 53.5 23.1 36.8 8.8 DPR + FiD [30] 35.3 11.7 45.6 65.7 64.6 67.2 7.6 KGI [8] 36.4 \u2013 42.9 64.4 69.1 72.3 11.8 Re2G [10] 43.6 \u2013 57.9 78.5 75.8 \u2013 12.9 Hindsight [27] \u2013 \u2013 \u2013 \u2013 \u2013 \u2013 13.4 SEAL + FiD [4] 38.8 18.1 50.6 71.3 60.1 73.2 11.6 Re3val [41] 39.5 24.2 51.3 73.0 \u2013 \u2013 13.5 GripRank [1] 43.6 \u2013 58.1 \u2013 \u2013 79.9 14.7 PLATO [3] \u2013 \u2013 \u2013 \u2013 \u2013 \u2013 13.6 FiD-Light (T5-Base, \ud835\udc58= 64) 45.6 25.6 57.6 80.6 76.0 81.1 11.9 FiD-Light (T5-XL, \ud835\udc58= 8) 51.1 29.2 63.7 84.5 76.3 84.0 13.1 Stochastic RAG with FiD-Light (T5-Base, \ud835\udc58= 64) 46.2 27.3 59.7 81.3 76.9 82.8 12.8 Stochastic RAG with FiD-Light (T5-XL, \ud835\udc58= 8) 53.0 31.1 64.7 84.8 78.3 87.0 14.2 Figure 1: Sensitivity of Stochastic RAG with FiD-Light XL to the number of samples for estimating Equation (3). utility maximization leads to improvements in all datasets. Comparing against state-of-the-art baselines from the KILT leaderboard, our approach presents the best performing result in all datasets except for Wizard of Wikipedia, where only one method, named GripRank, performs slightly better than our best performing system. Note that in another dataset (i.e., zsRE), our methods outperform GripRank by a large margin. The last two rows in Table 1 present the results for the same model with different sizes for the downstream language model. T5Base contains 220 million parameters, while T5-XL is a language model with 3 billion parameters. We observe that both model sizes benefit from applying stochastic expected utility maximization. As expected, the larger model exhibits a better performance. That said, the performance difference between the Base and XL size models is not consistent across datasets. 
For instance, we observe substantial relative improvements on Natural Questions (i.e., 14.5%), while improvements on T-REx are smaller (i.e., 1.8%). To provide a deeper analysis of the Stochastic RAG performance, we vary the number of samples we take for estimating Equation (3). For the sake of visualization, we only present the results for a QA, a fact verification, and a slot-filling dataset in Figure 1. We observe that the model is robust with respect to the different number of samples. That said, sometimes we observe slight improvement as we increase the sample size (e.g., on TriviaQA). 4", + "additional_graph_info": { + "graph": [ + [ + "Hamed Zamani", + "Nick Craswell" + ], + [ + "Hamed Zamani", + "Michael Bendersky" + ], + [ + "Hamed Zamani", + "Fernando Diaz" + ], + [ + "Nick Craswell", + "Bhaskar Mitra" + ], + [ + "Nick Craswell", + "Emine Yilmaz" + ], + [ + "Nick Craswell", + "Daniel Campos" + ], + [ + "Nick Craswell", + "Ellen M. Voorhees" + ], + [ + "Nick Craswell", + "Jimmy Lin" + ], + [ + "Michael Bendersky", + "Honglei Zhuang" + ], + [ + "Michael Bendersky", + "Shuguang Han" + ], + [ + "Michael Bendersky", + "Ryan Mcdonald" + ], + [ + "Fernando Diaz", + "Bhaskar Mitra" + ], + [ + "Fernando Diaz", + "Michael Madaio" + ], + [ + "Fernando Diaz", + "Andres Ferraro" + ], + [ + "Fernando Diaz", + "Michael D. Ekstrand" + ], + [ + "Fernando Diaz", + "Asia J. Biega" + ] + ], + "node_feat": { + "Hamed Zamani": [ + { + "url": "http://arxiv.org/abs/2405.02816v1", + "title": "Stochastic RAG: End-to-End Retrieval-Augmented Generation through Expected Utility Maximization", + "abstract": "This paper introduces Stochastic RAG--a novel approach for end-to-end\noptimization of retrieval-augmented generation (RAG) models that relaxes the\nsimplifying assumptions of marginalization and document independence, made in\nmost prior work. Stochastic RAG casts the retrieval process in RAG as a\nstochastic sampling without replacement process. Through this formulation, we\nemploy straight-through Gumbel-top-k that provides a differentiable\napproximation for sampling without replacement and enables effective end-to-end\noptimization for RAG. We conduct extensive experiments on seven diverse\ndatasets on a wide range of tasks, from open-domain question answering to fact\nverification to slot-filling for relation extraction and to dialogue systems.\nBy applying this optimization method to a recent and effective RAG model, we\nadvance state-of-the-art results on six out of seven datasets.", + "authors": "Hamed Zamani, Michael Bendersky", + "published": "2024-05-05", + "updated": "2024-05-05", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.IR", + "cs.LG" + ], + "main_content": "INTRODUCTION Most machine learning systems, including large generative models, are self-contained systems, with both knowledge and reasoning encoded in model parameters. However, these models do not work effectively for tasks that require knowledge grounding [46], especially in case of non-stationary data where new information is actively being produced [47, 52]. As suggested by Zamani et al. [52], this issue can be addressed when machine learning systems Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. 
For all other uses, contact the owner/author(s). SIGIR \u201924, July 14\u201318, 2024, Washington, DC, USA \u00a9 2024 Copyright held by the owner/author(s). ACM ISBN 979-8-4007-0431-4/24/07. https://doi.org/10.1145/3626772.3657923 are being enhanced with the capability of retrieving stored content. For example, in retrieval-augmented generation (RAG), as a special case of retrieval-enhanced machine learning (REML) [52], systems consume the responses provided by one or more retrieval models for the purpose of (text) generation [21, 22]. RAG models demonstrate substantial promise across various applications, including open-domain question answering [16, 21, 53], fact verification [44], dialogue systems [5, 42, 48], and personalized generation [36, 37]. Many prior studies on RAG use off-the-shelf retrieval models. For instance, Nakano et al. [25] used APIs from a commercial search engine for text generation. Glass et al. [9], on the other hand, used a term matching retrieval model. Neural ranking models trained based on human annotated data have also been used in the literature [12, 21]. There also exist methods that only optimize the retrieval model and keep the language model parameters frozen [40]. A research direction in this area argues that optimizing retrieval models in RAG should depend on the downstream language model that consumes the retrieval results. This is also motivated by the findings presented by Salemi and Zamani [38] on evaluating retrieval quality in RAG systems. There exist solutions based on knowledge distillation [13] or end-to-end optimization based on some simplifying assumptions [35]. One of these assumptions is marginalization via top \ud835\udc58approximation [10, 21]. In more details, they first retrieve the top \ud835\udc58documents using off-the-shelf retrieval models, e.g., BM25 [34], and optimize retrieval models by re-scoring them, i.e., re-ranking, and feeding the documents to the downstream language model one-by-one independently [21]. This is far from reality as RAG models often consume multiple documents. This paper introduces Expected Utility Maximization for RAG\u2013a novel framework for end-to-end RAG optimization by relaxing these simplifying assumptions. This approach takes a utility function, which can be any arbitrary evaluation metric for the downstream generation task, such as exact match, BLEU [26], and ROUGE [23]. A major challenge in end-to-end optimization of RAG systems is that ranking and top \ud835\udc58selection is a non-differentiable process. Hence, this prevents us from using gradient descent-based methods for optimization. We address this issue by casting retrieval as a sampling without replacement process from the retrieval score distribution, which is approximated using the straight-through Gumbel-top-k approach. This stochastic approach\u2014called Stochastic RAG\u2014adds a Gumbel noise to the unnormalized retrieval scores and uses softmax to approximate argmax [17, 18]. Stochastic RAG can be applied to any RAG application. We evaluate our models using seven datasets from a wide range of applications, ranging from open-domain question answering to fact verification to slot-filling for relation extraction as well as dialogue systems. 
We apply our optimization method to FiD-Light [12], which arXiv:2405.02816v1 [cs.CL] 5 May 2024 \fSIGIR \u201924, July 14\u201318, 2024, Washington, DC, USA Hamed Zamani and Michael Bendersky is the best performing system on six out of these seven datasets, according to the knowledge-intensive language tasks (KILT) leaderboard as of Feb. 1, 2024.1 Our results demonstrate significant improvements on all these datasets. 2 EXPECTED UTILITY MAXIMIZATION FOR STOCHASTIC RAG Each RAG system consists of two main components: a text generation model \ud835\udc3a\ud835\udf03parameterized by \ud835\udf03and a retrieval model \ud835\udc45\ud835\udf19 parameterized by \ud835\udf19that retrieves documents from a large document collection\ud835\udc36. The text generation model consumes the retrieval results returned by the retrieval model. End-to-end optimization of RAG systems is challenging. This is mainly because retrieving top \ud835\udc58documents and feeding them to the generation model is not a differentiable process [52], thus one cannot simply employ gradientbased optimization algorithms for end-to-end optimization of these models. In this section, we introduce stochastic expected utility maximization for end-to-end optimization of retrieval-augmented models. Let \ud835\udc47= {(\ud835\udc651,\ud835\udc661), (\ud835\udc652,\ud835\udc662), \u00b7 \u00b7 \u00b7 , (\ud835\udc65\ud835\udc5b,\ud835\udc66\ud835\udc5b)} be a training set containing \ud835\udc5bpairs of \ud835\udc65\ud835\udc56(an input text) and \ud835\udc66\ud835\udc56(the ground truth output text). Let\ud835\udc48denote a utility function that takes the output generated by the RAG system \u02c6 \ud835\udc66and the ground truth output \ud835\udc66and generates a scalar value. The utility function can be any arbitrary metric, including but is not limited to, exact match, term overlap F1, BLEU, and ROUGE. We assume (1) the higher the utility value, the better, (2) the utility function is bounded within the [0, 1] range, and (3) \ud835\udc48(\ud835\udc66,\ud835\udc66) = 1. We define RAG Expected Utility as follows: RAG Expected Utility = 1 \ud835\udc5b \u2211\ufe01 (\ud835\udc65,\ud835\udc66)\u2208\ud835\udc47 \u2211\ufe01 \u02c6 \ud835\udc66\u2208Y \ud835\udc48(\ud835\udc66, \u02c6 \ud835\udc66)\ud835\udc5d( \u02c6 \ud835\udc66|\ud835\udc65;\ud835\udc3a\ud835\udf03, \ud835\udc45\ud835\udf19) (1) where Y the output space, i.e., all possible output texts. In some models, the output space is limited, for instance in fact verification, the output space is often binary: the given candidate fact is often true or false. In other situations, such as free-form text generation, the output space is unlimited. To make sure that expected utility calculation is tractable, we would need to approximate the above equation by sampling from the unlimited space Y. We will explain how such samples can be obtained at the end of this section. 
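As a rough sketch of how the estimator behind Equation (1) could be turned into a training loss once a candidate set Y and its utilities are available, the snippet below assumes PyTorch; the names, shapes, and the renormalization over sampled candidates are illustrative choices rather than the authors' implementation.

```python
import torch

def expected_utility_loss(candidate_log_probs, utilities):
    """Negative expected utility over a sampled candidate set Y (Eq. 1, one training example).

    candidate_log_probs: log p(y_hat | x; G, R) for each sampled y_hat, shape [m].
    utilities: precomputed U(y, y_hat) for the same candidates, shape [m].
    """
    # Renormalizing over the sampled candidates is one possible estimator choice.
    probs = torch.softmax(candidate_log_probs, dim=-1)
    expected_utility = (utilities * probs).sum()
    return -expected_utility  # minimized by gradient descent
```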
The probability of generating any given output \u02c6 \ud835\udc66in a RAG system can be modeled as: \ud835\udc5d( \u02c6 \ud835\udc66|\ud835\udc65;\ud835\udc3a\ud835\udf03, \ud835\udc45\ud835\udf19) = \u2211\ufe01 d\u2208\ud835\udf0b\ud835\udc58(\ud835\udc36) \ud835\udc5d( \u02c6 \ud835\udc66, d|\ud835\udc65;\ud835\udc3a\ud835\udf03, \ud835\udc45\ud835\udf19) = \u2211\ufe01 d\u2208\ud835\udf0b\ud835\udc58(\ud835\udc36) \ud835\udc5d( \u02c6 \ud835\udc66|\ud835\udc65, d;\ud835\udc3a\ud835\udf03)\ud835\udc5d(d|\ud835\udc65;\ud835\udc3a\ud835\udf03, \ud835\udc45\ud835\udf19) = \u2211\ufe01 d\u2208\ud835\udf0b\ud835\udc58(\ud835\udc36) \ud835\udc5d( \u02c6 \ud835\udc66|\ud835\udc65, d;\ud835\udc3a\ud835\udf03)\ud835\udc5d(d|\ud835\udc65;\ud835\udc45\ud835\udf19) (2) where \ud835\udf0b\ud835\udc58(\ud835\udc36) denotes all permutations of \ud835\udc58documents being selected from the retrieval collection \ud835\udc36. The first step in the above equation is obtained using the law of total probability, the second step is obtained using the chain rule, and the third step is obtained due to the fact that the probability of a result list d being retrieved is independent of the text generation model \ud835\udc3a\ud835\udf03. 1https://eval.ai/web/challenges/challenge-page/689/leaderboard. Note that considering all permutations in \ud835\udf0b\ud835\udc58(\ud835\udc36) is expensive and impractical for large collections, thus we can compute an approximation of this equation. We do such approximation through a stochastic process. We rewrite Equation (2) as follows: \ud835\udc5d( \u02c6 \ud835\udc66|\ud835\udc65;\ud835\udc3a\ud835\udf03, \ud835\udc45\ud835\udf19) = Ed\u223c\ud835\udc5d(d|\ud835\udc65;\ud835\udc45\ud835\udf19) [\ud835\udc5d( \u02c6 \ud835\udc66|\ud835\udc65, d;\ud835\udc3a\ud835\udf03)] (3) where |d| = \ud835\udc58. Inspired by the seq2seq models [43], we compute \ud835\udc5d( \u02c6 \ud835\udc66|\ud835\udc65, d;\ud835\udc3a\ud835\udf03)\u2014the component in Equation (2)\u2014as follows: \ud835\udc5d( \u02c6 \ud835\udc66|\ud835\udc65, d;\ud835\udc3a\ud835\udf03) = | \u02c6 \ud835\udc66| \u00d6 \ud835\udc56=1 \ud835\udc5d( \u02c6 \ud835\udc66\ud835\udc56| \u02c6 \ud835\udc66<\ud835\udc56,\ud835\udc65, d;\ud835\udc3a\ud835\udf03) = exp \u00a9 \u00ad \u00ab | \u02c6 \ud835\udc66| \u2211\ufe01 \ud835\udc56=1 log\ud835\udc5d( \u02c6 \ud835\udc66\ud835\udc56| \u02c6 \ud835\udc66<\ud835\udc56,\ud835\udc65, d;\ud835\udc3a\ud835\udf03)\u00aa \u00ae \u00ac (4) where \u02c6 \ud835\udc66\ud835\udc56denotes the \ud835\udc56th token in \u02c6 \ud835\udc66and \u02c6 \ud835\udc66<\ud835\udc56denotes all tokens \u02c6 \ud835\udc661, \u02c6 \ud835\udc662, \u00b7 \u00b7 \u00b7 , \u02c6 \ud835\udc66\ud835\udc56\u22121. The next step is to estimate \ud835\udc5d(d|\ud835\udc65;\ud835\udc45\ud835\udf19) in Equation (3), which represents the probability of retrieving the result list d in response to input \ud835\udc65using the retrieval model \ud835\udc45\ud835\udf19. Most retrieval models score each query-document pair independently and then sort them with respect to their relevance score in descending order. Therefore, the probability of a document list being produced by \ud835\udc45\ud835\udf19can be modeled as a sampling without replacement process. In other words, assume that the retrieval model \ud835\udc45\ud835\udf19produces a retrieval score \ud835\udc60\ud835\udf19 \ud835\udc65\ud835\udc51\u2208R for any document \ud835\udc51\u2208\ud835\udc36. 
Sampling without replacement probability of a document list is then computed as: p(d | x; R_φ) = ∏_{i=1}^{|d|} p(d_i | x; R_φ) / (1 − Σ_{j=1}^{i−1} p(d_j | x; R_φ)) (5) where the document-level probabilities p(d_i | x; R_φ) can be computed using the softmax operation: p(d_i | x; R_φ) = exp(s^φ_{x,d_i}) / Σ_{d∈C} exp(s^φ_{x,d}) (6) This iterative process of document sampling is non-differentiable, and thus cannot be simply used in gradient descent-based optimization approaches. To address both of these problems, Kool et al. [17, 18] recently introduced Ancestral Gumbel-Top-k sampling. This approach creates a tree over all items in the sampling set and extends the Gumbel-Softmax sampling approach [24] to sampling without replacement. According to [17], independently perturbing each individual document score with Gumbel noise and picking the top k documents with the largest perturbed values generates a valid sample from the Plackett-Luce distribution. Gumbel perturbation itself can be done efficiently by drawing a sample U ∼ Uniform(0, 1), since Gumbel(0, β) ∼ −β log(−log(U)) [24]. The resulting perturbed document distribution is: p̃(d_i | φ, θ) = exp(s^φ_{x,d_i} + G_{d_i}) / Σ_{d∈C} exp(s^φ_{x,d} + G_d) (7) where G_d denotes the Gumbel noise added for scoring document d. We use straight-through Gumbel-top-k, in which the top k elements are selected from the above distribution using the arg max operation in the forward pass, while the softmax distribution is used in the backward pass for computing the gradients. For more information on straight-through Gumbel-Softmax, refer to [14, 28]. Gumbel-top-k has been used in IR systems too. For instance, Zamani et al. [51] used the Gumbel-top-k trick to optimize re-ranking models conditioned on first-stage retrieval models. Selecting Y. In Equation (1), Y denotes the output space, which can be unlimited for free-form text generation tasks and hence computationally intractable. In such cases, we need to estimate RAG Expected Utility by sampling from the output space. A uniformly random sample would give an unbiased estimate; however, most random samples are completely unrelated to the input, so they can be easily discriminated from the ground truth output. Inspired by work on hard negative sampling for training ranking models [31, 49], at every N = 10,000 training steps, we run the RAG model that is being trained on the training inputs that will be used in the next N steps and use beam search to return the 100 most probable outputs. We randomly sample m = 10 of these outputs to form Y.
We then made sure that for every pair (\ud835\udc65,\ud835\udc66) in the training set for the next \ud835\udc41steps,\ud835\udc66is included in Y, otherwise we randomly replace one of the sampled outputs in Y with \ud835\udc66. The reason for doing this is to make sure that our sample contains the ground truth output, ensuring that the model learns to produce higher probability for the ground truth output. Preparing Y for the next \ud835\udc41training steps would also enable us to pre-compute utility values\ud835\udc48(\ud835\udc66, \u02c6 \ud835\udc66) : \u2200\u02c6 \ud835\udc66\u2208Y, ensuring an efficient optimization process for RAG Expected Utility Maximization (see Equation (1)). 3 EXPERIMENTS 3.1 Data We use the Natural Questions (NQ) [19], TriviaQA [15], HotpotQA [50], FEVER [45], T-REx [7], zsRE [20], and Wizard of Wikipedia (WoW) [6] datasets from the KILT [29] benchmark. Due to the unavailability of ground truth labels for test set, our experiments are conducted on the publicly accessible validation sets. As the retrieval corpus, we employ the Wikipedia dump provided with the KILT benchmark2 and adhere to the preprocessing steps outlined by Karpukhin et al. [16], where each document is segmented into passages, each constrained to a maximum length of 100 words. The concatenation of the article title and passage text is used as a document. Note that the KILT benchmark furnishes document-level relevance labels (called Provenance) for its datasets, and these are employed for evaluating retrieval performance. In line with our preprocessing method outlined in this paper, we define all passages within a positive document as positive passages for our evaluation. For evaluating our models, we follow the standard KILT evaluation setup [29] by focusing on KILT-score metrics. KILT-scores combine R-Precision (\ud835\udc45\ud835\udc43) obtained by the retrieval results and the quality of the generated output text that is evaluated using any arbitrary metric \ud835\udc40(such as EM, Accuracy, or F1). For a query set \ud835\udc44, KILT-scores are computed as follows: KILT-M = 1 |\ud835\udc44| \u2211\ufe01 \ud835\udc5e\u2208\ud835\udc44 {\ud835\udc45\ud835\udc43(p, d) == 1} \u2217\ud835\udc40(\ud835\udc66, \u02c6 \ud835\udc66) (8) 2Retrieval corpus: https://dl.fbaipublicfiles.com/ur/wikipedia_split/psgs_w100.tsv.gz where d is the retrieval results produced by the retrieval model, p is the provenance label set provided by KILT, \ud835\udc66is the ground truth output, and \u02c6 \ud835\udc66is the generated text. Note that there is only one provenance label per query in most KILT datasets. FEVER and HotPotQA are the only exceptions. 12% of queries are associated with more than one supporting document in FEVER and all queries in HotPotQA (which focuses on multi-hop question answering) are associated with two documents. KILT-scores only evaluates the generated text if R-Precision is 1. This means that it does not solely focus on the quality of the generated text, but also makes sure that relevant supporting documents are provided. We adopt the metrics recommended by the KILT benchmark, namely Exact Match (KILTEM) for NQ, TriviaQA, and HotpotQA, Accuracy (KILT-AC) for FEVER, and F1-score (KILT-F1) for the WoW dataset. 3.2 Experimental Setup We apply the proposed optimization framework to a state-of-the-art RAG model on the KILT benchmark (i.e., FiD-Light, according to the KILT leaderboard) [29]. Therefore, we follow the experimental setup of Hofst\u00e4tter et al. [12] for FiD-Light. 
That means we used multi-task relevance sampled training set from the authors earlier work in [11] and trained a dense retrieval model, which is pretrained on the MSMARCO passage retrieval data [2]. Given that the datasets in our experiments focuses on relatively short-text generation tasks, and since all passages are less than or equal to 100 tokens, we set the input token limit for both query and passage combined at 384 tokens and for the output at 64 tokens. For training, we use a batch size of 128 with up to 40 retrieved passages, and a learning rate of 10\u22123 with the Adafactor optimizer [39]. We trained our models for 50,000 steps. We cut the learning rate by half for the large language models (i.e., T5-XL). During decoding, we use beam search with a beam size of 4. All our experiments are based on the T5X framework [33] on TPUs using T5v1.1 as the language model backbone [32]. For each dataset, we use the official KILT-score metric as the utility function for optimization (Equation (1)). 3.3 Results To evaluate the effectiveness of the RAG Expected Utility Maximization framework, we compare our model with the best performing entries in the KILT leaderboard (as of February 1, 2024) according to the official KILT-score metrics. These methods use a wide range of techniques to address these issues including dense retrieval methods followed by BART or T5 for generation, generative retrieval models, retrieval and reranking models, pre-trained large language models without augmentation, etc. These methods and their corresponding references are listed in Table 1. For the sake of space, we do not list their underlying methods here. The performance of these methods is obtained from the KILT leaderboard. We use FiD-Light as the main baseline in this paper, as it produces state-of-the-art results on six out of seven datasets and the proposed optimization method is applied to FiD-Light. FiD-Light is a simple extension of the Fusion-in-Decoder architecture that generates the document identifier of relevant documents in addition to the output text and uses then at inference for re-ranking the input result list. According to the results presented in Table 1, employing stochastic expected \fSIGIR \u201924, July 14\u201318, 2024, Washington, DC, USA Hamed Zamani and Michael Bendersky Table 1: Comparing our models with top performing entries in the KILT leaderboard according to KILT-scores, as of February 1, 2024. The results are reported on the blind KILT test sets. Model Open Domain QA Fact Slot Filling Dialog NQ HotpotQA TriviaQA FEVER T-REx zsRE WOW KILT-EM KILT-EM KILT-EM KILT-AC KILT-AC KILT-AC KILT-F1 RAG [21] 32.7 3.2 38.1 53.5 23.1 36.8 8.8 DPR + FiD [30] 35.3 11.7 45.6 65.7 64.6 67.2 7.6 KGI [8] 36.4 \u2013 42.9 64.4 69.1 72.3 11.8 Re2G [10] 43.6 \u2013 57.9 78.5 75.8 \u2013 12.9 Hindsight [27] \u2013 \u2013 \u2013 \u2013 \u2013 \u2013 13.4 SEAL + FiD [4] 38.8 18.1 50.6 71.3 60.1 73.2 11.6 Re3val [41] 39.5 24.2 51.3 73.0 \u2013 \u2013 13.5 GripRank [1] 43.6 \u2013 58.1 \u2013 \u2013 79.9 14.7 PLATO [3] \u2013 \u2013 \u2013 \u2013 \u2013 \u2013 13.6 FiD-Light (T5-Base, \ud835\udc58= 64) 45.6 25.6 57.6 80.6 76.0 81.1 11.9 FiD-Light (T5-XL, \ud835\udc58= 8) 51.1 29.2 63.7 84.5 76.3 84.0 13.1 Stochastic RAG with FiD-Light (T5-Base, \ud835\udc58= 64) 46.2 27.3 59.7 81.3 76.9 82.8 12.8 Stochastic RAG with FiD-Light (T5-XL, \ud835\udc58= 8) 53.0 31.1 64.7 84.8 78.3 87.0 14.2 Figure 1: Sensitivity of Stochastic RAG with FiD-Light XL to the number of samples for estimating Equation (3). 
utility maximization leads to improvements in all datasets. Comparing against state-of-the-art baselines from the KILT leaderboard, our approach presents the best performing result in all datasets except for Wizard of Wikipedia, where only one method, named GripRank, performs slightly better than our best performing system. Note that in another dataset (i.e., zsRE), our methods outperform GripRank by a large margin. The last two rows in Table 1 present the results for the same model with different sizes for the downstream language model. T5Base contains 220 million parameters, while T5-XL is a language model with 3 billion parameters. We observe that both model sizes benefit from applying stochastic expected utility maximization. As expected, the larger model exhibits a better performance. That said, the performance difference between the Base and XL size models is not consistent across datasets. For instance, we observe substantial relative improvements on Natural Questions (i.e., 14.5%), while improvements on T-REx are smaller (i.e., 1.8%). To provide a deeper analysis of the Stochastic RAG performance, we vary the number of samples we take for estimating Equation (3). For the sake of visualization, we only present the results for a QA, a fact verification, and a slot-filling dataset in Figure 1. We observe that the model is robust with respect to the different number of samples. That said, sometimes we observe slight improvement as we increase the sample size (e.g., on TriviaQA). 4" + }, + { + "url": "http://arxiv.org/abs/2304.14522v1", + "title": "Multivariate Representation Learning for Information Retrieval", + "abstract": "Dense retrieval models use bi-encoder network architectures for learning\nquery and document representations. These representations are often in the form\nof a vector representation and their similarities are often computed using the\ndot product function. In this paper, we propose a new representation learning\nframework for dense retrieval. Instead of learning a vector for each query and\ndocument, our framework learns a multivariate distribution and uses negative\nmultivariate KL divergence to compute the similarity between distributions. For\nsimplicity and efficiency reasons, we assume that the distributions are\nmultivariate normals and then train large language models to produce mean and\nvariance vectors for these distributions. We provide a theoretical foundation\nfor the proposed framework and show that it can be seamlessly integrated into\nthe existing approximate nearest neighbor algorithms to perform retrieval\nefficiently. We conduct an extensive suite of experiments on a wide range of\ndatasets, and demonstrate significant improvements compared to competitive\ndense retrieval models.", + "authors": "Hamed Zamani, Michael Bendersky", + "published": "2023-04-27", + "updated": "2023-04-27", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.CL", + "cs.LG" + ], + "main_content": "INTRODUCTION Utilizing implicit or explicit relevance labels to learn retrieval models, also called learning-to-rank models, is at the core of information retrieval research. Due to efficiency and even sometimes effectiveness reservations, learning-to-rank models have been mostly used for reranking documents retrieved by an efficient retrieval model, such as BM25 [39]. Therefore, the performance of learning-to-rank models was bounded by the quality of candidate documents selected for reranking. 
In 2018, the SNRM model [55] has revolutionized This work is licensed under a Creative Commons Attribution International 4.0 License. SIGIR \u201923, July 23\u201327, 2023, Taipei, Taiwan \u00a9 2023 Copyright held by the owner/author(s). ACM ISBN 978-1-4503-9408-6/23/07. https://doi.org/10.1145/3539618.3591740 the way we look at learning-to-rank models by arguing that biencoder neural networks can be used for representing queries and documents, and document representations can be then indexed for efficient retrieval at query time. The model applied learned latent sparse representations for queries and documents, and indexed the document representations using an inverted index. In 2020, the DPR model [23] demonstrated that even bi-encoder networks with dense representations can be used for efficient retrieval. They took advantage of approximate nearest neighbor algorithms for indexing dense document representations. This category of models, often called dense retrieval models, has attracted much attention and led to state-of-the-art performance on a wide range of retrieval tasks [18, 24, 37, 53, 57]. Existing sparse and dense representation learning models can be seen as instantiations of Salton et al.\u2019s vector space models [41], i.e., queries and documents are represented using vectors and relevance is defined using vector similarity functions, such as inner product or cosine similarity. Such approaches suffer from a major shortcoming: they do not represent the model\u2019s confidence on the learned representations. Inspired by prior work on modeling uncertainty in information retrieval (e.g., [7, 8, 52]), this paper builds upon the following hypothesis: Neural retrieval models would benefit from modeling uncertainty (or confidence) in the learned query and document representations. Therefore, we propose a generic framework that represents each query and document using a multivariate distribution, called the MRL framework. In other words, instead of representing queries and documents using \ud835\udc58-dimensional vectors, we can assign a probability to each point in this \ud835\udc58-dimensional space; the higher the probability, the higher the confidence that the model assigns to each point. For \ud835\udc58= 2, Figure 1(a) depicts the representation of a query and a document in existing single-vector dense retrieval models.1 On the other hand, Figure 1(b) demonstrates the representations that we envision for queries and documents. To reduce the complexity of the model, we assume that the representations are multivariate normal distributions with a diagonal covariance matrix; meaning that the representation dimensions are orthogonal and independent. With this assumption, we learn two \ud835\udc58-dimensional vectors for each query or document: a mean vector and a variance vector. In addition to uncertainty, such probabilistic modeling can implicitly represent breadth of information in queries and documents. For instance, a document that covers multiple topics and potentially satisfies a diverse set of information needs may be represented by a multivariate distribution with large variance values. 1The third dimension is only used for consistent presentation. One can consider the probability of 1 for one point in the two-dimensional space and zero elsewhere. 
(a) Representation learning in existing single-vector dense retrieval models (b) Representation learning in MRL Figure 1: Existing dense retrieval methods use a vector to represent any input. Figure 1(a) demonstrates example representations they learn for two inputs (e.g., a query and a document). The proposed framework learns multivariate distributions to represent each input, which is depicted in Figure 1(b). MRL uses negative multivariate Kullback-Leibler (KL) divergence between query and document representations to compute the relevance scores. We prove that the relevance scores can be computed efficiently by proposing solutions that can be implemented using existing approximate nearest neighbor search algorithms. We also demonstrate that one can simply implement the MRL framework using existing pre-trained large language models, such as BERT [13]. We show that an implementation of MRL that uses a single vector with 768 dimensions to represent multivariate representations for each query and document significantly outperforms existing single-vector dense retrieval models on several standard text retrieval benchmarks. MRL also often outperforms ColBERTv2 [43], a state-of-the-art multi-vector dense retrieval model, while using significantly less storage and having significantly lower query latency. We further demonstrate that MRL also performs effectively in zero-shot settings when applied to unseen domains. In addition, we demonstrate that the norm of the variance vectors learned by MRL is a strong indicator of retrieval effectiveness and can be used as a pre-retrieval query performance predictor. We believe that MRL smooths the path towards developing more advanced probabilistic dense retrieval models and its applications can be extended to recommender systems, conversational systems, and a wide range of retrieval-enhanced machine learning models. 2 RELATED WORK Variance of retrieval performance among different topics has been a long-standing research theme in the information retrieval community. For instance, TREC 2004 Robust Track organizers noted that solely optimizing the average metric aggregates (e.g., MAP) \u201cfurther improves the effectiveness of the already-effective topics, sometimes at the expense of the poor performers\u201d [49]. Moreover, identifying poorly performing topics is hard, and failure to do so leads to degraded user perception of the retrieval system, as \u201can individual user does not see the average performance of the system, but only the effectiveness of the system on his or her requests\u201d [49]. These insights led the information retrieval community to consider query performance prediction [5] \u2013 a notion that certain signals can predict the performance of a search query. Such predictions can be helpful in guiding the retrieval system in taking further actions as needed for more difficult queries, e.g., suggesting alternative query reformulations [1]. A degree of query ambiguity with respect to the underlying corpus has been shown to be a valuable predictor of poor performance of search queries [11]. Therefore, dealing with retrieval uncertainty has been proposed as a remedy.
For instance, Collins-Thompson and Callan [8] propose estimating query uncertainty by repeatedly fitting a Dirichlet distribution over bootstrap samples from the top\ud835\udc58retrieved documents. They show that a Bayesian combination of multiple boostrap samples (which takes into account sample variance) leads to both significantly better retrieval metrics, and better retrieval robustness (less queries hurt by the query expansion methods). In a related vein, Zhu et al. [62] develop a risk-aware language model based on the Dirichlet distribution (as a conjugate prior to the multinomial distribution). They use the variance of the Dirichlet distribution for adjusting the risk in the final ranking score (i.e., revising the relevance estimates downwards in face of high variance). The idea of risk adjustment inspired by the financial investment literature was further developed by Wang and Zhu into the portfolio theory for information retrieval [52]. Portfolio theory generalizes the probability ranking principle (PRP) by considering both the uncertainty of relevance predictions and correlations between retrieved documents. It also demonstrates that one way to address uncertainty is via diversification [6]. The portfolio theory-based approach to retrieval has since been applied in several domains including recommendation [44], quantum-based information retrieval [63], and computational advertising [59], among others. While, as this prior research shows, there has been an extensive exploration of risk and mean-variance trade-offs in the statistical language models for information retrieval, there has been so far much less discussion of these topics in the context of neural (aka \fMultivariate Representation Learning for Information Retrieval SIGIR \u201923, July 23\u201327, 2023, Taipei, Taiwan dense) models for retrieval. As a notable exception to this, Cohen et al. [7] recently proposed a Bayesian neural relevance model, where a posterior is approximated using Monte Carlo sampling based on drop-out [14]. A similar approach was proposed by Penha and Hauff [34] in the context of conversational search. These approaches, which employ variational inference at training time, can only be applied for reranking. In contrast, in this work we model uncertainty at the level of query and document representations, and demonstrate how such representations can be efficiently and effectively used for retrieval using any of the existing approximate nearest neighbor methods. Outside the realm of information retrieval research, various forms of representations that go beyond Euclidean vectors have been explored, including order embeddings [46], hyperbolic embeddings [31], and probabilistic box embeddings [47], Such representations have been shown to be effective for various NLP tasks that involve modeling complex relationship or structures. Similar to our work, Vilnis and McCallum [48] used Gaussian distributions for representation learning by proposing Gaussian embeddings for words. In this work, we focus on query and document representations in the retrieval setting. Some prior work, as a way to achieve semantically richer representations, model queries and documents using a combination of multiple vectors [26, 43, 61]. While such representations were shown to lead to better retrieval effectiveness, they do come at significant computational and storage costs. 
We demonstrate that our multivariate distribution representations are significantly more efficient than multi-vector ones, while attaining comparable or better performance on a wide range of collections. 3 THE MRL FRAMEWORK Existing single-vector dense retrieval models use a $k$-dimensional latent vector to represent each query or document [17, 23, 53, 57]. We argue that these dense retrieval models can benefit from modeling uncertainty in representation learning. That means the model may produce a representation for a clear navigational query with high confidence, while it may have lower confidence in representing an ambiguous query. The same argument applies to documents. However, the existing frameworks for dense retrieval do not model such confidence or uncertainty in representations. In this paper, we present MRL – a generic framework for modeling uncertainty in representation learning for information retrieval. MRL models each query (or document) using a $k$-variate distribution – a group of $k$ continuous random variables with which we can compute the probability of any given vector in a $k$-dimensional space being a representation of the input query (or document). Formally, MRL encodes each query $q$ and each document $d$ as follows:

$\mathcal{Q} = (Q_1, Q_2, \cdots, Q_k)^\intercal = \text{Encoder}_q(q), \quad \mathcal{D} = (D_1, D_2, \cdots, D_k)^\intercal = \text{Encoder}_d(d)$   (1)

where $\text{Encoder}_q$ and $\text{Encoder}_d$ respectively denote the query and document encoders. Each $Q_i$ and $D_i$ is a random variable; thus $\mathcal{Q}$ and $\mathcal{D}$ are $k$-variate distributions representing the query and the document. The superscript $\intercal$ denotes the transpose of the vector. In this paper, we assume that $\mathcal{Q}$ and $\mathcal{D}$ are both $k$-variate normal distributions. The reasons for this assumption are: (1) we can define each $k$-variate normal distribution using a mean vector and a covariance matrix, (2) lower-order distributions (i.e., any combination of the $k$ dimensions) and conditional distributions are also normal, which makes it easily extensible, and (3) linear functions of multivariate normal distributions are also multivariate normal, leading to simple aggregation approaches. A $k$-variate normal distribution can be represented using a $k \times 1$ mean vector $M = (\mu_1, \mu_2, \cdots, \mu_k)^\intercal$ and a $k \times k$ covariance matrix $\Sigma$ as $\mathcal{N}_k(M, \Sigma)$. We compute the representations as $k$ independent normal distributions, thus the covariance matrix is diagonal. Therefore, our representations are modeled as follows:

$\mathcal{N}_k\big( (\mu_1, \mu_2, \cdots, \mu_k)^\intercal,\ \mathrm{diag}(\sigma^2_1, \sigma^2_2, \cdots, \sigma^2_k) \big)$   (2)

With this formulation, we can re-write Equation (1) as follows:

$\mathcal{Q} \sim \mathcal{N}_k(M_Q, \Sigma_Q), \quad M_Q, \Sigma_Q = \text{Encoder}_q(q)$
$\mathcal{D} \sim \mathcal{N}_k(M_D, \Sigma_D), \quad M_D, \Sigma_D = \text{Encoder}_d(d)$   (3)

where $M_Q = (\mu_{q1}, \mu_{q2}, \cdots, \mu_{qk})^\intercal$, $\Sigma_Q = \mathrm{diag}(\sigma^2_{q1}, \sigma^2_{q2}, \cdots, \sigma^2_{qk})$, $M_D = (\mu_{d1}, \mu_{d2}, \cdots, \mu_{dk})^\intercal$, and $\Sigma_D = \mathrm{diag}(\sigma^2_{d1}, \sigma^2_{d2}, \cdots, \sigma^2_{dk})$. Therefore, it is safe to claim that MRL uses large language models to learn a $k$-dimensional mean vector and a $k$-dimensional variance vector for representing each input query and document. This representation for a query and a document is plotted in Figure 1(b) ($k = 2$ in the plot). Using the flexible modeling offered by the MRL framework, we can compute the probability of any $k$-dimensional vector representing each query or document. In more detail, the probability of a vector $\mathbf{x} = (x_1, x_2, \cdots, x_k)^\intercal$ being generated from the $k$-variate normal distribution in Equation (2) is equal to:

$p(\mathbf{x}) = \frac{1}{(2\pi)^{k/2}\det(\Sigma)^{1/2}} \exp\Big( -\frac{1}{2} (\mathbf{x} - M)^\intercal \Sigma^{-1} (\mathbf{x} - M) \Big)$   (4)

where $\det(\cdot)$ denotes the determinant of the given matrix. This formulation enables us to compute the probability of any $k$-dimensional vector being a representation of each query and document. Once the queries and documents are represented, MRL computes the relevance score for a pair of query and document using the negative Kullback-Leibler divergence (negative KL divergence) between two $k$-variate distributions: $-\text{KLD}_k(\mathcal{Q} \parallel \mathcal{D})$.
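Because both representations are normal distributions with diagonal covariance matrices, this negative KL divergence has a standard closed form, which the derivation below arrives at in Equation (9). The following is a small sketch of scoring one query-document pair under that assumption; the function and variable names are illustrative rather than taken from our implementation.

```python
import numpy as np

def mrl_score(mu_q, var_q, mu_d, var_d):
    """Negative KL divergence between two k-variate normals with diagonal
    covariances: score(q, d) = -KLD_k(Q || D), in the closed form of
    Equation (9)."""
    k = mu_q.shape[0]
    log_det_ratio = np.sum(np.log(var_d)) - np.sum(np.log(var_q))  # log det(S_D)/det(S_Q)
    trace_term = np.sum(var_q / var_d)                             # tr(S_D^{-1} S_Q)
    mahalanobis = np.sum((mu_q - mu_d) ** 2 / var_d)               # (M_Q-M_D)^T S_D^{-1} (M_Q-M_D)
    kl = 0.5 * (log_det_ratio - k + trace_term + mahalanobis)
    return -kl  # higher means more relevant

# toy usage with k = 4 (all values illustrative)
rng = np.random.default_rng(0)
mu_q, var_q = rng.normal(size=4), np.full(4, 0.5)
mu_d, var_d = mu_q + 0.1, np.full(4, 0.6)
print(mrl_score(mu_q, var_q, mu_d, var_d))
```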
The KL divergence can be computed as follows: KLD\ud835\udc58(Q \u2225D) = EQ [\ufe03 log Q D ]\ufe03 = EQ [log Q \u2212log D] = 1 2EQ [\ufe02 \u2212log det(\u03a3 \u03a3 \u03a3\ud835\udc44) \u2212(x \u2212\ud835\udc40 \ud835\udc40 \ud835\udc40\ud835\udc44)\u22ba\u03a3 \u03a3 \u03a3\u22121 \ud835\udc44(x \u2212\ud835\udc40 \ud835\udc40 \ud835\udc40\ud835\udc44) + log det(\u03a3 \u03a3 \u03a3\ud835\udc37) + (x \u2212\ud835\udc40 \ud835\udc40 \ud835\udc40\ud835\udc37)\u22ba\u03a3 \u03a3 \u03a3\u22121 \ud835\udc37(x \u2212\ud835\udc40 \ud835\udc40 \ud835\udc40\ud835\udc37) ]\ufe01 = 1 2 log det(\u03a3 \u03a3 \u03a3\ud835\udc37) det(\u03a3 \u03a3 \u03a3\ud835\udc44) \u22121 2EQ [\ufe02 (x \u2212\ud835\udc40 \ud835\udc40 \ud835\udc40\ud835\udc44)\u22ba\u03a3 \u03a3 \u03a3\u22121 \ud835\udc44(x \u2212\ud835\udc40 \ud835\udc40 \ud835\udc40\ud835\udc44) ]\ufe02 + 1 2EQ [\ufe01 (x \u2212\ud835\udc40 \ud835\udc40 \ud835\udc40\ud835\udc37)\u22ba\u03a3 \u03a3 \u03a3\u22121 \ud835\udc37(x \u2212\ud835\udc40 \ud835\udc40 \ud835\udc40\ud835\udc37) ]\ufe01 (5) \fSIGIR \u201923, July 23\u201327, 2023, Taipei, Taiwan Hamed Zamani and Michael Bendersky Since (x \u2212\ud835\udc40 \ud835\udc40 \ud835\udc40\ud835\udc44)\u22ba\u03a3 \u03a3 \u03a3\u22121 \ud835\udc44(x \u2212\ud835\udc40 \ud835\udc40 \ud835\udc40\ud835\udc44) is a real scalar (i.e., \u2208R, it is equivalent to tr{(x \u2212\ud835\udc40 \ud835\udc40 \ud835\udc40\ud835\udc44)\u22ba\u03a3 \u03a3 \u03a3\u22121 \ud835\udc44(x \u2212\ud835\udc40 \ud835\udc40 \ud835\udc40\ud835\udc44)}, where tr{\u00b7} denotes the trace of the given matrix. Since tr{\ud835\udc4b\ud835\udc4c} = tr{\ud835\udc4c\ud835\udc4b} for any two matrices \ud835\udc4b\u2208R\ud835\udc4e\u00d7\ud835\udc4fand \ud835\udc4c\u2208R\ud835\udc4f\u00d7\ud835\udc4e, we have: tr{(x \u2212\ud835\udc40 \ud835\udc40 \ud835\udc40\ud835\udc44)\u22ba\u03a3 \u03a3 \u03a3\u22121 \ud835\udc44(x \u2212\ud835\udc40 \ud835\udc40 \ud835\udc40\ud835\udc44)} = tr{(x \u2212\ud835\udc40 \ud835\udc40 \ud835\udc40\ud835\udc44)(x \u2212\ud835\udc40 \ud835\udc40 \ud835\udc40\ud835\udc44)\u22ba\u03a3 \u03a3 \u03a3\u22121 \ud835\udc44} Therefore, since E[tr{\ud835\udc4b}] = tr{E[\ud835\udc4b]} for any square matrix \ud835\udc4b, we can rewrite Equation (5) as follows: KLD\ud835\udc58(Q \u2225D) = 1 2 log det(\u03a3 \u03a3 \u03a3\ud835\udc37) det(\u03a3 \u03a3 \u03a3\ud835\udc44) \u22121 2tr {\ufe02 EQ [\ufe02 (x \u2212\ud835\udc40 \ud835\udc40 \ud835\udc40\ud835\udc44)(x \u2212\ud835\udc40 \ud835\udc40 \ud835\udc40\ud835\udc44)\u22ba\u03a3 \u03a3 \u03a3\u22121 \ud835\udc44 ]\ufe02}\ufe02 + 1 2tr {\ufe01 EQ [\ufe01 (x \u2212\ud835\udc40 \ud835\udc40 \ud835\udc40\ud835\udc37)\u22ba\u03a3 \u03a3 \u03a3\u22121 \ud835\udc37(x \u2212\ud835\udc40 \ud835\udc40 \ud835\udc40\ud835\udc37) ]\ufe01}\ufe01 (6) Given the definition of the covariance matrix, we know that \u03a3 \u03a3 \u03a3\ud835\udc44= EQ [\ufe01 (x \u2212\ud835\udc40 \ud835\udc40 \ud835\udc40\ud835\udc44)(x \u2212\ud835\udc40 \ud835\udc40 \ud835\udc40\ud835\udc44)\u22ba]\ufe01 . 
Therefore, we have: tr{EQ [\ufe02 (x \u2212\ud835\udc40 \ud835\udc40 \ud835\udc40\ud835\udc44)(x \u2212\ud835\udc40 \ud835\udc40 \ud835\udc40\ud835\udc44)\u22ba\u03a3 \u03a3 \u03a3\u22121 \ud835\udc44 ]\ufe02 } = tr {\ufe02 EQ [\ufe02 \u03a3 \u03a3 \u03a3\ud835\udc44\u03a3 \u03a3 \u03a3\u22121 \ud835\udc44 ]\ufe02}\ufe02 = tr{\ud835\udc3c\ud835\udc58} = \ud835\udc58 (7) In addition, since \ud835\udc44is a multivariate normal distribution, for any matrix \ud835\udc34we have E\ud835\udc44[x\u22ba\ud835\udc34x] = tr{\ud835\udc34\u03a3 \u03a3 \u03a3\ud835\udc44} + \ud835\udc40 \ud835\udc40 \ud835\udc40\u22ba \ud835\udc44\ud835\udc34\ud835\udc40 \ud835\udc40 \ud835\udc40\ud835\udc44. This results in: tr {\ufe01 EQ [\ufe01 (x \u2212\ud835\udc40 \ud835\udc40 \ud835\udc40\ud835\udc37)\u22ba\u03a3 \u03a3 \u03a3\u22121 \ud835\udc37(x \u2212\ud835\udc40 \ud835\udc40 \ud835\udc40\ud835\udc37) ]\ufe01}\ufe01 = tr{\u03a3 \u03a3 \u03a3\u22121 \ud835\udc37\u03a3 \u03a3 \u03a3\ud835\udc44} + (\ud835\udc40 \ud835\udc40 \ud835\udc40\ud835\udc44\u2212\ud835\udc40 \ud835\udc40 \ud835\udc40\ud835\udc37)\u22ba\u03a3 \u03a3 \u03a3\u22121 \ud835\udc37(\ud835\udc40 \ud835\udc40 \ud835\udc40\ud835\udc44\u2212\ud835\udc40 \ud835\udc40 \ud835\udc40\ud835\udc37) (8) Using Equations (7) and (8), we can rewrite Equation (6) as follows: 1 2 [\ufe03 log det(\u03a3 \u03a3 \u03a3\ud835\udc37) det(\u03a3 \u03a3 \u03a3\ud835\udc44) \u2212\ud835\udc58+ tr{\u03a3 \u03a3 \u03a3\u22121 \ud835\udc37\u03a3 \u03a3 \u03a3\ud835\udc44} + (\ud835\udc40 \ud835\udc40 \ud835\udc40\ud835\udc44\u2212\ud835\udc40 \ud835\udc40 \ud835\udc40\ud835\udc37)\u22ba\u03a3 \u03a3 \u03a3\u22121 \ud835\udc37(\ud835\udc40 \ud835\udc40 \ud835\udc40\ud835\udc44\u2212\ud835\udc40 \ud835\udc40 \ud835\udc40\ud835\udc37) ]\ufe03 (9) This equation can be further simplified. Based on our earlier assumption that the covariance matrices are diagonal, then det(\u03a3 \u03a3 \u03a3\ud835\udc37) = \u220f\ufe01\ud835\udc58 \ud835\udc56=1 \ud835\udf0e2 \ud835\udc51\ud835\udc56. In addition, since we are using KL divergence to rank documents, constant values (e.g., \ud835\udc58) or document independent values (e.g., log det(\u03a3 \u03a3 \u03a3\ud835\udc44)) do not impact document ordering. Therefore, there can be omitted and we can use the following equation to rank the documents using negative multivariate KL-divergence: score(\ud835\udc5e,\ud835\udc51) = \u2212KLD\ud835\udc58(Q \u2225D) =rank \u22121 2 \u23a1 \u23a2 \u23a2 \u23a2 \u23a2 \u23a3 \ud835\udc58 \u2211\ufe02 \ud835\udc56=1 log \ud835\udf0e2 \ud835\udc51\ud835\udc56+ \u220f\ufe01\ud835\udc58 \ud835\udc56=1 \ud835\udf0e2 \ud835\udc5e\ud835\udc56 \u220f\ufe01\ud835\udc58 \ud835\udc56=1 \ud835\udf0e2 \ud835\udc51\ud835\udc56 + \ud835\udc58 \u2211\ufe02 \ud835\udc56=1 (\ud835\udf07\ud835\udc5e\ud835\udc56\u2212\ud835\udf07\ud835\udc51\ud835\udc56)2 \ud835\udf0e2 \ud835\udc51\ud835\udc56 \u23a4 \u23a5 \u23a5 \u23a5 \u23a5 \u23a6 (10) In Section 4.3, we explain how to efficiently compute this scoring function using approximate nearest neighbor methods. 4 MRL IMPLEMENTATION In this section, we first describe our network architecture for implementing the the query and document encoders Encoderq and Encoderd (see Equation (3)). Next, we explain our optimization approach for training the models. 4.1 Encoder Architecture Pretrained large language models (LLMs) have demonstrated promising results in various information retrieval tasks [10, 17, 33, 57]. 
Therefore, we decide to adapt existing pretrained LLMs to learn a \ud835\udc58-variate normal distribution for each given input. As described above, each \ud835\udc58-variate normal distribution can be modeled using a \ud835\udc58-dimensional mean vector and a \ud835\udc58-dimensional variance vector. We use two special tokens as the input of pretrained LLMs to obtain these two vectors. For example, we convert an input query \u2018neural information retrieval\u2019 to \u2018[CLS] [VAR] neural information retrieval [SEP]\u2019 and feed it to BERT-base [13]. Let \ud835\udc5e \u20d7[CLS] \u2208R1\u00d7768 and \ud835\udc5e \u20d7[VAR] \u2208R1\u00d7768 respectively denote the representations produced by BERT for the first two tokens [CLS] and [VAR]. We obtain the mean and variance vectors for query \ud835\udc5e using two separate dense projection layers on \ud835\udc5e \u20d7[CLS] and \ud835\udc5e \u20d7[VAR], as follows: \ud835\udc40 \ud835\udc40 \ud835\udc40\ud835\udc44= \ud835\udc5e \u20d7[CLS]\ud835\udc4a\ud835\udc40 \u03a3 \u03a3 \u03a3\ud835\udc44= 1 \ud835\udefdlog(1 + exp(\ud835\udefd.\ud835\udc5e \u20d7[VAR]\ud835\udc4a\u03a3)).\ud835\udc3c\ud835\udc58 (11) where \ud835\udc4a\ud835\udc40\u2208R768\u00d7\ud835\udc58and \ud835\udc4a\u03a3 \u2208R768\u00d7\ud835\udc58are the projection layer parameters. To compute the diagonal covariance matrix, we use the softplus function (i.e., 1 \ud835\udefdlog(1 + exp(\ud835\udefd.\ud835\udc65))) for the following reasons: (1) it is continuous and differentiable, thus it can be used in gradient descent-based optimization, (2) softplus ensures that variance values are always positive, (3) zero is its lower bound (lim\ud835\udc65\u2192\u2212\u221e1 \ud835\udefdlog(1 + exp(\ud835\udefd.\ud835\udc65)) = 0), yet it is never equal to zero, thus it does not cause numeric instability in KL-divergence calculation (see Equation (10)), and (4) for large \ud835\udc65values, it can be approximated using a linear function, i.e., lim\ud835\udc65\u2192\u221e1 \ud835\udefdlog(1+exp(\ud835\udefd.\ud835\udc65)) = \ud835\udc65, ensuring numerical stability for large input values. To better demonstrate its properties, Figure 2 in our experiments plots softplus for various values of \ud835\udefd\u2013 a hyper-parameter that specifies the softplus formation. The \ud835\udc58\u00d7 \ud835\udc58identity matrix \ud835\udc3c\ud835\udc58in Equation (11) is used to convert the variance vector to a diagonal covariance matrix. Note that MRL does not explicitly compute variance, instead learns representations for the [VAR] token such that it minimizes the loss function based on negative multivariate KL divergence scoring. Therefore, the model implicitly learns how to represent latent variance vectors. The mean vector and covariance matrices for document representations are also computed similarly. In our experiments, all parameters (including parameters in BERT and the dense projection layers) are updated and shared between the query and document encoders (i.e., Encoderq and Encoderd). 4.2 Model Training Recent research has suggested that dense retrieval models can significantly benefit from knowledge distillation [18, 37, 43]. Following these models, we use a BERT-based cross-encoder re-ranking model as the teacher model. Let \ud835\udc37\ud835\udc5ebe a set of documents selected for query \ud835\udc5efor knowledge distillation. 
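Before turning to the distillation objective over $D_q$, the projection head of Equation (11) can be sketched as follows. This is an illustrative PyTorch-style rendition, assuming a 768-dimensional encoder output and $k = 381$ as used in our experiments; the class and parameter names (e.g., MRLHead, w_mean, w_var) are ours for exposition and not part of a released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MRLHead(nn.Module):
    """Illustrative projection head over a BERT-style encoder: the contextual
    [CLS] output is projected to the mean vector and the [VAR] output to the
    variance vector through the scaled softplus of Equation (11)."""
    def __init__(self, hidden_size=768, k=381, beta=1.0):
        super().__init__()
        self.w_mean = nn.Linear(hidden_size, k, bias=False)  # plays the role of W_M
        self.w_var = nn.Linear(hidden_size, k, bias=False)   # plays the role of W_Sigma
        self.beta = beta

    def forward(self, cls_repr, var_repr):
        mean = self.w_mean(cls_repr)
        # softplus(x; beta) = (1/beta) * log(1 + exp(beta * x)): positive,
        # differentiable, and approximately linear for large inputs
        var = F.softplus(self.w_var(var_repr), beta=self.beta)
        return mean, var

# cls_repr / var_repr stand for the encoder outputs of the [CLS] and [VAR]
# tokens for a batch of 8 inputs (random tensors here, for illustration only)
head = MRLHead()
mean, var = head(torch.randn(8, 768), torch.randn(8, 768))
```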
We use the following listwise \fMultivariate Representation Learning for Information Retrieval SIGIR \u201923, July 23\u201327, 2023, Taipei, Taiwan loss function for each query \ud835\udc5eas follows: \u2211\ufe02 \ud835\udc51,\ud835\udc51\u2032\u2208\ud835\udc37\ud835\udc5e 1{\ud835\udc66\ud835\udc47 \ud835\udc5e(\ud835\udc51) > \ud835\udc66\ud835\udc47 \ud835\udc5e(\ud835\udc51\u2032)}| 1 \ud835\udf0b\ud835\udc5e(\ud835\udc51) \u2212 1 \ud835\udf0b\ud835\udc5e(\ud835\udc51\u2032) | log(1+\ud835\udc52\ud835\udc66\ud835\udc46 \ud835\udc5e(\ud835\udc51\u2032)\u2212\ud835\udc66\ud835\udc46 \ud835\udc5e(\ud835\udc51)) (12) where \ud835\udf0b\ud835\udc5e(\ud835\udc51) denotes the rank of document \ud835\udc51in the result list produced by the student dense retrieval model, and \ud835\udc66\ud835\udc47 \ud835\udc5e(\ud835\udc51) and \ud835\udc66\ud835\udc46 \ud835\udc5e(\ud835\udc51) respectively denote the scores produced by the teacher and the student models for the pair of query \ud835\udc5eand document \ud835\udc51. This knowledge distillation listwise loss function is inspired by LambdaRank [3] and is also used by Zeng et al. [57] for dense retrieval distillation. For each query \ud835\udc5e, the document set \ud835\udc37\ud835\udc5eis constructed based on the following steps: \u2022 \ud835\udc37\ud835\udc5eincludes all positive documents from the relevance judgments (i.e., qrel). \u2022 \ud835\udc37\ud835\udc5eincludes \ud835\udc5aBM25 \u2208R documents from the top 100 documents retrieved by BM25. \u2022 \ud835\udc37\ud835\udc5eincludes \ud835\udc5ahard \u2208R documents from the top 100 documents retrieved by student model (i.e., negative sampling using the model itself every 5000 steps). In addition, we take advantage of the other passages in the batch as in-batch negatives. Although in-batch negatives resemble randomly sampled negatives that can be distinguished easily from other documents, it is efficient since passage representations can be reused within the batch [23]. 4.3 Efficient Retrieval Existing dense retrieval models use approximate nearest neighbor (ANN) approaches for efficient retrieval. However, using ANN algorithms in the proposed MRL framework is not trivial. The reason is that MRL uses the negative \ud835\udc58-variate KL divergence formulation presented in Equation (10) to compute relevance scores. This is while existing ANN algorithms only support simple similarity functions such as dot product, cosine similarity, or negative Euclidean distance. To address this issue, we convert Equation (10) to a dot product formulation. 
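Returning briefly to the training objective before expanding that dot-product form: for a single query and its candidate set $D_q$, Equation (12) can be sketched as below. This is an illustrative rendition with hypothetical names; the exact rank computation and batching details are simplifications.

```python
import torch
import torch.nn.functional as F

def listwise_distillation_loss(student_scores, teacher_scores):
    """Illustrative rendition of Equation (12) for one query's candidate set
    D_q: a logistic loss over document pairs ordered by the teacher, weighted
    by the difference of reciprocal student ranks (LambdaRank-style)."""
    n = student_scores.shape[0]
    # 1-based ranks of the documents in the student's current result list
    order = torch.argsort(student_scores, descending=True)
    ranks = torch.empty(n, dtype=torch.float, device=student_scores.device)
    ranks[order] = torch.arange(1, n + 1, dtype=torch.float, device=student_scores.device)
    inv_rank = 1.0 / ranks

    # pairwise terms: rows index d, columns index d'
    indicator = (teacher_scores.unsqueeze(1) > teacher_scores.unsqueeze(0)).float()
    weight = (inv_rank.unsqueeze(1) - inv_rank.unsqueeze(0)).abs()
    s_diff = student_scores.unsqueeze(1) - student_scores.unsqueeze(0)
    pair_loss = F.softplus(-s_diff)  # = log(1 + exp(y_S(d') - y_S(d)))
    return (indicator * weight * pair_loss).sum()

# student_scores would be the negative KL scores for D_q, and teacher_scores
# the cross-encoder (teacher) scores over the same documents
loss = listwise_distillation_loss(torch.randn(16), torch.randn(16))
```

The weight on each pair emphasizes orderings that most affect the top of the student's ranking, in the spirit of LambdaRank [3].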
Let us expand the last term in Equation (10):2 \u2212 \u23a1 \u23a2 \u23a2 \u23a2 \u23a2 \u23a2 \u23a2 \u23a2 \u23a2 \u23a2 \u23a3 \ud835\udc58 \u2211\ufe02 \ud835\udc56=1 log \ud835\udf0e2 \ud835\udc51\ud835\udc56 \u23de\u02c9\u02c9\u02c9\u02c9\u02c9\u02c9\u02c9\u23df\u23df\u02c9\u02c9\u02c9\u02c9\u02c9\u02c9\u02c9\u23de doc prior + \u220f\ufe01\ud835\udc58 \ud835\udc56=1 \ud835\udf0e2 \ud835\udc5e\ud835\udc56 \u220f\ufe01\ud835\udc58 \ud835\udc56=1 \ud835\udf0e2 \ud835\udc51\ud835\udc56 + \ud835\udc58 \u2211\ufe02 \ud835\udc56=1 \ud835\udf072 \ud835\udc5e\ud835\udc56 \ud835\udf0e2 \ud835\udc51\ud835\udc56 + \ud835\udc58 \u2211\ufe02 \ud835\udc56=1 \ud835\udf072 \ud835\udc51\ud835\udc56 \ud835\udf0e2 \ud835\udc51\ud835\udc56 \u23de\u02c9\u02c9\u23df\u23df\u02c9\u02c9\u23de doc prior \u2212 \ud835\udc58 \u2211\ufe02 \ud835\udc56=1 2\ud835\udf07\ud835\udc51\ud835\udc56\ud835\udf07\ud835\udc5e\ud835\udc56 \ud835\udf0e2 \ud835\udc51\ud835\udc56 \u23a4 \u23a5 \u23a5 \u23a5 \u23a5 \u23a5 \u23a5 \u23a5 \u23a5 \u23a5 \u23a6 (13) The first and the fourth terms in Equation (13) are document priors, thus they are query independent and can be pre-computed. Therefore, let \ud835\udefe\ud835\udc51= \u2212\u2211\ufe01\ud835\udc58 \ud835\udc56=1 (log \ud835\udf0e2 \ud835\udc51\ud835\udc56+ \ud835\udf072 \ud835\udc51\ud835\udc56 \ud835\udf0e2 \ud835\udc51\ud835\udc56) denote the document prior score. Therefore, the scoring function in Equation (10) can be formulated as the dot product of the following two vectors: \ud835\udc5e \u20d7= [\ufe02 1, \u03a0\ud835\udc5e, \ud835\udf072 \ud835\udc5e1, \ud835\udf072 \ud835\udc5e2, \u00b7 \u00b7 \u00b7 , \ud835\udf072 \ud835\udc5e\ud835\udc58, \ud835\udf07\ud835\udc5e1, \ud835\udf07\ud835\udc5e2, \u00b7 \u00b7 \u00b7 , \ud835\udf07\ud835\udc5e\ud835\udc58 ]\ufe02 \ud835\udc51 \u20d7= [\ufe04 \ud835\udefe\ud835\udc51, \u22121 \u03a0\ud835\udc51 , \u22121 \ud835\udf0e2 \ud835\udc511 , \u22121 \ud835\udf0e2 \ud835\udc512 , \u00b7 \u00b7 \u00b7 , \u22121 \ud835\udf0e2 \ud835\udc51\ud835\udc58 , 2\ud835\udf07\ud835\udc511 \ud835\udf0e2 \ud835\udc511 , 2\ud835\udf07\ud835\udc512 \ud835\udf0e2 \ud835\udc512 , \u00b7 \u00b7 \u00b7 , 2\ud835\udf07\ud835\udc51\ud835\udc58 \ud835\udf0e2 \ud835\udc51\ud835\udc58 ]\ufe04 (14) 2We drop multiplication to 1 2 as it does not impact document ordering. where \u03a0\ud835\udc5e= \u220f\ufe01\ud835\udc58 \ud835\udc56=1 \ud835\udf0e2 \ud835\udc5e\ud835\udc56and \u03a0\ud835\udc51= \u220f\ufe01\ud835\udc58 \ud835\udc56=1 \ud835\udf0e2 \ud835\udc51\ud835\udc56are pre-computed scalars. The dot product of \ud835\udc5e \u20d7\u2208R1\u00d7(2\ud835\udc58+2) and \ud835\udc51 \u20d7\u2208R1\u00d7(2\ud835\udc58+2) is equal to the retrieval score formulated in Equation (10). More importantly, \ud835\udc5e \u20d7is document independent and \ud835\udc51 \u20d7is query independent. Therefore, we can use existing approximate nearest neighbor algorithms, such as HNSW [30], and existing tools, such as FAISS [22], to index all \ud835\udc51 \u20d7 vectors and conduct efficient retrieval for any query vector \ud835\udc5e \u20d7. 5 DISCUSSION In this section, we attempt to shed some light on the behavior of retrieval using MRL, by providing theoretical answers to the following questions. Q1. How does MRL rank two documents with identical covariance matrices? 
Let \ud835\udc51and \ud835\udc51\u2032 be two documents, represented by the mean vectors \ud835\udc40 \ud835\udc40 \ud835\udc40\ud835\udc37and \ud835\udc40 \ud835\udc40 \ud835\udc40\ud835\udc37\u2032 and identical covariance matrix \u03a3 \u03a3 \u03a3\ud835\udc37= \u03a3 \u03a3 \u03a3\ud835\udc37\u2032. Therefore, given Equation (10) we have: score(\ud835\udc5e,\ud835\udc51) \u2212score(\ud835\udc5e,\ud835\udc51\u2032) =rank \ud835\udc58 \u2211\ufe02 \ud835\udc56=1 [\ufe01 (\ud835\udf07\ud835\udc5e\ud835\udc56\u2212\ud835\udf07\ud835\udc51\u2032\ud835\udc56)2 \u2212(\ud835\udf07\ud835\udc5e\ud835\udc56\u2212\ud835\udf07\ud835\udc51\ud835\udc56)2]\ufe01 This shows that in case of identical covariance matrices, MRL assigns a higher relevance score to the document whose mean vector is closest to the query mean vector with respect to Euclidean distance. A remark of this finding is that if the covariance matrix is constant for all documents (i.e., if we ignore uncertainty), MRL can be reduced to existing dense retrieval formulation, where negative Euclidean distance is used to measure vector similarity. Therefore, MRL is a generalized form of this dense retrieval formulation. Q2. Popular dense retrieval models use inner product to compute the similarity between query and document vectors. What happens if we use inner product in MRL? Inner product or dot product cannot be defined for multivariate distributions, however, one can take several samples from the query and document distributions and compute their dot product similarity. Since the query distribution Q \u223cN\ud835\udc58(\ud835\udc40 \ud835\udc40 \ud835\udc40\ud835\udc44, \u03a3 \u03a3 \u03a3\ud835\udc44) and the document distribution D \u223cN\ud835\udc58(\ud835\udc40 \ud835\udc40 \ud835\udc40\ud835\udc37, \u03a3 \u03a3 \u03a3\ud835\udc37) are independent, the expected value of their product is: E[Q \u00b7 D] = E[Q] \u00b7 E[D] = \ud835\udc40 \ud835\udc40 \ud835\udc40\ud835\udc44\u00b7 \ud835\udc40 \ud835\udc40 \ud835\udc40\ud835\udc37 That means, in expectation, the dot product of samples from multivariate distributions will be equivalent to the dot product of their mean vectors. Therefore, with this formulation (i.e., using expected dot product instead of negative KL divergence) the results produced by MRL will be equivalent to the existing dense retrieval models and representation uncertainties are not considered. Q3. Negative KL divergence has been used in the language modeling framework of information retrieval [27]. How is it connected with the proposed MRL framework? Lafferty and Zhai [27] extended the query likelihood retrieval model of Ponte and Croft [35] by computing negative KL divergence between unigram query and document language models. Similarly, \fSIGIR \u201923, July 23\u201327, 2023, Taipei, Taiwan Hamed Zamani and Michael Bendersky Table 1: Characteristics and statistics of the datasets in our experiments. Dataset Domain # queries # documents avg doc length MS MARCO DEV Miscellaneous 6,980 8,841,823 56 TREC DL \u201919 Miscellaneous 43 8,841,823 56 TREC DL \u201920 Miscellaneous 54 8,841,823 56 SciFact Scientific fact retrieval 300 5,183 214 FiQA Financial answer retrieval 648 57,638 132 TREC COVID Bio-medical retrieval for Covid-19 50 171,332 161 CQADupStack Duplicate question retrieval 13,145 457,199 129 MRL uses negative KL divergence to compute relevance scores, however, there are several fundamental differences. 
First, Lafferty and Zhai [27] compute the distributions based on term occurrences in queries and documents through maximum likelihood estimation, while MRL learns latent distributions based on the contextual representations learned from queries and documents. Second, Lafferty and Zhai [27] use univariate distributions for queries and documents, while MRL uses high-dimensional multivariate distributions. 6 EXPERIMENTS To evaluate the impact of multivariate representation learning, we first run experiments on standard passage retrieval collections from MS MARCO and TREC Deep Learning Tracks. We also study the parameter sensitivity of the model in this task. We further demonstrate the ability of multivariate representations to better model distribution shift when applied to zero-shot retrieval settings, i.e., retrieval on a target collection that is significantly different from the training set. Our experiments also shows that the norm of learned variance vectors is correlated with the retrieval performance of the model. 6.1 Datasets In this section, we introduce our training set and evaluation sets whose characteristics and statistics are reported in Table 1. Training Set. We train our ranking model on the MS MARCO passage retrieval training set. The MS MARCO collection [4] contains approximately 8.8M passages and its training set includes 503K unique queries. The MS MARCO training set was originally constructed for a machine reading comprehension tasks, thus it did not follow the standard IR annotation guidelines (e.g., pooling). The training set contains an average of 1.1 relevant passage per query, even though there exist several relevant documents that are left adjudged. This is one of the reasons that knowledge distillation help dense retrieval models learn more robust representations. Passage Retrieval Evaluation Sets. We evaluate our models on three query sets for the passage retrieval task. They all use the MS MARCO passage collection. These evaluation query sets are: (1) MS MARCO DEV: the standard development set of MS MARCO passage retrieval task that consists of 6980 queries with incomplete relevance annotations (similar to the training set), (2) TREC-DL\u201919: passage retrieval query set used in the first iteration of TREC Deep Learning Track in 2019 [9] which includes 43 queries, and (3) TRECDL\u201920: the passage retrieval query set of TREC Deep Learning Track 2020 [10] with 54 queries. Relevance annotation for TREC DL tracks was curated using standard pooling techniques. Therefore, we can consider them as datasets with complete relevance annotations. Zero-Shot Passage Retrieval Evaluation Sets. To demonstrate the generalization of retrieval models to different domains, we perform a zero-shot passage retrieval experiment (i.e., the models are trained on the MS MARCO training set). To do so, we use four domains which diverse properties. (1) SciFact [51]: a dataset for scientific fact retrieval with 300 queries, (2) FiQA [29]: a passage retrieval dataset for natural language questions in the financial domain with 648 queries, (3) TREC COVID [50]: a task of retrieving abstracts of bio-medical articles in response to 50 queries related to the Covid-19 pandemic, and (4) CQADupStack [19]: the task of duplicated question retrieval on 12 diverse StackExchange websites with 13,145 test queries. To be consistent with the literature, we used the BEIR [45] version of all these collections. 6.2 Experimental Setup We implemented and trained our models using TensorFlow. 
The network parameters were optimized with Adam [25] with linear scheduling with the warmup of 4000 steps. In our experiments, the learning rate was selected from [1 \u00d7 10\u22126, 1 \u00d7 10\u22125] with a step size of 1 \u00d7 10\u22126. The batch size was set to 512. The parameter \ud835\udefd was selected from [0.5, 1, 2.5, 5, 7.5, 10]. To have a fair comparison with the baselines that often use 768 dimensions for representing queries and documents using BERT, we set the parameter \ud835\udc58(i.e., the number of random variables in our multivariate normal distributions) to 768 2 \u22121 = 381 (see Section 4.3 for more information). In our experiments, we use the DistilBERT [42] with the pre-trained checkpoint made available from TAS-B [18] as the initialization. As the re-ranking teacher model, we use a BERT cross-encoder, similar to that of Nogueira and Cho [33]. Hyper-parameter selection and early stopping was conducted based on the performance in terms of MRR on the MS MARCO validation set. 6.3 Evaluation Metrics We use appropriate metrics for each evaluation set based on their properties. For MS MARCO Dev, we use MRR@10 which is the standard metric for this dataset, and we followed TREC Deep Learning Track\u2019s recommendation on using NDCG@10 [21] as the evaluation \fMultivariate Representation Learning for Information Retrieval SIGIR \u201923, July 23\u201327, 2023, Taipei, Taiwan Table 2: The passage retrieval results obtained by the proposed approach and the baselines. The highest value in each column is bold-faced. The superscript \u2217denotes statistically significant improvements compared to all the baselines based on two-tailed paired t-test with Bonferroni correction at the 95% confidence level. \u201c-\u201d denotes the results that are not applicable or available. Model Encoder #params MS MARCO DEV TREC-DL\u201919 TREC-DL\u201920 MRR@10 MAP NDCG@10 MAP NDCG@10 MAP Single Vector Dense Retrieval Models ANCE [53] BERT-Base 110M 0.330 0.336 0.648 0.371 0.646 0.408 ADORE [58] BERT-Base 110M 0.347 0.352 0.683 0.419 0.666 0.442 RocketQA [37] ERNIE-Base 110M 0.370 Contriever-FT [20] BERT-Base 110M 0.621 0.632 TCT-ColBERT [28] BERT-Base 110M 0.335 0.342 0.670 0.391 0.668 0.430 Margin-MSE [17] DistilBERT 66M 0.325 0.331 0.699 0.405 0.645 0.416 TAS-B [18] DistilBERT 66M 0.344 0.351 0.717 0.447 0.685 0.455 CLDRD [57] DistilBERT 66M 0.382 0.386 0.725 0.453 0.687 0.465 MRL (ours) DistilBERT 66M 0.393\u2217 0.402\u2217 0.738 0.472\u2217 0.701\u2217 0.479\u2217 Some Sparse Retrieval Models (For Reference) BM25 [39] 0.187 0.196 0.497 0.290 0.487 0.288 DeepCT [12] BERT-Base 110M 0.243 0.250 0.550 0.341 0.556 0.343 docT5query [32] T5-Base 220M 0.272 0.281 0.642 0.403 0.619 0.407 Multi Vector Dense Retrieval Model (For Reference) ColBERTv2 [43] DistilBERT 66M 0.384 0.389 0.733 0.464 0.712 0.473 metrics. To complement our result analysis, we also use mean average precision of the top 1000 retrieved documents (MAP), which is a common recall-oriented metric. For zero-shot evaluation, we follow BEIR\u2019s recommendation and use NDCG@10 to be consistent with the literature [45]. The two-tailed paired t-test with Bonferroni correction is used to identify statistically significant performance differences (\ud835\udc5d_\ud835\udc63\ud835\udc4e\ud835\udc59\ud835\udc62\ud835\udc52< 0.05). 6.4 Experimental Results Baselines. 
We also compare against the following state-of-theart dense retrieval models with single vector representations: \u2022 ANCE [53] and ADORE [58]: two effective dense retrieval models based on BERT-Base [13] that use the model itself to mine hard negative documents. \u2022 RocketQA [37], Margin-MSE [17], and TAS-B [18]: effective dense retrieval models that use knowledge distillation from a BERT reranking model (a cross-encoder) in addition to various techniques for negative sampling. \u2022 Contriever-FT [20]: a single vector dense retrieval model that is pre-trained for retrieval tasks and then fine-tuned on MS MARCO. This model has shown effective performance on out-ofdistribution target domain datasets. \u2022 TCT-ColBERT [28]: a single vector dense retrieval model that is trained through knowledge distillation where a multi vector dense retrieval model (i.e., ColBERT [24]) is used as the teacher model. \u2022 CLDRD [57]: the state-of-the-art single vector dense retrieval model that uses knowledge distillation from a reranking teacher model through gradual increase of training data difficulty (curriculum learning). Even though MRL is a single vector dense retrieval model, as a point of reference, we use a state-of-the-art dense retrieval model with multiple vectors (i.e., ColBERTv2 [43]). For demonstrating a fair comparison, all baselines are trained and tuned in the same way as the proposed approach. We also compare our model against the following baselines that use inverted index for computing relevance scores (sometimes called sparse retrieval models): \u2022 BM25 [39]: a simple yet strong term matching model for document retrieval that computes relevance scores based on term frequency in each document, document length, and inverse document frequency in the collection. We use the Galago search engine [36] to compute BM25 scores and tuned BM25 parameters using the training set. \u2022 DeepCT [12]: an approach that uses BERT to compute a weight for each word in the vocabulary for each document and query. Then words with highest weights are then selected and added to the inverted index with their weights. This approach can be seen as a contextual bag-of-words query and document expansion approach. \u2022 docT5query [32]: a sequence-to-sequence model based on T5 [38] that is trained on MS MARCO to generate queries from any relevance passage. The documents are then expanded using the generated queries. The Passage Retrieval Results. The passage retrieval results are presented in Table 2. According to the table, all dense retrieval models perform substantially better than BM25 and DeepCT, demonstrating the effectiveness of such approaches for in-domain passage retrieval tasks. We observe that the approaches that use knowledge distillation (i.e., every dense retrieval model, except for ANCE, \fSIGIR \u201923, July 23\u201327, 2023, Taipei, Taiwan Hamed Zamani and Michael Bendersky Table 3: A comparison of storage requirement and query latency between single vector and multi vector dense retrieval models with DistilBERT encoders on MS MARCO collection with 8.8 million passages. We ran this experiment on a machine with 16 Core i7-4790 CPU @ 3.60GHz. storage requirement query latency Single vector DR 26GB 89 ms / query Multi vector DR 192GB 438 ms / query Table 4: Sensitivity of MRL\u2019s retrieval performance to different values of \ud835\udefd. 
MS MARCO DEV TREC-DL\u201919 TREC-DL\u201920 MRR@10 MAP NDCG@10 MAP NDCG@10 MAP \ud835\udefd= 0.1 0.385 0.384 0.723 0.448 0.693 0.466 \ud835\udefd= 0.25 0.399 0.415 0.743 0.468 0.704 0.478 \ud835\udefd= 0.5 0.403 0.408 0.742 0.481 0.703 0.486 \ud835\udefd= 1 0.403 0.412 0.748 0.480 0.711 0.489 \ud835\udefd= 5 0.405 0.421 0.749 0.484 0.716 0.489 \ud835\udefd= 10 0.402 0.421 0.758 0.489 0.701 0.483 ADORE, and Contriever-FT) generally perform better than others. The recent CLDRD model shows the strongest retrieval results among all single vector dense retrieval models. The multi vector dense retrieval approach (ColBERTv2) outperforms all single vector dense retrieval baselines. Note that ColBERTv2 stores a vector for each token in the documents and thus it requires significantly larger storage for storing the ANN index and also suffers from substantially higher query latency (see Table 3 for more information). We show that MRL outperforms all baselines in terms of all the evaluation metrics used in the study. The improvements compared to all baselines are statistically significant, except for NDCG@10 in TREC-DL\u201919; the \ud835\udc5d_\ud835\udc63\ud835\udc4e\ud835\udc59\ud835\udc62\ud835\udc52(corrected using Bonferroni correction) for MRL versus CLDRD in this case was 0.07381. Note that this dataset only contains 43 queries and significance tests are impacted by sampled size. MRL performs significantly better than any other baseline in this case. Parameter Sensitivity Analysis. To measure the sensitivity of MRL\u2019s performance to the value of \ud835\udefd, we change \ud835\udefdfrom 0.1 to 10 and report the results in Table 4. To get a sense of the impact of these values, please see Figure 2. The results show that the model is not sensitive to the value of \ud835\udefdunless it is smaller than or equal to \u22640.25. Therefore, for a \ud835\udefdvalue of around 1 or larger, the model shows a robust and strong performance. The Zero-Shot Retrieval Results. All datasets used in Table 2 are based on the MS MARCO passage collection and their queries are similar to that of our training set. To evaluate the model\u2019s performance under distribution shift, we conduct a zero-shot retrieval experiment on four diverse datasets: SciFact, FiQA, TREC COVID, and CQADupStack (see Section 6.1). In this experiment, we do not re-train any model and the ones trained on MS MARCO training set and used in Table 2 are used for zero-shot evaluation on these datasets. The results are reported in Table 5. We observe that many neural retrieval models struggle with outperforming BM25 -6 -4 -2 0 2 4 6 x 0 2 4 6 8 10 12 (1/-) * log(1 + exp(x)) = 0.1 = 0.25 = 0.5 = 1 = 5 = 10 Figure 2: The softplus curve that is used to compute the variance vector for different values of \ud835\udefd. Softplus is a monotonic and increasing function with a lower bound of zero. It\u2019s value for large \ud835\udc65values can be approximated using the linear function \ud835\udc66= \ud835\udc65for numeric stability. Table 5: The zero-shot retrieval results obtained by the proposed approach and the baselines, in terms of NDCG@10. The highest value in each column is bold-faced. The superscript \u2217denotes statistically significant improvements compared to all the baselines based on two-tailed paired t-test with Bonferroni correction at the 95% confidence level. 
Model SciFact FiQA TREC CQA COVID DupStack Single Vector DR Models ANCE [53] 0.507 0.295 0.654 0.296 ADORE [58] 0.514 0.255 0.590 0.273 RocketQA [37] 0.606 0.319 0.658 0.316 Contriever-FT [20] 0.677 0.329 0.596 0.321 TCT-ColBERT [28] 0.614 0.316 0.661 0.309 Margin-MSE [17] 0.608 0.298 0.673 0.297 TAS-B [18] 0.643 0.300 0.481 0.314 CLDRD [57] 0.637 0.348 0.571 0.327 MRL (ours) 0.683\u2217 0.371\u2217 0.668 0.341\u2217 Some Sparse Retrieval Models (For Reference) BM25 [39] 0.665 0.236 0.656 0.299 DeepCT [12] 0.630 0.191 0.406 0.268 docT5query [32] 0.675 0.291 0.713 0.325 Multi Vector DR Models (For Reference) ColBERTv2 [43] 0.682 0.359 0.696 0.357 (DistilBERT) on SciFact and TREC COVID datasets. In general, the improvements observed compared to BM25 by the best performing models are not as large as the ones we observe in Table 2. This highlights the difficulty of handling domain shift by neural retrieval models. Generally speaking, the multi vector dense retrieval model (ColBERTv2) \fMultivariate Representation Learning for Information Retrieval SIGIR \u201923, July 23\u201327, 2023, Taipei, Taiwan Table 6: Pre-retrieval query performance prediction results in terms of Pearson\u2019s \ud835\udf0cand Kendall\u2019s \ud835\udf0fcorrelations. The superscript \u2020 denotes that the obtained correlations by MRL |\u03a3\ud835\udc44| are significant. QPP Model TREC-DL\u201919 TREC-DL\u201920 P-\ud835\udf0c K-\ud835\udf0f P-\ud835\udf0c K-\ud835\udf0f Max VAR [5] 0.138 0.148 0.230 0.266 Max SCQ [60] 0.119 0.109 0.182 0.237 Avg IDF [5] 0.172 0.166 0.246 0.240 SCS [16] 0.160 0.174 0.231 0.275 Max PMI [15] 0.098 0.116 0.155 0.194 \ud835\udc43clarity [40] 0.167 0.174 0.191 0.217 Max DC [2] 0.341 0.294 0.234 0.244 MRL |\u03a3\ud835\udc44| 0.271\u2020 0.259\u2020 0.272\u2020 0.298\u2020 shows a more robust performance in zero-shot settings. It outperforms all single vector dense retrieval models on TREC COVID and CQADupStack. MRL performs better on the other two datasets: SciFact and FiQA. Again, we highlight that MRL has substantially lower storage requirements compared to ColBERTv2 and it also has significantly faster query processing time. Refer to Table 3 for more information. Exploring the Learned Variance Vectors. In our exploration towards understanding the representations learned by MRL, we realize that the norm of our covariance matrix for each query is correlated with the ranking performance of our retrieval model for that query. This observation motivated us to use the learned |\u03a3\ud835\udc44| for each query as a pre-retrieval query performance predictor (QPP). Some other well known pre-retrieval (i.e., based solely on the query and collection content, not any retrieved results) performance predictors include distribution of the query term IDF weights, the similarity between a query and the underlying collection; and the variability with which query terms occur in documents [60]. We compare our prediction against some of these commonly used unsupervised pre-retrieval QPP methods in Table 6. They include: \u2022 Max VAR [5]: VAR uses the maximum variance of query term weight in the collection. \u2022 Max SCQ [60]: It computes a TF-IDF formulation for each query term and returns the maximum value. \u2022 Avg IDF [5]: This baseline uses an inverse document frequency formulation for each query term and return the average score. \u2022 SCS [16]: It computes the KL divergence between the unigram query language model and the collection language model. 
\u2022 Max PMI [15]: It uses the point-wise mutual information of query terms in the collection and returns the maximum value. \u2022 \ud835\udc43clarity [40]: This baseline uses Gaussian mixture models in the embedding space as soft clustering and uses term similarity to compute the probability of each query term being generated by each cluster. \u2022 Max DC [2]: This approach uses pre-trained embeddings to construct an ego network and computes Degree Centrality (DC) as the number of links incident upon the ego. Following the QPP literature [5, 11, 15, 54], we use the following two evaluation metrics: Pearson\u2019s \ud835\udf0ccorrelation (a linear correlation metric) and Kendall\u2019s\ud835\udf0fcorrelation (a rank-based correlation metric). We only report the results on the TREC DL datasets, since MS MARCO DEV only contains one relevant document per query and may not be suitable for performance prediction tasks. We observe that relative to existing pre-retrieval QPP approaches, MRL |\u03a3\ud835\udc44| has a high correlation with the actual retrieval performance. All of these correlations are significant (\ud835\udc5d_\ud835\udc63\ud835\udc4e\ud835\udc59\ud835\udc62\ud835\udc52< 0.05). Note that MRL is not optimized for performance prediction and its goal is not QPP and these results just provide insights into what a model with multivariate representation may learn. 7" + }, + { + "url": "http://arxiv.org/abs/2205.01230v1", + "title": "Retrieval-Enhanced Machine Learning", + "abstract": "Although information access systems have long supported people in\naccomplishing a wide range of tasks, we propose broadening the scope of users\nof information access systems to include task-driven machines, such as machine\nlearning models. In this way, the core principles of indexing, representation,\nretrieval, and ranking can be applied and extended to substantially improve\nmodel generalization, scalability, robustness, and interpretability. We\ndescribe a generic retrieval-enhanced machine learning (REML) framework, which\nincludes a number of existing models as special cases. REML challenges\ninformation retrieval conventions, presenting opportunities for novel advances\nin core areas, including optimization. The REML research agenda lays a\nfoundation for a new style of information access research and paves a path\ntowards advancing machine learning and artificial intelligence.", + "authors": "Hamed Zamani, Fernando Diaz, Mostafa Dehghani, Donald Metzler, Michael Bendersky", + "published": "2022-05-02", + "updated": "2022-05-02", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CL", + "cs.IR" + ], + "main_content": "INTRODUCTION The vast majority of existing machine learning (ML) systems are designed to be self-contained, with both knowledge and reasoning encoded in model parameters. Consequently, increasing the capacity of machine learning models by increasing their parameter size generally leads to higher accuracy [17]. For example, the number of parameters used in state-of-the-art language models has increased \u2217Both authors contributed equally to the paper. This work is licensed under a Creative Commons Attribution International 4.0 License. SIGIR \u201922, July 11\u201315, 2022, Madrid, Spain. \u00a9 2022 Copyright held by the owner/author(s). ACM ISBN 978-1-4503-8732-3/22/07. https://doi.org/10.1145/3477495.3531722 from 94 million in ELMo [45] to 1.6 trillion in Switch Transformers [13], an over 16\u00d7 increase in just three years (2018 \u2013 2021). 
Despite these successes, improving performance by increasing the number of model parameters can incur significant cost and limit access to a handful of organizations that have the resources to train them [4]. As such, focusing model development on the number of parameters is neither scalable nor sustainable in the long run. Motivated by recent work demonstrating both that high capacity models memorize training data [6] and that using retrieval-style methods can offload memorization to storage [5], we propose the augmenting ML models with access to stored information through information retrieval (IR) techniques. Whereas IR has proven an effective tool to support people accessing large text corpora, we believe that IR can be extended to support machines accessing not just large text corpora but more abstractly-represented knowledge stores. By designing machine learning architectures that have explicit access to an information retrieval system, we can decouple reasoning from memory, reducing the required model parameters and leveraging the efficiency, scalability, and effectiveness of IR techniques. We refer to this class of approaches as retrieval-enhanced machine learning (REML). In this paper, we describe how core principles of indexing, representation, retrieval, and ranking can be used to develop REML models. Using retrieval to improve model accuracy is not without precedent. Predating modern machine learning methods, the IR community developed some of the earliest known retrieval-enhanced machine learning models. For example, pseudo-relevance feedback [2, 8] leverages a retrieval system to analyze results of an \u2018initial\u2019 search query before producing a final ranking. This purely algorithmic use of a retrieval system in order to improve ranking model performance foreshadows its usefulness in modern applications. More recently, natural language processing models that incorporate retrieval capabilities have been shown to improve model performance [21, 39]. Although leveraging rather basic retrieval models, these approaches present an opportunity for ML systems to be further improved with more sophisticated IR methods. We introduce a generic framework that enables ML models to be augmented with IR capabilities that support querying a corpus for useful information, utilizing retrieved results, providing feedback to the retrieval model, and, if necessary, storing information for future access. This framework is flexible enough to both represent several existing ML models and scaffold future models. This paper is organized in order to motivate, describe, and ground REML as a research program. We begin in Section 2 by describing arXiv:2205.01230v1 [cs.LG] 2 May 2022 \fthe motivation for REML, specifically demonstrating why IR techniques provide a unique opportunity for ML. In Section 3, we discuss the challenges in developing each component of the proposed framework and suggest three categories of optimization approaches for REML models: (1) independent optimization of prediction and retrieval models, (2) their conditional optimization, and (3) their joint end-to-end optimization. Using this framework, in Section 4, we review several existing ML models in order to draw connections to REML. And, although these related models suggest the potential benefit of REML, substantial open research questions limit the applicability and effectiveness of contemporary IR methods. 
In Section 5, we conclude with a broad research program in REML, touching on the opportunity for the different subareas of IR research to contribute to the advancement of ML model performance. 2 MOTIVATION Despite the success of modern high capacity models, focusing on the number of parameters as a primary mechanism to improve performance can be brittle, unsustainable, and opaque [4]. We argue that these concerns can be addressed by developing ML models that, instead of encoding knowledge in parameters, can access large collections of information items using efficient, effective, and robust retrieval technologies. Some of the major applications of REML is presented below: Generalization. Recent work has shown that many ML models can significantly benefit from simple retrieval augmentation approaches. For instance, KNN-LM [32] linearly interpolates large language model predictions with the nearest neighbors of the given context input. This approach does not even require further training or finetuning. The authors showed substantial improvements in terms of language model perplexity in both in-distribution and out-ofdistribution test sets, demonstrating the generalizability of this approach. KNN-LM together with several other examples reviewed in Section 4 suggest that enhancing ML models using retrieval models will have a large impact on the generalizability of the models. Retrieval enhancement is expected to have large impact on domain adaptation, zero-shot, and few-shot learning tasks. Scalability. ML models compress information from training data to support accurate prediction at inference time. Although increasing model capacity by adding parameters often translates into an improvement in predictive power, recent studies demonstrate that large deep learning models often memorize training instances and concepts associated with them in their model parameters [6]. As an alternative to such implicit memorization, retrieval systems can explicitly store information either directly from the training set or from concepts derived during the learning process. Because retrieval architectures are often designed to scale, a retrieval system can provide efficient access to this information, substantially reducing the need for high capacity models and increasing throughput. Collection Updates and the Temporal Aspect. Current ML models make predictions solely based on the data observed during training. Although effective in stationary domains, this approach can be brittle in nonstationary domains, such as news, where new information constantly emerges. And, while periodic retraining is possible in some slowly-changing domains, for quickly-changing domains, this solution is impractical. An information access system can decouple reasoning from knowledge, allowing it to be maintained and updated independent of model parameters at a cadence aligned with the corpus. Interpretability and Explainability. Because the knowledge in training data is encoded in learned model parameters, explanations of model predictions often appeal to abstract and difficultto-interpret distributed representations. By grounding inference on retrieved information, predictions can more easily be traced specific data, often stored in a human-readable format such as text. On-Device Machine Learning. State-of-the-art ML models require significant computational power and memory availability, which are not available on devices such as smartphones. 
Retrieval-enhanced ML models can potentially decouple memorization from generalization and store a large collection (memory) of information items on a remote server. Thus, a small, efficient ML model can be hosted on-device. By minimizing the interactions between the retrieval component and the ML model, this can potentially revolutionize the applications of on-device machine learning. If privacy is an issue, the information items stored on the remote server can be encrypted, and methods such as the recently developed distance-preserving encryption schemes for nearest neighbor search [16] can be adopted for privacy-preserving retrieval.

Collectively, these properties of IR techniques suggest the development of REML, which we pursue in the subsequent sections.

3 RETRIEVAL-ENHANCED MACHINE LEARNING

This paper focuses on predictive ML models. Let $X$ be the input (feature) space for the task and $Y$ be the output (prediction) space. Given an input $x \in X$, an ML model produces a prediction in the output space $\hat{y} \in Y$. Supervised learning models are often trained by minimizing an empirical prediction loss (error) over instances in a training set $T = \{(x, y) \in X \times Y\}$. Retrieval-enhanced machine learning (REML) refers to models composed of two types of coupled components: a prediction model and one or more information access models. The prediction model makes predictions by communicating with $N$ information access models, each mediating access to a repository of information or knowledge. A REML model is defined as $f_\theta(x; R_{\omega_1}, R_{\omega_2}, \cdots, R_{\omega_N})$. The model $f_\theta$, parameterized by $\theta$, is called the prediction model, and $R_{\omega_i}$ denotes the $i$th information access model, parameterized by $\omega_i$. Thus, to produce $\hat{y}$, the prediction model can interface with $N$ information access models. Each $R_{\omega_i}$ includes a collection or repository $C_i$ that is available to the prediction model through an information access model. This repository could be composed of natural language documents, as with text retrieval, or some other indexed representation. As such, the $C_i$s reflect a large set of parameters available to the model that can be leveraged ad hoc, as with many non-parametric and lazy learning techniques. The goal of retrieval-enhanced supervised learning models is to minimize the empirical risk,

\[ \frac{1}{|T|} \sum_{(x,y) \in T} \mathcal{L}\big(f_\theta(x; R_{\omega_1}, R_{\omega_2}, \cdots, R_{\omega_N}), y\big) \quad (1) \]

where $\mathcal{L}$ is a loss function for each training instance.

3.1 Overview

We define the following necessary requirements (Reqs) for REML:

[Figure 1 (four panels), each showing a prediction model exchanging a query and response with an information access component for a given input/output, with optional "store" and "feedback" links: (a) Cat 1: Retrieval-only; (b) Cat 2: Retrieval with memory; (c) Cat 3: Retrieval with feedback; (d) Cat 4: Retrieval with memory & feedback.]

Figure 1: Retrieval-enhanced machine learning models should implement three necessary requirements (querying, retrieval, and response utilization) and may implement two optional properties (storing information and providing feedback to the information access model). This results in four categories of REML models presented above.

[Figure 2 (diagram): a prediction model connected to $0..N$ information access components; each information access component consists of Query Generation, Retrieval Model, Response Processing, Feedback Handler, and Storage Handling modules over a Collection / Memory / Index, exchanging query, response, store, and feedback signals with the prediction model.]

Figure 2: A generic framework for REML.

Req 1 Querying: the prediction model $f_\theta$ should be able to submit input-dependent queries to the information access models, i.e., the $R_{\omega_i}$s.

Req 2 Retrieval: each information access model $R_{\omega_i}$ should be able to efficiently process the prediction model's queries and retrieve relevant information items from a memory or collection $C_i$.

Req 3 Response Utilization: the prediction model $f_\theta$ should utilize the response returned by the information access models for making predictions.

Considering these three requirements, we can envision the first category of REML models. A high-level overview of models in this category is presented in Figure 1(a). Most existing retrieval-enhanced ML models, such as REALM [21], belong to this category. REML may also benefit from two additional optional properties:

Opt 1 Storing: the prediction model may store some information items in a memory for future access during both training and inference. Such information items will be accessible to the model through querying (Req 1).

Opt 2 Feedback: the prediction model may be able to provide feedback to the information access models. This enables the information access models to improve based on the feedback.

Figure 1(b) depicts the second category of REML models that take advantage of Opt 1 by storing information in a memory and accessing the information later. On the other hand, Figure 1(c) demonstrates a high-level overview of the third category of REML models that can provide feedback (Opt 2) to the information access systems. The last category (Figure 1(d)) implements both of these optional properties and supports querying, utilizing retrieval responses, storing information, and providing feedback to the information access systems. Based on these requirements and optional properties, Figure 2 envisions a generic framework for REML. The framework consists of two major parts: the prediction model $f_\theta$ and the information access models $R_{\omega_i}$s. For each input $x$, the model $f_\theta$ may decide to run multiple retrieval processes by either submitting multiple queries, accessing multiple data repositories and/or memories, providing feedback to the information access component, or a combination of the above.
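To make the interfaces implied by Reqs 1-3 and Opts 1-2 concrete, the following is a minimal, runnable sketch, not taken from the paper: the class names, the toy lexical scoring function, and the trivial response-combination step are all illustrative assumptions.

```python
from dataclasses import dataclass, field
from typing import Any, List, Tuple


@dataclass
class InformationAccessModel:
    """R_omega: mediates access to a repository C_i (Req 2, Opt 1, Opt 2)."""
    collection: List[Tuple[Any, Any]] = field(default_factory=list)  # (key, item) pairs

    def retrieve(self, query, k: int = 3):
        """Req 2: score every stored item against the query and return the top k."""
        scored = sorted(self.collection, key=lambda kv: self._score(query, kv[0]), reverse=True)
        return [item for _, item in scored[:k]]

    def store(self, key, item):
        """Opt 1: the prediction model may write items back into the repository."""
        self.collection.append((key, item))

    def feedback(self, query, signal: float):
        """Opt 2: accept a feedback signal (unused in this sketch) for later optimization."""
        pass

    def _score(self, query, key) -> float:
        # Toy lexical overlap score; a real system would use BM25 or a dense index.
        q, d = set(str(query).lower().split()), set(str(key).lower().split())
        return len(q & d) / (len(q | d) or 1)


class REMLPredictionModel:
    """f_theta: issues queries (Req 1) and consumes responses (Req 3)."""

    def __init__(self, access_models: List[InformationAccessModel]):
        self.access_models = access_models

    def predict(self, x):
        query = self.generate_query(x)                       # Req 1: querying
        responses = [r.retrieve(query) for r in self.access_models]
        return self.combine(x, responses)                    # Req 3: response utilization

    def generate_query(self, x):
        return x  # identity query generation, purely for the sketch

    def combine(self, x, responses):
        flat = [item for resp in responses for item in resp]
        return flat[0] if flat else None  # trivially return the top retrieved item


# Usage: a single information access model over a toy repository.
ram = InformationAccessModel()
ram.store("capital of france", "Paris")
ram.store("capital of italy", "Rome")
model = REMLPredictionModel([ram])
print(model.predict("what is the capital of france"))  # -> Paris
```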
The number of retrieval processes can be zero for some inputs, and thus REML generalizes typical predictive modeling. 3.2 Information Access in REML In its most generic form, each information access system in the proposed REML framework consists of five components: (1) Query Generation, (2) Retrieval Model, (3) Response Processing, (4) Feedback Handler, and (5) Storage Handler. In the following subsections, we discuss potential implementations for each component. 3.2.1 Query Generation. In current information access systems, queries mostly take the form of unstructured text (e.g., keyword queries or natural language questions), structured query language (e.g., SQL), or multi-media items (e.g., images). Such query languages and formats can be also adopted by retrieval-enhanced ML models. The Query Generation component is responsible for generating one of these query formats. Note that depending on the application and due to efficiency or effectiveness requirements, one may simply cast the query generation problem to query selection from a set of pre-defined queries. In either case, the Query Generation (or Selection) component should be able to translate the information need of the prediction model \ud835\udc53\ud835\udf03to a query language or format that can be efficiently processed by the information access model \ud835\udc45\ud835\udf14\ud835\udc56. Since retrieval models accessible by \ud835\udc53\ud835\udf03may accept different query languages, the Query Generation component may be unique to each retrieval model. Existing information access systems are designed for people and, therefore, existing query formats (mentioned above) are understandable by people. In the context of REML, we can relax the requirement of an interpretable query language. Besides the common query languages and formats, the prediction models can produce any latent representation (e.g., a high-dimensional dense vector) as a query. For instance, any hidden layer representation produced by the prediction model \ud835\udc53\ud835\udf03may be used as a query for retrieval. That \fbeing said, queries may also be generated from the input \ud835\udc65itself without any involvement of the prediction model parameters. Under REML, prediction models do not have restrictions on the number of queries that can be submitted for each input \ud835\udc65. As a result, a model may generate multiple, sequential queries produced for each input \ud835\udc65, resulting in a query session analogous to human search sessions. While current search engines base sessions on temporally-adjacent user queries, REML prediction models can, when querying, explicitly indicate a unique session ID associated with the input \ud835\udc65. 3.2.2 Retrieval Model. The retrieval model component aims at retrieving information items from the given collection, repository, or memory in response to each query produced by the Query Generation component. Existing retrieval models are mostly designed based on the probability ranking principle (PRP) [49], in which documents are ranked based on their probability of relevance to the query. In the IR literature, relevance can be defined in five levels [52]: (1) systematic or algorithmic relevance, (2) topical relevance, (3) cognitive relevance or pertinence, (4) situational relevance or utility, and (5) motivational or affective relevance. However, these definitions assume that the retrieved documents are consumed by humans. 
This assumption no longer holds for REML models, thus the notion of relevance needs to be revisited for REML. When designing retrieval models for REML, relevance can be thought of as the utility that the prediction model obtains by consuming the results produced by the retrieval model; this is similar to task-based perspectives on (human) information retrieval [30]. For simplicity and without loss of generality, assume for each input \ud835\udc65, the prediction model \ud835\udc53\ud835\udf03only submits a single query \ud835\udc5eto a retrieval model that returns a result list \ud835\udc3f\ud835\udc5e= {(\ud835\udc511,\ud835\udf19(\ud835\udc511)), (\ud835\udc512,\ud835\udf19(\ud835\udc512)), \u00b7 \u00b7 \u00b7 , (\ud835\udc51\ud835\udc58,\ud835\udf19(\ud835\udc51\ud835\udc58))}, where each \ud835\udc51\ud835\udc56is a document1 in the collection and \ud835\udf19(\ud835\udc51\ud835\udc56) encodes a list of features and properties associated with document \ud835\udc51\ud835\udc56. For instance, \ud835\udf19(\ud835\udc51\ud835\udc56) may contain the document score produced by the retrieval model in addition to a number of features used by the retrieval model to compute the score. With a slight abuse of notation, let \ud835\udc53(\ud835\udc65; \ud835\udc3f\ud835\udc5e) denote the prediction function that submits the query \ud835\udc5eto a retrieval model and uses its response (i.e., \ud835\udc3f\ud835\udc5e) to make a prediction. Then, the utility gain can be defined as: UtilityGain(\ud835\udc5e, \ud835\udc3f\ud835\udc5e; \ud835\udc53\ud835\udf03,\ud835\udc65) = \ud835\udc48(\ud835\udc53\ud835\udf03(\ud835\udc65; \ud835\udc3f\ud835\udc5e),\ud835\udc66) \u2212\ud835\udc48(\ud835\udc53\ud835\udf03(\ud835\udc65; \u2205),\ud835\udc66) (2) where \ud835\udc48(\u00b7, \u00b7) represents some desired utility function. This definition assumes that data points (\ud835\udc65,\ud835\udc66) are i.i.d. samples. Utility gain depends on how the prediction model \ud835\udc53\ud835\udf03consumes \ud835\udc3f\ud835\udc5efor producing b \ud835\udc66. Utility gain can take on both positive and negative values. A negative gain means that the retrieval results \ud835\udc3f\ud835\udc5ehave negative impact on predicting the ground truth label. This definition can be extended to multiple queries per \ud835\udc65. The implementation of retrieval models for REML depends on the nature of documents in the collection. For instance, one can use the vector space model and employ the inner product as the similarity function between query and document vectors. Section 3.3 provides more information on the optimization of retrieval models in REML. 1In this paper, we refer to retrievable items, e.g., unstructured text, image, or even latent vectors, as documents. 3.2.3 Response Processing. The way the prediction models consume the retrieved items has a substantial impact on their end-toend performance. The Response Processing component takes the results returned by the retrieval models for each query \ud835\udc5e(i.e., \ud835\udc3f\ud835\udc5es) and prepares it for consumption by the prediction model. This component can be implemented by returning the content of the retrieved documents, synthesizing a summary of their content, producing one or more semantic representations of their content, combining all the information presented in \ud835\udc3f\ud835\udc5ein some way, and so on. 
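The response-processing strategies listed above can be illustrated with a small sketch; the three modes, the separator token, and the toy encoder below are assumptions chosen for illustration only, not components prescribed by the framework.

```python
import numpy as np

def process_response(retrieved, mode="concatenate", encode=None, max_items=3):
    """Prepare a retrieval result list L_q for consumption by the prediction model.

    Three illustrative strategies (among many possible ones):
      - "concatenate": return the raw text of the top items,
      - "summarize":   a crude extractive stand-in for a learned summarizer,
      - "embed":       a single pooled vector built with a supplied encoder.
    """
    top = retrieved[:max_items]
    if mode == "concatenate":
        return " [SEP] ".join(doc for doc, _score in top)
    if mode == "summarize":
        return " ".join(doc.split(".")[0] + "." for doc, _score in top)  # first sentence of each
    if mode == "embed":
        vecs = np.stack([encode(doc) for doc, _score in top])
        return vecs.mean(axis=0)                                         # mean-pooled representation
    raise ValueError(f"unknown mode: {mode}")


# Toy usage over a scored result list.
results = [("Paris is the capital of France. It hosts the Louvre.", 12.3),
           ("France is in western Europe. It borders Spain.", 9.1)]
print(process_response(results, mode="concatenate"))
print(process_response(results, mode="summarize"))
print(process_response(results, mode="embed", encode=lambda d: np.array([len(d), d.count(" ")], float)))
```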
There are many design choices here, and the best choice will largely depend on the nature of the machine learning model and the task it is being applied to.

3.2.4 Feedback Handler. When training retrieval models, it is often desirable to get feedback from the machine learning model. Such feedback can then be used as a signal for optimizing the retrieval model. We can imagine various forms of feedback in this context. For example, the model can compute the utility gain of documents returned by the retrieval model using Equation (2). As another example, the feedback may be computed based on the gradients of the prediction loss with respect to the retrieved information. Section 3.3 discusses how the model's feedback can be used for optimizing retrieval models in REML.

3.2.5 Storage Handler. If the prediction model has the ability to store information in the repository (or memory), the Storage Handler can expand the collection by storing the information item into the memory. However, for efficient storage and access of a large number of items, careful consideration of memory management techniques, hardware requirements, and storage data structures beyond existing technologies (e.g., inverted indexes) is required. Besides information storage, this component is also responsible for storage management. Thus, it should implement caching, compression, access controls, and time-to-live requirements as necessary.

3.3 REML Optimization

We envision three optimization approaches for REML: (1) independent optimization of prediction and information access models, (2) conditional optimization of these models such that the quality of one impacts the optimization of the other, and (3) joint end-to-end optimization of both models. Without loss of generality, here we assume that there only exists one information access model.

3.3.1 Independent Optimization of Prediction and Information Access Models. In independent optimization, the training process of the prediction model $f_\theta$ is independent of the retrieval performance. For example, we can assume that the retrieval model is optimal. Formally, we can optimize the prediction model of REML as:

\[ \theta^* = \arg\min_{\theta} \frac{1}{|T|} \sum_{(x,y) \in T} \mathcal{L}(f_\theta(x; R_{\text{opt}}), y) \quad (3) \]

where $R_{\text{opt}}$ denotes an optimal retrieval model and can be modeled using ground truth relevance information, if available. Similar to [72], we can also model imperfect retrieval models by introducing noise to an optimal ranking behavior. The retrieval model can be trained using a typical learning-to-rank (LTR) formulation, independent of $f_\theta$. For the sake of space, we refer the reader to Liu [42] for more information on LTR models.

3.3.2 Conditional Optimization of Prediction and Information Access Models. In conditional optimization, the prediction model parameters get updated conditioned on the retrieval model's performance, and vice versa. This process can be done iteratively until a stopping criterion is met (e.g., convergence or early stopping based on performance on a held-out validation set). Therefore, the prediction model can be optimized as:

\[ \theta^{(t)} = \arg\min_{\theta} \frac{1}{|T|} \sum_{(x,y) \in T} \mathcal{L}(f_\theta(x; R_{\omega^{(t)}}), y) \quad (4) \]

\[ \omega^{(t+1)} = \arg\min_{\omega} \frac{1}{|T|} \sum_{(x,y) \in T} \mathcal{L}(f_{\theta^{(t)}}(x; R_{\omega}), y) \quad (5) \]

where $\theta^{(t)}$ and $\omega^{(t)}$ denote the parameters of the prediction model and the information access model at the $t$th iteration, respectively. These equations assume that both models are being optimized. In the case of unsupervised retrieval models, the second optimization step would be skipped (i.e., $\omega^{(t+1)} = \omega^{(t)}$).

3.3.3 Joint End-to-End Optimization. In end-to-end optimization of REML, both ML and information access models are trained jointly by optimizing a single objective function. Formally, it is defined as:

\[ \theta^*, \omega^* = \arg\min_{\theta, \omega} \frac{1}{|T|} \sum_{(x,y) \in T} \mathcal{L}(f_\theta(x; R_{\omega}), y) \quad (6) \]

For optimizing this objective via gradient descent-based optimizers, the whole REML process (both models and their interactions) is required to be differentiable. End-to-end optimization is expected to perform better than the last two optimization approaches, but given the complexity of retrieval from large collections, this requirement may be difficult to satisfy in some cases.
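As a concrete illustration of the conditional optimization loop in Equations (4) and (5), the following runnable toy alternates the two steps; grid search stands in for gradient-based training, and the retriever, predictor, and data are all made up for the sketch.

```python
class ToyRetriever:
    """A toy retrieval model R_omega: scores documents by weighted term overlap."""
    def __init__(self, docs, weight=1.0):
        self.docs, self.weight = docs, weight       # omega is a single scalar weight

    def retrieve(self, query, k=1):
        overlap = lambda d: self.weight * len(set(query.split()) & set(d.split()))
        return sorted(self.docs, key=overlap, reverse=True)[:k]

    def fit(self, data, pred_model, loss_fn):
        # Eq. (5): choose the omega minimizing downstream risk (grid search stands in for SGD).
        self.weight = min([0.5, 1.0, 2.0],
                          key=lambda w: risk(pred_model, ToyRetriever(self.docs, w), data, loss_fn))


class ToyPredictor:
    """A toy prediction model f_theta: copies one token out of the top retrieved document."""
    def __init__(self, position=0):
        self.position = position                    # theta is which token to copy

    def predict(self, x, retriever):
        tokens = retriever.retrieve(x, k=1)[0].split()
        return tokens[min(self.position, len(tokens) - 1)]

    def fit(self, data, retriever, loss_fn):
        # Eq. (4): choose the theta minimizing the empirical risk with omega frozen.
        self.position = min(range(3),
                            key=lambda p: risk(ToyPredictor(p), retriever, data, loss_fn))


def risk(pred_model, retriever, data, loss_fn):
    """Empirical risk of the coupled system, as in Equation (1)."""
    return sum(loss_fn(pred_model.predict(x, retriever), y) for x, y in data) / len(data)


# Alternate Eq. (4) and Eq. (5) for a few rounds on made-up data.
data = [("capital france", "Paris"), ("capital italy", "Rome")]
docs = ["france capital Paris", "italy capital Rome"]
loss = lambda pred, y: 0.0 if pred == y else 1.0
predictor, retriever = ToyPredictor(), ToyRetriever(docs)
for t in range(3):
    predictor.fit(data, retriever, loss)    # Eq. (4)
    retriever.fit(data, predictor, loss)    # Eq. (5)
    print(f"round {t}: empirical risk = {risk(predictor, retriever, data, loss):.2f}")
```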
3.4 Extending REML to Multiple ML Models

Previous sections consider only a single prediction model that interacts with multiple retrieval processes (see Figure 2). This section extends the REML framework to multiple prediction models. Similar to current search engines that provide service to many users, retrieval models can also be employed by multiple ML models. Assume there are $M$ prediction models $f_{\theta_1}, f_{\theta_2}, \cdots, f_{\theta_M}$ that use $N$ information access models denoted by $R_{\omega_1}, R_{\omega_2}, \cdots, R_{\omega_N}$. Each $R_{\omega_i}$ should provide service to multiple prediction models. This introduces the following challenges:

Shared Query Language: All prediction models may need to share the same query language for interacting with retrieval systems.

Shared Response Formats: The responses produced by each retrieval system will be used by all prediction models. Therefore, the prediction models should be able to utilize the response format used by each retrieval model.

Shared Storage: The storage used by each retrieval model is shared between all prediction models. Storage is a limited resource, thus a policy may be required to regulate storage usage for each prediction model. Moreover, the data stored by each prediction model may not be interpretable by other models or may not be shared due to privacy restrictions. The Storage Handling component should develop memory management and access restriction policies and functionalities for each storage request.

Personalization: The prediction models have special needs and they utilize the retrieval responses differently. (Personalization is a term often used for humans; we stick to the same terminology to be consistent with the IR literature.) Therefore, in response to a query $q$ submitted by two prediction models $f_{\theta_i}$ and $f_{\theta_j}$, the retrieval models may want to respond differently. In this case, retrieval models would need to implement models and techniques for personalizing the search results.

Comparable Feedback Across Prediction Models: Comparable feedback across prediction models enables us to easily aggregate the obtained feedback. Otherwise, the feedback can be used for each individual prediction model as a form of personalization.

Optimizing Retrieval Models: In the case of trainable retrieval models, the optimization solutions introduced in Section 3.3 need further adjustments. Let $\mathcal{L}_i$ denote the loss function associated with the $i$th prediction model. Thus, the joint end-to-end optimization of models can be achieved as follows:

\[ \arg\min_{\theta, \omega} \frac{1}{M} \sum_{i=1}^{M} \frac{1}{|T_i|} \sum_{(x,y) \in T_i} \alpha_i \mathcal{L}_i(f_{\theta_i}(x; R_{\omega}), y) \quad (7) \]

where $T_i$ denotes the training data for the $i$th prediction task. This formulation assumes that the loss values are comparable across prediction models. The hyper-parameters $\alpha_i$ control the weight of each loss function. The conditional optimization formulation can be adjusted similarly.
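A direct transcription of the multi-model objective in Equation (7) is sketched below; the stand-in retriever, the two toy "prediction models", and the zero-one loss are assumptions used only to make the function executable.

```python
def multi_model_objective(pred_models, retriever, datasets, losses, alphas):
    """Eq. (7): average the alpha-weighted empirical risks of M prediction models
    that share a single trainable information access model R_omega."""
    total = 0.0
    for f_i, T_i, loss_i, alpha_i in zip(pred_models, datasets, losses, alphas):
        per_task = sum(loss_i(f_i(x, retriever), y) for x, y in T_i) / len(T_i)
        total += alpha_i * per_task
    return total / len(pred_models)


# Toy usage: two "prediction models" sharing the same retriever stub.
retriever = lambda q: q            # stand-in for R_omega
f1 = lambda x, r: r(x)             # model 1 echoes the retrieved item
f2 = lambda x, r: r(x).upper()     # model 2 post-processes it
data1, data2 = [("a", "a")], [("b", "B")]
zero_one = lambda pred, y: 0.0 if pred == y else 1.0
print(multi_model_objective([f1, f2], retriever, [data1, data2],
                            [zero_one, zero_one], [1.0, 1.0]))  # -> 0.0
```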
3.5 Information Access Evaluation in REML

The prediction model should be evaluated based on its performance on the downstream task, and appropriate evaluation methodologies and metrics should be chosen considering the downstream task. This evaluation is the same for any predictive model designed for that task. Therefore, we skip the evaluation of prediction models and discuss approaches for evaluating the information access models. Evaluating information access in REML is particularly important for diagnosing the retrieval process and designing retrieval systems that provide service to multiple prediction models (see Section 3.4). The retrieval component in REML can be evaluated either extrinsically or intrinsically:

Extrinsic Evaluation: The information access quality can be quantified by measuring its impact on the prediction model for the downstream task. This is perhaps the most important factor in evaluating information access in REML. Note that in the case of multiple prediction models, extrinsic evaluation is defined for each prediction model independently. However, aggregating the downstream performances of different prediction models is challenging, because prediction models may be evaluated based on various metrics and methodologies, and these may not aggregate easily. Extrinsic evaluation can be done through both offline and online evaluation.

Intrinsic Evaluation: In intrinsic evaluation, the retrieval model is evaluated independent of the prediction models. To do so, one may define relevance based on the desired documents expected to be retrieved for a prediction model. This definition may be obtained from experts or by analyzing observations of the prediction models' behavior. Then, presumably, an annotation process, e.g., through pooling, may be employed for creating data collections for intrinsic evaluation of the information access model. Metrics used in intrinsic evaluation are expected to have high correlations with the downstream performance of the prediction models. We highlight that most metrics used in the IR literature have been developed based on user behaviors with search engines. For instance, many of them assume that users assess documents sequentially. However, such assumptions may not hold for many ML models. Thus, new evaluation metrics may need to be developed.

4 CASE STUDIES

Since REML is a general framework, we can discuss related approaches as special cases of REML. This exercise helps us understand how and when REML might work and suggests opportunities for extending existing work.

4.1 Knowledge Grounding

Fully data-driven ML models, despite demonstrating success across a wide number of tasks, still lack grounding in the real world. Access to external knowledge, via knowledge grounding, may help with this issue [11, 22, 34, 39, 76]. Knowledge grounding models make predictions based on the results returned by a retrieval model. In the context of language modeling, one class of methods uses retrieval results as evidence to support reasoning. For example, the knowledge retriever module in REALM [21] accesses information from an encoded Wikipedia corpus during pre-training. In text generation, RetGen [75] combines a grounded text generator with a document retriever. Grounding the generation helps with the issue of hallucinated facts, and the retrieval component makes the grounding effective and efficient. Lewis et al. [39] highlighted the importance of retrieval in knowledge-intensive NLP tasks and introduced retrieval-augmented generation (RAG) by augmenting a generator with the output of a non-parametric retriever that uses maximum inner product search. Entities as Experts (EaE) [15] introduces an entity memory that can be accessed by the model; the retrieved representations of entities are combined with the input representation for entity linking, mention detection, and masked language modeling tasks. Similarly, Fact as Experts (FaE) [61] incorporates a fact memory for language modeling. Such a mechanism gives access to factual information that may expand or change over time, without the need for additional training or fine-tuning. In open-domain QA, a common approach is to retrieve documents or passages from Wikipedia or even the Web and then extract answers [27, 47]. Lee et al. [37] used an encoded Wikipedia corpus to train a retrieval model and then fine-tuned the prediction model for a QA objective. Khattab et al. [33] used a retrieval component for multi-hop reasoning, where the retrieved facts from each hop are summarized into a short context that becomes part of the query for the subsequent hops. Similarly, Das et al. [10] performed iterative retrieval for expanding and rewriting multi-hop questions. This is also the case for task-oriented dialogues. For instance, LaMDA [58] shows the benefit of granting dialogue systems access to external knowledge for reducing unsourced statement hallucination [53]. The approaches presented in this subsection mostly use simple retrieval models, e.g., TF-IDF or the inner product of learned representations, for finding factual information from external knowledge bases. Therefore, one can look at knowledge grounding as an implementation of REML, mostly based on Category 1: Retrieval-only (Figure 1(a)) or Category 3: Retrieval with feedback (Figure 1(c)).
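Most of the knowledge-grounding systems above share a retrieve-then-generate pattern, which the following sketch illustrates with a dependency-free TF-IDF retriever and a placeholder generator; the corpus, the prompt format, and the generate stand-in are assumptions for illustration, not any specific system's implementation.

```python
import math
from collections import Counter

CORPUS = [
    "Paris is the capital and most populous city of France.",
    "The Eiffel Tower was completed in 1889.",
    "Rome is the capital city of Italy.",
]

def tfidf_vectors(texts):
    """A very small TF-IDF, just for the sketch (no external dependencies)."""
    docs = [Counter(t.lower().split()) for t in texts]
    df = Counter(term for d in docs for term in d)
    n = len(texts)
    return [{t: tf * math.log((1 + n) / (1 + df[t])) for t, tf in d.items()} for d in docs]

DOC_VECS = tfidf_vectors(CORPUS)

def retrieve(question, k=2):
    q = Counter(question.lower().split())
    scores = [sum(v.get(t, 0.0) * q[t] for t in q) for v in DOC_VECS]
    ranked = sorted(range(len(CORPUS)), key=lambda i: scores[i], reverse=True)
    return [CORPUS[i] for i in ranked[:k]]

def generate(question, evidence):
    """Stand-in for a grounded generator (e.g., a seq2seq reader conditioned on evidence)."""
    prompt = "question: " + question + " context: " + " ".join(evidence)
    return prompt  # a real system would decode an answer from this prompt

question = "What is the capital of France?"
print(generate(question, retrieve(question)))
```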
4.2 Memory-Augmented Machine Learning Using a memory component where the model can read from and/or write into is one of the most common ways of implementing REML in neural networks. The main motivation is to use an explicit storage buffer to make it easier for the network to rapidly incorporate new information and not to forget in the future. A model may use an internal memory where it compresses and accumulates information to access them in later stages of the process. This has been the base of several neural architecture classes. For instance, Long Short-Term Memory networks (LSTMs) [25] or Gated Recurrent Networks [7] that use a latent state as a memory to collect information from previous time steps. Attention-based models [3, 60] also treat different parts of the input as memories and use soft access as the retrieval mechanism to manage the interaction between them. However, memory-augmented neural networks refers to cases of using an external memory [51]. Among main works in this area, memory networks [55] explicitly store information in a form that is element-wise addressable. Neural Turing machines [18, 19] are well-known examples of ML models that can read from and write into an external memory matrix in order to represent and manipulate complex data structures. The common target property of memory-augmented neural networks is incorporating an external memory that is trained endto-end with the objective and data from downstream tasks. This most resonates with the fourth category of REML: Retrieval with memory and feedback (Figure 1(d)). However, the memory size in existing models is relatively small and extending the memory size is an exciting and challenging research direction. 4.3 Retrieval-Enhanced Input Representation A number of retrieval-enhanced models use the retrieved items to update the representations of the model\u2019s input. This is different from knowledge grounding in the sense that the information items do not necessary include the knowledge required for accomplishing the task. Instead, the retrieved information contains patterns that can help the model to learn more expressive representations. Pseudo relevance feedback (PRF) is an example of such models. It uses the top retrieved documents for updating the query representation through query expansion. It has shown successful results in a wide range of retrieval tasks [2, 8, 14, 28, 36, 68, 73], demonstrating the quality of the produced query representations for retrieval. Recently, Hashemi et al. [22] proposed Guided Transformer, an extension to the Transformer network that includes cross attention for contextualizing inputs with retrieved information from multiple information sources to learn more accurate representations of the model\u2019s input. In their subsequent work [23], the authors proposed an approach for learning multiple representations for query intents by utilizing the retrieval results and taking advantage of the Guided Transformer network for representation adjustment. More recently, Borgeaud et al. [5] proposed RETRO for language modeling and showed that by using networks like Guided Transformer one can enable access to a trillion-scale database for a relatively small model. \fRelated approaches have been also used in computer vision [20, 35, 40, 54]. For example, Xu et al. [69] studied the task of image inpainting whose goal is to restore missing regions of an image. 
They introduced a \u201ctexture memory\u201d that augments a neural network with access to patches extracted from unmasked regions of the input image. For the task of 3D scene reconstruction, Siddiqui et al. [54] used retrieval for creating multiple approximate reconstructions and then fusing them with an attention-based blending module to generate the output. For object detection, Kuo et al. [35] used retrieval from a large-scale dataset of 3D models to understand the underlying 3D structure of objects seen in a 2D image. Similar to knowledge grounding, retrieval-enhanced representation learning can take advantage of information items that are similar to the input by learning from patterns observed in the retrieved results. Thus, the first (retrieval-only) and the third (retrieval with feedback) REML categories are often used for this purpose. 4.4 Generalization through Memorization Combining retrieval-based and generative approaches has been explored in a number of applications. In this case, the retrieval component can contribute by producing accurate responses when memorization is sufficient. Motivated by the goal of memorizing rare patterns explicitly, Khandelwal et al. [32] introduced KNN-LM, where a retrieval mechanism is used to find the nearest neighbor tokens given the prefix as query. KNN-LM linearly interpolates the predicted distribution for the next token using distance information from the retrieval mechanism. BERT-KNN [29] employs a similar nearest neighbor algorithm to augment a BERT model to learn better representations for rare facts. This idea has also been extended to machine translation [31]. It is shown that retrieval augmentation improves domain adaptation by using a domain-specific datastore for retrieval. Tay et al. [57] proposed training a large model that memorizes the mapping of document content to document ids, which can be used to retrieve relevant document ids given a query at inference time. This model could be an alternative to KNN based models we discussed above to serve a REML system as a differential index. In dialogue systems, given a dialogue history as a query, a retrieval unit can be used to return the top ranked candidate response as the next dialogue utterance [50]. Such retrieval-based approaches can also be combined with response generation models and form a hybrid solution for dialogue systems [70]. Another approach to improve generalization through memorization is through updating retrieval results. In some cases, editing an existing candidate output is easier than generating it from scratch, especially in complex structured output generation tasks, like code generation. Hashimoto et al. [24] proposed to retrieve a training example given the input and edit it to the desired output. The retriever and the editing modules are trained jointly. Pasupat et al. [44] proposed using exemplar retrieval for semantic parsing. In their setup, given a query, the parser retrieves a set of related exemplars, augments the query using the retrieved information, and then incorporates a seq2seq model [56] to produce an output parse. The aforementioned methods try to use a retrieval component to handle memorization cases. It is found useful, especially for cases where sufficient training data is not available. Many existing models are based on a retrieval-only implementation of REML. 
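The KNN-LM style interpolation discussed above can be sketched as follows, assuming a pre-built datastore of (context vector, next token) pairs; the vocabulary, vectors, interpolation weight, and temperature are made-up values for illustration.

```python
import numpy as np

def knn_lm_next_token(p_lm, context_vec, datastore_keys, datastore_tokens,
                      vocab, k=4, lam=0.25, temperature=1.0):
    """Interpolate the parametric LM distribution with a kNN distribution built
    from the k nearest stored contexts: p = lam * p_knn + (1 - lam) * p_lm."""
    # Distances from the current context to every stored context.
    dists = np.linalg.norm(datastore_keys - context_vec, axis=1)
    nearest = np.argsort(dists)[:k]

    # Turn (negative) distances of the neighbours into a distribution over their tokens.
    weights = np.exp(-dists[nearest] / temperature)
    weights /= weights.sum()
    p_knn = np.zeros(len(vocab))
    for w, idx in zip(weights, nearest):
        p_knn[vocab.index(datastore_tokens[idx])] += w

    return lam * p_knn + (1.0 - lam) * p_lm


# Toy usage with a 3-word vocabulary and a 4-entry datastore.
vocab = ["paris", "rome", "berlin"]
p_lm = np.array([0.4, 0.3, 0.3])                        # parametric LM prediction
keys = np.array([[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]])
tokens = ["paris", "paris", "rome", "berlin"]           # next tokens stored with each key
context = np.array([0.85, 0.15])                        # current context representation
print(knn_lm_next_token(p_lm, context, keys, tokens, vocab, k=2))
```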
4.5 Efficient Access to Longer Context Due to memory constraints as well as efficiency and effectiveness reservations, consuming and representing large inputs, e.g., long text documents or videos, are challenging. REML offers a solution to address this issue by giving access to the context of any size via a retrieval mechanism. Here we mention a few examples of studies that exploit this idea. Wu et al. [63] proposed using a long-term feature bank for detailed video understanding. The long-term feature bank stores a rich, time-indexed representation of a long video. Then the video understanding model consults with the bank through a retrieval module to get features that encode information about past and future scenes, objects, and actions. Similarly, MemViT [64], proposes to process videos in an online fashion and store information in memory at each iteration. The model can retrieve prior context from the memory to enable long-term modeling for the recognition task. Similar approaches have also been used for video object segmentation [43] and video summarization [38]. For processing long documents, researchers often split the documents into passages. For instance, Dai and Callan [9] only used the first passage of each document for document retrieval. Xiong et al. [66] used the passage with the maximum similarity score with the query. The end-to-end intra-document cascading model [26] can be seen as a REML model with feedback. It first selects (retrieves) a number of passages from the document and then consumes the selected passages for scoring the document. The methods presented in this subsection are perhaps the simplest implementations of REML: the retrieval collection is not large, and some of them do not use feedback. 4.6 Retrieval-Enhanced Optimization All the methods mentioned above use a retrieval component at the inference time for making accurate predictions. Some approaches use retrieval components solely for the purpose of optimization, e.g., for producing training data and/or computing loss functions. Thus, the retrieval model will not be used during inference. A natural application of retrieval-enhanced optimization is for retrieval tasks. Dehghani et al. [12] introduced a weak supervision approach for IR by producing large-scale training data through BM25 and training ML models for document ranking. Zamani and Croft [71] used the top retrieved documents to produce a relevance model distribution for training queries and learn relevance-based word embedding. Producing hard negative instances for training learning-to-rank models is another application of REML. For instance, ANCE [66] and its extensions [41, 46] are dense retrieval models that iteratively use the model parameters to retrieve documents for producing \u2018hard\u2019 negative samples for training the model. Wu et al. [65] used a retrieval unit to enable unsupervised training of machine translations, i.e., using two monolingual corpora in the source and target languages with no alignment. As an alternative to back translation, they proposed retrieving a sentence from the target corpus using the source sentence and applying some \fchanges using an editing mechanism to the retrieved target to generate source-target pairs and train the MT model. Triantafillou et al. [59] proposed an approach for few-shot learning through retrieval. This approach retrieves items for each input and uses them for making predictions. Via this approach, a model can adapt to a new domain without additional training or new data. 
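One round of the ANCE-style hard-negative mining mentioned above might look like the following sketch; the toy hash-based "encoders" and the corpus are assumptions, and a real system would re-encode the corpus and retrain the dual encoder between rounds.

```python
import numpy as np

def mine_hard_negatives(encode_query, encode_doc, queries, positives, corpus, k=5):
    """One mining round: use the *current* dual encoder to retrieve the top-k
    documents per query and keep the non-relevant ones as hard negatives."""
    doc_vecs = np.stack([encode_doc(d) for d in corpus])
    triples = []
    for q, pos in zip(queries, positives):
        scores = doc_vecs @ encode_query(q)            # inner-product retrieval
        top = np.argsort(-scores)[:k]
        negatives = [corpus[i] for i in top if corpus[i] != pos]
        triples.append((q, pos, negatives))
    return triples


# Toy usage: "encoders" are fixed random projections seeded by the text hash, purely illustrative.
def toy_encode(text, dim=16):
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(dim)

corpus = ["doc about paris", "doc about rome", "doc about berlin", "doc about lyon"]
queries = ["capital of france"]
positives = ["doc about paris"]
for q, pos, negs in mine_hard_negatives(toy_encode, toy_encode, queries, positives, corpus, k=3):
    print(q, "->", negs)  # these negatives would feed the next training round
```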
An interesting use case of REML is the pre-training task of CLIP [48] and VideoCLIP [67] which are practically optimizing for text-image and text-video retrieval, respectively. They are in fact capturing cross-modal relevance that led to learning representations that are effective in various setups, like zero-shot classification. 5 A RESEARCH AGENDA While Section 4 provides evidence of the efficacy and broad applicability of REML, there remain significant open research challenges in fully realizing the general REML vision, some of which are already mentioned in previous sections. 5.1 Querying In developing a prediction model that supports retrieval, understanding how to query becomes a core research question. First, this involves knowing when to query. In some situations, a prediction model may not benefit from a retrieval operation (even if it benefits on average). Although current retrieval-enhanced systems issue the equivalent of queries for every instance, when querying incurs some cost, be it in the form of latency or financial expense, developing models that \u201cknow when they don\u2019t know\u201d would allow the prediction algorithm to explicitly trade off cost and benefit. A prediction model that has access to multiple information access services can make this decision for individual corpora, perhaps select the appropriate source for the instance. Second, at a more granular level, how retrieval might benefit a model may vary by instance \ud835\udc65. For example, retrieval may support uncertainty in one part of the \ud835\udf03for one instance and uncertainty in another part of \ud835\udf03for another instance. This self-interrogation can be explicitly designed or implicitly learned. Nevertheless, even learnable behavior requires an architecture and parameters to adapt. Finally, many retrieval situations can benefit from the searcher conveying non-semantic meta-retrieval information such as uncertainty in (aspects of) the query or context of the retrieval itself. People often convey similar information to human intermediaries [1] and we suspect that more expressive querying can also emerge in REML. In developing an information access model to support a prediction model, similar questions arise. First, developing or learning a query language requires expressiveness that captures the breadth of model needs. At the same time, it should allow for communication of meta-retrieval or structured properties of the retrieved set. Moreover, these properties need to be explored within the effectiveness and efficiency constraints. Second, although a query may be effective and efficient in general, it may be ambiguous or imprecise for a particular retrieval scenario. This is especially likely in situations where multiple models may develop inconsistent uses of the query language (Section 3.4). 5.2 Storing The ability of the prediction model to store items presents unique problems not encountered in traditional retrieval or ML research. Although architectures like memory networks [62] provide modestly sized storage, we anticipate models storing or serializing on a larger scale with more permanence. In situations with multiple models (Section 3.4), we anticipate the corpus operating as a means to share derived knowledge (to avoid re-computation during inference) or prediction model parameters (to support learning). In developing a prediction model that supports storage, understanding how to store becomes a core research question. 
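Before continuing with storage, the "knowing when to query" question raised in Section 5.1 can be made concrete with a small cost-benefit sketch; the entropy threshold and the linear expected-gain function are assumptions rather than recommended choices.

```python
import math

def should_query(token_probs, cost, expected_gain_fn, entropy_threshold=1.0):
    """Decide whether issuing a retrieval query is worth its cost.

    token_probs:      the prediction model's current output distribution
    cost:             latency / monetary cost of one retrieval call (same units as gain)
    expected_gain_fn: maps model uncertainty to an expected utility improvement
    """
    entropy = -sum(p * math.log(p) for p in token_probs if p > 0)
    if entropy < entropy_threshold:
        return False                      # the model "knows" the answer; skip retrieval
    return expected_gain_fn(entropy) > cost


# Toy usage: assume gain grows linearly with uncertainty (an assumption, not a result).
gain = lambda h: 0.5 * h
print(should_query([0.96, 0.02, 0.02], cost=0.4, expected_gain_fn=gain))  # False: confident
print(should_query([0.40, 0.35, 0.25], cost=0.4, expected_gain_fn=gain))  # True: uncertain
```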
Just as with querying, a model needs to reason about when to store, what to store from its parameters or reasoning, and how to represent that information. Each of these questions is relevant both to sharing derived knowledge as well as model parameters. Like queries, stored items may include auxiliary information such as the model\u2019s confidence in the derived data or parameter values, the prediction task, and other information that may be valuable for an information access system to make retrieval decisions. More so than with queries, a model might need to be more judicious in storage operations, since injecting irrelevant or erroneous content into the corpus can significantly degrade its usefulness. In developing an information access model to support storage, classic problems related to indexing arise. First, as with queries, the language, schema, or representation of an item requires careful construction to optimize for effectiveness and efficiency. Second, in accepting a storage request from a prediction model, the information access system needs to model the value of the content. Redundant items can either add noise or improve coverage, depending on the task. Or, an item may require processing to make indexing and retrieval more effective. These decisions can be based on the content of the item or meta-data about the item, such as the confidence of the model or, in the case of multiple models, confidence in the prediction model itself. Third, if an item should be stored, there is the question of how to store it. This includes questions of item compression and representation, both of which need to occur incrementally but improve with batch, corpus-wide computation. Finally, in the case of limited capacity in the retrieval index, storage operations may necessitate purging less effective content. This requires that the information access model reason about how collection management decisions impact prediction models. 5.3 Searching Ranking functions, a fundamental property of traditional information access systems, influence design decisions about how to store content compactly, how to search that content quickly, and how to return results effectively. In moving toward REML, several fundamental research questions need to be addressed in order to satisfy these properties for machines. First, items in REML indexes are likely to be differently structured than existing text documents (see Section 5.2). Although representations like dense, fixed dimensional vectors are amenable to efficient storage and retrieval, structures that include uncertainty and other attributes may require embedding as a representation amenable to fast retrieval (e.g., vectors) or different indexing schemes altogether. Second, the representations of items in the index themselves should be selected for effectiveness \fin supporting prediction models, as well as the space and runtime efficiency. In some cases, this means accurate and fast score computation. When a retrieval involves more elaborate post-processing before returning results, this may mean decomposing items before indexing (as is often done when retrieving passages, as opposed to documents). Third, in situations where there are multiple prediction models, the information access system can use the identity of the model in order to \u2018personalize\u2019 results for that model. 
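One illustrative way such per-model "personalization" could be implemented is sketched below; the per-model profile vectors and the additive update rule are assumptions, not a mechanism proposed in the text.

```python
import numpy as np

class PerModelRanker:
    """Bias a shared base ranker toward what each prediction model has found useful."""

    def __init__(self, dim):
        self.dim = dim
        self.model_profiles = {}                 # model_id -> preference vector

    def score(self, model_id, query_vec, doc_vecs):
        base = doc_vecs @ query_vec              # shared relevance score
        profile = self.model_profiles.get(model_id, np.zeros(self.dim))
        bias = doc_vecs @ profile                # model-specific adjustment
        return base + bias

    def update(self, model_id, doc_vec, gain, lr=0.1):
        """Fold utility-gain feedback from one prediction model into its profile."""
        profile = self.model_profiles.setdefault(model_id, np.zeros(self.dim))
        profile += lr * gain * doc_vec


# Toy usage: the same query is scored differently for two prediction models.
ranker = PerModelRanker(dim=2)
docs = np.array([[1.0, 0.0], [0.0, 1.0]])
query = np.array([0.5, 0.5])
ranker.update("qa_model", docs[0], gain=+1.0)      # qa_model found doc 0 useful
ranker.update("dialog_model", docs[1], gain=+1.0)  # dialog_model found doc 1 useful
print(ranker.score("qa_model", query, docs))       # doc 0 boosted
print(ranker.score("dialog_model", query, docs))   # doc 1 boosted
```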
Similarly, we can interpret the feedback from prediction models based on where it comes from; some models may not provide actionable feedback early in learning, others may be quite reliable, while others yet might be adversarial. Third, these representations and their associated ranking functions themselves should be tunable given feedback from prediction models (see Section 5.5). Adjustments to representations and model weights should be sensitive to confidence in the feedback signal in situations where feedback includes a confidence estimate or if the information access model can estimate the reliability of the feedback. 5.4 Information Presentation & Consumption Representing the retrieval results in traditional information access involves returning a ranked list of items. Although items include scores, these are often only used to sort items and are rarely presented to the user. In the context of REML, we can consider more elaborate representations of retrieval results because they are being consumed by machines. This introduces a number of exciting research directions. First, system designers will need to understand the appropriate information to communicate to prediction models, be it an item ranking, a scored set, a set where each item is associated with a score distribution, a graph of inter-item relationships, or some other object derived from the retrieval. Each of these choices needs to satisfy improving the prediction model\u2019s effectiveness, within any cost constraints (e.g., bandwidth, compute). Moreover, in situations with multiple prediction models, the consistency, interpretability, and maintainability of this representation language become extremely important. Second, from an efficiency perspective, just as computing a top \ud835\udc58ranking can suggest fast document scoring, information about the representation can introduce opportunities for more efficient computations of objects like graphs and score distributions. Third, a prediction model with access to multiple information access models needs to reason over multiple sets of results. Information encoded in the results\u2013explicitly or not\u2013 can allow the prediction model to consider the reliability of results before incorporating them into inference. Finally, from a machine learning perspective, how to incorporate results into inference will become an important area of work. Current approaches based on neighbors provide a simple approach, although more sophisticated techniques are likely to improve performance. 5.5 Feedback Modern information access systems use implicit user feedback in order to optimize model parameters. Although we can imagine a prediction model providing loss information in its feedback similar to how users might provide slate-level feedback, machines may be able to convey more granular and expressive feedback to the information access model. As such, the first area of research centers on forms of feedback, including scalar values, vectors of values, and more expressive data with goal of helping the information access model improve. While single scalar feedback values seem simplest, even modern search engines exploit implicit item-level feedback. We can imagine more targeted and attributed feedback provided by the prediction model. This structured feedback can include attribution to different components of the retrieval structure (Section 5.4). 
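A minimal form of such item-level attribution is sketched below using leave-one-out ablation; the zero-one loss and the toy predictor are assumptions, and gradient-based attribution would be the differentiable alternative mentioned earlier.

```python
def attribute_feedback(predict, loss_fn, x, y, retrieved):
    """Item-level feedback: how much does each retrieved item change the loss?

    Uses a simple leave-one-out ablation; gradient-based attribution is an
    alternative when the whole pipeline is differentiable."""
    full_loss = loss_fn(predict(x, retrieved), y)
    feedback = []
    for i in range(len(retrieved)):
        ablated = retrieved[:i] + retrieved[i + 1:]
        ablated_loss = loss_fn(predict(x, ablated), y)
        # Positive value: the item helped (removing it hurts). Negative: it hurt.
        feedback.append(ablated_loss - full_loss)
    return feedback


# Toy usage: the "model" answers correctly only if the gold answer appears in context.
predict = lambda x, docs: "Paris" if any("Paris" in d for d in docs) else "unknown"
zero_one = lambda pred, y: 0.0 if pred == y else 1.0
docs = ["Paris is the capital of France.", "Bananas are yellow."]
print(attribute_feedback(predict, zero_one, "capital of France?", "Paris", docs))  # [1.0, 0.0]
```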
Of course, this requires the prediction model being able to identify the relationship between prediction error and different parts of the retrieval result; in the case of multiple information access services, attribution to individual corpus results. The second area of research focuses on how an information access model might adjust model parameters given rich feedback from the prediction model. Current ranking models, with appropriate treatment of different biases, can interpret user feedback as attributed to individual items in the ranking. A machine may be able to provide feedback that has fewer biases and better calibration than human feedback. This includes exploring a new space of feedback beyond scalar item-level values. This also calls for novel approaches for optimizing information access models based on the provided feedback. 5.6 Evaluation The objective of REML is to support machines. As such, standard methods of evaluating modeling performance (e.g., Equation 1) can be adopted to assess prediction model performance. Nevertheless, REML introduces several research directions around model evaluation. First, because of the large, flexible storage capacity, REML can memorize training data or cache previous predictions, resulting in performance metrics (e.g., accuracy) conflating a model\u2019s ability to reason (i.e., the prediction model) and its ability to remember (i.e., the information access model). Methods of selecting evaluation instances or ablation experiments can isolate the contribution of each component. Second, in situations with multiple prediction models, we need methods to assess performance changes for a group of models with a shared information access service. Although these per-model losses can be aggregated into a simple average, this may obscure modelor task-specific under-performance. That said, in some situations, storage operations might result in sharing information, boosting collective performance, and necessitating an evaluation method that decouples reasoning from memorization. Finally, efficiency metrics that capture the cost of query and response operations (e.g., latency, financial) will need to be developed. In some cases, we are interested in evaluating the information access model in isolation to make a claim about generalizability of a specific retrieval model to new prediction models, just as we traditionally consider evaluation queries as a sample from the full set of queries we would like to apply a system to. Although we can evaluate information access models using the existing information access evaluation methods (e.g., Cranfield-style offline evaluation, click feedback), we anticipate the opportunity\u2014and sometimes need\u2014to develop entirely new evaluation schemes. First, although a prediction model can be evaluated by its loss function, an information access model can be evaluated by its adoption. Indeed, if a retrieval component is not used, then perhaps it can be removed altogether. To see why retrieval systems may be more or less valuable over time, consider the situation where a prediction model can store \fitems such as partial inference or complete inference; in this case, the storage can act like a cache, with queries likely to grow with time, depending on the data. Or, if there are multiple information access services, the usefulness of some may increase or decrease over time. 
Nonstationarity can also arise when instances have serial dependencies, such as when a retrieval system is repeatedly queried during a dialog or multi-hop task. Second, estimating an information access model\u2019s performance on out of sample domains or tasks requires careful selection of training and evaluation tasks. Third, in developing offline or batch evaluation methods, although we can avoid some issues, labeling items for relevance and designing metrics reflective of model use becomes difficult, since existing ranking metrics are unlikely to approximate how a machine would consume results (see Section 5.4). Finally, REML presents a tremendous opportunity to study these questions in silico. This means that experimentation and analysis, although more complicated, will be much faster than systems serving people, without safety concerns, since experiments can be run isolated from people. 6" + }, + { + "url": "http://arxiv.org/abs/2201.08808v2", + "title": "Conversational Information Seeking", + "abstract": "Conversational information seeking (CIS) is concerned with a sequence of\ninteractions between one or more users and an information system. Interactions\nin CIS are primarily based on natural language dialogue, while they may include\nother types of interactions, such as click, touch, and body gestures. This\nmonograph provides a thorough overview of CIS definitions, applications,\ninteractions, interfaces, design, implementation, and evaluation. This\nmonograph views CIS applications as including conversational search,\nconversational question answering, and conversational recommendation. Our aim\nis to provide an overview of past research related to CIS, introduce the\ncurrent state-of-the-art in CIS, highlight the challenges still being faced in\nthe community. and suggest future directions.", + "authors": "Hamed Zamani, Johanne R. Trippas, Jeff Dalton, Filip Radlinski", + "published": "2022-01-21", + "updated": "2023-01-25", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.CL", + "cs.HC" + ], + "main_content": "Introduction 1.1 Motivation Over the years, information retrieval and search systems have become more conversational: For instance, techniques have been developed to support queries that refer indirectly to previous queries or previous results; to ask questions back to the user; to record and explicitly reference earlier statements made by the user; to interpret queries issued in fully natural language, and so forth. In fact, systems with multi-turn capabilities, natural language capabilities as well as robust long-term user modeling capabilities have been actively researched for decades. However, the last few years have seen a tremendous acceleration of this evolution. This has been driven by a few factors. Foremost, progress in machine learning, speci\ufb01cally as applied to natural language understanding and spoken language understanding, has recently surged. Whereas the possibility of a conversational information seeking (CIS) system robustly understanding conversational input from a person was previously limited, it can now almost be taken for granted. In concert with this, consumer hardware that supports and encourages conversation has become common, raising awareness of \u2014 and the expectation of \u2014 con2 Draft Version 1.2 \f1.2. Guide to the Reader 3 versational support in IR systems. From the research community, this has been accompanied by signi\ufb01cant progress in de\ufb01ning more natural CIS tasks, metrics, challenges and benchmarks. 
This has allowed the \ufb01eld to expand rapidly. This monograph aims to summarize the current state of the art of conversational information seeking research, and provide an introduction to new researchers as well as a reference for established researchers in this area. 1.2 Guide to the Reader The intended audience for this survey is computer science researchers in \ufb01elds related to conversational information seeking, as well as students in this \ufb01eld. We do not assume an existing understanding of conversational systems. However, we do assume the reader is familiar with general concepts from information retrieval, such as indexing, querying and evaluation. As this monograph is not a technical presentation of recent machine learning algorithms, we also assume a basic understanding of machine learning and deep learning concepts and familiarity with key algorithms. The reader will be provided with a summary of the open CIS problems that are currently attracting the most attention, and many promising current results and avenues of investigation. We will also provide an overview of applications attracting interest in the community, and the resources available for addressing these applications. When discussing the structure of conversations we adopt terminology used in the speech and dialogue research community. The most basic unit is an utterance (analogous to a single query in retrieval). All contiguous utterances from a single speaker form a single turn (Traum and Heeman, 1996), with a conversation consisting of multiple turns from two or more participants. For the reader we note that somewhat confusingly, a commonly adopted de\ufb01nition in CIS publications de\ufb01nes a turn as the pair of a user turn and a system response turn (a user query and system answer). The focus of this work di\ufb00ers from recent related surveys. We draw the reader\u2019s attention to the following most related examples. Gao et al. Draft Version 1.2 \f4 Introduction (2019) presented an overview focused on speci\ufb01c neural algorithmic solutions for question answering, task-oriented and chat agents. Freed (2021) also focused on the development of chatbots, often for customer support. Our focus is more on characterizing the problem space related to information seeking conversations and providing a broad overview of di\ufb00erent problems, metrics and approaches. Moreover, the report from the third Strategic Workshop on Information Retrieval in Lorne (SWIRL 2018) (Culpepper et al., 2018) provided a broader summary of important open challenges in information retrieval, where various challenges associated with CIS were ranked \ufb01rst. That document provides a briefer overview and reading list, more concretely aimed at summarizing open challenges. A more recent report from the Dagstuhl Seminar on Conversational Search (Anand et al., 2020) reiterated these challenges in more detail. Beyond these, more focused recent relevant workshops include SCAI (Penha et al., 2022), KaRS (Anelli et al., 2022), Sim4IR (Balog et al., 2022), Future Conversation (Spina et al., 2021) and MuCAI (Hauptmann et al., 2020) among others. Concurrent to this work, Gao et al. (2022) published a book draft on deep learning approaches for conversational information retrieval. This monograph provides a holistic overview of CIS systems, state-of-the-art CIS approaches, and future directions in CIS research. 
In contrast, Gao et al.\u2019s book focuses speci\ufb01cally on deep learning solutions for various subtasks in conversational IR, therefore provides a complementary view to ours. 1.3 Scope This monograph focuses on a particular class of conversational systems, namely those that exhibit key attributes of human conversation. We take a cue from Radlinski and Craswell (2017), who propose that a conversational system should incorporate mixed initiative (with both system and user able to take initiative at di\ufb00erent times), memory (the ability to reference and incorporate past statements), system revealment (enabling the system to reveal its capabilities and corpus), user revealment (enabling the user to reveal and/or discover their information need), and set retrieval (considering utility over sets of complementary items). Here, we study approaches that exhibit at least some of these Draft Version 1.2 \f1.3. Scope 5 properties. In particular, we do not delve deeply into dialogue systems that restrict themselves largely to identifying slot/value pairs in back and forth exchanges between the system and user. Additionally, we focus on information seeking, which refers to the process of acquiring information through conversation in order to satisfy the users\u2019 information needs. This implies that the conversation should exhibit a clear goal or assist the human user in completing a speci\ufb01c task through \ufb01nding information. While signi\ufb01cant progress has been recently made on chit-chat systems, with a primary goal of keeping users engaged in realistic conversational exchanges over a prolonged time (for more information, see (Yan et al., 2022)), we do not attempt to cover such work in depth. Our focus thus aligns more with traditional search concepts such as the presence of an information need or user agenda that existed before they engaged with the CIS system, and which can be satis\ufb01ed through a conversation. On the other hand, we do not make a strong distinction between search and recommendation tasks. Rather, we cover both types of conversational information seeking interactions. We see these as strongly related tasks that are becoming more closely related as time passes. Indeed, we believe that the same task can often be characterized as either. For instance, a query \u201chotels in London\u201d can be seen as either a search task (e.g. on a desktop interface, for a potential future tourist considering a\ufb00ordability in di\ufb00erent areas) or a recommendation task (e.g. using a smart watch while standing in heavy rain in central London). Clearly device, interface and context play an important role in determining the best next conversational step. Finally, we draw attention to three key aspects of CIS that, while having received signi\ufb01cant attention, remain largely unsolved. First, the level of natural language understanding in conversational systems remains far from human-level, particularly over long sequences of exchanges. Even over adjacent conversational steps, question/answer interpretation remains challenging. Second, robust evaluation of conversational systems remains a critical research challenge: The highly personalized and adaptive nature of conversations makes test collection construction highly challenging. We will cover many of the common approaches, and their limitations. Third, conversation is sometimes taken to imply voice Draft Version 1.2 \f6 Introduction or speech interactions. 
We do not make this assumption, recognizing that conversations can happen in many types of interfaces and modalities. We discuss research of conversations combining di\ufb00erent types of interfaces and presentations in depth. Three particularly important aspects of CIS that are very active areas of research include obtaining human-level natural language understanding, robust evaluation of CIS systems, and moving beyond simple text and speech interactions. There are a number of particularly important aspects of conversational information seeking that despite their importance are not covered in depth here, as they apply broadly across many non-conversational search and recommendation tasks. The \ufb01rst is the question of privacy. Clearly this is an essential aspect of all search tasks \u2013 and should be considered in depth in any practical system. We refer readers to Cooper (2008) and Zhang et al. (2016) as a starting point for privacy considerations as applied to logging and log analysis. Similarly, we do not consider the type of information that a user may request or receive \u2013 including information that might be considered offensive or harmful. As this issue is not speci\ufb01c to conversational systems and is heavily studied; A detailed consideration of such information access is thus beyond our scope. We refer readers to Yenala et al. (2018) and Pradeep et al. (2021) as starting points of recent work in this space. Along the same lines, fairness is an essential aspect for information seeking and recommendation tasks, yet largely beyond our scope. We note that this includes both fairness in terms of biases that may exist in recommendation to di\ufb00erent groups (Ge et al., 2021) as well as fairness when considering both consumers of recommendations as well as producers of items being recommended (Abdollahpouri et al., 2020). We refer interested readers to Ekstrand et al. (2022) for a complete recent overview. Draft Version 1.2 \f1.4. Applications 7 1.4 Applications An alternative way to characterize the scope of this work could be in terms of the relevant applications that are addressed. Section 2 will focus on this formulation, starting with a brief introduction on conversational information seeking (Section 2.3). This includes a discussion of di\ufb00erent modalities\u2019 (that is, text, speech, or multi-modal) impact on the seeking process, as for instance studied by Deldjoo et al. (2021). We then continue with the topic of conversational search and its various proposed de\ufb01nitions (Section 2.5), culminating with one that relates CIS to many other related settings (Anand et al., 2020). Section 2.6 introduces conversational recommendation (Jannach et al., 2021a) followed by conversational question answering in Section 2.7, where for instance Qu et al. (2019b) provide a powerful characterization of the relationships between these areas of study. We continue Section 2 by explaining how CIS applications can be used in di\ufb00erent domains, and focus on e-commerce, enterprise, and health in Section 2.8. The section concludes with details on intelligent assistants with relation to CIS. 1.5 A High-Level Architecture for CIS Systems To create a structure for the remainder of this work, we follow the general structure of most CIS systems. This choice guides the main body of this monograph: Each section in this part focuses on a core technological competency that is essential to a modern CIS system. In particular, a CIS system must \ufb01rst choose an interface (\u00a71.5.1). 
It must then have an approach to maintain the state of a conversation (§1.5.2), and at each system turn determine the system's next utterance (§1.5.3). One particular challenge that is attracting attention is when the system should take initiative versus responding passively (§1.5.4).

Key design considerations of a CIS system include its chosen interface, how it maintains conversational state, and how it selects the system's next utterance. One particular challenge for the latter is that of when the system should take initiative.

1.5.1 Conversational Interfaces and Result Presentation

Section 3 provides an overview of conversational interfaces. We begin with a historical perspective, where we explain differences between existing conversational interfaces such as spoken dialogue systems, voice user interfaces, live chat support, and chatbots. This overview illustrates the use of conversations within closely related CIS applications (McTear et al., 2016). Next, research on result presentation through different mediums (desktop or small device) and modalities (text, voice, multimodal) is discussed in Section 3.2, such as recent work by Kaushik et al. (2020). This overview emphasizes the difficulties with highly interactive result presentation and highlights research opportunities. Following this, Section 3.3 introduces different kinds of initiative in conversational systems, including system-initiative, mixed-initiative, and user-initiative, for instance well-characterized by Zue and Glass (2000) and Wadhwa and Zamani (2021). This section aims to explain the different kinds of initiative and their consequences for human-machine interactions. We finish the section with a discussion of the limitations of conversational interfaces, including, for instance, limitations as experienced by visually impaired searchers (Gooda Sahib et al., 2015).

1.5.2 Tracking and Understanding Flow

The focus of Section 4 is on the varying approaches that make it possible to follow conversational structure. We begin with an overview of how to represent a single turn, such as is done with Transformer models (Raffel et al., 2020), and how turns are often classified into dialogue acts (Reddy et al., 2019). Section 4.2 then looks at how the different turns of a conversation are usually tied together through state tracking and text resolution across turns. In particular, the structure of longer conversations is looked at in depth in Section 4.3, while noting that existing models are often limited in their ability to capture long-distance conversational structure (Chiang et al., 2020). We cover work that operates over long-term representations of CIS exchanges in Section 4.4, followed by recent work that attempts to model longer conversations in the final section, epitomized by work on selecting the right context for understanding each turn (Dinan et al., 2019a).

1.5.3 Determining Next Utterances

The next step for a canonical conversational system is selecting or generating a relevant response in the conversational context. This is the focus of Section 5. We begin with an overview of the different types of responses, including short answers, long answers, and structured entities or attributes.
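Before examining these components individually, it may help to see how they fit together. The following is a deliberately minimal, purely illustrative Python skeleton of the high-level architecture outlined above (an interface feeding a state tracker, a response generator, and an initiative policy); all class and method names are our own and are not taken from any published system.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Utterance:
    speaker: str          # "user" or "system"
    text: str


@dataclass
class DialogueState:
    history: List[Utterance] = field(default_factory=list)

    def update(self, utterance: Utterance) -> None:
        # Track the conversation so later turns can be interpreted in context.
        self.history.append(utterance)


class ResponseGenerator:
    def respond(self, state: DialogueState) -> str:
        # Placeholder: a real system would retrieve, rank, or generate here.
        last_user = next(u for u in reversed(state.history) if u.speaker == "user")
        return f"Here is what I found about: {last_user.text}"


class InitiativePolicy:
    def should_clarify(self, state: DialogueState) -> bool:
        # Toy heuristic: treat very short queries as ambiguous.
        last_user = next(u for u in reversed(state.history) if u.speaker == "user")
        return len(last_user.text.split()) < 2


class CISSystem:
    """Minimal pipeline: interface input -> state tracking -> next utterance."""

    def __init__(self) -> None:
        self.state = DialogueState()
        self.generator = ResponseGenerator()
        self.initiative = InitiativePolicy()

    def turn(self, user_text: str) -> str:
        self.state.update(Utterance("user", user_text))
        if self.initiative.should_clarify(self.state):
            reply = "Could you tell me a bit more about what you are looking for?"
        else:
            reply = self.generator.respond(self.state)
        self.state.update(Utterance("system", reply))
        return reply


if __name__ == "__main__":
    system = CISSystem()
    print(system.turn("superannuation"))                          # triggers a clarifying question
    print(system.turn("is superannuation compulsory in Australia"))
```

In practice, each placeholder corresponds to a substantial research area: the interface (Section 3), state tracking (Section 4), response selection or generation (Section 5), and initiative (Section 6). With this skeleton in mind, we return to the different response types.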
For short answers, we first present early Conversational QA (ConvQA) systems and then discuss the transition to more recent Transformer architectures based on pre-trained language models. Section 5.1.5 then examines how ConvQA is performed over structured knowledge graphs, including systems that use key-value networks (Saha et al., 2018), generative approaches, and logical query representations (Plepi et al., 2021). Following this, we discuss open retrieval from large text corpora as part of the QA process. In particular, Section 5.2 goes beyond short answer QA to approaches performing conversational passage retrieval from open text collections, including multi-stage neural ranking, for instance recently considered by Lin et al. (2021b). We briefly discuss long answer generation approaches in Section 5.3, including both extractive and abstractive summarization methods. We conclude the section with conversational ranking of items in a recommendation context, including models that use multi-armed bandit approaches to trade off between elicitation and item recommendation.

1.5.4 Initiative

Section 6 provides a detailed look at mixed-initiative interactions in CIS systems. We start by reviewing the main principles of developing mixed-initiative interactive systems and describing different levels of mixed-initiative interactions in dialogue systems (Allen et al., 1999; Horvitz, 1999). We briefly review system-initiative interactions with a focus on information seeking conversations, such as the work of Wadhwa and Zamani (2021), in Section 6.1. We then delve deeply into intent clarification as an example of important mixed-initiative interactions for CIS in Section 6.2. We introduce a taxonomy of clarification and review models for generating and selecting clarifying questions, such as those by Aliannejadi et al. (2019) and Zamani et al. (2020a). In presenting the work, we include models that generate clarifying questions trained using maximum likelihood as well as clarification maximization through reinforcement learning. Additionally, Section 6.3 discusses preference elicitation and its relation to clarification, followed by mixed-initiative feedback (i.e., getting feedback from or giving feedback to users via sub-dialogue initiation) in Section 6.4.

1.6 Evaluation

Beyond the details of how a CIS system functions, fair evaluation is key to assessing the strengths and weaknesses of the solutions developed. Section 7 looks at evaluation in CIS holistically. After considering possible ways of studying this broad space, this section breaks down evaluation by the setting that is evaluated. Specifically, offline evaluation is treated first, in Section 7.2. A variety of frequently used offline datasets are presented (such as Multi-WOZ (Budzianowski et al., 2018)), and strengths and limitations are discussed, including the use of simulators to produce more privacy-aware evaluations as well as the use of non-text datasets. Online evaluation is considered next, with Section 7.3 contrasting lab studies, crowdsourcing, and real-world evaluations. An example of these is where commercial systems may ask evaluation questions of their users (Park et al., 2020). Finally, the metrics applied in these settings are covered in Section 7.4. While readers are referred to Liu et al. (2021a) for a full treatment, we present an overview of typical turn-level as well as end-to-end evaluation metrics.
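To illustrate the distinction between turn-level and end-to-end measures, the following is a small, illustrative sketch rather than a prescribed metric from the literature: token-overlap F1, a common turn-level measure for generated answers, is computed per turn and then naively averaged over a conversation. Real end-to-end evaluation is considerably richer, accounting for task success, user effort, and interaction quality; the function names here are our own.

```python
from collections import Counter
from typing import List


def token_f1(prediction: str, reference: str) -> float:
    """Token-overlap F1, a common turn-level metric for generated answers."""
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    if not pred_tokens or not ref_tokens:
        return float(pred_tokens == ref_tokens)
    overlap = sum((Counter(pred_tokens) & Counter(ref_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)


def conversation_score(predictions: List[str], references: List[str]) -> float:
    """A naive conversation-level aggregate: the mean of per-turn scores.

    True end-to-end evaluation would also account for task success,
    user effort, and interaction quality across the whole conversation.
    """
    scores = [token_f1(p, r) for p, r in zip(predictions, references)]
    return sum(scores) / len(scores)


if __name__ == "__main__":
    preds = ["superannuation is compulsory for most employees",
             "employers must contribute a minimum percentage"]
    refs = ["yes, superannuation is compulsory for most employees in australia",
            "employers are required to contribute a minimum percentage of earnings"]
    print(round(conversation_score(preds, refs), 3))
```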
1.7 Open Research Directions Section 8 provides a brief summary of this monograph and discusses di\ufb00erent open research directions. We collate the major themes discussed throughout this manuscript instead of presenting a detailed account of all possible future research problems. We highlight four key areas for future exploration. First, Section 8.1.1 covers challenges related to modeling and producing conversational interactions as a way to transfer information between user and system. Second, we highlight the importance of result presentation and its role in CIS research in Section 8.1.2. Third, we emphasise the importance of di\ufb00erent CIS Draft Version 1.2 \f1.8. Further Resources 11 tasks in Section 8.1.3. Finally, Section 8.1.4 covers measures of success during the highly interactive CIS process and ultimate evaluation of CIS systems. 1.8 Further Resources Beyond the main body of this work, Appendix A brie\ufb02y presents a more holistic historical context for this monograph. This appendix mainly includes information about early research on interactive information retrieval, as well as on dialogue-based information retrieval, such as the I3R (Croft and Thompson, 1987) and THOMAS (Oddy, 1977) systems (see Section A.1). We discuss approaches for theoretical modelling of interactive information retrieval systems, such as game theory-based models (Zhai, 2016) and economic models (Azzopardi, 2011) in Section A.2. We also include introductory information about existing literature on session search, such as the TREC Session Track, and evaluation methodologies for session search tasks (Carterette et al., 2016) in Section A.3. Finally, we brie\ufb02y cover exploratory search (White and Roth, 2009) and discuss its relationship to conversational information seeking in Section A.4, followed by a very brief overview of chit-chat and task-oriented dialogue systems in Section A.5. Newcomers to the \ufb01eld of information retrieval are highly encouraged to review this appendix to develop an understanding of where the core ideas behind CIS originated. This monograph has been used in multiple tutorials on conversational information seeking at top-tier conferences, e.g., at the SIGIR 2022 (Dalton et al., 2022) and the Web Conference 2023 (Dalton et al., 2023). The materials prepared for these tutorials, e.g., presentation slides, interactive demos, and coding practices, are available at https://cis-tutorial.github.io/. Draft Version 1.2 \f2 De\ufb01nitions and Applications In this section, we provide relevant concepts from previous work in conversational information seeking (CIS) and its tasks, contexts, and applications illustrating the multi-dimensional nature of CIS. This introductory section aims to guide the reader with background knowledge on de\ufb01nitions and basic concepts related to CIS. We cover three CIS subdomains, namely conversational search, conversational recommendation, and conversational question answering. These topics are closely related and their boundaries are often blurred. We also introduce some domain-speci\ufb01c applications of CIS, including e-commerce, enterprise, and health, and illustrate their use cases. Lastly, we cover how CIS can be embedded within the subdomain of intelligent assistants. 2.1 Conversation The term \u201cconversation\u201d carries di\ufb00erent de\ufb01nitions in di\ufb00erent contexts. 
The Merriam-Webster Dictionary de\ufb01nes conversation as \u201coral exchange of sentiments, observations, opinions, or ideas\u201d.1 This refers to the everyday use of conversation by humans. Brennan (2012) de\ufb01ned 1https://www.merriam-webster.com/dictionary/conversation 12 Draft Version 1.2 \f2.1. Conversation 13 conversation as \u201ca joint activity in which two or more participants use linguistic forms and nonverbal signals to communicate interactively\u201d, highlighting the possible use of nonverbal signals in conversations. In contrast, researchers in dialogue systems consider a more pragmatic de\ufb01nition by identifying a few attributes in human conversations. These attributes include turn, speech acts, grounding, dialogue structure, initiative, inference, and implicature (Jurafsky and Martin, 2021, Ch. 24). This monograph provides a new de\ufb01nition of conversation, which we believe is better suited for conversational information seeking research. A conversation is often de\ufb01ned as a sequence of interactions between two or more participants, including humans and machines, as a form of interactive communication with the goal of information exchange. Unlike most de\ufb01nitions of conversation in linguistics and dialogue systems that only focus on natural language interactions, we argue that a conversation can also exhibit other types of interactions with di\ufb00erent characteristics and modalities, such as click, touch, body gestures, and sensory signals. The reason behind including these interactions is the rich history of using them in search technologies that shape the fundamentals of CIS research. That said, long form natural language is still the dominant interaction type in conversations. Therefore, a conversation can be de\ufb01ned as follows. De\ufb01nition 1. Conversation is interactive communication for exchanging information between two or more participants (i.e., humans or machines) that involves a sequence of interactions. While natural language is considered a prerequisite for conversational interactions, conversations can also exhibit other types of interaction with di\ufb00erent characteristics and modalities (e.g., click, touch, and gestures). An important characteristic of conversation is its style: synchronous versus asynchronous. Synchronous conversations happen in real time, where at least two participants (or agents) exchange information. Most human-machine conversations are expected to be synchronous. Asynchronous conversations, on the other hand, happen when information Draft Version 1.2 \f14 De\ufb01nitions and Applications can be exchanged independently of time. Therefore, asynchronous conversations do not require the participants\u2019 immediate attention, allowing them to respond to the message at their convenience. Conversations between humans in forums and email threads are asynchronous. A conversation can also be a mixture of synchronous and asynchronous interactions. For instance, a user can have synchronous interactions with a conversational system. Later, a human representative can reach out to the user to follow up on the conversation and better address the user\u2019s needs if the conversational system fails. Researchers in the area of CIS are interested in information seeking conversations: conversations in which at least one participant is seeking information and at least another participant is providing information. 
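Before turning to information seeking conversations specifically, it may help to make Definition 1 concrete. The following is a minimal, illustrative data model (our own sketch, not a standard schema) in which a conversation is a sequence of interactions by two or more participants, each interaction may use a different modality, and the conversation is marked as synchronous or asynchronous, as discussed above.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List


class Modality(Enum):
    TEXT = "text"
    SPEECH = "speech"
    CLICK = "click"
    TOUCH = "touch"
    GESTURE = "gesture"


@dataclass
class Interaction:
    participant: str        # e.g., "user", "system", or a human agent
    modality: Modality
    content: str            # natural-language text, or a description of the signal


@dataclass
class Conversation:
    participants: List[str]
    synchronous: bool                     # real-time vs. asynchronous exchange
    interactions: List[Interaction] = field(default_factory=list)

    def add(self, interaction: Interaction) -> None:
        self.interactions.append(interaction)


if __name__ == "__main__":
    conv = Conversation(participants=["user", "system"], synchronous=True)
    conv.add(Interaction("user", Modality.SPEECH, "Is superannuation compulsory in Australia?"))
    conv.add(Interaction("system", Modality.TEXT, "For most employees, yes."))
    conv.add(Interaction("user", Modality.CLICK, "opened the first cited source"))
    print(len(conv.interactions), "interactions between", ", ".join(conv.participants))
```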
Information seeking conversations are mostly either among humans (e.g., the interactions between users and librarians for \ufb01nding information in a library) or between humans and machines (e.g., the interactions between a user and a CIS system). They can be either synchronous, asynchronous, or a mixture of both. De\ufb01nition 2. Information seeking conversation is a conversation (cf. Def. 1) in which the goal of information exchange is satisfying the information needs of one or more participants. 2.2 Interaction Modality and Language in Conversation According to the above de\ufb01nition of conversation, a conversational system\u2019s input from the users may involve many di\ufb00erent input types, such as touch, speech, or body gestures. These signals can be translated through traditional input devices such as a mouse or keyboard. For more modern input devices, users can also input gestures, motion, or touch. The output channels from the conversational system can vary from 2D screens to audio output to potentially even holograms. Users can interact with a conversational system through a range of input devices, including keyboards for typing, microphones for speech, Draft Version 1.2 \f2.3. Conversational Information Seeking 15 smartphones for touch, or through a mixture of these and other input devices (Deldjoo et al., 2021). Using a mixture of modalities o\ufb00ers numerous bene\ufb01ts. The key is accessibility; for example, systems with spoken interfaces may be more accessible to users for whom traditional search interfaces are di\ufb03cult to use (Weeratunga et al., 2015). Even though research in CIS primarily refers to conversation as textual or spoken input, other modalities and the mixture of modalities are receiving increased research attention (Liao et al., 2021; Hauptmann et al., 2020; Deldjoo et al., 2021). The system output or presentation, similar to the input from the user, can consist of di\ufb00erent output channels. Given the user\u2019s device, context (e.g., time, location, device), and task complexity, conversational systems need to decide which output modality to use for result presentation (Deldjoo et al., 2021). 2.3 Conversational Information Seeking CIS, the process of acquiring information through conversations, can be seen as a subset of information seeking (Wilson, 1999). In the case of information seeking, any interaction that aids the \ufb01nding of information is considered. Hence, searching for information in a book is considered part of information seeking. In contrast, CIS speci\ufb01es the interaction type as conversational in which thoughts, feelings, and ideas are expressed, questions are asked and answered, or information is exchanged. CIS is often partitioned into three subdomains: conversational search, conversational recommendation, and conversational question answering. However, we do not make a strong distinction between these subdomains. The reason is that the boundaries between these subdomains are blurred. For instance, a system that helps a user to \ufb01nd and purchase shoes through a conversational interface can be seen as either a conversational search or conversational recommendation. Or a system that answers a sequence of non-factoid questions by retrieving passages can be seen as either conversational search or conversational question answering. Therefore, this monograph focuses on conversational information seeking in general and describes models, theories, and techniques that can be used across all CIS subdomains. 
We de\ufb01ne CIS systems as follows: Draft Version 1.2 \f16 De\ufb01nitions and Applications De\ufb01nition 3. A Conversational Information Seeking (CIS) system is a system that satis\ufb01es the information needs of one or more users by engaging in information seeking conversations (cf. Def. 2). CIS responses are expected to be concise, \ufb02uent, stateful, mixed-initiative, context-aware, and personalized. In this de\ufb01nition, we provide several properties that are expected from CIS systems. They are explained in the next subsection. Even though we believe that there is no clear distinction between CIS subdomains (depicted in Figure 2.1), we describe prior work that focused on each of these subdomains in Sections 2.5 \u2013 2.7. Conversational Information Seeking Conversational Search Conversational Recommendation Conversational Question Answering Figure 2.1: Conversational Information Seeking and example subdomains including conversational search, conversational recommendation, and conversational question answering. 2.4 System Requirements of CIS Systems To create a truly conversational system, it has been argued that the system should pro-actively participate in the conversation (Radlinski and Craswell, 2017; Andolina et al., 2018; Avula and Arguello, 2020; Tabassum et al., 2019; Trippas et al., 2018; Vuong et al., 2018; Wadhwa and Zamani, 2021). This requires mixed-initiative, which implies that the system both responds to utterances, but also at times drives the conversation. Furthermore, the user-system interactions should create a multi-turn dialogue where each participant takes multiple turns to state their information need, clarify this need, or maintain communication Draft Version 1.2 \f2.4. System Requirements of CIS Systems 17 functions such as discourse management (Aliannejadi et al., 2019; Deits et al., 2013; Trippas et al., 2020; Zamani et al., 2020a). Indeed, systems can utilize interactive feedback signals such as clarifying questions to optimize the advantages of the conversational technique (Aliannejadi et al., 2019; Vtyurina et al., 2017; Zamani et al., 2020a). Mixed-initiative interactions, and in particular clarifying questions, are thoroughly reviewed in Section 6. The requirements of a system to support the users\u2019 interactions are multiple. For example, the interaction history (e.g., queries, relevance feedback, type of interaction device) has to be saved and, where necessary, retrieved by the system (Reichman, 1985; Vtyurina et al., 2017; Zamani and Craswell, 2020). The interaction history as well as user-speci\ufb01c and contextual information can be adopted to provide personalized and context-aware access to information. A system should also be able to adapt the results presentation strategies depending on the users\u2019 needs. It could be that a user is cognitively engaged, in which case the system can present the results concisely and \ufb02uently with a high comprehensibility. We note that conciseness and \ufb02uency are not speci\ufb01c to natural language and it should be extended to multi-modal conversations. For instance, in speech-only setting, the CIS outputs are expected to be \u201clistenable\u201d (Trippas, 2019). Due to the interactive, adaptive, and conversational nature of these user-system interactions, both user and system turn-time can be less predictable. For example, if the users\u2019 input is natural language-based, it can increase the time needed to convey their information need versus a query-based information need. 
Simultaneously, a system can ask clarifying questions to overcome errors and thus engage with the user through multiple interactions (Skantze, 2007). One system requirement which is particularly relevant to a speechonly setting is the system\u2019s ability to assist the user when speech recognition errors have occurred (Trippas et al., 2018). These errors may occur due to background noise, speaker accents, dis\ufb02uency, spoken language ability, or out-of-vocabulary words, among other reasons. Speakers often compensate with hyper-articulation or restarting voice inputs (Jiang et al., 2013; Myers et al., 2018). It has been suggested that systems should design in ways to handle the myriad of possible Draft Version 1.2 \f18 De\ufb01nitions and Applications errors and use meta-communication to overcome them (Trippas, 2019). Existing open-source software to create a CIS system is available. Even though many of these systems cannot be seen as truly conversational, they are updated frequently. For instance, RASA2 provides \ufb02exible conversational software for building text and voice-based assistants but, at the time of writing, lacks mixed-initiative functions. Other conversational systems include Amazon Lex3 or botpress4. Macaw (Zamani and Craswell, 2020) provides an extensible framework for conversational information seeking research and supports both mixed-initiative and multi-modal interactions. Overall, a CIS system is concerned with dialogue-like information seeking exchanges between users and system. Furthermore, the system is pro-actively involved with eliciting, displaying, and supporting the user to satisfy their information need through multi-turn transactions, which can be over one or more sessions. We note that given the complexity of the system and properties listed in De\ufb01nition 3, most research articles make several simplifying assumptions. For instance, TREC Conversational Assistance Tracks 2019 2022 (Dalton et al., 2019; Dalton et al., 2020a; Dalton et al., 2021; Owoicho et al., 2023) do not consider some of these properties, including personalization. 2.5 Conversational Search Conversational search, or the process of interacting with a conversational system through natural conversations to search for information, is an increasingly popular research area and has been recognized as an important new frontier within IR (Anand et al., 2020; Culpepper et al., 2018). Furthermore, mobile devices and commercial intelligent assistants 2https://rasa.com/ 3https://aws.amazon.com/lex/ 4https://botpress.com/ Draft Version 1.2 \f2.5. Conversational Search 19 such as Amazon Alexa, Apple\u2019s Siri, and Google Assistant, in which users interact with a system to search for information, are becoming accepted. Among many other use cases, users can use these systems to receive weather updates, directions, calendar items, and information on any topic covered on the Internet by stating our needs in natural language. Information seeking, or the process by which people locate information, has traditionally been viewed as a highly interactive process (Oddy, 1977; Croft and Thompson, 1987). More speci\ufb01cally, searching has been approached as an interactive user-system activity for many years. Furthermore, with the rise in machine learning (ML), natural language processing (NLP), and spoken language comprehension, understanding many users\u2019 natural language statements has become more feasible. 
Simultaneously, with ever-growing computing power, it has been easier to comprehend, categorize, or analyze major datasets, which helped to develop genuinely interactive systems that go beyond the conventional \u201cquery box\u201d action-reaction search model (Trippas et al., 2018). For example, instead of posing a query word in which the user needs to \ufb01lter through a search engine results page, the user can describe their information need. In addition, the system could inform the user in a more conversational style which documents might be relevant to the query and thus have the system actively involved in the search process. As described by Radlinski and Craswell (2017), the system could reason about the retrieved documents and actively help the user sift through the information. Intuitively, conversational search opens up many possibilities as a new interaction paradigm. For example, we may learn how to optimize traditional browser-based \u201cquery box\u201d searching, improve information accessibility, and decrease information access barriers by incorporating search into everyday dialogues (Balasuriya et al., 2018; Trippas et al., 2021). Consider Figure 2.2, where the statement from a user is short and resembles a keyword-style query and the system response is a long and information-dense passage that is likely hard for the user to consume. In addition, the presentation of the result is not interactive, instead, all the information is presented in one turn, inhibiting the strength of interactivity as an interaction paradigm. Furthermore, the Draft Version 1.2 \f20 De\ufb01nitions and Applications user cannot interact with the content through query re\ufb01nements or clari\ufb01cations. This also reinforces the perceived importance of the initial user query, requiring them to formulate \u201cexcellent\u201d queries from the beginning (Gooda Sahib et al., 2015). Figure 2.2: Example information seeking task where someone inquires whether superannuation is compulsory in Australia. The user asks a keyword-style query and the system response is an information-dense passage. In contrast to the Figure 2.2 example, the example in Figure 2.3 shows a conversational search dialogue that enables the user to provide their query in a more natural style. The dialogue is more natural and involves greater natural language exchanges. The dialogue is intuitively divided into pieces to minimise information overload. Furthermore, the system recognizes the user\u2019s context, assisting them in re\ufb01ning their inquiry, and maintains an account of prior encounters, eliminating the need for repetition. In addition, the system creates a model of the user and their information needs through problem elicitation. All of these user-system interactions are made possible by both sides conversing in a human-like manner. As part of CIS, several de\ufb01nitions of conversational search have been proposed (Anand et al., 2020; Radlinski and Craswell, 2017; Azzopardi et al., 2018; Trippas et al., 2019), which are all inline with the CIS de\ufb01nition provided earlier in this section. For example, researchers who attended the Dagstuhl seminar on Conversational Search created a typology based on existing systems as a de\ufb01nition (Anand et al., 2020). Radlinski and Craswell (2017) and Azzopardi et al. (2018) viewed the Draft Version 1.2 \f2.5. Conversational Search 21 Figure 2.3: Example conversation when someone inquires whether superannuation is compulsory in Australia within a more ideal dialogue. 
process mainly from a theoretical and system perspective, while Trippas (2019) viewed it from a cognitive, user-system, and empirical perspective. As seen in Figure 2.4, the Dagstuhl typology aimed to position conversational search with respect to other disciplines and research areas. For instance, they drew the lines from IR systems and added properties such as statefulness to derive IIR systems and thus specify conversational search as: \u201cA conversational search system is either an interactive information retrieval system with speech and language processing capabilities, a retrieval-based chatbot with user task modeling, or an information seeking dialogue system with information retrieval capabilities.\u201d (Anand et al., 2020, p. 52) Meanwhile Radlinski and Craswell (2017) de\ufb01ne conversational search systems with a more focused and applied view on which properties need to be met. Draft Version 1.2 \f22 De\ufb01nitions and Applications Figure 2.4: The Dagstuhl Conversational Search Typology de\ufb01nes the systems via functional extensions of IR systems, chatbots, and dialogue systems (Anand et al., 2020). \u201cA conversational search system is a system for retrieving information that permits a mixed-initiative back and forth between a user and agent, where the agent\u2019s actions are chosen in response to a model of current user needs within the current conversation, using both shortand long-term knowledge of the user.\u201d (Radlinski and Craswell, 2017, p. 120) Lastly, Trippas (2019) expanded on Radlinski and Craswell\u2019s de\ufb01nition and stated that for spoken conversational search: \u201cA spoken conversational system supports the users\u2019 input which can include multiple actions in one utterance and is more semantically complex. Moreover, the conversational system helps users navigate an information space and can overcome standstill-conversations due to communication breakdown by including meta-communication as part of the interactions. Ultimately, the conversational system multiturn exchanges are mixed-initiative, meaning that systems also can take action or drive the conversation. The system also keeps track of the context of particular questions, ensuring a natural \ufb02ow to the conversation (i.e., no need to repeat previous statements). Thus the user\u2019s information need can Draft Version 1.2 \f2.6. Conversational Recommendation 23 be expressed, formalised, or elicited through natural language conversational interactions.\u201d (Trippas, 2019, p. 142) All of these de\ufb01nitions look at the provided CIS de\ufb01nition from a search perspective by focusing on retrieving/selecting information items. 2.6 Conversational Recommendation Recommender systems can be seen as information seeking systems that provide users with potentially relevant items based on historical interactions. Unlike a conventional search engine that takes a query as input, most recommender systems use past user-item interactions to produce relevant recommendations (Konstan and Riedl, 2012). As such, traditional recommender systems aim to help users \ufb01lter and select items for their information need, often in a closed domain such as books, restaurants, or movies. These systems select possible items from an extensive database and \ufb01lter them to present the user with the best suitable option (Resnick and Varian, 1997; Thompson et al., 2004). 
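The contrast drawn above, between a search engine that takes an explicit query and a recommender that relies on past user-item interactions, can be illustrated with a toy sketch. The catalogue, attribute sets, and scoring rules below are entirely hypothetical and grossly simplified; they only serve to show where the two paradigms obtain their evidence.

```python
from typing import Dict, List, Set

# A toy catalogue: each item is described by a set of attribute terms.
CATALOGUE: Dict[str, Set[str]] = {
    "hotel_a": {"london", "budget", "central"},
    "hotel_b": {"london", "luxury", "spa"},
    "hotel_c": {"paris", "budget", "central"},
}


def search(query: str, k: int = 2) -> List[str]:
    """Search: rank items by overlap with an explicit query."""
    terms = set(query.lower().split())
    scored = {item: len(terms & attrs) for item, attrs in CATALOGUE.items()}
    return sorted(scored, key=scored.get, reverse=True)[:k]


def recommend(history: List[str], k: int = 2) -> List[str]:
    """Recommendation: rank unseen items by similarity to past interactions."""
    profile: Set[str] = set()
    for item in history:
        profile |= CATALOGUE[item]
    scored = {item: len(profile & attrs)
              for item, attrs in CATALOGUE.items() if item not in history}
    return sorted(scored, key=scored.get, reverse=True)[:k]


if __name__ == "__main__":
    print(search("budget hotels in london"))   # driven by the explicit query
    print(recommend(["hotel_c"]))              # driven by past interactions only
```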
Recently, two survey papers on conversational recommender systems have proposed de\ufb01nitions of this research area as: \u201cA conversational recommender system is a software system that supports its users in achieving recommendationrelated goals through a multi-turn dialogue.\u201d (Jannach et al., 2021a, p. 105) and \u201cA recommendation system [ed. conversational recommender system] that can elicit the dynamic preferences of users and take actions based on their current needs through real-time multi-turn interactions.\u201d (Gao et al., 2021a, p. 101) Based on the above de\ufb01nitions and similar to conversational search, conversational recommender systems ultimately should be multi-turn, meaning that there is more than one interaction or two utterances Draft Version 1.2 \f24 De\ufb01nitions and Applications (i.e., one utterance from the user and one from the system). Current conversational recommender systems can answer recommendation requests reasonably well, but often have di\ufb03culties maintaining multi-turn conversations (Jannach et al., 2021a). Even though the usage of multi-turn interactions could imply some kind of memory that can keep track of the communication and current state, most previous de\ufb01nitions fail to mention this fundamental requirement for conversational recommendation. Indeed, some form of user-system interaction history with conversational recommender systems is necessary for a system to be able to provide recommendations based on those previous interactions. Thus, storing past interactions to refer to is a key component, similarly to conversational search. At the same time, it is important to simultaneously consider privacy implications of such an interaction history: What exactly is being retained, how it may be used in future, and how people can control what is stored. This is currently an open area of research. Conversational recommender systems are sometimes referred to as a \u201csystems ask, users answer\u201d paradigm (Sun and Zhang, 2018; Zhang et al., 2018). This means that only the recommender system could ask questions to elicit users\u2019 preferences. Furthermore, this one-way elicitation approach can have di\ufb03culties thoroughly capturing the users\u2019 needs. However, more recent work in conversational recommender systems has investigated this rigid paradigm, introducing the mixed-initiative approach (Ren et al., 2020). Indeed, a conversational recommender system should be able to elicit, acquire, store, and utilize user preferences through implicit (e.g., clicking) or explicit (e.g., rating) user feedback (Pommeranz et al., 2012; Christakopoulou et al., 2016). This implies that conversational recommender systems should be capable of taking the initiative and thus support mixed-initiative interactions. An example of acquiring the user\u2019s preference can be seen in Figure 2.5. A fundamental characteristic of conversational recommender systems is that they support speci\ufb01c tasks and goals. The system should suggest recommendations while the user interacts with that system to help them \ufb01nd relevant information and thus support the user\u2019s decision making process. Another way to elicit user preferences is through product reviews. Draft Version 1.2 \f2.6. Conversational Recommendation 25 Figure 2.5: An example of user interactions with conversational recommender systems from Lei et al. (2020b) with each interaction demonstrating the reasoning. 
However, one drawback of this method is that the user must have reviewed for the system to create a user pro\ufb01le (Chen et al., 2015). Conversational interactions may overcome this issue by simply engaging the user in a conversation about products they liked or disliked in the past or the most important features of products for them (Iovine, 2020), or asking users questions based on others\u2019 reviews (Kostric et al., 2021). Another advantage of the conversational format for recommendations is to explain why (and/or how) particular items are retrieved (Laban and Araujo, 2020). In conversational search, users submit a query and explain their information need, which means there can be some transparency on why the system retrieves the given results. However, the decisionmaking process in recommender systems is much less visible to the users since it is based on prior interactions (Paraschakis, 2016). Further research on systems that reason and explain through natural language and conversational actions why particular results are retrieved, how they yield ethically sourced recommendations that are culturally relevant, and respect laws and societal norms are warranted (Krebs et al., 2019; Di Noia et al., 2022). By providing explanations, conversational systems will enhance human decision-making and will also be improved from an ethical standpoint. Draft Version 1.2 \f26 De\ufb01nitions and Applications Conversational search and conversational recommender systems share many commonalities. Essentially, both tasks aim to provide users with relevant items based on a ranking, either through a query (search) or user preference (recommendation). This point has been raised in the 1990s by Belkin and Croft (1992) and has recently been revisited in (Zamani and Croft, 2020a; Zamani and Croft, 2020b). Furthermore, both systems will interact through conversations with the system and share the same characteristics of interaction modality (see Section 2.2). In conclusion, as been repeatedly mentioned, the boundaries between these CIS applications are often blurred, mainly because many comparable technological and computing methods are applied. Using the strengths and advances from each CIS subdomain will move the area of conversational systems forward. 2.7 Conversational Question Answering Question answering (QA), the task of providing one of more answer(s) to a given question, has been a longstanding information seeking task within the IR and NLP communities (Dwivedi and Singh, 2013; Kolomiyets and Moens, 2011; Warren and Pereira, 1982; Winograd, 1974). Early QA systems were created in the 1960s and 70s, such as BASEBALL (Green et al., 1961) and LUNAR (Woods et al., 1972). Both interfaced a structured database that could be accessed through very restricted natural language questions. The subject domain was also very restricted, so the user query could be processed and parsed through a manually created domain-speci\ufb01c vocabulary. Other early systems, usually created for a speci\ufb01c domain, include SHRDLU by Winograd (1974) and CHAT-80 QA by Warren and Pereira (1982). The SHRDLU system was designed as an interactive dialogue interface to give commands, ask questions, or make statements while the system could react by carrying out the commands, answering questions, and taking in new information. However, this early system had limited capabilities. For example, as Winograd (1974) explained, the system was Draft Version 1.2 \f2.7. 
Conversational Question Answering 27 narrow and only accepted a limited range of information, speci\ufb01cally in understanding human language and the reasoning behind these interactions. QA is a speci\ufb01c form of information seeking where the users\u2019 needs are expressed in a form of (natural language) question. For example, \u201cWhich country has the longest period without a government?\u201d. QA questions are also frequently classi\ufb01ed by common properties and can often be classi\ufb01ed as factoid, list, de\ufb01nition, relationship, procedural, and conformation questions (Kolomiyets and Moens, 2011). These particular question types have speci\ufb01c characteristics, such as a factoid question often starts with WH-interrogated words (what, when, where, who) and list questions often start with List/Name [me] [all/at least NUMBER/some] (Kolomiyets and Moens, 2011). In contrast to classical IR, in which full documents are considered relevant to the user\u2019s need, QA is concerned about \ufb01nding and presenting relatively short pieces of information to answer the queries. Therefore, QA uses NLP and IR techniques to retrieve small text snippets containing the exact answer to a query instead of the document lists traditionally returned by text retrieval systems (Voorhees et al., 1999; Gao et al., 2019). The short answers are often retrieved and presented as short text passages, phrases, sentences, or knowledge graph entities (Lu et al., 2019). With the developments around conversational systems, QA work has received increased attention in the context of CIS (Christmann et al., 2019; Qu et al., 2019c; Kaiser et al., 2020). Conversational QA (ConvQA) can be seen as a subsection of CIS but with a narrower focus than conversational search. Even though ConvQA is a popular research topic, we are unaware of any comprehensive de\ufb01nition of ConvQA. The main reason is likely that it is di\ufb03cult to distinguish it from many conversational search tasks. Traditionally, QA has focused on a single question, meaning no historical interaction data is kept. However, it could be argued that conversations should be composed of more than one interaction. Thus, in conversational QA, the user may pose more than one question. Furthermore, as explained in earlier sections, conversational interactions imply that the history of previous dialogues is kept and used to anDraft Version 1.2 \f28 De\ufb01nitions and Applications swer the user\u2019s questions enabling follow-up questions or references to earlier concepts. Using the advantage of the conversational aspect, users can query the system interactively without having to compose complicated queries (Gao et al., 2019). However, to correctly answer the user\u2019s question, ConvQA systems need to handle more complex linguistic characteristics of conversations, such as anaphoras (words that explicitly refer to previous conversational turns) or ellipsis (words that are redundant in the conversation) (Vakulenko et al., 2020). An example of a series of ConvQA interactions is seen in Figure 2.6. Furthermore, ConvQA is often seen in relation to machine comprehension (Yang et al., 2018b), which is often based on questions about a given passage of text. The main di\ufb00erence is that machine comprehension organizes the questions into conversations (Qu et al., 2019b). This means that leveraging the history is crucial to creating robust and e\ufb00ective ConvQA systems. 
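As an illustration of why conversational history matters for handling anaphora and ellipsis, the sketch below rewrites a context-dependent question into a more self-contained one using a crude heuristic. This is not how ConvQA systems are actually built; production systems typically rely on learned coreference or query-rewriting models, and every rule and name here is our own simplification.

```python
from typing import List

# Pronouns and demonstratives that typically refer back to earlier turns.
ANAPHORA = {"it", "its", "he", "she", "they", "them", "this", "that", "there"}


def extract_focus(history: List[str]) -> str:
    """Crude stand-in for entity tracking: reuse capitalized words
    (excluding the sentence-initial word) from the most recent turn
    that contains any."""
    for turn in reversed(history):
        words = turn.split()
        capitalized = [w.strip("?,.") for w in words[1:] if w[:1].isupper()]
        if capitalized:
            return " ".join(capitalized)
    return ""


def rewrite(question: str, history: List[str]) -> str:
    """Replace an anaphoric reference with the tracked focus, if any."""
    focus = extract_focus(history)
    if not focus:
        return question
    rewritten = []
    for token in question.split():
        bare = token.strip("?,.").lower()
        if bare in ANAPHORA:
            rewritten.append(token.replace(token.strip("?,."), focus))
        else:
            rewritten.append(token)
    return " ".join(rewritten)


if __name__ == "__main__":
    history = ["Tell me about the Great Barrier Reef.",
               "The Great Barrier Reef is the world's largest coral reef system."]
    print(rewrite("How large is it?", history))
    # -> "How large is Great Barrier Reef?"  (crude, but self-contained)
```

Even this toy example shows the general pattern: the current utterance alone ("How large is it?") is not a usable query until information from earlier turns is folded in. Beyond resolving references, history plays broader roles in ConvQA.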
For example, history can help map the state and changes of the information need to inform current or future responses. Recent work from Kaiser et al. (2020) also mentions the importance of dialogue context to improve ConvQA. That is, the user in later interactions can refer to the implicit context of previous utterances. Figure 2.6: An ideal conversational QA interaction example with \ufb01ve turns from Kaiser et al. (2021) where qi and ansi are questions and answers at turn i, respectively. Draft Version 1.2 \f2.8. Conversational Information Seeking in Di\ufb00erent Domains 29 2.8 Conversational Information Seeking in Di\ufb00erent Domains As a paradigm to interact with information, CIS can \ufb01nd items on the web, databases, or knowledge graphs. Conversational information access can also be applied to speci\ufb01c domains such as the \ufb01nancial industry, hospitality, or cooking recipes. This section expands di\ufb00erent domains where CIS can be applied in addition to their unique properties. These domains include e-commerce, enterprise, and health. 2.8.1 Conversational Information Seeking in E-Commerce Finding and buying products through conversational interactions is becoming popular (Papenmeier et al., 2021; Papenmeier et al., 2022). E-commerce transactions, the process of buying and selling goods and services online, are steadily increasing.5 Simultaneously, with the uptake of CIS systems with consumers (e.g., Amazon Alexa or Google Assistant), it becomes increasingly easier to identify consumers\u2019 context (e.g., a user searching for washing instructions or re-ordering washing powder may be located in the laundry), resulting in more accurate context-aware responses. It has been suggested that conversational e-commerce (also referred to as conversational commerce (van Eeuwen, 2017)) search and taskoriented dialogues share commonalities. For example, the dialogue for \ufb02ight reservation and e-commerce will elicit user preferences such as \ufb02ight destinations akin to an e-commerce product (Yang et al., 2018b). However, di\ufb00erences between task-oriented dialogue systems and e-commerce queries have also been observed, making e-commerce information need expression much more complex (Yang et al., 2018b). For instance, e-commerce products often have di\ufb00erent facets, such as brand, color, size, or style, resulting in di\ufb00erent preference slot combinations or shopping schema. Thus, e-commerce schemas can be complex. It is even suggested that they can be incomplete due to the extended range of product facets. Zhang et al. (2018) suggested that user-system interactions in e-commerce CIS systems can be classi\ufb01ed 5https://www.forbes.com/sites/joanverdon/2021/04/27/ global-ecommerce-sales-to-hit-42-trillion-as-online-surge-continues-adobe-reports/ Draft Version 1.2 \f30 De\ufb01nitions and Applications into three stages: initiation, conversation, and display. In their proposed paradigm, the system will loop through questions to understand all user preferences of the product\u2019s facets before presenting the user\u2019s query results. Some advantages of using CIS in e-commerce include accessing products through conversational-enabled devices such as mobile phones or smart devices (van Eeuwen, 2017). Furthermore, instead of going to a shop for support, customers can access help instantly through these devices (Lim et al., 2022). 
In addition, when users are logged in to their shopping pro\ufb01le, personalization and shopping history can optimize shopping experiences. Conversely, CIS systems embedded in an intelligent assistant have the potential to be virtual shopping assistants. Future conversational commerce systems can also be embedded into other emerging technologies, such as augmented reality (B\u00fcschel et al., 2018). 2.8.2 Conversational Information Seeking in Enterprise An application of CIS which has not received as much attention is searching through conversational interactions in an enterprise setting (Teevan, 2020). CIS enterprise systems aim to help people in a work environment such as meeting rooms and at desks, with predictions that by 2025, 50% of knowledge workers would use a virtual assistant daily. This prediction is up from 2% in 2019.6 Even though there has been an increased interest in workplace-oriented digital assistants in general (e.g., Alexa for Business7 or Cortana Skills Kit for Enterprise8), the uptake has been limited. It is well known that enterprise search applications have di\ufb00erent needs than a traditional web search engine, including challenges such as searching over enterprise Intranets or multiple internal sources (Hawking, 2004). Furthermore, besides using CIS systems in a traditional o\ufb03ce environment, many di\ufb00erent applications of more varied and 6https://blogs.gartner.com/anthony_bradley/2020/08/10/ brace-yourself-for-an-explosion-of-virtual-assistants/ 7https://aws.amazon.com/alexaforbusiness/ 8https://blogs.microsoft.com/ai/cortana-for-enterprise/ Draft Version 1.2 \f2.8. Conversational Information Seeking in Di\ufb00erent Domains 31 complex environments, such as airplane pilots, create an extra layer of complexity (Arnold et al., 2020; Gosper et al., 2021). Many open problems in the intersection of CIS applications and enterprise need further investigation. In particular, issues such as de\ufb01ning appropriate test collections, e\ufb00ective conversational search over distributed information sources, identifying tasks that lend themselves to use a CIS application, and understanding the way employees interact with these systems need to be investigated. 2.8.3 Conversational Information Seeking in Health Searching for health information is another application for CIS. Many people already search for health advice online. For example, people will go to symptom checkers to understand if they have an underlying health condition or to identify whether they need professional advice (Cross et al., 2021). Furthermore, a recent study of a CIS application to enable patients to search for cancer-related clinical trials suggest that CIS could help to make health information more accessible for people with low health or computer literacy skills (Bickmore et al., 2016). A recent survey suggests that the main areas of CIS applications are located in areas for patients such as treatment and monitoring, health care service support, and education (Car et al., 2020). However, user groups such as carers and other health professionals can bene\ufb01t from these systems besides patients. For example, in a study where physicians used an information seeking chatbot, they reported that the advantages of CIS include diagnostic decision-making (Koman et al., 2020). Even though CIS has major potential, some concerns about implementing these systems in the health domain need to be addressed. 
For example, these systems may not have su\ufb03cient expertise to answer all questions and may even misinterpret or misunderstand these questions, potentially providing a wrong answer (Su et al., 2021). Although a common challenge to all search systems, this may be exacerbated in a CIS setting if a system were to naively present health misinformation in a way that reinforces it. Furthermore, these systems can deal with sensitive patient data and thus need to be safeguarded. Voice-only CIS systems may also encounter issues with speech recognition, especially Draft Version 1.2 \f32 De\ufb01nitions and Applications when people are distressed or are in noisy environments (Spina et al., 2021). 2.9 Intelligent Assistants Intelligent assistants are often associated with CIS and are rising in popularity. The number of intelligent voice assistants worldwide is predicted to double between 2020 and 2024, from 4.2 billion to 8.4 billion.9 Intelligent assistants are frequently embedded in existing phones, laptops, mobile devices or smart speakers. For instance, assistants such as Google Assistant, Amazon\u2019s Alexa, AliMe, or Apple\u2019s Siri enable users to receive assistance on everyday tasks with a speci\ufb01c goal (e.g., turning on or o\ufb00appliances) or conduct simple question-answering tasks such as asking for weather forecasts or the news. With the increase in mobile devices and mobile internet connections, users instantly have access to powerful computational and digital intelligent assistants. These may even be designed to access the user\u2019s situation or context through GPS locations, the people around them through Bluetooth scans, and previous interactions with their electronic devices (Liono et al., 2020; Trippas et al., 2019) when enabled on the mobile device. However, more research is needed to use all the contextual signals to optimize CIS responsibly and with user privacy in mind. Di\ufb00erent CIS tasks may require access to di\ufb00erent knowledge sources and databases. Intelligent assistants need to disambiguate which knowledge source they need to retrieve the information from. For instance, Aliannejadi et al. (2018b) introduced the problem of uni\ufb01ed mobile search, in which intelligent assistants identify the target mobile apps for each search query, route the query to the selected apps, and aggregate the search results. In follow-up work, the authors demonstrated the impact of user context and app usage patterns on uni\ufb01ed mobile search (Aliannejadi et al., 2018a; Aliannejadi et al., 2021b). Identifying knowledge sources was also used in the Ninth Dialog System Technology Challenge (DSTC9) with a track called \u201cBeyond domain APIs Tasksoriented conversational modeling with unstructured knowledge access\u201d. 9https://www.statista.com/statistics/973815/worldwide-digital-voice-assistant-in-use Draft Version 1.2 \f2.10. Summary 33 This track aimed to expand di\ufb00erent task-oriented dialog systems by incorporating external unstructured knowledge sources (Gunasekara et al., 2020). The track\u2019s purpose was to investigate how to support frictionless task-oriented situations so that the \ufb02ow of the conversation does not break when users have questions that are out of the scope of APIs/DB but possibly are available in external knowledge sources. Other applications incorporating CIS systems are embodied robots, e.g., the Multi-Modal Mall Entertainment Robot (MuMMER) (Foster et al., 2016). 
MuMMER was a collaborative challenge in which a robot was made to behave appropriately to human social norms and engage through speech-based interactions. Similarly, social bots enable users to search and engage in information dialogues. This has been thoroughly studied in the context of Alexa Prize Socialbot Challenge (Ram et al., 2018). Although these interactions involving search for information may di\ufb00er from a focused CIS system, embedding CIS enables a wider variety of use-cases. 2.10 Summary This section provided a high-level overview of CIS and its applications. We \ufb01rst started by providing de\ufb01nitions for conversation, information seeking conversation, and CIS systems. Under these de\ufb01nitions, conversational search, conversational question answering, and conversational recommendation are seen as the subdomains of conversational information seeking tasks. This section also included several system requirements that are expected from CIS systems. We later reviewed previous work that characterizes the three subdomains of CIS and discussed their connections. We lastly provided an overview of how CIS can be used in particular domains and compared CIS to intelligent assistants. CIS is still being developed and is rapidly expanding as a multi-dimensional and multi-disciplinary research area. Overall, this section summarized prior work in conversational information seeking applications to provide an overview. Draft Version 1.2 \f3 Conversational Interfaces and Result Presentation The emergence of conversational systems has empowered the development of a new kind of human\u2013computer interface supporting users to converse with the interface through spoken interactions. In this section, we introduce di\ufb00erent kinds of conversational interfaces, set out the limitations, how they support the entire interaction from the users\u2019 speech input to the system\u2019s output, and investigate the latest research in the presentation of results. A conversational interface, also identi\ufb01ed as conversational user interface (CUI), presents the front-end to a chatbot or virtual personal assistant, enabling the user to interact with the application through various input and output modalities such as speech, text, or touch (McTear et al., 2016; McTear, 2017). Besides being the system\u2019s front-end, the conversational interface integrates or glues together all the underlying system components, represented in a usable application (Zue and Glass, 2000). Even though all the recent developments of the separate components have made conversational interfaces more functional, they act as the orchestrator of all the information with their challenges. Overall, this section introduces the di\ufb00erent conversational interfaces and illustrates the limitation of transferring information in a 34 Draft Version 1.2 \f3.1. Conversational Interfaces 35 conversational style for di\ufb00erent interfaces. We discuss initiative as a critical element in conversational interactions, including the interface limitations with regards to CIS. 3.1 Conversational Interfaces Interfaces that provide users with the ability to interact conversationally with systems through di\ufb00erent modalities such as speech, gesture, text, or touch are commonly referred to as CUIs. Many additional terms refer to these systems that enable conversational interactions, including chatbots, intelligent assistants, or conversational agents. 
An interface is often referred to be conversational when it covers two basic attributes (1) natural language and (2) conversational interaction style (McTear, 2017). The natural language attribute means that the system and user can use language as in naturally occurring conversations between two or more participants; this contrasts to restricted commands, mouse clicks, or phrases in a graphical user interface (GUI). Furthermore, natural language is more \ufb02exible, permitting input to be expressed in many di\ufb00erent ways versus one \ufb01xed expression. Intuitively, allowing the user to input natural language contributes to a more complex system. In addition, conversational interaction style is often referred to as basic turn-taking behavior in which the user and system converse one after another. This contrasts with clicking or swiping on GUI elements such as buttons or drop-down menus. Furthermore, to make an interface even more conversational, the usage of mixed-initiative is introduced. Mixed-initiative is more human-like and \ufb02exible because both actors can independently contribute to the conversation. Lastly, a more advanced system could include context tracking enabling follow-up questions and persistent tracking of the topic. Even though many dialogue systems are seen as conversational, they may not be tracking the context and therefore never refer back to a previous question or answer. Instead, they attend to every input individually. Draft Version 1.2 \f36 Conversational Interfaces and Result Presentation Basic conversational interfaces often consist of two primary attributes and sub-attributes: natural language which does not consist of \ufb01xed expressions, and conversational interaction style which could support turn-taking, mixed-initiative, and context tracing. Even though various forms of conversational interfaces have been around for a long time, we have recently seen a revival of the topic, mostly due to the advances in automatic speech recognition (ASR), natural language processing (NLP), and machine learning in general. Nevertheless, much fundamental research dates back to the 1960s with the \ufb01rst well-known chatbot, ELIZA, having simulated a Rogerian psychologist (Weizenbaum, 1966). In the following, we provide some historical context for four distinctive groups of conversational interfaces, (1) spoken dialogue systems (SDSs), (2) voice user interfaces (VUIs), (3) live chat support, and (4) chatbots. 3.1.1 Spoken Dialogue Systems Spoken dialogue systems (SDSs) enable users to interact with a system in spoken natural language on a turn-by-turn basis and are an instance of a conversational interface. Many of these systems are used for taskoriented issues with clear task boundaries, such as travel planning. In the 1960s and 70s, the earliest SDSs were mainly text-based. However, once technologies improved in the 80s, more complex components were added, such as more advanced ASR or components that helped recover from conversational breakdowns. Much government funding from Europe and the U.S. supported research in SDS, which resulted in the European SUNDIAL (Speech Understanding and DIALog) project (Peckham, 1991) and the DARPA spoken language system in the U.S. (Clark, 1988). The SUNDIAL project aimed to design systems that could be used by the public, while the DARPA program focused on the technical aspects. 
Many of the early research outcomes are still applicable today, such as the Information State Update Theory (Traum and Larsson, 2003), information presentation techniques (Gibbon et al., 1997), or the CSLU toolkit (Sutton and Cole, 1997). A frequent example task for SDSs is time-tabling for travel services, providing the interface between the user and a database (Fraser, 1998). In the Figure 3.1 example, the user needs to find a reasonable travel plan.

Figure 3.1: Example conversation where the user wants to book travel and the system provides options.

As seen in the first utterance from the system, it is narrowing down the information need by adding a refinement or clarification question. These back-and-forth interactions are part of the elicitation process for the system to understand and specify the information need.

3.1.2 Voice User Interfaces

Companies have traditionally developed VUIs for commercial benefits, in contrast with SDSs, which have mainly been created by academic and research labs. For example, AT&T created an early VUI called How May I Help You? which supported call routing (Gorin et al., 1997). These automated customer self-service systems are task-oriented and engage in conversation to help the client, thus being classified as conversational interfaces. Instead of helping the customer with their problem, such VUIs typically aim to understand the customer's problem sufficiently, after which the user can be routed to the appropriate (human) call taker to help with their problem further. Thus, these call routing services only need to elicit the general problem to refer the call to someone or a specific system module. The system responses are pre-recorded, which is possible for highly structured domain-specific settings. For example, a scenario where a user wants to pay for a service might follow a scripted interaction as shown in Figure 3.2.

Figure 3.2: Example conversation with a VUI in which the system is eliciting how they can help the user before possibly routing them to a human operator for more complex interactions.

In these systems, when none of the options are relevant to the user, the system will narrow down the problem to re-route the call to an appropriate human agent. The connection with CIS is the human-like interactions, eliciting information needs, and narrowing down the relevant answers or services. The VUI community has been involved in the development of W3C standards for scripting spoken dialogues such as VoiceXML,1 VoiceXML-based toolkits,2 and the development of speech analytics.

1https://www.w3.org/TR/voicexml21/
2http://evolution.voxeo.com/

3.1.3 Live Chat Support

The above interfaces (i.e., SDS and VUI) are mainly used with an underlying automated system. However, many support systems are powered by humans, in which the interface is the connection between a user and a service provider. Live chat support is real-time communication between a customer and a support person via instant messaging, often through a pop-up dialogue box. The service providers can include librarians on a library website (Matteson et al., 2011), technical or sales support on e-commerce websites (Goes et al., 2012), or health assistance (Stephen et al., 2014).
Such chat support interfaces are often embedded as web widgets in websites or as an extra feature within an application. The main advantage of live chat support interfaces is that the chat history is persistent and can be referred to by the users. Furthermore, these chats can support asynchronous and synchronous interactions (Fono and Baecker, 2006). Some recent work by Vakulenko et al. (2021) investigated virtual reference interviews of professional librarians. They suggest major differences between librarian interviews and existing datasets used to investigate, analyze, and train CIS topics. For example, they suggested that professional intermediaries are more proactive, write more extended responses, ask follow-up questions, and actively steer the topic of conversation. Further research e\ufb00orts are needed to understand the impact of di\ufb00erent conversational styles of CIS systems (Thomas et al., 2018). A \u201clive chat support\u201d provider (e.g., the call taker or customer provider) is often synchronous, meaning that the support person answers questions from the user in real-time. However, many support providers are required to answer multiple customers simultaneously, creating a one-to-many relationship. The importance of the support provider\u2019s interface, which could support decision making by ranking response suggestions on the information-seeking process or incorporating machine reading to track the conversation, has not been studied extensively (Xu and Lockwood, 2021; Yang et al., 2018b). Furthermore, research on how the support providers deal with task-switching and interruptions could suggest future conversational interface optimisations (Pajukoski, 2018). Draft Version 1.2 \f40 Conversational Interfaces and Result Presentation 3.1.4 Chatbots The interactions with chatbots are often based on social engagement through chit-chat (i.e., small talk), in contrast to the task-oriented interactions with SDSs and VUIs. Traditionally, chatbots are mainly text-based. However, more recent chatbots incorporate spoken interactions, images, and avatars to create a more human-like persona.3 All the above systems aim to support users to interact with datasets or databases. Due to the conversational aspect of the interaction, no technical expertise is required to interact with these databases, making them more accessible. As illustrated with the di\ufb00erent CUIs (i.e., SDS, VUI, live chat support, and chatbots), these systems cover a large range of applications and tasks (e.g., from travel booking to chit-chat). Although all these CUIs may be considered conversational, they still di\ufb00er in the degree that people are searching for information, the system maintains control, and \ufb02exibility allowed by the user to ask for what they want to \ufb01nd or how they want to have the information presented. In contrast, searching for information on the web over documents is much less predictable and cannot be implemented by pre-set re\ufb01nement options. Due to the vast amount of information, more advanced techniques are needed to support users\u2019 information needs. Other questions such as the ambiguity in people knowing when they are talking to a human or machine (e.g., chatbot),4 the trust of people have in these systems, appropriateness of these systems, or transparency around the usage of arti\ufb01cial intelligence in general5 are relevant (Mori et al., 2012; Zamora, 2017; Gupta et al., 2022). 3Note that a chatbot is di\ufb00erent from a bot (McTear, 2017). 
A chatbot is a software application that can perform automated tasks while engaging in conversations with the user. This contrasts with bots, which complete repetitive and mundane automated tasks such as crawling the web or harvesting email addresses from social networks. 4https://botor.no/ 5https://digital-strategy.ec.europa.eu/en/policies/ european-approach-arti\ufb01cial-intelligence Draft Version 1.2 \f3.2. Result Presentation: From Search Boxes to Speech Bubbles 41 3.2 Result Presentation: From Search Boxes to Speech Bubbles Result presentation in CIS is tightly coupled with decades of research on interface development for search engines and other information retrieval systems. In this section, we draw the connection between conversational user interfaces required in CIS and past research on result presentation in search engines. Results presentation, the way search results are communicated, has been a major research area for many years (Croft et al., 2010). The general approach to presenting search results is a vertical list of information summarizing the retrieved documents. These results should not only return relevant results but also display them so that users can recognize them as relevant to their information need. Even though many people have become accustomed to searching through these search boxes, \ufb01nding information can still be a demanding task with much information to \ufb01lter through. Traditionally, a user would submit an information need through keywords in a search engine search box. In return, search engines present a ranked list with potential relevant documents for that query, also referred to as the search engine result page (SERP). This SERP consists of the traditional \u201cten blue links\u201d in which each item or result consists of a document title, a short summary (i.e., snippet), URL, and often other meta-data such as date or author (see Figure 3.3) (Hearst, 2009; Paek et al., 2004). Figure 3.3: Traditional SERP example versus a conversational style interaction The user would then review this returned ranked list and select an item they think would satisfy their information need. However, the \ufb01rst Draft Version 1.2 \f42 Conversational Interfaces and Result Presentation clicked item will often not satisfy the users\u2019 information need. Instead, the user will go back and forth between inspecting SERPs, looking at the contents of documents and submitting new queries. These interactions mimic a limited or one-sided conversation driven by the user. In this instance, the user has \u201ccontrol\u201d over the actions taken and the system has limited capabilities to interact with the user. These systems are sometimes referred to as passive (Avula, 2020; Trippas et al., 2018). The alternative interaction paradigm of CIS aims to overcome the limitations of the results presentation strategies of existing search engines by becoming more active. That is, instead of presenting a ranked list, these CIS systems can be more \ufb02exible with their information presentation strategies by adapting to the user\u2019s needs. Even though research has shown that di\ufb00erent presentation techniques and answer organization are needed for di\ufb00erent modalities, limited research has been conducted in how (the content expression) and what (the content response) to present in conversational search (Chuklin et al., 2018; Trippas et al., 2015b; Vtyurina et al., 2020). 
Furthermore, not only the retrieved information needs to be presented but depending on the modality of the results presentation, other interactions such as meta-conversations (i.e., information about the information, for example, information about a document or page), need to be presented (Kiesel et al., 2021a; Trippas et al., 2018). People search di\ufb00erently depending on the device (e.g., desktop versus mobile) and modality (e.g., text versus audio). Some of these di\ufb00erences are highlighted in the following subsections. 3.2.1 Text-Only Result Presentation on Desktops Much research has been conducted on the appearance of SERPs in browsers (Hearst, 2009). In a visual setting, researchers have investigated features such as snippet length (Cutrell and Guan, 2007; Kaisser et al., 2008; Maxwell et al., 2017; Rose et al., 2007), snippet attractiveDraft Version 1.2 \f3.2. Result Presentation: From Search Boxes to Speech Bubbles 43 ness (Clarke et al., 2007; He et al., 2012), or the use of thumbnails (Teevan et al., 2009; Woodru\ufb00et al., 2002). Research on results presentation has suggested that the presentation has an impact on the usability of the system. For instance, Clarke et al. (2007) investigated the in\ufb02uence of SERP features, such as the title, snippets, and URLs on user behavior. They suggested that missing or short snippets, missing query terms in the snippets, and complex URLs negatively impacted click-through behavior. In addition, Cutrell and Guan (2007) used an eye-tracking study to explore the e\ufb00ects of changes in the presented search results. They manipulated the snippet length with three di\ufb00erent lengths (short [1 text line], medium [2-3 lines], and long snippets [6-7 lines]) as shown in Figure 3.4. Their results suggested that depending on the search task (i.e., navigational or informational), the performance improved with changing the length of the snippet. For navigational queries, optimal performance happened with short snippet lengths, while extended snippets helped the most for informational tasks. Figure 3.4: Snippet length di\ufb00erences (Cutrell and Guan, 2007). Further research into snippet summary length con\ufb01rmed the \ufb01ndings that di\ufb00erent snippet lengths were preferred depending on the task (Kaisser et al., 2008). A more recent study by Maxwell et al. (2017), re-investigated the varying snippet length and the information Draft Version 1.2 \f44 Conversational Interfaces and Result Presentation content within the snippets. Their results suggest that users preferred more informative and extended summaries, which they perceived as more informative. However, even though participants felt that longer snippets were more informative, they did not always help the users to identify relevant documents. Techniques in which visual changes to the text are made, such as clustering, highlighting, or \u201cbolding\u201d query words in their context, sentence fragments, or query-biased summaries have been extensively investigated for traditional results presentation (Hearst, 2009). Furthermore, besides only showing text in the SERP, search engine companies have added more techniques to display results through feature snippets, knowledge cards, query suggestions, or knowledge panels. More research on these presentation styles in CIS is needed to understand the impact of these techniques in a conversational setting. Limited research has been conducted into conversational results presentation for desktop. 
A recent prototype for text-only chat-based search by Kaushik et al. (2020) combined a conversational search assistant (i.e., Adapt Search Bot) with a more traditional search interface (i.e., Information Box); see Figure 3.5. The user can either interact with the assistant on the left side of the application or with the retrieved information on the right panel. The authors described this design as flexible for users to interact with the agent and the search engine itself. Furthermore, their design supported users interacting with the search engine, with the agent initiating dialogues to support the search process. However, further research could help understand the impact of different presentation techniques, chat-based search, and distributed results presentation (e.g., results on both left and right panels).

Figure 3.5: A visual example of the Conversational Agent by Kaushik et al. (2020). The agent consists of a conversational search assistant (left) with a more traditional search interface (right).

Another alternative for searching through conversational interactions on a desktop was presented by embedding a searchbot directly into an existing messaging platform (i.e., Slack) by Avula et al. (2018). The searchbot intervened in a collaborative setting (i.e., a search interaction with more than one searcher) by injecting information relevant to the conversation between the two users. An example of a searchbot results page within Slack is presented in Figure 3.6. As seen in the figure, the results were always followed by a "click here for more" option, redirecting the users to a different SERP. The results of this study suggest that dynamically injected information can enhance users' collaborative experience. Further research into the presentation of the results in such a collaborative CIS setting is needed to enhance our understanding of optimizing this search experience.

Figure 3.6: A searchbot results presentation example inside Slack on a desktop (Avula et al., 2018).

3.2.2 Text-Only Result Presentation on Small Screens

People interact differently when searching for information on a mobile or desktop device (Jones et al., 1999; Church and Oliver, 2011; Ong et al., 2017). Researchers have suggested that the shift to mobile search has also been a paradigm shift in web search (Ong et al., 2017). Differences in screen size and being able to access search engines in different contexts or "on-the-go" have impacted how we search. With the increasing use of mobile devices such as smartphones, researchers have also investigated results presentation on different screen sizes (Ong et al., 2017; Kim et al., 2015). Because of the smaller screen sizes on mobile devices, it is important to investigate result presentation and optimize for the screen real-estate. For example, an average-sized snippet for a desktop site may not be appropriate for a smaller screen since it may involve more scrolling and swiping. Kim et al. (2017) studied different snippet lengths on mobile devices. An example of varying snippet length on a small screen is presented in Figure 3.7. They demonstrated that participants who were using more extended snippets took longer to search because it took them longer to read the snippets.
They suggested that unlike previous work on the e\ufb00ect of snippet length, the extended snippets did not seem that useful for mobile devices and that snippets of two to three lines were most appropriate. Furthermore, it has been suggested that short snippets may provide too little information about the underlying document, which can have an adverse e\ufb00ect on the search performance (Sachse, 2019). In general, depending on the information need, di\ufb00erent snippet lengths could be used to optimize the user experience. Even though results presentation has not been fully explored in a CIS context, CIS systems can be developed and deployed on already installed mobile messaging applications such as Telegram (see Figure 3.8) (Zamani and Craswell, 2020). This means that people are already familiar with Draft Version 1.2 \f3.2. Result Presentation: From Search Boxes to Speech Bubbles 47 Figure 3.7: Examples of SERPs with short (left) and long (right) snippets by Kim et al. (2017). the application and it can often be deployed and accessed over multiple devices and platforms. Furthermore, embedding these CIS systems within existing messaging applications means the user does not need to download and install new apps for every service. However, further research is needed to understand how users interact with information through such messaging applications. For example, little is known about how to display multi-modal information on small screens (i.e., how much information should be displayed versus the trade-o\ufb00from screen real-estate). 3.2.3 Speech-Only Result Presentation Result presentation research has traditionally been focused on visual representation. However, with the ongoing trend of CIS and the improvement of speech recognition, researchers have started investigating how to present results in a speech-only setting.6 It has been suggested that using speech to search is a natural extension of the visual search 6We use speech-only, which is the structural act or mechanism to speak. However, some of the studies described use audio as a sound or voice. Draft Version 1.2 \f48 Conversational Interfaces and Result Presentation Figure 3.8: Example screenshot of results presentations with Macaw (Zamani and Craswell, 2020) using Telegram. engines, potentially changing how we access information (Trippas, 2019). However, several researchers have also suggested that simply translating a SERP from a visual to a speech setting is not desirable (Lai et al., 2009; Trippas, 2019; Vtyurina et al., 2020). For instance, Vtyurina et al. (2020) found that simply translating text results into audio impacts the user experience negatively and requires higher cognition. Thus, it has been suggested to steer away from the \u201cten blue link\u201d paradigm and instead re-think the interactions with search systems. Furthermore, due to the temporal nature of speech, results can be adapted on the \ufb02y to change the presentation, thus supporting the user in their information need more actively. Similarly, as in traditional web search versus searching on smaller screens, it has been suggested that snippet length should be altered depending on the information need. In a study by Trippas et al. (2015b), the preference for summary length was investigated with a crowdsourcing setup. Speci\ufb01cally, they studied the summary length by comparing user preference between text and speech-only results. They observed that Draft Version 1.2 \f3.2. 
users preferred longer, more informative summaries in a text setting than with audio summaries. Furthermore, different results were observed depending on the query style (single- or multi-faceted): users preferred shorter audios for single-faceted queries, although for more ambiguous queries this preference was not clear.

More recent work by Vtyurina et al. (2020) also compared results presented over text versus speech. They used a mixed-methods study with a crowdsourcing and laboratory component, finding that user preferences differ depending on the presentation mode (text or speech). However, they also found that users can still identify relevant results even if presented in a more cognitively demanding speech format. The authors suggested that further improvements to the snippets can help optimize and guide the use of speech-based search interfaces. As part of this study, the authors provided the following presentation guidelines for speech-only results presentation:

• Use prosody to avoid a monotone voice
• Avoid abbreviations in the spoken results
• Avoid truncation of sentences
• Avoid repetitive terms in spoken results

Research on using prosody for results presentation was conducted by Chuklin et al. (2018) and Chuklin et al. (2019). They investigated audio manipulation as an alternative to "highlighting" or "bolding", which is frequently done in a visual interface. They ran a crowdsourcing study in which speech prosodies such as pitch, pauses, and speech rate were modified in read-out snippets. They found that some emphasis features help users identify relevant documents and also increase snippet informativeness.

Many open problems related to supporting and guiding searchers through results presentation exist. Examples include presentation order bias (Azzopardi, 2021; Kiesel et al., 2021b), interaction with tabular data (Zhang et al., 2020a), personas of the conversational system (Nass and Brave, 2005), persuasiveness of synthetic speech (Dubiel et al., 2020b), meta-communication to support communication breakdowns (Trippas et al., 2018), and using non-speech sounds to increase user engagement with search results (Winters et al., 2019; Arons, 1997). For example, order bias has been suggested to affect which result summaries receive the most attention from users in a visual setting (Joachims et al., 2005). Work has suggested a possible bias towards first and last read-out search results depending on the kind of information need, single- versus multi-faceted (Trippas et al., 2015a). Such serial-position effects (i.e., the tendency to recall the first and last items best and the middle items worst) remain open problems.

3.2.4 Multi-Modal Results Presentation

Past research on CIS primarily focuses on uni-modal interactions and information items. That is, all information is generally exchanged in either text or speech-only format within one turn. However, more recently, researchers have started investigating in more detail the advantages of multi-modal CIS (MMCIS), in which multiple input and output approaches are used (Deldjoo et al., 2021; Liao et al., 2021). Presenting search engine results over a multi-modal channel aims to increase the knowledge transfer of different modalities, enhancing the search experience (Schaffer and Reithinger, 2019).
A multi-modal interface can process two or more user input modes in one turn, for instance, speech, images, gestures, or touch (Furht, 2008). Multi-modal systems try to recognize human language, expressions, or behaviors which then can be translated with a recognition-based system. These multi-modal interfaces are often seen as a paradigm shift away from the conventional graphical interface (Oviatt and Cohen, 2015). Similar to a multi-modal dialogue system, MMCIS systems aim to provide completeness to the unimodal counterpart by providing information through multiple modalities (Firdaus et al., 2021). Furthermore, the theoretical advantage of these di\ufb00erent inputs is that they are very close to human expression and thus are an e\ufb03cient way of human-computer interaction. Thus, multi-modal interfaces enable humans to input signals to machines naturally through a mixture of interactions to convey the intended meaning (Rudnicky, 2005) and it is often suggested that multi-modality increases the intuitiveness of an interface. By coupling the intuitiveness of conversations with human conDraft Version 1.2 \f3.2. Result Presentation: From Search Boxes to Speech Bubbles 51 versation, which is inherently multi-modal, the strengths of human communication can be combined, enabling a natural form of information seeking. In addition to the system trying to elicit information from the user to satisfy the information need and perform queries in the background, the system also needs to decide which, what, how, and when to present information. Rousseau et al. (2006) created a conceptual model, called WWHT, describing four main concepts of multi-modal information presentation, based on four concepts \u201cWhat\u201d, \u201cWhich\u201d, \u201cHow\u201d, and \u201cThen\u201d: \u2022 What is the information to present? \u2022 Which modality(ies) should we use to present this information? \u2022 How to present the information using this(ese) modality(ies)? \u2022 and Then, how to handle the evolution of the resulting presentation? When designing multi-modal CIS interactions, a fundamental problem is the option, combination, or sequence of di\ufb00erent outputs of \u201cdisplaying\u201d results. For example, it is logical that relying only on a speech-only result presentation in a loud environment will be undesirable. Instead, using a combination of modalities to present the results in such an environment may be advantageous. Furthermore, as identi\ufb01ed and demonstrated by Deldjoo et al. (2021), MMCIS, and therefore the information presentation problem, is suitable in the following conditions: \u2022 the person who is searching has device(s) available which allows for more than one interaction mode (multi-device and multimodal), \u2022 when the task\u2019s context is important and can be captured with a device in a suitable modality enhancing personalization, \u2022 when task complexity can be supported by the mode of device interaction, \u2022 when the results can be returned in an appropriate output modality given the device, context, and complexity. Draft Version 1.2 \f52 Conversational Interfaces and Result Presentation Many open challenges for CIS results presentation in a multi-modal domain exist. Problems include selecting the optimal output modality depending on the context or the user\u2019s ability, adapting or changing the output modality to be di\ufb00erent from the retrieved modality, or fusing the response to present the results in multiple modalities (Deldjoo et al., 2021). 
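To make the modality-selection decision discussed above more concrete, the following toy Python sketch maps a simple device and context description to output modalities for a single system turn. The data structure, field names, and rules are hypothetical illustrations added here for exposition; they are not drawn from the WWHT model or the systems cited above.

from dataclasses import dataclass
from typing import List

@dataclass
class PresentationContext:
    has_screen: bool        # e.g., smart speaker vs. phone
    hands_free: bool        # e.g., the user is driving or cooking
    noisy_environment: bool
    result_has_table: bool  # structured content is hard to read aloud

def choose_output_modalities(ctx: PresentationContext) -> List[str]:
    """Return an ordered list of output modalities for one system turn (toy rules)."""
    modalities: List[str] = []
    # Speech is preferred when the user cannot look at a screen,
    # unless background noise makes audio unreliable.
    if ctx.hands_free and not ctx.noisy_environment:
        modalities.append("speech")
    # Visual output is preferred for structured content (tables, graphs)
    # whenever a screen is available.
    if ctx.has_screen and (ctx.result_has_table or ctx.noisy_environment):
        modalities.append("visual")
    # Fall back to speech if nothing else applies.
    return modalities or ["speech"]

# Example: a phone user in a loud train station asking for a timetable.
print(choose_output_modalities(PresentationContext(
    has_screen=True, hands_free=False,
    noisy_environment=True, result_has_table=True)))  # ['visual']

Even such a simple rule set shows why the "Which" and "How" decisions cannot be made independently of device, context, and content; learned policies would face the same trade-offs.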
New tools like Task Multimodal Agent Dialogue (TaskMAD) support wizard-of-oz data collection and experimentation with multiple modalities (Speggiorin et al., 2022) to support research in these future directions. 3.3 Initiative in Conversational Systems The demand to access information rapidly in a natural way has substantially increased due to the proliferation of reliable mobile internet, mobile devices, and conversational systems. Humans create and collect more information than ever before7 through blog posts, social media, emails, news articles, or videos while using them for education, entertainment, \ufb01nance decisions, or other decision making (Zue and Glass, 2000). In addition, querying this information has become omnipresent, with an estimated 75,000 Google searches per second in 2019.8 DuckDuckGo, a privacy-focused search engine with an estimated market share of 0.18% of global searches, received 23.65 billion search queries in 2020,9 illustrating the scale of search in our daily life. Furthermore, with the rise of smartphones and mobile internet, we have been accustomed to accessing this information on the go and while multitasking.10 However, accessing information through a small screen and on-screen keyboard while travelling can be cumbersome. Therefore, conversational interfaces in which natural language can be used to interact with information have great promise. Indeed, spoken human language is attractive since it is the most intuitive way of conversation. 7https://www.forbes.com/sites/nicolemartin1/2019/08/07/ how-much-data-is-collected-every-minute-of-the-day/ 8https://www.domo.com/learn/infographic/data-never-sleeps-7 9https://www.theverge.com/2018/10/12/17967224/ duckduckgo-daily-searches-privacy-30-million-2018 10https://www.thinkwithgoogle.com/intl/en-aunz/marketing-strategies/ app-and-mobile/device-use-marketer-tips/ Draft Version 1.2 \f3.3. Initiative in Conversational Systems 53 Furthermore, it is often seen as a very e\ufb03cient, \ufb02exible, and inexpensive means of communication (Zue and Glass, 2000; Trippas, 2019). In addition to human language, additional support input can be given through gestures as part of the multi-modal input (see Section 3.2.4). Figure 3.9: Two example chats with XiaoIce (Shum et al., 2018). Independent of the kind of conversational interface, these interfaces are often considered from the perspective of initiative. That is, to which degree does the system maintain an active role in the conversation (Zue and Glass, 2000; McTear et al., 2016; McTear, 2017). Three di\ufb00erent levels are often used to distinguish these, i.e., system-initiative, mixedinitiative, and user-initiative and are often used interchangeably with levels of control system, user, interchangeable. With system-initiative applications, or system-directed dialogue, the computer takes control over the sequences in the conversation and which information needs to be exchanged. The aim of the system is to elicit information from the user to provide relevant details back to the user. This can be done by asking open-ended questions, such as seen in the \ufb01rst utterance in Figure 3.10, in which the system invites the user to provide information and then elicits further details (third utterance). As seen in the example, in system-initiative dialogues, the system takes the initiative to drive the conversation and the user only answers the system\u2019s queries. 
This strategy aims to constrain the user input Draft Version 1.2 \f54 Conversational Interfaces and Result Presentation Figure 3.10: Example conversation where the system asks an open-ended question in the opening utterance and a more speci\ufb01c question next. or request variety, thus making the dialogues more e\ufb03cient. However, this comes at a cost, with rigid and restricted conversations making the interactions less natural. In the third user utterance, the user takes control of the dialogue by asking a question, turning the conversational interaction into a mixed-initiative dialogue. Hence, both user and system now actively participate in addressing the information need through the interactive conversational paradigm. Thus, mixed-initiative dialogues are known for a more natural exchange, however, more advanced ASR and language understanding are needed. Lastly, user-initiated, or user-directed dialogues, are conversations in which the user has complete control and can say anything to the system and the user always has the initiative. This means that the system will only respond to the user\u2019s requests. The disadvantage of this approach is that the user may \ufb01nd it challenging to understand the system\u2019s capabilities because the system will never suggest anything. Furthermore, dialogues with user-initiative may lead to frustration from Draft Version 1.2 \f3.4. Interface Limitations in Conversational Systems 55 the user because the system is not a conversational partner but rather only replies to requests. 3.4 Interface Limitations in Conversational Systems Even though conversational systems can have many advantages, such as enabling users or supporting natural language input, expression of multiple information needs in one turn, cross-platform compatibility and integration, and increasing engagement through personalization, many limitations need to be addressed. For example, natural language input components, such as ASR and NLU, need to be optimized to handle the huge number of unknown and unexpected user inputs. Furthermore, conversational systems need to be optimized to handle non-explicit information needs. For example, a user\u2019s tone of voice may imply that they want the conversational partner to do something, even though that need was not explicitly stated. Current CIS systems work reasonably well with narrow or factoid queries, however, they still have issues when the information need is more complex (e.g., multi-faceted) or has multiple information needs in one turn. Besides the limitation of results presentation or output from the system discussed in Section 3.2, such as highlighting or bolding keywords, other more general limitations must be considered. For example, GUIs should be carefully investigated before being directly translated into conversational or voice user interfaces. Even though many chatbots support menu-based interactions within the application, using buttons or menus will limit the bene\ufb01ts of natural language input. Furthermore, issues that already exist in GUIs are now passed on to conversational systems. As such, conversational systems now inherit the GUI experience devaluing the natural language advantage. In addition to these existing output di\ufb03culties, speech-only conversational systems have distinct challenges. For example, simply reading out textual components or reading out lists has shown to be ine\ufb00ective (Trippas et al., 2015b; Vtyurina et al., 2020; Gooda Sahib et al., 2015). 
Indeed, the serial and transient nature of audio can challenge the users' ability to recall all information presented (Dubiel et al., 2020a). This exacerbates the difficulty of skimming audio and makes it challenging to present results while not overwhelming the user with information nor leaving them uncertain as to whether they have covered the information space (Trippas et al., 2015b; Trippas, 2019). These CIS systems cannot maintain a lengthy information exchange or keep sufficient track of the context. In addition, images and graphs are more challenging to display and may need to be interpreted by the system to inform the user what is displayed (Trippas et al., 2019). Other limitations arise when the tone of voice or persona of the system interacts with existing stereotypes or biases about humans speaking in particular ways; this may plausibly both reinforce existing biases and cause systems to be perceived in particular ways (Nag and Yalçın, 2020).

Considerations must also be made for the limitations of automatic speech recognition (ASR). For example, users' speech input may include disfluencies or errors. Users may mispronounce words, use filler words such as "uhm" or "ah", or add extra pauses. They may also use words from other languages or made-up words and phrases (e.g., a made-up name for a personal music playlist). Furthermore, different speech variabilities such as patterns, dialects, age, gender, or speech impairments can impact ASR performance. For example, speaking faster or slower can have an impact on the acoustic models used for transcriptions (Benzeghiba et al., 2007). Indeed, an apparent challenge for conversational systems is the barrier to recognizing speech from a diverse population (Zue and Glass, 2000). To make information more accessible, conversational systems need to support wide adoption, including by people with cognitive or physical impairments (Baldauf et al., 2018; Derboven et al., 2014). Beyond this, there has been very limited published work on the design of speech-only systems that consider users who are either hard of hearing or vision impaired.

3.5 Summary

This section covered conversational interfaces, results presentation, different kinds of initiative in conversational systems, and interface limitations. We explained some of the essential building blocks for conversational interfaces and presented the central features such as natural language, conversational style, mixed-initiative, and context tracking. This section also provided historical context on existing conversational interfaces (i.e., SDSs, VUIs, live chat support, and chatbots) and their applications. We highlighted the differences between these various types of conversational interfaces with respect to CIS. Since information presentation is key to transferring the found information to the users for CIS, we provided an overview of information presentation research. We described text-only presentations on desktops and small screens, speech-only presentations, and multi-modal presentations. In each part, we highlighted the limited research on how and what to present with interactions in CIS systems. Lastly, we explained key elements regarding initiative in conversations and how different initiatives create different conversational styles, followed by a brief summary of limitations in conversational interfaces.
To conclude, di\ufb00erent interfaces can support people in their information seeking journey di\ufb00erently. Thus, CIS systems can bene\ufb01t from di\ufb00erent interfaces, and models and evaluation methodologies for handling multiple interfaces will be needed. Understanding the impact of interactivity in CIS is crucial to make user-friendly systems. Di\ufb00erent ways of understanding these conversational interactions are discussed in the next section. Draft Version 1.2 \f4 Understanding Conversational Interactions A distinctive property of CIS systems is the ability of the systems to understand multi-turn interactions expressed in the form of natural language. This section discusses the varying approaches to conversational understanding required to follow the conversational structure, with a focus on information seeking conversations. We give an example of a possible CIS conversation taken from the TREC Conversational Assistance Track (CAsT) Year 3 (Dalton et al., 2021) \u2013 topic number 113. This illustrates coreference issues, dependence on previous user and system turns, and explicit feedback with topic-switching behavior. User: How do genes work? System: A gene is a short piece of DNA. Genes tell the body how to build speci\ufb01c proteins. There are about 20,000 genes in each cell of the human body. Together, they make up the blueprint for the human body and how it works. A person\u2019s genetic makeup is called a genotype... Many personal characteristics, such as height, are determined by more than 1 gene. However, some diseases, such as 58 Draft Version 1.2 \f59 sickle cell anemia, can be caused by a change in a single gene. User: What others are caused by a single change? System: The four types of (human) Genetic diseases are: (1) Single-gene/monogenic Genetic Diseases: In this category the starting point is a mutation/change in one gene... Almost 6000 single gene disorders are known and it is estimated that 1 of 200 newborns face a single gene genetic disorder... User: What are the other types of diseases? System: (3) Chromosomal Genetic Diseases: Chromosomes are big DNA molecules composed from genes... Abnormalities in the structure, number (and not only) of the chromosomes can cause some of the most dangerous genetic disorders... User: You missed the second type. What was that? System: A large genetic mistake typically occurs in the woman\u2019s egg, which may partially explain why older women are more likely to have babies with Down syndrome... User: That\u2019s not what I wanted. How about recent developments in gene therapy to treat those defects? System: ... As illustrated above, CIS systems must \u201cunderstand\u201d a natural language utterance and represent the state of an information dialogue as it evolves. Second, they need to track the evolution of that state across turns and relate them to one another including detecting feedback and sub-topic switches. The fundamental mechanisms and representations for conversation understanding vary widely. The architecture may have an implicit internal representation based upon the model\u2019s input (e.g., attention-based models that encode the conversation history). The Draft Version 1.2 \f60 Understanding Conversational Interactions model may also have an explicit representation of the salient state that could include structured representations (entity or schema), lexical (word-based), or vector-based. This section begins by discussing the representation used within a single turn. 
It then discusses conversation modeling and history tracking for information and how this di\ufb00ers from previous work in dialogue state tracking. Next, it covers models of conversational discourse and discusses approaches that identify turn salience, extract contextual keywords, and construct vector representations. It provides an overview of core conversational tracking sub-tasks including (1) entity recognition, (2) query expansion, (3) salient term selection, and (4) conversational query rewriting (CQR). It concludes with a discussion of how these approaches continue to evolve beyond short conversations towards longer and multi-session conversations. 4.1 Modeling within Turn State This subsection introduces the building block for multi-turn conversations \u2014 the representation of the state for a single turn. Because CIS systems operate in an open-domain environment, they do not often use prede\ufb01ned domain state (frame) ontologies. At its most basic level, the state representation includes the utterance text; whether it is typed or from automatic voice transcription. The state representation for a single turn in CIS is contextualized with the history with implicit or explicit relationships between turns and concepts in the conversation. A widely adopted approach to conversational representation uses pretrained language models with contextualized embeddings, particularly Transformer-based models (Vaswani et al., 2017; Ra\ufb00el et al., 2020). These exhibit transfer learning capabilities that allow them to be \ufb01netuned for one or more conversational ranking or question answering (QA) tasks. For conversations, utterances may be encoded separately, compared, and possibly combined in a dense embedding space (Khattab et al., 2021b; Xiong et al., 2021; Prakash et al., 2021). Some early systems use explicit structured annotations of the utterances from the output of an NLP system: part of speech information, dependency parse, semantic frame parsing (e.g., FrameNet (Baker et Draft Version 1.2 \f4.1. Modeling within Turn State 61 al., 1998)), entity recognition and linking, semantic parsing to a logical representation, and others. However, pre-trained language models demonstrate key elements of these NLP pipelines including coreference resolution, entity recognition, and relations (Tenney et al., 2019). As a result, approaches in many leading CIS benchmarks (e.g., CAsT (Dalton et al., 2019) and QuAC (Choi et al., 2018)) do not explicitly use the output from an NLP system, but instead, rely on the models to handle these tasks implicitly. Because of these advances, modern CIS does not often focus on explicit structured state tracking. Widely used CIS datasets do not contain labeled annotations of ground-truth conversational state, except in the form of manually disambiguated utterances to resolve phenomena like coreference. The focus is then on generating these automatically via tasks such as query rewriting. Currently, instead of componentwise evaluation of understanding elements the primary evaluation of e\ufb00ectiveness is primarily on extrinsic e\ufb00ectiveness in the overall end-toend retrieval task. The key di\ufb00erentiating element for CIS compared with singleturn information seeking is the type of interaction and discourse structure. There are various proposed models of conversational structure in the literature. Structure in a conversation builds on the actions of the participants, namely the speech or dialogue acts. 
A common task is \u2018dialogue act recognition\u2019 to label the utterances with the type of interaction (Bunt et al., 2017) (e.g., INFORM, REQUEST, GREETING) that encodes how the current turn relates to previous ones explicitly. The de\ufb01nition of these act types and their usage varies widely. One model developed speci\ufb01cally for CIS by Azzopardi et al. (2018) presents a model of conversational search evolution and includes a taxonomy of the user and system action space. A conversational information need evolves with a single turn being the Current Information Need (CIN), past turns with results as Past Information Needs (PINs), Draft Version 1.2 \f62 Understanding Conversational Interactions and an agent\u2019s model of the information space including a model of varying trajectories with Alternative Information Needs (AINs). The action space includes rich types of both user and system revealment of varying forms. The work of Ren et al. (2021b) re\ufb01ne this with a focus on conversational interaction with existing search engines, including explicit user intents (such as reveal, revise, chit-chat) and system actions (suggest clari\ufb01cations, show results, chit-chat, etc). Many of the current benchmark datasets have simplistic discourse with the user asking questions and the system returning answers of varying types. For example, the widely used QuAC (Reddy et al., 2019) conversational QA dataset contains three categories of dialogue act annotations for each turn, (1) continuation (follow up, maybe follow up, or don\u2019t follow up), (2) a\ufb03rmation (yes, no, or neither), and (3) answerability (answerable or no answer). Later developments to tackle challenges in this area include richer types of user revealment, feedback, and others (Dalton et al., 2021). The action space and intents vary widely according to the task and interface constraints. What makes CIS distinctive is the unique focus on satisfying a user\u2019s information need that may encompass short answers, long answers, and other rich types of interactions. 4.2 Modeling Conversation History and Tracking State Understanding a conversation is primarily concerned with organizing how a series of turns relate to one another. The relationships in CIS di\ufb00er from previous work in search systems in that they often exhibit natural language phenomena that span turns \u2013 coreference (two or more expressions referring to the same thing) and ellipsis (omitting words or topics implied by the context). It also requires handling informal language use and implicature. Dalton et al. (2020b) looked at the use of coreference of varying kinds \u2013 anaphora, zero-anaphora (omission), and others. They \ufb01nd that compared with traditional NLP corpora (such as OntoNotes and CoNLL coreference) conversation information seeking Draft Version 1.2 \f4.2. Modeling Conversation History and Tracking State 63 has a higher rate of ellipsis and zero-anaphora, which are extremely rare in narrative text. More recently, Radlinski et al. (2022b) looked at subjective language more broadly, arguing for di\ufb00erent forms of subjective language requiring di\ufb00erent treatment. Informal conversational phenomena also include interpreting indirect answers in context (Louis et al., 2020). An example is: \u201cWould you like to get some dinner together?\u201d with a reply, \u201cI\u2019d like to try the new Sushi place.\u201d, which is an implicit a\ufb03rmative that indirectly implies an answer. 
For voice-based applications, systems must also handle noise arising from disfluency removal and the noisy channel of speech-to-text transcription (Hassan Awadallah et al., 2015).

The use of an explicit structured state is widely adopted by task-oriented dialogue systems. Frame-based approaches model the dialogue state with structured domain-specific schemas that have intents (actions) and typed slots with values. Keeping track of this evolving state is a standard task, Dialogue State Tracking (DST), with long-running benchmarks in the Dialogue State Technology Challenge (DSTC) (Williams et al., 2016). These systems often support a fixed number of pre-defined domains with schemas; the widely used MultiWOZ dataset (Budzianowski et al., 2018) has an ontology with twenty-five slots spanning seven domains. The largest, the Schema-Guided Dialogue dataset (Rastogi et al., 2020), contains sixteen domains with an average of five intents per domain and 214 slots (2.5 per intent on average). In contrast to task-oriented dialogue systems, most CIS systems do not have pre-defined domains, intents, or slot representations. One exception to this is a proposed frame-like model that builds a structured representation (SR) of a turn with context entities, question entities, predicates, and expected answer types (Christmann et al., 2022). Unlike the structured schemas from DST, these are loosely defined text values. The state of SRs evolves through a conversational flow graph.

4.3 Modeling Conversation Discourse

There are varying approaches to modeling the evolution of a conversational topic across turns. These leverage the natural language phenomena used in conversational dialogue. Automatic approaches look for topic shifts based on changes in coreferent mentions, shared noun phrases, and common patterns (Mele et al., 2020). The realism of the conversational discourse varies widely among the conversational corpora based on their creation methodology. The TREC CAsT topics are inspired by informational sessions in web search (Dalton et al., 2020b) but are also engineered to be challenging for trivial reformulation systems. Other datasets such as SQuAD and CoQA are derived from artificially created information needs. The widely used QuAC dataset is limited to discussing an information need about a single person, with a bias towards entertainers (Choi et al., 2018). The result is that the discourse of conversations varies based on the type of information, the topic being discussed, the user task, and the modalities supported for interaction. Most of the aforementioned datasets, including TREC CAsT (2019), assume that the sequence of questions is fixed and is independent of the system's response, which is different from real interactions. Further, they assume that the only action the system can take is answering the questions and do not support mixed-initiative interactions where the system may take other actions. This is changing, with increased result dependence in CAsT 2021 (Dalton et al., 2021) and mixed-initiative sub-tasks in 2022 (Owoicho et al., 2022). It represents part of the larger trend towards greater dependence on previous system responses as well as richer types of system responses.
In the following subsections, we discuss the evolution of approaches to modeling conversational history, including how it is represented. We then break down history understanding sub-tasks and discuss each. Finally, we conclude by looking towards the evolution of conversations to longer, more complex tasks and information needs.

4.3.1 History Models

Modeling the conversation history requires determining relevant parts of the history, how the history is encoded, and how the encoding is leveraged by the model. Approaches to modeling state across turns vary. A simple and widely used approach to modeling history is the heuristic of concatenating the last k turns. The approaches vary in the length and type of context appended. One example of this approach uses approximately the previous two turns (Ohsugi et al., 2019) – the previous user utterances, system responses, or one of those two. For conversational QA, a popular approach is to only append the previous answer as context (Choi et al., 2018). Similar heuristics that append the first turn and previous turn(s) of a conversation were also used in the first year of TREC CAsT (Dalton et al., 2019).

The most important feature in modeling history is the positional relationship between turns, which captures common patterns of conversational discourse. In particular, in current datasets, most references refer to immediate or short-term contexts (Chiang et al., 2020). Qu et al. (2019b) append encodings of the history but do not model position explicitly. Multiple threads of work improved on this by adding the relative position of previous answers (Qu et al., 2019c; Chen et al., 2021a). Beyond position, Qu et al. (2019c) add a History Attention Module that takes the encoded representations of sequences or tokens and learns the importance of the representations to the current answer. Analysis shows that models appear to rely heavily on positional understanding more than on textual semantic relationships (Chiang et al., 2020). A challenge for history models is that many of the existing benchmarks only exhibit simple discourse with a strong local positional bias. As shown in CAsT, most dependencies are local, on directly preceding turns (Dalton et al., 2020a). This is evolving as CIS systems become more capable, with non-local dependencies increasing from 12% of the turns in CAsT 2020 to 22% in CAsT 2021.

Improved modeling of conversation history is an area for future work, particularly for long and complex conversations where appending short-term history does not adequately model the discourse. Behavioral analyses of existing models show that they rely heavily on short-term distance cues rather than deeper understanding.

4.3.2 History Representation

As mentioned above, a simple approach to model representation is to concatenate relevant turns of the history in the order they appear. This creates an explicit text-based representation for downstream tasks, including query expansion and rewriting. Alternatively, this may be performed implicitly through the latent state of a sequence model. Recurrent networks (such as LSTMs (Hochreiter and Schmidhuber, 1997)) encode long-term conversation history dependencies via latent hidden states (Yang et al., 2017). More recent neural language models based on Transformer architectures (Vaswani et al., 2017), e.g., BERT (Devlin et al., 2019), use attention to latently encode relationships.
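To make the last-k heuristic concrete, the sketch below shows one minimal way of building such an explicit text-based representation from the most recent turns. The Turn structure, the choice of k, and the function name are hypothetical and added only for illustration; they do not correspond to a specific published system.

from dataclasses import dataclass
from typing import List

@dataclass
class Turn:
    speaker: str  # "user" or "system"
    text: str

def contextualize_query(history: List[Turn], current_query: str, k: int = 2) -> str:
    """Append the last k turns (user and system) before the current query."""
    recent = history[-k:] if k > 0 else []
    context = " ".join(turn.text for turn in recent)
    return f"{context} {current_query}".strip()

history = [
    Turn("user", "How do genes work?"),
    Turn("system", "A gene is a short piece of DNA. Genes tell the body how to build specific proteins."),
]
# The contextualized query can then be fed to a ranker, rewriter, or reader.
print(contextualize_query(history, "What others are caused by a single change?"))

In practice, the concatenated turns are usually marked so that the model can distinguish user from system contributions; how such turn boundaries are encoded is discussed next.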
A key consideration is how to encode turn structure (for example using separators) to indicate boundaries between previous user and system responses (Reddy et al., 2019; Qu et al., 2020). This may also be done in the model as in Qu et al. (2019b) using a separate embedding indicator to determine if an utterance is a part of a question (user) or answer (system). Chiang et al. (2020) use a special word embedding indicator if a token is used in a previous answer. Gekhman et al. (2022) extend the separator approach by modifying the input with prompt-based separators with positions. They study and compare this approach that modi\ufb01es the input text with symbols compared with the other widely used approaches that modify the embedding layer for conversational question answering. They \ufb01nd that the simple prompt-based approach is more e\ufb00ective with new language models. Another key di\ufb00erence from previous work is that they append the answers in most recent \ufb01rst order in addition to explicit prompt labels for order. Future work might explore the impact of these positional Draft Version 1.2 \f4.4. Conversational Language Understanding Tasks 67 modeling decisions further. A separate vein of research creates an explicit history model with a mechanism to integrate the representation. The FlowQA approach (Huang et al., 2019) introduces the concept of Flow which generates an explicit latent representation of the previous context. Modeling the conversation was subsequently evolved in the context of graph networks to model the \ufb02ow of information as a graph using recurrent neural networks (Chen et al., 2021a). Yeh and Chen (2019) extended this to a Transformer architecture and makes the context \ufb02ow dependence explicit. Following recent trends in retrieval approaches, the adoption of approximate nearest neighbor search applied to learned dense representations, also known as dense retrieval, and/or sparse representations resulted in a signi\ufb01cant shift. In these representations, the query and history are combined into one or more vectors issued as queries to a dense retrieval system. Yu et al. (2021) encoded the history representation with a dense vector that is learned with a teacher-student model to mimic a dense representation of the manually rewritten query. The model for multiple turns uses composition with dense retrieval approaches similar to those in multi-hop QA (Khattab et al., 2021a), but applied to a conversational context. The results from Dalton et al. (2021) include these as a baseline, and they are widely adopted by many of the top-performing teams in TREC CAsT (Owoicho et al., 2022). Most of the CIS systems, although still using a dense vector representation adopt simplistic history heuristics to create one or more representations that may also leverage words combined via fusion. Further, to maximize e\ufb00ectiveness most current models require an explicit text representation for further reranking (as discussed in the next section), although this is starting to change with e\ufb00ective learned sparse representations like Splade (Formal et al., 2021) being used in CAsT \u201922 (Owoicho et al., 2022). 4.4 Conversational Language Understanding Tasks Given a representation of conversation a key consideration is how to use elements of the history in the current turn to retrieve relevant Draft Version 1.2 \f68 Understanding Conversational Interactions information. There are varying and complementary approaches to address this problem. 
The tasks include unsupervised or supervised query expansion, generative query rewriting, identifying and tracking concepts and entities, identifying salient turns, and extractive or abstractive summarization. 4.4.1 Turn Salience This task involves explicitly modeling the relationship of turns in a dialogue to determine their relevance and relationship to the current turn. The CAsTUR dataset created by Aliannejadi et al. (2020) adds turn salience data to the TREC CAsT 2019 dataset (Dalton et al., 2019). The authors performed a detailed analysis of the dependencies. The resulting relation labels were used to train classi\ufb01ers of turn salience (Kumar and Callan, 2020). We note that subsequent iterations of CAsT in 2020 and 2021 (Dalton et al., 2020a) include explicit dependence annotation labels by the topic creators with labels on the dependence on previous user utterances as well as previous results. The content of relevant turns can be used directly for multiple tasks including expansion and rewriting. 4.4.2 Query Expansion In this section we discuss Conversational Query Expansion (CQE) including both unsupervised and supervised approaches. These augment the representation of the current turn with additional information from previous turns using a form of pseudo-relevance feedback (Yang et al., 2019; Mele et al., 2020; Hashemi et al., 2020). 4.4.2.1 Unsupervised Approaches Work in this area started with heuristic approaches and unsupervised models. In TREC CAsT a simple heuristic baseline expansion approach was to expand with the \ufb01rst and previous turn in the conversation (Clarke, 2019). These turns often represent an overall topic and the most recent (and therefore likely relevant) previous information. A mixture of feedback models (Diaz and Metzler, 2006) can be used to Draft Version 1.2 \f4.4. Conversational Language Understanding Tasks 69 combine feedback models across turns. However, these simple approaches are less e\ufb00ective when there is a sub-topic shift or there are non-relevant turns. The expansion unit used varies, with some only using previous user turns and others using both user turns and system turns. The HQExp model proposed by Yang et al. (2019) does both and leverages the combination of scores from a BERT model across past turns. This is an important model because it uses rules, but includes a model of topic shifts as well as query performance prediction. Going beyond individual turns, some expansion approaches build a model explicit graphs and word networks that evolve. The Conversational Reasoning Over Word Networks (CROWN) (Kaiser et al., 2020) model is an unsupervised method for propagating relationships across turns based upon a network of words related by mutual information. 4.4.2.2 Supervised Approaches Later work framed the task of expansion as a summarization task \u2013 extractive or abstractive. These use supervised models to select or generate terms for use in query expansion. The Query Resolution by Term Classi\ufb01cation (QuReTeC) model proposed by Voskarides et al. (2020) models the task as a binary term classi\ufb01cation, e\ufb00ectively performing term-level extractive summarization. In parallel and similar work, the Conversational Term Selection (CVT) method by Kumar and Callan (2020) frames the problem as a term extraction task but further applies the same extraction to pseudo-relevant results. 
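Both QuReTeC and CVT reduce expansion to a per-term decision about what from the history (or pseudo-relevant feedback) should be added to the current query. The sketch below illustrates only that term-selection step; the term_scorer argument is a stand-in for a trained binary classifier (e.g., a BERT-based term tagger), and the toy scorer shown is purely for demonstration:

from typing import Callable, List

def expand_with_history_terms(current_query: str, history_turns: List[str],
                              term_scorer: Callable[[str, str], float],
                              threshold: float = 0.5) -> str:
    """Classify each history term as relevant to the current turn or not,
    and append the selected terms to the query (term-level extractive expansion)."""
    seen = set(current_query.lower().split())
    selected = []
    for turn in history_turns:
        for raw in turn.split():
            term = raw.strip(".,?!")
            if term and term.lower() not in seen and term_scorer(term, current_query) >= threshold:
                selected.append(term)
                seen.add(term.lower())
    return current_query + " " + " ".join(selected) if selected else current_query

# Toy scorer: keep capitalized history terms as a crude proxy for salient entities.
toy_scorer = lambda term, query: 1.0 if term.istitle() else 0.0
history = ["Tell me about the Bauhaus movement.",
           "It was a German art school founded by Walter Gropius."]
print(expand_with_history_terms("Where was it located?", history, toy_scorer))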
These methods extend previous methods that extract key concepts from long verbose queries for web search (Bendersky and Croft, 2008) to a conversational language understanding task. The overall utility of both unsupervised and supervised expansion approaches is mixed, with many of the expansion approaches being outperformed by rewriting approaches (Dalton et al., 2019; Dalton et al., 2020a), but turn and term salience is often complementary and a key part of an overall end-to-end e\ufb00ective system. Draft Version 1.2 \f70 Understanding Conversational Interactions 4.4.3 Conversational Query Rewriting Given a query and a dialogue history, the goal of Conversational Query Rewriting (CQR) is to generate a new query that contains the relevant context needed to rank relevant content in a single unambiguous representation. In a pipeline system with multiple passes of retrieval, this step is critical because it determines the e\ufb00ectiveness of both candidate passage retrieval as well as subsequent re-ranking. A widely adopted approach is to model the task as a sequenceto-sequence task (Sutskever et al., 2014). The task-oriented dialogue systems community used pointer-generator networks and multi-task learning to rewrite turns but they are limited to a handful of task domains (Rastogi et al., 2019). This approach rapidly evolved with pretrained language models based on Transformer architectures (Vaswani et al., 2017) and with evaluations on a Chinese dialogue dataset (Su et al., 2019). They showed that these architectures implicitly solve coreference resolution more e\ufb00ectively for the target task than previous state-of-the-art coreference models. Subsequent work by Vakulenko et al. (2020) in the TREC CAsT 2019 benchmark (Dalton et al., 2019) demonstrated the e\ufb00ectiveness of pre-trained models based on GPT-2, resulting in one of the top three best-performing automatic runs in that year. Subsequent work showed the model can generalize with relatively few examples, particularly when combined with weak supervision based on rules to handle omission and coreference (Yu et al., 2020). Improvements continued to evolve by training the models with additional data spanning both CAsT and conversational QA datasets (Elgohary et al., 2019; Vakulenko et al., 2020). Improvements in this area continue with newer generations of sequence-to-sequence models (e.g., T5 (Ra\ufb00el et al., 2020)) based on larger corpora, increased model size, and re\ufb01ned objectives. Additionally, recent work from the dialogue community (Henderson et al., 2020; Mehri et al., 2020) demonstrated that pre-training and \ufb01ne-tuning on conversational data provides signi\ufb01cant gains for both task-oriented and chit-chat models over models pre-trained on general corpora only. It is common to train on public large-scale social media data, such Draft Version 1.2 \f4.4. Conversational Language Understanding Tasks 71 as Reddit (heavily \ufb01ltered), because of its size and diverse informal language (Henderson et al., 2020; Roller et al., 2021). CQR remains a active research area because high quality rewrites are more e\ufb00ective with current neural rankers trained on web search. There remains a signi\ufb01cant gap, 20-40% in CAsT between manual and automatic queries (Dalton et al., 2021; Owoicho et al., 2022). An area for future work in these models is to handle the rich forms of discourse, like user clari\ufb01cation or feedback. 
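As a concrete illustration of the sequence-to-sequence formulation, the sketch below uses an off-the-shelf T5 checkpoint from the Hugging Face transformers library; in practice the model would first be fine-tuned on rewriting data such as CANARD or QReCC, and the input format (history and current turn joined with a separator) is an assumption rather than a fixed convention:

# Minimal sketch of conversational query rewriting with T5 (assumes `transformers`).
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-base")            # generic checkpoint; fine-tuning required
model = T5ForConditionalGeneration.from_pretrained("t5-base")

def rewrite(history, current_query, max_new_tokens=64):
    """Concatenate the history and the current turn, then generate a
    self-contained rewrite of the current turn."""
    source = " ||| ".join(history + [current_query])          # separator is an arbitrary choice
    inputs = tokenizer(source, return_tensors="pt", truncation=True, max_length=512)
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens, num_beams=4)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

history = ["Who wrote The Old Man and the Sea?", "Ernest Hemingway wrote it."]
print(rewrite(history, "When was he born?"))   # ideally: "When was Ernest Hemingway born?"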
CQR models track conversation state implicitly resulting in resolving ambiguity and missing information in a generative approach. In contrast, an alternative model is to explicitly detect and track concepts as they evolve in a conversation. 4.4.4 Entity Detection and Linking Tracking the evolution of concepts and entities used in a conversation explicitly is the task of Conversational Entity Detection and Linking. This includes tracking coreferent mentions, but also other forms of concept evolution. Due to the informal and personal nature of conversational discourse this task can be quite challenging. Work on tracking entities across turns \ufb01rst appears in multi-turn factoid QA at TREC 2004 (Voorhees, 2004). This evolved with the introduction of \u2018related\u2019 questions that included anaphora and TREC 2005 with dependence on previous factoid responses (Voorhees, 2005). A related line of research uses search session history to improve named entity recognition e\ufb00ectiveness in queries (Du et al., 2010). Approaches grounded in concepts and entities were widely used by Alexa Prize socialbot systems (Ram et al., 2018) that allowed them to track topics and shifts across turns in similar ways as CIS systems. A key consideration for these systems is that they need to be able to identify general concepts, commonly referred to as Wiki\ufb01cation (Cucerzan, 2007). Joko et al. (2021) studied the e\ufb00ect of entity linking on conversational queries and found that existing linking systems perform signi\ufb01cantly worse on conversations than on other types of text. In followup work, they extend the REL entity linking system to conversations creating CREL (Joko and Hasibi, 2022). To evaluate the model they Draft Version 1.2 \f72 Understanding Conversational Interactions create ConEL-2, an extension of the Wizard-of-Wikipedia dialogue dataset to add annotations of personal entity mentions and links. As systems and benchmarks evolve, we expect the importance of this area to grow and to address issues like personal bias (Gerritse et al., 2020). 4.5 Long and Multi-Session Conversations Many of the existing approaches discussed in the previous focus on a single relatively short conversation. For example, years one and two of TREC CAsT averaged between 8-10 turns (Dalton et al., 2019; Dalton et al., 2020a), QuAC has fewer, with approximately 7 turns (Choi et al., 2018). A common heuristic based on the observation that many dependencies are local is to use only the three previous turns (Mehri et al., 2020). As conversations become longer, simple methods for modeling conversations break down. While there are new model variants for longer sequences e.g., Conformer (Mitra et al., 2021) and Longformer (Beltagy et al., 2020), many widely used neural models, including those used for conversational query rewriting or term salience prediction are only capable of encoding a limited (short) context of a few hundred tokens. To address this, approaches that select conversational context to use, a task referred to as sentence selection in a dialogue (Dinan et al., 2019a). 4.5.1 Long Answer Dependence Another dimension of modeling conversations is understanding long responses. Much of the previous related work focused on tracking and reformulation mostly based on previous utterances (queries) with only limited result interaction (Dalton et al., 2019). The structure of previous conversational QA tasks had limited reliance on immediate previous results (Choi et al., 2018; Voorhees, 2005). 
This is because the responses given by the system are short factoid responses. In contrast to prior work on ConvQA with factoid responses, the broader scope of CIS systems has richer questions that Draft Version 1.2 \f4.5. Long and Multi-Session Conversations 73 require varied length responses. These may be one or more passages, a document, a multi-source summary, or even an entire search engine results page. These long answers make the overall conversational history much longer than typical chit-chat dialogues. Interactions with long results in later turns make language understanding signi\ufb01cantly more challenging for CIS systems. They need to be able to understand references across a longer distance in more complex discourse. This remains a challenging area, where current neural approaches for conversation understanding struggle (Dalton et al., 2021). 4.5.2 Turn Retrieval and Recommendation Similar to previously discussed work on turn salience, an alternative approach is to model \ufb01nding relevant information from previous history as a ranking rather than classi\ufb01cation task. Previous turns and responses are ranked for relevance to the current turn using the same sparse, dense, or neural ranking models (Humeau et al., 2020) used in response ranking. The evidence from previous turns may be encoded independently (or concatenated) (Izacard and Grave, 2021) or fused (Xiong et al., 2020) before being used in the target generative task. Blenderbot from Xu et al. (2022) retrieve turns from past conversational sessions for additional historical context. The model retrieves sessions as a document and uses these as the context in the generation process. There are also clear connections to classic recommendation tasks here. Recommender systems often encode rich long-term sequences of interactions (which may be considered a \u201cconversation\u201d) in a user model that is meant to summarize this sequence of interactions. Recent work has advocated representing such knowledge about users\u2019 needs in natural language (Radlinski et al., 2022a). Finally, a possible area for future work might be to create summaries of turns or conversations, similar existing work on text compression (Rae et al., 2020). Draft Version 1.2 \f74 Understanding Conversational Interactions 4.6 Summary This section reviewed conversational state-tracking approaches and models. We examined the fundamentals of modeling intra-turn states including vector representations, entity representations, and discourse classi\ufb01cation. We discussed approaches to model conversational history and di\ufb00erentiating features of conversational search as contrasted with voice search or traditional text narratives, with a key di\ufb00erentiating feature being the wider use of implied context including indirect answers, zero-anaphora, and ellipsis. We discussed long-range history models with many current approaches using a static window of context (last few turns) as well as dynamic turn salience, or attention-based models. Within this history, we examined key sub-tasks: entity recognition and linking, query expansion, query summarization, and query rewriting. The best-performing approach leverages multiple diverse techniques: rewriting, expansion, and reranking in a multi-stage pipeline (Lin et al., 2020b). An approach based upon both early and late fusion of multiple expansions and rewrites across both retrieval and reranking is currently the most e\ufb00ective (Lin et al., 2020b; Lin et al., 2021b). 
This indicates an opportunity for more uni\ufb01ed approaches combining the di\ufb00erent sub-components. Overall, this section suggested that notably challenging areas in understanding conversational interactions include result dependence on long responses as well as modeling long conversations, possibly spanning multiple sessions. Draft Version 1.2 \f5 Response Ranking and Generation In this section, we discuss response ranking and generation used in conversational information seeking. The task of response ranking is selecting the relevant information item(s) for a turn in the conversation from the knowledge available to a conversational system. The types of methods are often categorized based on the type of conversational response provided: short answer (QA), longer single passage or document, automatically generated responses from extractive or abstractive summarization, and structured entities (products, restaurants, locations, movies, books, etc). The evolution of ranking and generation is heavily in\ufb02uenced by the publicly available resources in this area. Early work in this area evolved existing QA datasets and models towards ones that include context. This includes single-turn QA or asynchronous discussions from Community Question Answering (CQA) on data including Reddit, StackExchange (Penha et al., 2019), and Yahoo! Answers (Hashemi et al., 2019). But going beyond context, conversational approaches evolve this towards interactive chat-like discussions that use di\ufb00erent types of language patterns. 75 Draft Version 1.2 \f76 Response Ranking and Generation 5.1 Short Answer Selection and Generation This section covers an overview of Conversational QA (ConvQA) approaches, also referred to as Conversational Machine Comprehension in the NLP community. ConvQA often assumes that the question in each turn is answerable by a span of text within a particular passage (from a conversational retrieval model) and selects one or more spans of text from the passages. We begin by discussing traditional models, then more recent neural approaches, and end with recent work combining elements of retrieval and selection with end-to-end approaches. The evolution of ConvQA follows advances in QA and machine comprehension. The adoption of deep neural models brought new interest in the task and approaches. They are the building blocks for later ConvQA models. Early models are extractive and select one or more spans of text as the answer(s). These models have evolved to use generative sequence-to-sequence models. 5.1.1 Early Conversational QA Models Early ConvQA models started in the TREC 2004 QA Track (Voorhees, 2004; Voorhees, 2005) with questions grouped into di\ufb00erent series related to a single target entity (or event). Each question asks for more information about the target. This requires models to use previous questions in the sequence, mainly the \ufb01rst with the target. Unlike a dialogue or conversation, the questions did not mention answers (responses) from previous questions in the series, resulting in a limited discourse structure. E\ufb00ective models (Harabagiu et al., 2005) use straightforward and rule-based models, the response ranking methods did not leverage the multi-turn nature of the series. A building block for later ConvQA models is extractive neural models for single-turn QA. Notable models include DrQA (Chen et al., 2017) and BiDAF (Seo et al., 2017) that use Recurrent Neural Networks \u2013 speci\ufb01cally bidirectional long short-term memory networks (Bi-LSTMs). 
The BiDAF++ QA model (Peters et al., 2018) includes self-attention and the use of pre-trained contextualized word vectors (ELMo). Later, Pointer Generator Networks (See et al., 2017) extended these by supporting copying spans from an input context in the decoder. These models and related datasets are extractive QA and do not focus significantly on ranking the input text. They are also not conversational, although as we discuss next they were later adapted to encode conversational context. The shift from QA to ConvQA for these models required the development of new benchmark datasets. The Question Answering in Context (QuAC) dataset (Choi et al., 2018) is one of the early ones. The baseline model on the dataset was BiDAF++, the state-of-the-art QA model at the time. To adapt it for ConvQA, the conversational history was appended (as described in Section 4) and was referred to as 'BiDAF++ with k-Context'. This model appends the previous k (1-3) answers (their contextual embeddings) as context, along with the question turn number. We note that the QuAC dataset is limited to people entities, with a particular emphasis on entertainment (see the datasheet description of QuAC for details, including details of bias: https://quac.ai/datasheet.pdf). Concurrent with QuAC, the CoQA benchmark (Reddy et al., 2019) was released with similar goals. Because of its crowdsourcing task setup, the extracts are shorter (2.7 words vs over 15 for QuAC). It includes conversational questions from seven diverse domains: children's stories, literature, middle and high school English exams, news, Wikipedia, Reddit, and science. The CoQA baseline models were also similarly single-turn QA models adapted for conversation. They used BiDAF++ w/ k-Context. They also extended the DrQA model (Chen et al., 2017) by including context history markers to separate turns, which outperforms the BiDAF model variants. These datasets and models are important because they represent the first steps towards a large-scale evaluation of ConvQA systems with models simply adapted from previous QA systems. One of the first steps towards new models explicitly designed for conversation is models that incorporate Flow (FlowQA) (Huang et al., 2019) to model the conversational dialogue. Instead of appending history with a marker, they introduce a method that provides the model access to the full latent state used to answer the previous questions. This is a stack of two recurrent networks: one for each turn and one across turns. Their first model uses Bi-LSTMs to encode each turn and then processes each full turn representation linked with GRUs (for efficiency reasons). This represents a significant advancement over previous models that were extended to other types of networks including Transformers, discussed next. 5.1.2 Conversational QA with Transformers The introduction of pre-trained language models based on the Transformer architecture that support transfer learning represents a significant shift for ConvQA systems. This subsection describes this evolution in approaches and the challenges with these models. Following the early success of Transformer-based models, such as BERT (Devlin et al., 2019), in QA tasks, these models were applied to ConvQA and yielded similarly impressive improvements.
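Before looking at these Transformer-based models in detail, the following sketch shows the simplest way history is typically injected into such a reader: the previous k question-answer pairs are appended to the current question with marker strings before the (question, passage) pair is encoded. The markers and k here are illustrative placeholders, not tokens from any released model:

def build_convqa_input(passage, question, history, k=2, q_mark="[Q]", a_mark="[A]"):
    """Append the previous k question-answer pairs (oldest first) to the current
    question, mirroring the 'append history with separators' pattern used by
    early Transformer ConvQA readers."""
    context = []
    for q, a in history[-k:]:
        context.append(f"{q_mark} {q} {a_mark} {a}")
    context.append(f"{q_mark} {question}")
    # A BERT-style reader would then encode this as the (question-with-history, passage) pair.
    return " ".join(context), passage

history = [("Where was Marie Curie born?", "Warsaw"),
           ("When did she move to Paris?", "1891")]
query_side, passage_side = build_convqa_input("...passage text...",
                                              "What did she study there?", history)
print(query_side)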
In many cases, the early work using Transformer approaches simply appended previous turn context with separators, similar to the previous extensions of BiDAF and DrQA. However, results show this has significant limitations. Naive approaches that append answer context degrade faster because of the limits on the sequence input length (Qu et al., 2019b). To overcome these issues, Qu et al. (2019b) proposed the History Answer Embedding (HAE) model that uses BERT for ConvQA while modifying the representation to explicitly encode whether parts of the input are present in the previous history. On QuAC they found that this model outperforms BERT models that naively append the question or answer history, and is also more robust to appending longer conversations. In a different thread, Yeh and Chen (2019) introduced the FlowDelta model that extends the previously discussed Flow model to use BERT for encoding, as well as changing the Flow loss to focus on the difference in Flow (Delta) across turns. They found that the proposed FlowDelta outperforms the previous Flow and BERT-based models. A long-standing top-performing system on the CoQA leaderboard is RoBERTa+AT+KD (Ju et al., 2019), an extractive model using a RoBERTa language model in combination with Adversarial Training (AT), which perturbs the contextual embedding layer, and Knowledge Distillation (KD) using a student-teacher setup. It ensembles nine models and has a post-processing step for the multiple-choice questions to match extracted spans to the target answer. Beyond the leaderboard, Staliūnaitė and Iacobacci (2020) studied the behavior of BERT- and RoBERTa-based models on CoQA. They found that the key gain of RoBERTa over the base BERT model is a better lexical representation; however, it does not capture more of the fundamental linguistic properties in ConvQA. To address these issues, they tested incorporating varying types of linguistic relationships in a multi-task model and combined the models in an ensemble. They found that incorporating the linguistic structure outperforms the base models. This indicates that the base representation of the language model is important for effectiveness and that there is an opportunity for models that incorporate more linguistic and conversational discourse structure. Note that the behavior of the current models for response ranking and generation in CIS is constrained by issues with current datasets and task formulation. For example, an issue highlighted by Mandya et al. (2020) is exposure bias: CoQA systems use gold answer labels for previous turns at both training and test time. As a result, CoQA evaluation sometimes overestimates the effectiveness of systems that have to rely on noisy previous predictions rather than human-written gold responses. They find this particularly problematic for longer conversations and longer questions. As discussed later, there is a similar phenomenon for conversational retrieval systems that perform conversational query rewriting: systems that use manual query rewrites instead of predicted ones for earlier turns overestimate their effectiveness (Gemmell and Dalton, 2020). The ConvQA models and datasets (QuAC and CoQA) use a short pre-defined narrative of 200-400 tokens with the conversation focusing on one passage.
As a result, the previously discussed ConvQA systems work well for extracting information from short passages with conversations grounded in a single paragraph. Further, because of the way they were constructed, the headroom for generative models is very limited, approximately 5% on CoQA (Mandya et al., 2020). The next subsection covers more realistic models that include the retrieval of passages in the QA process. 5.1.3 Open Retrieval Conversational QA This subsection discusses approaches that incorporate retrieval into the ConvQA task. This is referred to as open retrieval ConvQA (OR-ConvQA) or end-to-end ConvQA. The distinguishing feature of these models is that they operate on a large corpus of passage content and rank the passages used in the conversation. A common architecture for these systems is that they consist of two components: a Retriever and a Reader. The Retriever takes the conversational context and uses it to identify candidate passages. The Reader takes the context and candidates (text) and produces an answer. The base retrieval systems are effectively the conversational passage retrieval long answer systems discussed below in Section 5.2, combined with a QA reader model to extract or generate the answer. A key challenge is that the existing ConvQA benchmarks are not designed for open retrieval QA and that current conversational passage retrieval benchmarks do not have short answer annotations. As a result, recent efforts (Qu et al., 2020; Gao et al., 2021b; Ren et al., 2021a) adapted and extended the datasets to bridge this gap. The first of these, by Qu et al. (2020), extended QuAC to incorporate passage retrieval over Wikipedia, creating the OR-QuAC dataset. To do this, a synthetic query representing the information need is created by providing the Wikipedia title and first paragraph with the initial question, which is rewritten to be unambiguous. Recent developments in dense retrieval are also being applied to OR-ConvQA. Qu et al. (2020) performed retrieval using a dot product of a query history representation (previous k queries) and a passage, based upon learned query and passage encodings using ALBERT (Lan et al., 2020), a lite BERT representation. One of the novel contributions is the multi-task training objective where the retriever, a BERT-based cross-attention reranker, and a BERT-based reader are trained concurrently to avoid issues of error propagation. Another contribution is that it uses a distribution over candidate answers. One potential issue is that golden query rewrites are used for training the model rather than employing a noisy query rewriter. This approach was further extended and improved upon by leveraging distant supervision to handle free-form responses more effectively (Qu et al., 2021). One of the large-scale efforts in OR-ConvQA is the development of the Question Rewriting in Conversational Context dataset (Anantha et al., 2021). For a baseline, they used a BERTserini passage retriever combined with a BERT-large reader model. They found that a key factor in the success of reader models that leverage retrieval is incorporating the passage relevance score into the reader model (Anantha et al., 2021). Recent results by Del Tredici et al. (2021) demonstrate that different representations should be used for the retrieving and reading models.
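A schematic of the Retriever-Reader architecture underlying these OR-ConvQA systems is sketched below; encode_query and read stand in for learned components (e.g., a dual encoder over the history-augmented question and a span-extraction reader), and the dot-product scoring mirrors the dense retrieval step described above:

import numpy as np

def dense_retrieve(query_vec, passage_matrix, k=5):
    """Score every pre-encoded passage by dot product with the history-aware
    query vector and return the indices and scores of the top-k passages."""
    scores = passage_matrix @ query_vec
    top = np.argsort(-scores)[:k]
    return top, scores[top]

def answer(question_with_history, passages, passage_matrix, encode_query, read):
    """Minimal Retriever-Reader loop for open-retrieval ConvQA; encode_query and
    read are stand-ins for trained encoder and reader models."""
    qv = encode_query(question_with_history)
    top_idx, top_scores = dense_retrieve(qv, passage_matrix)
    candidates = [passages[i] for i in top_idx]
    # Folding the retrieval scores into the reader has been reported to help.
    return read(question_with_history, candidates, top_scores)

rng = np.random.default_rng(0)
P = rng.normal(size=(1000, 64))                      # 1,000 pre-encoded passages
print(dense_retrieve(rng.normal(size=64), P, k=3))   # toy retrieval call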
One missing aspect is that these existing models and datasets did not exhibit topic switching. The TopiOCQA dataset is a large-scale OR-ConvQA dataset (Adlakha et al., 2021) that includes topic-switching behavior. They start with seed questions from the Natural Questions QA dataset and traverse topics in Wikipedia. A leading approach on this dataset is a variation of Fusion-in-Decoder (Izacard and Grave, 2021) extended to dialogue (Wu et al., 2021). Recently, mirroring a trend in QA, there is increased attention to ConvQA over heterogeneous sources that combine text, tables, and entity KGs. Christmann et al. (2022) propose a new heterogeneous conversational dataset (ConvMix) and a pipeline called CONVINSE to perform the task. One key difference in their proposed approach is a variant of conversational rewriting that, instead of predicting a natural language utterance, generates a frame-like "intent-explicit structured representation" (SR) whose nodes and sub-graphs are connected across turns in a graph. 5.1.4 Response Generation for Conversational QA Recent trends in QA increasingly focus on generative sequence-to-sequence models. These models are used to (1) perform generation and put the answer in the conversational context, and (2) make the model more effective by generating responses from retrieved passages and past sessions. The first type focuses on the conversational natural language generation of answers, putting the answer into the natural conversational context and focusing on fluency. They follow the pattern of retrieve and refine (Weston et al., 2018). The initial results, retrieved from document collections or previous conversation responses, are used as the context that is refined during generation. The refining process connects the answer to the dialogue and puts it into a natural language response form. An example of this is AnswerBART (Peshterliev et al., 2021), which provides an end-to-end model that performs answer ranking and generation, and abstains from answering when there is no answer. A novelty of this model is that it jointly learns passage reranking with the extraction task. A variation of this is treating generation as a ranking task. Baheti et al. (2020) used syntactic patterns and templates to generate multiple candidate responses. This was combined with a GPT-2 based model that was pre-trained on Reddit conversations. These models focus primarily on fluency and putting the answer in context. The ability to incorporate long-term memory from past sessions is important for CIS systems. The work from Shuster et al. (2021) extended Lewis et al. (2020) by incorporating the turn structure for knowledge-grounded conversations, and they found this reduces model hallucination (i.e., producing factually invalid information) and results in a model that generalizes more effectively. Going beyond this, the work from Xu et al. (2022) extended the retrieval aspect to incorporate retrieval from past conversational sessions. The model retrieves sessions as a document and uses these as the context in the generation process. One limitation of many of these ConvQA approaches is that because the answers are short (even if they are put into a natural language utterance), they are usually simple factoid responses.
As a result, the level of discussion in the conversation does not discuss aspects of the response and the ability to reference previous results in follow-up parts of the conversation is limited. The Question Rewriting in Conversational Context (QReCC) dataset from Anantha et al. (2021) is noteworthy because approximately 25% of the answers are not simple extractions, but are human-generated paraphrases, possibly of multiple passages. Systems with these types of responses continue to evolve and represent Draft Version 1.2 \f5.1. Short Answer Selection and Generation 83 an area for further work. This section covered multiple threads of the evolution of these systems to use Transformer and attention-based architectures for ConvQA. They focus on improving the contextualized encoding (BERT vs RoBERTa), multi-task learning of discourse or token importance, stacking networks to capture cross-turn relationships, and approaches to make the models more robust using adversarial training and data augmentation. Recent work by Kim et al. (2021) brought together generative conversational query rewriting using T5 in the QA process and showed that it outperforms more complex models that attempt to model both simultaneously. The models largely target factoid QA with most being extractive, possibly with minor adaptions for yes/no questions or multiple choice. None of the existing ConvQA benchmarks are based on real user information needs (queries) with multiple results from retrieval. This represents an opportunity for new systems and methods to evolve towards more realistic tasks based upon real information needs. 5.1.5 Conversational QA on Knowledge Graphs Similar to parallel threads in question answering over unstructured text, ConvQA can also be performed on structured knowledge graphs (KGs) containing entities. This sub-area of conversational QA over a knowledge graph is called KG-ConvQA. These approaches allow conversational information seeking over structured data. Therefore, the nature of the questions they can answer is also structured and may involve logical operations including joins, aggregations, quantitative comparisons, and temporal references. KG-ConvQA systems may be partitioned into two distinct types. The \ufb01rst performs QA directly using the KG internally or traversing it using actions to produce an answer. The second type performs conversational semantic parsing and produces an executable logical structured query for producing the answer. For the \ufb01rst type of KG-ConvQA systems, a neural sequence-tosequence model is combined with a memory network to generate an answer. One of the \ufb01rst attempts to do this was done by Saha et al. (2018), who introduced the Complex Sequential QA (CSQA) dataset and baseline model. A baseline for KG-ConvQA is HRED+KVmem, Draft Version 1.2 \f84 Response Ranking and Generation which combines a base conversational recurrent neural network (RNN) model, HRED, with a key-value memory network for modeling the KG, and \ufb01nally an RNN decoder to generate the answer. This baseline model works well for many categories of questions but struggles with quantitative and comparative reasoning. Another approach, CONVEX, proposed by Christmann et al. (2019) starts from a seed entity and performs actions to traverse the graph to identify an answer. To handle the conversational evolution, CONVEX maintains a dynamic sub-graph that changes with each conversational turn using look-ahead, weighting, and pruning techniques to limit the graph size. 
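The following is a schematic sketch of this kind of dynamic sub-graph maintenance: at each turn, the frontier one hop away from the current context entities is scored against the question and pruned so the sub-graph stays small. The lexical-overlap scorer is a toy stand-in for the look-ahead and weighting used by the actual model:

from typing import Dict, List, Set, Tuple

# KG as adjacency lists: entity -> list of (predicate, neighbor) edges.
KG = Dict[str, List[Tuple[str, str]]]

def expand_and_prune(kg: KG, context_entities: Set[str],
                     question_tokens: Set[str], max_size: int = 10) -> Set[str]:
    """One turn of a dynamic sub-graph update: look one hop ahead from the
    context entities, score the frontier by lexical overlap with the question,
    and keep only the highest-scoring neighbors."""
    scored = []
    for ent in context_entities:
        for predicate, neighbor in kg.get(ent, []):
            overlap = len(question_tokens & set((predicate + " " + neighbor).lower().split()))
            scored.append((overlap, neighbor))
    frontier = [n for _, n in sorted(scored, reverse=True)[:max_size]]
    return context_entities | set(frontier)

kg = {"Breaking Bad": [("cast member", "Bryan Cranston"), ("genre", "crime drama")],
      "Bryan Cranston": [("date of birth", "1956")]}
print(expand_and_prune(kg, {"Breaking Bad"}, {"who", "plays", "the", "lead", "cast"}))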
This is effective because traversing the graph on the evaluation benchmark CONVQUESTIONS finds answers that are relatively close (no more than five edges away from the seed entity that starts the conversation). The dynamic sub-graph approach was extended by Kaiser et al. (2021) with their model, CONQUER. It uses reinforcement learning to select graph traversal actions. CONQUER maintains a set of context entities from which the agents traverse the graph. Their model uses a policy network that relies on weak supervision from a fine-tuned BERT model. One of the key differences from previous work is that the model also predicts if the query is a reformulation. This is built on an extension of the CONVQUESTIONS dataset that adds manual reformulations when the baseline system produces incorrect answers. The results show that CONQUER outperforms CONVEX, demonstrating that its reformulation and policy network outperform the previous sub-graph tracking approach, particularly when there is implicit feedback from reformulation of wrong answers. The later PRALINE model learns graph traversal using contrastive learning that models the dialogue and possible KG paths in a joint space (Kacupaj et al., 2022). The second direction taken to address this task is based on conversational semantic parsing. Instead of generating an answer, these approaches generate structured responses from a grammar. Guo et al. (2018) propose the Dialog-to-Action (D2A) model that builds on a GRU sequence-to-sequence model with a question and context from the interaction history and outputs an action sequence from a predefined grammar. The dialogue history is managed in the action space. In contrast to the earlier HRED+KVmem model, the D2A model is much more effective, particularly for queries requiring reasoning. Subsequent approaches improve upon the semantic parsing quality by incorporating entity recognition and disambiguation in the semantic parsing process with multi-task learning. For instance, Shen et al. (2019) presented the Multi-task Semantic Parsing (MaSP) model, performing both entity typing and coreference resolution together with semantic parsing. A subsequent multi-task model is CARTON (Context trAnsformeR sTacked pOinter Networks) (Plepi et al., 2021), with an encoder and decoder model to model the conversational representations. A series of three stacked pointer networks focuses on the logical form needed for execution (types, predicates, and entities). A later approach using Transformers with multi-task learning and graph attention (LASAGNE) by Kacupaj et al. (2021) built on this semantic parsing approach leveraging a graph attention network. It has a grammar-guided Transformer model to generate logical forms as well as a sub-model that learns correlations between predicates and entity types to avoid spurious logical forms. LASAGNE appears to outperform CARTON across most categories. However, CARTON performs better on coreference and quantitative reasoning. They perform ranking on the KG by selecting the relevant entity, and this selection is implicit in the semantic parses produced by the model. The work from Marion et al. (2021) generates a hierarchical JSON-like logical form that is KG executable. They used an Object-Aware Transformer that includes entity linking. They highlight that the CSQA approaches often use a base gold seed entity and only require coreference to the previous turn.
The results demonstrate strong e\ufb00ectiveness across multiple datasets using pre-trained encoder-decoder models. The focus of most of the KG-ConvQA models is traversing the graph for structured comparison. The conversational structure, such as ellipsis and dependency support is limited in current models. Draft Version 1.2 \f86 Response Ranking and Generation 5.2 Conversational Long Answer Ranking This subsection discusses open-domain conversational long answer retrieval, sometimes referred to as ConvPR (for Passage Ranking). Analogous to the previous distinction between ConvQA and OR-ConvQA (see Section 5.1.3), this subsection distinguishes between ConvPR and OR-ConvPR. ConvPR focuses on conversational passage reranking from a closed set of responses. In contrast, OR-ConvPR includes full retrieval over a corpus of passages in the ranking step. The questions may require one or more long answers to su\ufb03ciently answer the questions. This class of responses covers work on Ubuntu/Quora, MSDialog, AliMe, TREC CAsT, and similar corpora. The task of ConvPR has a rich history that builds on response retrieval and selection from discussion forums. These models have a long history in retrieval-based chatbots, see (Tao et al., 2021) for details. For the ConvPR task, the Deep Attention Matching Network (DAM) (Zhou et al., 2018) encodes each turn with a transformer model and combines them with a matching network and a \ufb01nal 3D convolutional network that incorporates the history. The intent-aware ranking model from Yang et al. (2020) extends this model by adding explicit conversation intents. The encoding is similar to DAM, but it also produces a vector representing the user intent. This represents dialogue discourse types speci\ufb01c to CIS and includes: asking a question, clari\ufb01cation, elaboration on details, and both positive and negative feedback. The encoded turns are combined with the intent classi\ufb01cation using a weighted attention model and aggregated into a matching tensor. Similar to DAM, the result is used in a \ufb01nal two-layer 3D-CNN model to rerank the candidate responses. One of the fundamental aspects of the e\ufb00ectiveness of any ConvPR model is the language model used in the encoding. Many of the encodings used are o\ufb00-the-shelf language models, but an alternative is to perform a step of model \ufb01ne-tuning with the language modeling objective on conversational corpora. Current leading approaches in chatbots and similar use models are trained on heavily \ufb01ltered and curated conversations from web forums like Reddit. For example, the ConveRT model (Henderson et al., 2020) \ufb01ne-tunes a BERT-based model on Draft Version 1.2 \f5.2. Conversational Long Answer Ranking 87 Reddit discussions and applies the resulting model to the task of response selection. This pre-training objective results in signi\ufb01cant gains on Ubuntu DSTC7 and the AmazonQA response selection tasks. It is also widely used as a pre-training objective for dialogue system models. In contrast to the intent-aware model, these do not use pre-de\ufb01ned intents and instead learn common discourse patterns directly from the text. In contrast to the previous ConvPR models, the OR-ConvPR models must retrieve and optionally rerank from large passage corpora. As a result, a current pattern exempli\ufb01ed by many CAsT systems is a pipeline with specialized modules. 
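A skeletal version of such a pipeline is sketched below: several query formulations are produced from the conversation, candidates retrieved for each are fused (here with reciprocal rank fusion), and the fused pool is reranked. The rewrite, expand, retrieve, and rerank arguments are stand-ins for trained components (e.g., a T5 rewriter, history-based expansion, a sparse or dense first stage, and a cross-encoder reranker); the fusion constant and depth are illustrative defaults:

from collections import defaultdict
from typing import Dict, List

def reciprocal_rank_fusion(rankings: List[List[str]], k: int = 60) -> List[str]:
    """Fuse several ranked lists of document ids with reciprocal rank fusion,
    the kind of early fusion used to combine multiple query formulations."""
    scores: Dict[str, float] = defaultdict(float)
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

def or_convpr_pipeline(history, query, rewrite, expand, retrieve, rerank, depth=100):
    """Sketch of a multi-stage cascade: formulate queries from the conversation,
    retrieve candidates for each, fuse early, then rerank the fused pool."""
    formulations = [rewrite(history, query), expand(history, query)]
    runs = [retrieve(f, depth) for f in formulations]
    fused = reciprocal_rank_fusion(runs)
    return rerank(query, history, fused[:depth])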
This includes modules that focus on understanding the context, as described in Section 4, including conversational question rewriting and expansion across turns. These are then used with neural ranking models for passage retrieval. For more information on neural ranking models, we refer the readers to recent survey articles (Mitra and Craswell, 2018; Guo et al., 2020; Lin et al., 2020a). This architecture allows existing components trained on large existing datasets for query expansion, rewriting, and ConvPR to be used in the open retrieval context. It is common to use a multi-stage cascaded architecture for OR-ConvPR tasks. One of the prototypical multi-stage systems that performs this was developed by Lin et al. (2021b). A core building block of their approach is Historical Query Expansion (HQE), which expands the current query with salient terms from the dialogue history. The conversational query rewriting is a standard T5 model trained on QuAC/CANARD. One notable aspect is that the system additionally performs rank fusion to combine multiple query interpretations and formulations. This fusion can be performed early (in initial retrieval) or late (in reranking), and they find that fusion in early retrieval is critical for getting sufficient candidate passages into the pipeline for later reranking. Instead of the multi-stage cascade architecture, an alternative is end-to-end approaches based upon dense retrieval, sometimes referred to as Conversational Dense Retrieval (ConvDR) (Yu et al., 2021). The representations of the query and document encodings vary and may include ANCE, TCT-ColBERT (Lin et al., 2021a), and others. The distinguishing feature is that the retrieval and the conversation are encoded with dense vectors rather than an explicit word-based query. This avoids explicit rewriting and instead builds a vector-based representation for retrieval directly. This approach can also be applied to OR-ConvQA. This continues to be an active area of research, with few-shot approaches that rely on a multi-stage learning process including data augmentation, curriculum learning, and multi-task learning (Mao et al., 2022). These elements are important to reduce noise and improve overall effectiveness. There are also attempts at zero-shot approaches (Krasakis et al., 2022) that can approach few-shot model effectiveness in some cases. There is also work demonstrating that the efficiency of the conversational dense retrieval process can be optimized to achieve low latency by leveraging topical relatedness in the conversation (Frieder et al., 2022). Although not (yet) as effective as the best complex pipeline systems incorporating explicit rewriting, these end-to-end approaches are rapidly improving. 5.3 Long-Form Response Generation for CIS The previous subsection discussed retrieval of (mostly) passage-level responses. In contrast, a recent development is extractive or generative summarization of retrieved results appropriate to a conversational interface and in a conversational context. One approach to handling multiple retrieved documents or passages for CIS is to combine them with extractive summarization approaches. This is particularly important for summarizing long documents for CIS interfaces and interaction. A text editing approach is to keep, delete, or make small insertions. This approach is used by the LaserTagger (Malmi et al., 2019) and Felix (Mallinson et al., 2020) models.
They leverage pre-trained Transformers trained with supervised data. They produce responses for CIS applications that are true to the document and add elements of fluency by putting them in a conversational form. Beyond extractive approaches, generative systems are evolving towards longer and more complex information responses. Recently, these developments include pre-training of language models for generation on dialogue, such as the Language Model for Dialogue Applications (LaMDA), which builds upon the previous Meena (Adiwardana et al., 2020) architecture based on Evolved Transformers (So et al., 2019) and is trained on social media data. Generative approaches for long answers are a significant open area of research for CIS. This area is particularly important as generated answers become longer and more complex. Year four of TREC CAsT included evaluation of generative responses (Owoicho et al., 2022) with human crowdworkers who assessed relevance, naturalness, and conciseness. The most effective models used T5 and BART to generate abstractive summaries of the input passages. As summaries and inputs become longer and more complex, there will need to be architectures like the Routing Transformer (Krishna et al., 2021) with dynamic attention routing to support longer sequences. The most significant advance in this area is from ChatGPT (https://openai.com/blog/chatgpt/) by OpenAI. ChatGPT is a purely generative model that encodes all of its knowledge parametrically. Although it does not leverage search, the breadth and scope of its generation capability, as well as its ability to generate long-form, fluent responses across diverse areas, is remarkable. Although formal evaluation is limited, it generated significant press with its fluent responses that can succinctly summarize complex content and even pass challenging medical exams (Kung et al., 2022). A key consideration for all of these generative models is their factual consistency and fidelity to the input passage (or corpus), with previous work showing that the degree to which the model uses the input varies (Krishna et al., 2021). To address this for short answers, an early benchmark by Dziri et al. (2022), the Benchmark for Evaluation of Grounded INteraction (BEGIN), uses generated responses from Wizard of Wikipedia (Dinan et al., 2019b). Further, the provenance of facts to source passages and the attribution of information will become increasingly important. 5.4 Procedural and Task-Oriented Ranking The previous subsections describe formulations of CIS response ranking that largely extend previous research from QA, retrieval, and recommendation to a conversational context. However, because CIS systems are often embedded or used in combination with task assistants, the types of information needs and tasks performed are more likely to be grounded in procedures and real-world tasks. Information seeking is interleaved with task-oriented processes and structured dialogue actions, such as task navigation (Ren et al., 2021b; Azzopardi et al., 2018). This subsection discusses multiple veins of work in these areas and their connection to CIS. 5.4.1 Procedural Question Answering In Procedural QA, the task is to interact conversationally to determine outcomes based on complex processes represented in text documents. To address this task, Saeidi et al.
(2018) introduced the Shaping Answers with Rules through Conversation (ShARC) benchmark. It contains varied types of discourse and natural language inference required within it. The procedures come from conversations on complex regulatory decisions. Because they are vague, the model must generate clarifying questions and understand the complex rule structures encoded in documents. Instead of providing excerpts like a typical QA task, the goal is to use rules in the text and the conversational responses to infer a yes/no answer. Similar to the evolution of other QA systems, a baseline model for this task includes a conversational BiDAF model for encoding history which is then combined with a natural language inference model, such as the Decomposed Attention Model (DAM) (Parikh et al., 2016) for interpreting rules. Subsequent work (Gao et al., 2020) focused on segmenting documents into elementary discourse units (EDUs) which are tracked through the conversation. Going further, recent work built on this by explicitly modeling the conversational structure using Graph Convolutional Networks (GCNs) (Ouyang et al., 2021). The results show that using both explicit and implicit graph representations allows the model to e\ufb00ectively address conversations with complex types of discourse structure. Mirroring the evolution of QA towards open retrieval, Gao et al. (2021b) extended the ShARC conversational entailment task by adding rule retrieval, creating OR-ShARC. In this task, systems must Draft Version 1.2 \f5.4. Procedural and Task-Oriented Ranking 91 \ufb01rst search a knowledge base of rule texts with context from the user and scenario (although it is limited to rule texts used in the original ShARC benchmark). It uses a basic TF-IDF retriever achieving over 95% recall in the top 20 rules; approximately the top \ufb01ve rules are used with a recall of over 90%. These are used in a RoBERTa machine comprehension system that also leverages inter-sentence Transformer layers to combine evidence. It is noteworthy that systems capable of reading multiple passages in the top-k retrieved results, e.g., (Dehghani et al., 2019), can be more e\ufb00ective than systems that only use the top (often golden) rule. 5.4.2 Task-Oriented Information Seeking Task-based virtual assistants perform tasks in the world. They are largely separate from CIS systems. Recently, there is a trend towards systems and models capable of both: A critical aspect of CIS is that information seeking is occurring within an explicit task context with domains and intents. It may start with conversational search to \ufb01nd an appropriate agent or task to execute (for example, \ufb01nding a recipe to cook) and then unfold as the task is performed. This may involve answering procedural questions grounded in the task execution, questions requiring external knowledge for QA, and other types of information needs. The CIS should also respond to changes in the task execution environment. From the dialogue community, this task was proposed and evaluated as part of the DSTC9 challenge in the Beyond Domain APIs track (Kim et al., 2020). The recent Amazon Alexa Prize TaskBot Challenge (Gottardi et al., 2022) introduced the challenge of using multi-modal conversation to solve real-world tasks. This challenge includes conversational task retrieval and re\ufb01nement, task-oriented QA, and conversational procedural instruction responses. 
Further, because interactions are multi-modal (including voice and screen), the responses may include images and videos in response to the information need. In practice, this means that elements of a dialogue system to navigate the task are interleaved with task-speci\ufb01c question answering and open-domain question answering. Additionally, the goal is also implicitly to select responses that are Draft Version 1.2 \f92 Response Ranking and Generation natural and engaging for the user with elements of social chat related to the task. The winning approach (Gemmell et al., 2022) during the \ufb01rst iteration of TaskBot challenge focused on automatic creation of TaskGraphs \u2013 a dynamic graph unifying steps, requirements, and curated domain knowledge enabling detailed contextual explanations and adaptable task execution. They showed o\ufb04ine creation and enrichment of TaskGraphs, potentially with the help of large language models, can reduce the system\u2019s complexity in navigating through the steps and responding to user\u2019s requests, leading to a more e\ufb03cient and e\ufb00ective TaskBot. Several participating teams found that the system\u2019s ability in \ufb01nding relevant instructions plays a key role in the overall TaskBot performance (University, 2022; Hattimare et al., 2022). This competition also demonstrated a successful use of visual content in conversational systems. Ferreira et al. (2022) successfully took advantage of visual interactions and proposed a multimodal curiosity-exploration task guiding assistant to improve user experience by potentially reducing the cognitive load on the user. 5.5 Conversational Recommendation Traditionally, recommender systems mainly exploit historical user-item interactions for predicting user preferences. This has led to the development of collaborative \ufb01ltering methods which are at the core of e\ufb00ective real-world recommendation engines. Other recommendation models, such as content-based and demographic \ufb01ltering, have also been studied and showed promising results mostly for cold-start users and items. All of these approaches provide users with little control over the recommended list. For instance, users often cannot ask for a revised recommendation list based on their preferences. Conversational recommender systems address this limitation. During a human-machine dialogue, the system can elicit the current user preferences, provide explanations for the recommended items, and/or take feedback from users for recommendation re\ufb01nement. Interactive and conversational recommender systems have been studied for several years (Thompson et al., 2004; Mirzadeh et al., 2005; Draft Version 1.2 \f5.5. Conversational Recommendation 93 Mahmood and Ricci, 2009; Blanco and Ricci, 2013). Due to the potential real-world applications, conversational recommender systems have recently attracted considerable attention. Most e\ufb00orts in this domain focus on preference elicitation by asking questions from users. Christakopoulou et al. (2016) studied this task and proposed a conversational model based on probabilistic matrix factorization for restaurant recommendation. They proposed to initialize the conversational recommendation model\u2019s parameters by training the model on o\ufb04ine historical data and updating the parameters while the users interact with the system through online learning. They focused on question selection from a question bank during online interactions for preference elicitation. 
This approach was later revisited by Zhang et al. (2018) who used multi-memory neural networks for template-based question generation. They uni\ufb01ed conversational search and recommendation and trained their model based on item reviews in the e-commerce domain. In more detail, they extracted attribute-value pairs mentioned by users about items in their reviews, and train a model that generates attribute-based questions based on the attributes. Besides the explicit attribute-value pairs, implicit knowledge learned by pre-trained large language models can also be used for preference elicitation in recommendation (Penha and Hau\ufb00, 2020). Preference elicitation in conversation can be improved by conditioning the dialogue on the user pro\ufb01le. To this aim, Li et al. (2022a) proposed a multi-aspect user modeling approach that uses historical conversational interactions collected from look-alike users to go beyond the current dialogue session. More recently, the applications of conversational interactions have been extended to bundle recommendation problems, where a set of items is recommended to a user. Bundle recommendation largely su\ufb00ers from data sparsity and the interactive nature of conversations would help the recommender system to collect more feedback and overcome this issue. Based on this idea, He et al. (2022) proposed Bundle MCR which models bundle recommendation as a Markov Decision Process with multiple agents, for user modeling, consultation, and feedback handling in bundle contexts. Additionally, Leszczynski et al. (2022) studied conversational music playlist recommendation which is another example of bundle recommendation tasks. Draft Version 1.2 \f94 Response Ranking and Generation Another line of research focuses on modeling conversational recommendation using reinforcement learning (RL). Sun and Zhang (2018) developed an early interactive RL-based recommendation model that can take two actions: (1) selecting an attribute (or facet) for preference elicitation, or (2) making a personalized recommendation. They simply used a two-layer fully-connected neural network as the policy network and de\ufb01ned the reward function based on the recommendation quality at every timestep during the dialogue. They demonstrated the bene\ufb01ts of conversational recommendation via both o\ufb04ine and online experimentation. This approach was later improved by modeling conversational recommendation using an Actor-Critic framework (Montazeralghaem et al., 2021) as well as improving user and item representations based on implicit feedback (Hu et al., 2022). Recently, Lei et al. (2020a) introduced the Estimation-Action-Re\ufb02ection (EAR) framework for conversational recommendation. This framework uni\ufb01es the following three fundamental problems in conversational recommendation: (1) what questions to ask, (2) when to recommend items, and (3) how to adapt to the users\u2019 online preferences. Another approach to conversational recommendation is to exploit multi-armed bandit solutions which have shown promising results in sequential and interactive recommendation. Zhang et al. (2020c) followed this path and proposed conversational contextual bandit. Later on, Li et al. (2021) improves this model by introducing the Conversational Thompson Sampling (ConTS) model. ConTS builds upon multi-armed bandit and models items and attributes jointly. 
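The following is a minimal Beta-Bernoulli Thompson sampling loop that treats both items and attribute questions as arms. It is only a toy stand-in for ConTS, which models the arms jointly with contextual information rather than as independent posteriors, but it illustrates how bandit-style methods place "ask an attribute" and "recommend an item" in a single framework; the arms and the simulated feedback are hypothetical.

```python
# A minimal Beta-Bernoulli Thompson sampling loop over a mixed set of arms
# ("ask about an attribute" and "recommend an item"), as a much simplified
# illustration of bandit-based conversational recommendation. The arms and the
# simulated user feedback are hypothetical.
import random

arms = ["ask:genre", "ask:price", "item:movie_a", "item:movie_b", "item:movie_c"]
alpha = {a: 1.0 for a in arms}   # Beta posterior parameter: successes + 1
beta = {a: 1.0 for a in arms}    # Beta posterior parameter: failures + 1

def simulated_feedback(arm):
    """Stand-in for the user's reaction: 1 = useful answer / accepted item, 0 = not."""
    success_prob = {"ask:genre": 0.7, "ask:price": 0.4,
                    "item:movie_a": 0.2, "item:movie_b": 0.6, "item:movie_c": 0.3}
    return 1 if random.random() < success_prob[arm] else 0

for turn in range(200):
    # Sample a plausible reward for every arm from its posterior and act greedily on the samples.
    sampled = {a: random.betavariate(alpha[a], beta[a]) for a in arms}
    chosen = max(sampled, key=sampled.get)
    reward = simulated_feedback(chosen)
    alpha[chosen] += reward          # posterior update
    beta[chosen] += 1 - reward

best = max(arms, key=lambda a: alpha[a] / (alpha[a] + beta[a]))
print("Arm with highest posterior mean after 200 turns:", best)
```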
This enables the model to compute the exploration-exploitation trade-o\ufb00between preference elicitation and item recommendation automatically. An interesting research direction in conversational recommender systems is producing responses that explain the rationale behind the recommendations (Volokhin et al., 2022). This will help users to engage with the conversational system to provide more feedback and express their opinion. Li et al. (2022b) developed a self-supervised bot play approach that learns to produce such explanations through reasoning and demonstrated that it can go beyond user simulations and can also work well in the wild. Popularity bias has always been an important challenge in recomDraft Version 1.2 \f5.6. Summary 95 mender systems, especially collaborative \ufb01ltering models (Ricci et al., 2010). Lin et al. (2022) recently explored the correlation between popularity bias and exposure rate, success rate, and conversational utility in a conversational recommendation setting. They proposed a three-stage de-biasing framework and demonstrated that reducing the impact of popularity bias improves the overall conversational recommendation quality. For more information on conversational recommender systems, we refer the reader to the recent survey on the topic (Jannach et al., 2021b). 5.6 Summary This section focused on core conversational response ranking. The models started with ConvQA, with basic extractive factoid QA with context naively appended that operated in a closed environment. These evolved towards variants that are more realistic by incorporating retrieval from a corpus (OR-ConvQA), including incorporating multiple results and their retrieval scores as well as other forms of external memory, including past turns or conversations. As the retrieval task evolved towards longer and exploratory responses (OR-ConvPR and OR-ConvDR), the systems evolved to be complex pipelines that required query rewriting, query expansion, dense retrieval, multi-pass re-ranking, and result fusion. However, the ranking components are still largely separated and trained on external datasets speci\ufb01c to those tasks. Later, models evolved to include conversational models of richer types of responses, including entities (KG-ConvQA), as well as ones that are longer and more natural. Longer and more complex responses support richer types of result dependency and more natural conversations. This includes generating responses from one or more retrieved sources. Most of the e\ufb00ective ranking and generation models build upon pre-trained language models. The e\ufb00ectiveness of the models varies depending on their lexical representations and training to support linguistic and conversational structure. The most e\ufb00ective ones include additional \ufb01ne-tuning with a language modeling objective on the target conversational data before \ufb01nal task-speci\ufb01c training. For ranking modDraft Version 1.2 \f96 Response Ranking and Generation els, there is a common pattern of having a model for a single turn and then incorporating evidence across turns by stacking models to capture conversational context (e.g., Flow or 3D-CNNs). Finally, the section covered response ranking for structured prediction in the form of task-oriented dialogues and recommendations. Beyond these, the ranking tasks and models will continue to evolve to include richer types of responses and to support more realistic and complex information seeking tasks. 
Draft Version 1.2 \f6 Mixed-Initiative Interactions Most approaches to human-computer interactions with intelligent systems are either controlled by a person or the system (i.e., useror system-initiative). For example, in current search engines, users always initiate the interaction by submitting a query and the search engine responds with a result page. Therefore, search engines are user-initiative systems. That being said, developing intelligent systems that support mixed-initiative interactions has always been desired. Allen et al. (1999) believed that development of mixed-initiative intelligent systems will ultimately revolutionize the world of computing. Mixed-initiative interactions in dialogue systems have been explored since the 1980s (Kitano and Van Ess-Dykema, 1991; Novick and Douglas, 1988; Walker and Whittaker, 1990). Early attempts to build systems that support mixedinitiative interactions include the LookOut system (Horvitz, 1999) for scheduling and meeting management in Microsoft Outlook, Clippit1 for assisting users in Microsoft O\ufb03ce, and TRIPS (Ferguson and Allen, 1998) for assisting users in problem solving and planning. Horvitz (1999) identi\ufb01ed 12 principles that systems with mixedinitiative user interfaces must follow. They are listed in Table 6.1. 1https://en.wikipedia.org/wiki/O\ufb03ce_Assistant 97 Draft Version 1.2 \f98 Mixed-Initiative Interactions Table 6.1: Principles of mixed-initiative user interfaces by Horvitz (1999). Principle 1. Providing genuine value 2. Considering uncertainty about user intents 3. Considering the user status in the timing of services 4. Inferring ideal action in light of cost, bene\ufb01t, and uncertainties 5. Employing dialogue with users to resolve key uncertainties 6. Allowing e\ufb03cient direct invocation and termination 7. Minimizing the cost of poor guesses about action and timing 8. Scoping precision of service to match uncertainty in goals 9. Providing mechanisms for e\ufb03cient result re\ufb01nement 10. Employing socially appropriate behaviors 11. Maintaining working memory of past interactions 12. Continuing to learn by observing Mixed-initiative interactions should be taken at the right time in the light of cost, bene\ufb01t, and uncertainties. Many factors mentioned in these principles can impact cost and bene\ufb01t of interactions. In addition, systems with mixed-initiative interactions should put the user at the center and allow e\ufb03cient invocation and termination. Systems with mixed-initiative interactions are expected to memorize past interactions and continuously learn by observation. Based on these principles, conversational systems by nature raise the opportunity of mixed-initiative interactions. Allen et al. (1999) de\ufb01ned four levels of mixed-initiative interactions in the context of dialogue systems, as follows: 1. Unsolicited reporting: An agent noti\ufb01es others of critical information as it arises. For example, an agent may constantly monitor the progress for the plan under development. In this case, the agent can notify the other agents (e.g., user) if the plan changes. 2. Subdialogue initiation: An agent initiates subdialogues to clarDraft Version 1.2 \f99 ify, correct, and so on. For example, in a dialogue between a user and a system, the system may ask a question to clarify the user\u2019s intent. Since the system asks the question and the user answers the question, and this may be repeated for multiple turns, the system has temporarily taken the initiative until the issue is resolved. 
This is why it is called subdialogue initiation. 3. Fixed subtask initiation: An agent takes initiative to solve prede\ufb01ned subtasks. In this case, the agent can take initiative to ask questions and complete the subtask. Once the subtask is completed, initiative reverts to the user. 4. Negotiated mixed-initiative: Agents coordinate and negotiate with other agents to determine initiative. This is mainly de\ufb01ned for multi-agent systems in which agents decide whether they are quali\ufb01ed to complete a task or it should be left for other agents. When it comes to (pro-active) open-domain conversational information seeking, some of these mixed-initiative levels remain valid. Mixedinitiative interactions in the context of CIS have been relatively less explored, but are nevertheless identi\ufb01ed as critical components of a CIS system (Radlinski and Craswell, 2017; Trippas et al., 2018; Aliannejadi et al., 2019; Wadhwa and Zamani, 2021; Wu et al., 2022). Vakulenko et al. (2021) conducted a large-scale analysis of 16 publicly available dialogue datasets and established close relations between conversational information seeking and other dialogue systems. Clari\ufb01cation and preference elicitation are the two areas related to mixed-initiative interactions that have attracted considerable attentions in recent years. Therefore, in the rest of this section, we \ufb01rst review the role of agents in initiating a conversation (Section 6.1), and continue with discussing methods for generating, analyzing, and evaluating clari\ufb01cation in conversational search (Section 6.2). We further summarize preference elicitation in conversational recommendation (Section 6.3), and \ufb01nally discuss how the user and system can be involved in mixed-initiative interactions with the goal of providing feedback (Section 6.4). Draft Version 1.2 \f100 Mixed-Initiative Interactions 6.1 System-Initiative Information Seeking Conversations Typically, users initiate the interaction with a conversational system, for example by clicking or touching a link or button, by using pre-de\ufb01ned voice commands such as \u201cAlexa\u201d or \u201cOK Google\u201d, or by asking a question or submitting an action request. In mixed-initiative conversational systems, the agent is also able to initiate the conversation. This is also called a system-initiative (or agent-initiative) conversation. Making a recommendation is perhaps the most common scenario for initiating an interaction by the system. For example, a CIS system can initiate a conversation by recommending an item based on the situational context of the user (e.g., location and time) and their preferences. Note that this is di\ufb00erent from many conversational recommendation settings, where users \ufb01rst submit a request about the item they are looking for, e.g., (Sun and Zhang, 2018; Zhang et al., 2018). Joint modeling of search and recommendation (Zamani and Croft, 2020a; Zamani and Croft, 2020b) is a step towards developing mixed-initiative search and recommendation systems. However, initiating a conversation by the system is not limited to recommendation. For instance, Avula and Arguello (2020) developed a system for conducting wizard-of-oz experiments to study system-initiative interactions during conversational collaborative search. 
This system can be integrated into collaborative discussion tools, such as Slack (https://slack.com/). In this system, while a group of users is performing a collaborative search task, another user (who plays the role of the wizard) can intervene and provide additional information. Although little progress has been made in this area, there is great potential for systems to initiate conversations based on context and engage with users or even collect feedback. For instance, assume a user drives to a restaurant using a mapping application. When it has access to this context, a CIS system could initiate a conversation when the user is driving back by asking about their experience at the restaurant. This could potentially improve the user experience with the conversational system, collect feedback on the restaurant, and also collect information on the user's preferences for improving the user profile. As another example, if a user is struggling to complete a task, a CIS system can be automatically triggered to start a conversation with the user, hear their complaints, and help them complete the task. Related to this line of research, Rosset et al. (2020) studied how a system can lead a conversation while users are searching for or exploring a topic. They formulated the problem as a conversational question suggestion task and demonstrated its impact by presenting the question suggestions in search engine result pages.

Initiating a conversation by the system can be risky: it may annoy users and hurt user satisfaction and trust. For instance, in some situations a user may not be interested in engaging in a conversation, and thus predicting opportune moments for conversation initiation is an important part of developing system-initiative CIS systems. Therefore, whether and when to initiate a conversation are the key decisions a mixed-initiative CIS system should make. Wadhwa and Zamani (2021) studied system-initiative CIS systems and discussed their challenges and opportunities. They introduced a taxonomy of system-initiative CIS systems by defining three orthogonal dimensions: (1) initiation moment (when to initiate a conversation), (2) initiation purpose (why to initiate a conversation), and (3) initiation means (how to initiate a conversation). They further identified five different purposes for initiating conversations in CIS systems, some of which have been mentioned above: (1) filtering streaming information, (2) context-aware recommendation, (3) following up on a past user-system conversation, (4) contributing to a multi-party human conversation, and (5) requesting feedback from users. Based on this taxonomy and these conversation initiation purposes, they introduced a generic pipeline, depicted in Figure 6.1. According to this pipeline, several algorithms constantly monitor the user's situation (user context) and the stream of generated information to produce conversation initiation instances. These instances are stored in a database which is constantly monitored by a conversation initiator component. Based on the situation, the initiator may select one of the initiation instances. Then, a fluent conversation will be initiated. For more information on this architecture, we refer the reader to Wadhwa and Zamani (2021).

[Figure 6.1: A generic pipeline for conversation initiation in CIS systems by Wadhwa and Zamani (2021).]

6.2 Clarification in Information Seeking Conversations

Clarification is defined as "an explanation or more details that makes something clear or easier to understand" (https://dictionary.cambridge.org/us/dictionary/english/clarification). In information seeking systems, it is often used to clarify the user's information need or intent, and it can take many forms. For instance, relevance feedback is one form of clarification that is provided by the user. In mixed-initiative interactions, systems can take the initiative to ask for clarification. This is why asking for clarification has been identified as a necessary component in developing ideal CIS systems (Radlinski and Craswell, 2017; Aliannejadi et al., 2019; Anand et al., 2020; Zamani et al., 2020a; Trippas et al., 2020). As pointed out earlier, subdialogue initiation is one of the four levels of mixed-initiative interactions in conversational systems, and it involves asking for clarification. In a study of mixed-initiative collaborative planning in human conversations, clarification accounted for 27% of interactions, more than any other type of mixed-initiative interaction (Allen et al., 1999). A conversational agent can ask a clarifying question to resolve ambiguity, to prevent potential errors, and in general to clarify the user's requests and responses. Clarification may happen at multiple levels and for various purposes. Stoyanchev et al. (2014) used clarification for resolving ambiguity and uncertainty in speech recognition, while Aliannejadi et al. (2019) used clarification to identify query intent in a conversational search setting.

Besides CIS systems, asking clarifying questions has been explored in various tasks. For instance, Rao and Daumé III (2018) used clarification for identifying missing information in a passage, such as community question answering posts. Trienes and Balog (2019) identified the community question answering posts that require clarification. Subsequent work by Tavakoli et al. (2021) studied properties of clarification in community question answering websites based on user responses. Asking clarifying questions has also been studied in the context of task-oriented dialogue systems, which are mostly closed-domain (Krum et al., 2005; Rieser and Moore, 2005). In the following subsections, we mostly focus on query intent clarification, which is the most relevant type of clarification for information seeking systems.

6.2.1 A Taxonomy of Clarification Types

In the context of information seeking systems, clarification has been studied in both synchronous and asynchronous information seeking scenarios. For instance, Braslavski et al. (2017) studied clarifications asked in community question answering (CQA) websites as an example of asynchronous human-human information seeking conversations.
They derived a taxonomy of clarification types for questions asked in CQA websites. The clarification types and their examples are reported in Table 6.2.

Table 6.2: A taxonomy of clarification types for questions asked in CQA websites by Braslavski et al. (2017).
Clarification Type | Example
More Information | What OS are you using?
Check | Are you on a 64-bit system?
Reason | What is the reason you want a drip pan?
General | Can you add more details to this question?
Selection | Are you using latex or oil based Kilz?
Experience | Have you tried to update video card drivers?

Later on, Zamani et al. (2020a) studied clarification in open-domain search systems by analyzing large-scale query reformulation data collected from a commercial web search engine. This resulted in a clarification taxonomy for open-domain information seeking queries. Their taxonomy consists of four main categories and a number of subcategories, as follows:
• Disambiguation: some queries (or parts of queries) are ambiguous and could refer to different concepts or entities. Clarifying questions can be used to disambiguate the query intent.
• Preference: besides disambiguation, a clarifying question can help identify a more precise information need. Four major subcategories of preference clarifications are:
– Personal information ("for whom"): personal information, such as gender, age, language, and expertise, can limit the search space.
– Spatial information ("where"): spatial information is also reflected in reformulations in many cases.
– Temporal information ("when"): some queries have a temporal aspect which can be clarified by the system.
– Purpose ("for what purpose"): if the answer to a query depends on the user's purpose, a clarifying question can seek that purpose. For example, a user searching for "screwdrivers" may be interested in screwdrivers for different kinds of screws in different sizes, depending on the user's purpose.
As an example, while personal information such as gender or age may help a CIS system better answer a particular information need, is it clear to the user why this is being asked? Is it clear how this information will be processed and/or recorded? What would be the e\ufb00ect should the user decline to answer this question? While there are commonly accepted UI a\ufb00ordances for visual search systems (such as an asterix for required \ufb01elds and hover-over information tags to provide background on questions), such a\ufb00ordances rarely exist in verbal modalities. 6.2.2 Generating Clarifying Questions There exist three categories of solutions for generating clarifying questions: (1) selecting and \ufb01lling out pre-de\ufb01ned question templates, (2) Draft Version 1.2 \f106 Mixed-Initiative Interactions selecting and editing a clarifying question, (3) generating clarifying questions based on sequence-to-sequence modeling by maximizing the likelihood of generating the questions in a training set, and (4) generating clarifying questions by maximizing a clari\ufb01cation utility. In the following subsections, we brie\ufb02y discuss solutions from each of these categories. 6.2.2.1 Template-based Slot Filling Models Template-based slot \ufb01lling is the simplest approach for asking a clari\ufb01cation. In this approach, a small set of question templates is \ufb01rst de\ufb01ned. The templates are taskand domain-dependent. For instance, Coden et al. (2015) simply used the question template \u201cDid you mean ___ or ___?\u201d for entity disambiguation. The question template \u201cDid you mean ___?\u201d has been widely used by various commercial search engines, such as Bing and Google, to clarify misspelling. Zamani et al. (2020a) listed a handful of question templates for search clari\ufb01cation. The question templates can be as generic as \u201cWhat would you like to know about ___?\u201d. However, more speci\ufb01c questions, such as \u201cWhat ___ are you using?\u201d or \u201cWho are you shopping for?\u201d would be desired in most scenarios. Once the question templates are de\ufb01ned, the task is to select one of the templates and \ufb01ll it out. The template selection can be as simple as a rule-based algorithm or modeled as a machine learning problem, either as a multi-class classi\ufb01cation or a learning to rank task. Similarly, rule-based solutions can be used to \ufb01ll out the templates. For example, a substring of the user request or its entity type obtained from a knowledge base can be used to \ufb01ll out some templates. Machine learning solutions are often preferred due to their superior performance for \ufb01lling out the templates. Slot \ufb01lling is not speci\ufb01c to clari\ufb01cation. A number of slot \ufb01lling models used in task-oriented dialogue systems can be employed in clari\ufb01cation as well (Wu et al., 2019; Budzianowski and Vuli\u0107, 2019; Zhao et al., 2019). Draft Version 1.2 \f6.2. Clari\ufb01cation in Information Seeking Conversations 107 6.2.2.2 Sequence Editing Models Another category of approaches for generating clarifying questions is based on selecting a clarifying question and editing it based on the conversation context. For instance, Liu et al. (2021b) proposed a Reinforcement Iterative Sequence Editing (RISE) framework that minimizes the Levenshtein distance between the model\u2019s output and ground truth questions through explicit editing actions. 
In more detail, the authors used BERT2BERT (Rothe et al., 2020) to implement the policy network in RISE and used a variant of the Markov Decision Process (MDP) for optimization, in which the reward function is defined as the Levenshtein distance obtained by each action compared to the last iteration. RISE is able to pay attention to tokens that are related to conversational characteristics. Therefore, this approach is able to produce questions with coreferences to the conversation history. The idea of retrieve-and-edit has also been explored in the context of generating structured output, e.g., programming code (Hashimoto et al., 2018). Similar ideas can potentially be applied to this category of clarification generation models.

6.2.2.3 Sequence-to-Sequence Models

As discussed in Rao and Daumé III (2019) and Zamani et al. (2020a), generating clarifying questions can be seen as a sequence generation task, in which the inputs are the query q and the context c and the output is a clarifying question q*. The context here may refer to the query context, e.g., short- and long-term search or conversation history (Bennett et al., 2012) and situational context (Zamani et al., 2017), or some additional knowledge about the query, such as query aspects. Sequence-to-sequence models, including seq2seq (Sutskever et al., 2014) and the Transformer encoder-decoder architecture (Vaswani et al., 2017), can be adopted and extended to address this task. Sequence-to-sequence models consist of at least one encoder and one decoder neural network. The encoder model E takes the query q and the corresponding context c and learns a representation v for the input tokens. The decoder model D uses the encoder's outputs and generates a sequence of tokens, i.e., a clarifying question. The training objective is to maximize the likelihood of the decoder generating the clarification q*; this maximum likelihood objective is equivalent to minimizing the cross-entropy loss. Once the model is trained, it is used autoregressively to generate the clarification at inference time. This decoding step can be performed using beam search, its variants, or, in the simplest case, by generating the clarification token by token until an end token is observed. For more detail on sequence-to-sequence modeling, we refer the reader to Sutskever et al. (2014) and Vaswani et al. (2017).

It is widely known that training text generation models by maximizing the likelihood of a ground truth output results in frequent generation of the most common outputs. Thus, such models often fail to generate diverse outputs. This has been addressed using different techniques, such as unlikelihood training (Welleck et al., 2020) and F²-Softmax (Choi et al., 2020). Clarification utility maximization (next subsection) also implicitly addresses this issue.

6.2.2.4 Clarification Utility Maximization Models

An alternative to the presented sequence-to-sequence models, which maximize the likelihood of generating the clarifications observed in the training set, is clarification utility maximization models. The intuition is to generate a question that best clarifies the user's information need, whereas sequence-to-sequence models have no notion of clarification in their training objective.
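To make the two training regimes concrete, the sketch below first fine-tunes an off-the-shelf encoder-decoder with the maximum-likelihood (cross-entropy) objective of Section 6.2.2.3, and then takes a single REINFORCE-style step driven by a toy utility function, anticipating the utility maximization approach described next. This is a minimal illustration rather than the models of Rao and Daumé III (2019) or Zamani et al. (2020a); it assumes the torch and transformers packages are installed, and the "clarify:" prompts, the training pair, and the utility() function are hypothetical placeholders.

```python
# A minimal sketch, not the models from the cited papers: fine-tune an off-the-shelf
# encoder-decoder with the maximum-likelihood (cross-entropy) objective, then take
# one REINFORCE-style step with a toy clarification utility. The training pair,
# prompts, and utility() are hypothetical placeholders.
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tok = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
opt = torch.optim.AdamW(model.parameters(), lr=1e-4)

# 1) Maximum-likelihood training on a (query + context, gold clarifying question) pair.
src, tgt = "clarify: screwdrivers", "What will you be using the screwdriver for?"
batch = tok(src, return_tensors="pt")
labels = tok(tgt, return_tensors="pt").input_ids
loss = model(**batch, labels=labels).loss     # token-level cross-entropy (negative log-likelihood)
loss.backward()
opt.step()
opt.zero_grad()

# 2) One policy-gradient step that rewards "useful" questions instead of likelihood.
def utility(question: str) -> float:
    # Hypothetical stand-in for the clarification utility U: prefer wh-questions
    # and mildly reward longer, more specific questions.
    bonus = 0.5 if question.lower().startswith(("what", "which", "who", "where")) else 0.0
    return bonus + 0.02 * len(question.split())

batch = tok("clarify: jaguar", return_tensors="pt")
sampled = model.generate(**batch, do_sample=True, max_new_tokens=16)
question = tok.decode(sampled[0], skip_special_tokens=True)
advantage = utility(question) - 0.3                # 0.3 acts as a crude reward baseline
nll = model(**batch, labels=sampled[:, 1:]).loss   # -mean log p(sampled question | input)
(advantage * nll).backward()                       # REINFORCE surrogate loss
opt.step()
opt.zero_grad()
```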
In this approach, the goal is to maximize a clari\ufb01cation utility function U that measures the likelihood of clarifying the user information need or a similar objective. For instance, Rao and Daum\u00e9 III (2019) estimated the information value of the possible answer that a user may give to the generated clari\ufb01cation as a utility function. Zamani et al. (2020a) estimated the likelihood of covering all information needs observed in the query logs based on the past interactions. The clari\ufb01cation utility functions are often non-di\ufb00erentiable, which prevents us from using gradient descent based optimization. Therefore, clari\ufb01cation generation can be modeled as a reinforcement learning task whose reward function is computed based upon the clari\ufb01cation utility Draft Version 1.2 \f6.2. Clari\ufb01cation in Information Seeking Conversations 109 function U. The REINFORCE algorithm (Williams, 1992) can then be used for learning the clari\ufb01cation generation model. It has been shown that using the models that are pre-trained using maximum likelihood training for the REINFORCE algorithm can lead to more e\ufb00ective and more robust outcomes. This approach is called Mixed Incremental Cross-Entropy Reinforce (MIXER) (Ranzato et al., 2016). For more information, we refer the reader to Rao and Daum\u00e9 III (2019) and Zamani et al. (2020a). 6.2.3 Selecting Clarifying Questions Clarifying question generation models can be evaluated using human annotations or online experimentation. However, both of these approaches are time consuming and are not always available. On the other hand, o\ufb04ine evaluation based on text matching metrics, such as BLEU (Papineni et al., 2002) and ROUGE (Lin, 2004), are not reliable for clari\ufb01cation generation models. Therefore, due to the challenges in o\ufb04ine evaluation of clarifying question generation models, Aliannejadi et al. (2019) introduced the task of selecting clarifying questions from a set of candidate humanor machine-generated clarifying questions. The authors created and released the Qulac dataset, consisting of over 10K human-generated (through crowdsourcing) question-answer pairs for 198 topics associated with the TREC Web Track 2009-2012. An alternative dataset is MIMICS (Zamani et al., 2020b) that contains over 450K unique real queries and machine-generated clarifying questions along with user engagement signals (i.e., clickthrough rate). The more recent MIMICS-Duo dataset (Tavakoli et al., 2022) enables both online and o\ufb04ine evaluation of clarifying question selection tasks. Baseline models that use a combination of contextual representations of the query and clarifying questions (e.g., BERT) and query performance prediction indicators (e.g., standard deviation of retrieval scores) demonstrate the best performance on clari\ufb01cation selection tasks on Qulac (Aliannejadi et al., 2019). Zamani et al. (2020c) showed that the clarifying question selecting model can bene\ufb01t from query reformulation data sampled from search engine query logs. Subsequent work by Hashemi et al. (2020) proposed Guided Transformer, an extension Draft Version 1.2 \f110 Mixed-Initiative Interactions to the Transformer architecture that uses external information sources (e.g., pseudo-relevant documents) for learning better representations for clarifying questions. This model signi\ufb01cantly improves upon the baseline models for clari\ufb01cation selection tasks. 
Speci\ufb01cally, they showed that the model performs well for clari\ufb01cations with short negative responses. Subsequently, Bi et al. (2021) focused on a BERT-based model for clari\ufb01cation selection based on negative feedback. This model works well for document retrieval when clarifying questions are asked. Kumar et al. (2020) looked at clari\ufb01cation selection as a special case of natural language inference (NLI), where both the post and the most relevant clari\ufb01cation question point to a shared latent piece of information or context. Both clarifying question generation and selection tasks are still active areas of research in both the IR and NLP communities. 6.2.4 User Interactions with Clari\ufb01cation The way users interact with clari\ufb01cation can reveal information on the clari\ufb01cation quality. For example, user engagement with clarifying questions can be studied as a proxy to measure clari\ufb01cation quality. Zamani et al. (2020c) studied how users interact with clarifying questions in a web search engine. They found out that more speci\ufb01c questions have a higher chance to engage users. They showed that the majority of engagement comes for one of two reasons: (1) high ambiguity in the search queries with many resolutions, and (2) ambiguity but where there is a dominant \u201cassumed\u201d intent by users where they only realize the ambiguity after issuing the query. Interestingly, users are more likely to interact with clari\ufb01cation in case of faceted queries in comparison with ambiguous queries. Note that the user interface may a\ufb00ect these \ufb01ndings. For instance, in the web search interface with ten blue links, users can simply skip a clari\ufb01cation and directly interact with the retrieved web pages. However, this may not be possible in a conversational search system with a speech-only interface. Therefore, besides generating high-quality clarifying questions, (spoken) CIS systems should make a (binary) decision at every step on whether to ask a clarifying question or to show the result list or answer. Wang and Ai (2021) addressed this issue by developing a risk-aware model that learns this decision-making Draft Version 1.2 \f6.3. Preference Elicitation in Conversational Recommendation 111 policy via reinforcement learning. Their model considers the common answers to each clari\ufb01cation in order to minimize the risk of asking low-quality or out-of-scope clari\ufb01cations. The model enables the CIS system to decide about asking a clari\ufb01cation with di\ufb00erent levels of user tolerance. In a separate line of research, Tavakoli et al. (2021) studied user interactions with clarifying questions in asynchronous conversations. They focused on user interactions in community question answering websites, e.g., StackExchange.4 To study user interactions, they categorized clarifying questions to three categories: (1) clari\ufb01cations that have been answered by the Asker (the person who submitted the questions/post), (2) clari\ufb01cations that have been answered but not by the Asker, and (3) clari\ufb01cations that are left unanswered. They found that clari\ufb01cations with the goal of disambiguation account for the majority of clarifying questions and they are very likely to be answered by the Asker. On the other hand, clari\ufb01cations with the goal of con\ufb01rmation are more likely to be left unanswered. 
For more analysis on user interactions with clari\ufb01cation in asynchronous information seeking conversations, refer to Tavakoli et al. (2021). 6.3 Preference Elicitation in Conversational Recommendation Preference elicitation in conversational recommender systems forms another type of mixed-initiative interactions. Typically, recommender systems create a user pro\ufb01le or user representation based on the user\u2019s past interactions (e.g., click) (Jannach et al., 2018; Oard and Kim, 1998) and/or her explicit feedback on items using ratings and reviews (Resnick et al., 1994; Ricci et al., 2010). Conversational systems enable recommendation engines to ask for user preferences in a natural language dialogue. This creates a signi\ufb01cant opportunity for the system to learn more about the current context of the user, and how their preferences at this point in time may di\ufb00er from their preferences in general. Christakopoulou et al. (2016) studied the task of conversational recommender systems by focusing on preference elicitation in a closed-domain scenario, like 4https://stackexchange.com/ Draft Version 1.2 \f112 Mixed-Initiative Interactions restaurant recommendation. They observed 25% improvements over a static model by asking only two questions. Following their work, Sun and Zhang (2018) proposed a reinforcement learning model for preference elicitation by asking questions about item facets in a closed-domain setting, i.e., restaurant recommendation. Zhang et al. (2018) focused on a broader domain by automatically extracting user preferences about item facets from user reviews on an online e-commerce website. They showed that multi-memory networks can be successfully used for asking questions about item facets in their setting. Sepliarskaia et al. (2018) used a static questionnaire to ask questions from users in the context of movie and book recommendation. They studied di\ufb00erent optimization strategies for the task with a focus on cold-start users. In this work, user responses to the system questions are automatically generated and may be di\ufb00erent from real-world settings. To mitigate this issue, Radlinski et al. (2019) conducted a crowdsourcing experiment with a wizard-of-oz setting, where a crowdworker plays the role of user and another person (i.e., assistant) plays the role of the system. They introduced a \u201ccoached\u201d preference elicitation scenario, where the assistant avoids prompting the user with speci\ufb01c terminology. The mentioned methods ask questions about items and item attributes for preference elicitation. In case of incomplete information on item attributes, Zhao et al. (2022) proposed a knowledge-aware preference elicitation model. Moreover, users may not be able to answer all questions about item attributes especially if they have limited knowledge. More recently, Kostric et al. (2021) proposed to address this issue by asking questions about item usage, which is related to \u201cpurpose\u201d in the clari\ufb01cation taxonomy presented in Section 6.2.1. Preference elicitation in recommendation is tightly coupled with the design of conversational recommender systems. Refer to Section 5.5 for further information. 6.4 Mixed-Initiative Feedback The system can take advantage of mixed-initiative interaction to get feedback from users and even give feedback to them. For instance, in the middle (or at the end) of a dialogue in a conversational recommender Draft Version 1.2 \f6.5. 
system, the system can ask for explicit feedback from the users. Existing systems often have a static pre-defined questionnaire that is automatically triggered after a conversation ends. For instance, the Alexa Prize Challenge (Ram et al., 2018) has sought explicit rating feedback from users upon the completion of the conversation and used average ratings for evaluating the participant teams. This simple approach can be further improved by asking context-aware questions for feedback and by making these interactions natural language exchanges within the conversation.

Mixed-initiative feedback can also be relevant to the concept of "grounding as relevance feedback" introduced by Trippas et al. (2020). Grounding is defined as discourse for the creation of mutual knowledge and beliefs. The authors demonstrated grounding actions in spoken conversational search data, such as users providing indirect feedback by reciting their interpretation of the results. This grounding process can potentially enable CIS systems to better understand a user's awareness of the results or information space.

As mentioned earlier, mixed-initiative interaction can also be used to give feedback to the users. As an emerging application, users may not directly know how to use the system effectively. Hence, the system can take advantage of this opportunity to educate users on the system's capabilities. Educating users on interacting with CIS systems has been relatively unexplored.

6.5 Modeling Mixed-Initiative Strategies

The CIS system needs to decide which action to take at each timestep, and mixed-initiative interactions significantly increase the number of options, resulting in a complex decision-making problem. Thus, formulating, modeling, measuring, and simulating mixed-initiative information seeking conversations is quite important. Aliannejadi et al. (2021a) proposed a user model for mixed-initiative conversational search that consists of three major phases: querying, feedback (i.e., mixed-initiative), and browsing (i.e., assessing search results). This user model is shown in Figure 6.2.

[Figure 6.2: A user model of mixed-initiative conversational search proposed by Aliannejadi et al. (2021a), composed of three sub-components: the Querying, Browsing, and Feedback Models. Diamonds represent user decision points, while circles represent the action/turn taken.]

Based on this user model, they considered two extreme cases: (1) Feedback First, where the system first asks several feedback questions (e.g., clarifications) once the user submits the query and then presents the results, and (2) Feedback After, where the results are shown first and unsatisfied users can then provide feedback to refine the search results. To measure each conversation, they rely on the gain-to-cost ratio, where gain is defined by the relevant documents assessed by the user and cost is defined by the time the user spends on the conversation. Note that the definitions of gain and cost can easily be revised if needed. Through extensive simulations based on this gain-to-cost model, Aliannejadi et al. (2021a) provided guidelines for taking mixed-initiative actions in different situations, for example for patient and impatient users.
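The sketch below illustrates the bookkeeping behind such a simulation-based analysis: a crude user model assigns a time cost to querying, giving feedback, and assessing results, and the two policies are compared by their expected gain-to-cost ratio. All parameters are hypothetical and are not the settings used by Aliannejadi et al. (2021a).

```python
# A toy simulation of the gain-to-cost analysis sketched above: compare a
# "Feedback First" policy (ask clarifying questions before showing results) with a
# "Feedback After" policy under a crude user model. All parameters (times, relevance
# probabilities) are hypothetical.
import random

QUERY_TIME, FEEDBACK_TIME, ASSESS_TIME = 10.0, 8.0, 25.0   # seconds per action

def simulate(policy, n_feedback=2, n_assessed=5, runs=10_000):
    total_gain, total_cost = 0.0, 0.0
    for _ in range(runs):
        cost = QUERY_TIME
        p_relevant = 0.3                      # baseline precision of the result list
        if policy == "feedback_first":
            cost += n_feedback * FEEDBACK_TIME
            p_relevant = 0.6                  # clarification sharpens the ranking
        gain = 0.0
        for _ in range(n_assessed):           # user browses a few results
            cost += ASSESS_TIME
            gain += 1.0 if random.random() < p_relevant else 0.0
        if policy == "feedback_after" and gain == 0:
            # Unsatisfied user gives feedback, then browses a refined result list.
            cost += n_feedback * FEEDBACK_TIME + n_assessed * ASSESS_TIME
            gain = sum(1.0 for _ in range(n_assessed) if random.random() < 0.6)
        total_gain += gain
        total_cost += cost
    return total_gain / total_cost            # expected gain per second of user effort

for policy in ("feedback_first", "feedback_after"):
    print(policy, round(simulate(policy), 4))
```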
Such modeling is later extended by proposing an economic model of conversation search (Azzopardi et al., 2022). This theoretical framework for conversational search can provide insights to guide and inform the development of conversational search agents. Draft Version 1.2 \f6.6. Summary 115 6.6 Summary In this section, we discussed the opportunities and challenges that mixed-initiative interactions bring to CIS systems. We drew connections with mixed-initiative user interfaces and mixed-initiative interactions in dialogue systems. We discussed system-initiative CIS and reviewed di\ufb00erent purposes for conversation initiation. We also provided an overview of clari\ufb01cation in CIS systems and how a clarifying question can be generated or selected to identify the user\u2019s intent. We brie\ufb02y reviewed preference elicitation and demonstrated its connections with intent clari\ufb01cation. We \ufb01nished by showing how systems can get feedback from and give feedback to the users through mixed-initiative interactions. Overall, understanding mixed-initiative interactions and initiating conversations have been identi\ufb01ed as a key part of CIS research. Clari\ufb01cation, as a form of mixed-initiative interaction, has been studied quite extensively. However, other forms of mixed-initiative interactions require further signi\ufb01cant e\ufb00orts. Evaluating mixed-initiative CIS systems is another under-explored yet important research area. Draft Version 1.2 \f7 Evaluating CIS Systems Evaluation of conversational information seeking systems continues to be a rapidly evolving research area due to unique challenges of assessing the quality of conversations, and the parallel di\ufb03culty in creating benchmark datasets. In contrast to non-conversational information seeking settings, the multi-turn nature of conversations requires evaluations to model longterm state, and consider what information is conveyed, when the information is conveyed, as well as how this communication happens. All these are dependent on why a user is engaging in a conversational interaction in the \ufb01rst place (as opposed to non-conversational alternatives). The same conversation may be considered of high or of low quality depending on context: For example, if a user is in a rush or not, or if the user requires high con\ufb01dence in the conclusion or not. 7.1 Categorizing Evaluation Approaches There are a number of ways that CIS evaluation may be presented. We structure Section 7 by evaluation modality: O\ufb04ine or online evaluation, and sub-types of these modalities. However, evaluation approaches can be broken down in other ways 116 Draft Version 1.2 \f7.1. Categorizing Evaluation Approaches 117 (see Chapter 4.2 of (Anand et al., 2020)). We summarize some here as researchers in CIS may \ufb01nd some of the speci\ufb01c micro-evaluation or user-centric questions particularly pertinent to the research questions being asked in a given setting. For example, individual components of conversations can be evaluated at a micro-level, leading to a catalogue of micro-evaluation techniques including How well does the system predict the dialogue act of a given utterance? How well does the system predict the user\u2019s goals and sub-goals? Can the system identify terms in statements to \ufb01ll slots in a structured search query? How well does the system select responses from a set of candidates? How well does the system answer individual questions? 
As we shall see, such an evaluation approach has the benefit that these questions lend themselves well to traditional information retrieval evaluation approaches. A major drawback, however, is that high performance on micro-level metrics does not necessarily translate into a CIS system being effective at satisfying users' needs.

An alternative is to break down evaluation approaches in a user-centric manner: Does the user trust the system? What is the cognitive load of interactions? How fluent and efficient is the system in communication in general? Within the context of a particular information need, one can seek metrics to evaluate based on properties such as: Is the user satisfied with the outcome of the conversation? How much effort and/or time was required to satisfy the information need? Is the information need ultimately resolved? Was the user frustrated in the process? For such metrics, subjectivity is a common concern. Additionally, while such evaluation does assess the overall quality of a CIS system, such metrics are particularly difficult to optimize.

Table 7.1: Examples of notable datasets for various conversational information seeking tasks: conversational passage retrieval (ConvPR), conversational question answering (ConvQA), clarification in conversational search, and conversational recommendation (ConvRec). There are many other datasets that are not listed here.
Dataset | Domain | Task | Construction | Scale
CAsT 2019-2022 (Dalton et al., 2019) | open domain | ConvPR | questions written by organizers & passage pooling | 100+ conversations
CoQA (Reddy et al., 2019) | seven domains | ConvQA | wizard-of-oz | 1K+ conversations
QuAC (Choi et al., 2018) | people | ConvQA | wizard-of-oz | 10K+ conversations
MISC (Thomas et al., 2017) | open domain | CIS | spoken human conversations | 10+ conversations
Qulac (Aliannejadi et al., 2019) | open domain | CIS clarification | crowdsourcing | 10K+ clarifications
MIMICS (Zamani et al., 2020b) | open domain | CIS clarification | search logs | 100K+ clarifications
ReDial (Li et al., 2018) | movies | ConvRec | wizard-of-oz | 10K+ conversations

7.2 Offline Evaluation

As a staple of information retrieval evaluation, offline evaluation permits reproducible evaluations that can reliably compare different systems. We start with a discussion of some of the existing datasets commonly used to evaluate CIS systems. Following a summary of each category of dataset, we present open challenges with respect to offline evaluation of CIS tasks.

7.2.1 Conversational Datasets

Conversational datasets are transcripts of actual conversations that have occurred between two or more parties, either as part of natural information seeking or through a role-play conversation exercise. Table 7.1 reports a few notable CIS datasets. We begin by observing that some conversational datasets are synchronous (e.g., (Budzianowski et al., 2018)), while others are asynchronous (such as datasets derived from Reddit (Henderson et al., 2019)). Although, in principle, the content of these can be similar, subtle timing effects can lead to meaningful practical differences.
For instance, asynchronous conversations may contain fewer dis\ufb02uencies and unintentional errors as participants take time to consider their utterances (Serban et al., 2018). Asynchronicity also makes it possible to carry out time-consuming tasks such as consulting external sources between conversational turns. Of particular importance to studies of mixed initiative, the role of initiative and conversational turn taking is very di\ufb00erent in synchronous and asynchronous conversations (Gibson, 2009; Boye et al., 2000). An example of a widely used conversational dataset is Multi-WOZ (Budzianowski et al., 2018). Consisting of synchronous naturalistic taskoriented dialogues designed to simulate a possible conversation between a tourist and information agent, it focuses on recommendation tasks with well-de\ufb01ned slots and values. To create these, one person is presented with search criteria, while a second (\u201cwizard\u201d) has access to a search system that allows them to identify recommendations that satisfy the \u201cuser\u2019s\u201d constraints. However, by presenting such speci\ufb01c Draft Version 1.2 \f120 Evaluating CIS Systems requirements that perfectly match the wizard\u2019s known \ufb01elds, it may be argued that the conversations can be somewhat unnatural. The TaskMaster dataset (Byrne et al., 2019) generalizes on the Multi-WOZ idea, with dialogues around making orders and setting up appointments, such as ordering a pizza or creating an auto repair appointment. In addition to synchronous wizard-of-oz dialogues similar to those from Multi-WOZ, the authors also include asynchronous self-dialogues where a single person types both sides of a conversation, focusing on given needs. To make the conversations more natural, the authors also instructed raters to intentionally include understanding errors and other types of dialogue glitches, with some conversations created to be intentionally unsuccessful. This type of dataset is predominantly used for the evaluation of slot-\ufb01lling algorithms. As an alternative to task-oriented dialogues, Radlinski et al. (2019) presented Coached Conversational Preference Elicitation, intending to obtain realistic synchronous dialogues by instructing a \u201cwizard\u201d to simply motivate a \u201cuser\u201d to describe their preferences, without setting a detailed goal for either. Another category of conversational datasets is used for conversational question answering (Iyyer et al., 2017; Choi et al., 2018; Reddy et al., 2019) or TREC CAsT Track (Dalton et al., 2019; Dalton et al., 2020a). Here the major challenge addressed is co-reference resolution, evaluating the systems ability to answer questions in sequence, particularly when a given question may refer to earlier questions or their answers (for example, \u201cWho won the superbowl?\u201d followed by \u201cWho is their quarterback?\u201d). Such dialogues can be sampled from search engine interactions, known answers, or manually constructed. Two more types of conversational datasets are commonly used in developing CIS systems. Asynchronous discussions on a given topic, often from the Reddit forum (for example, (Henderson et al., 2019; Qu et al., 2018; Qu et al., 2019a)), are often used to model openended conversations. As a massive corpus of free-form dialogues, these exchanges can be used to train and evaluate conversational agents with a goal of responding reasonably to any utterance on any topic without an assumption of a particular task. 
Of course, it is important to note in the context of web forums that careful attention must be paid to the representativeness of the authors of the corpus being used. For instance, Draft Version 1.2 \f7.2. O\ufb04ine Evaluation 121 training or evaluating CIS systems based on a forum with a particular type of contributor may lead to bias in a CIS system evaluation, and may lead to undesirable conversational behaviors being learned if they mirror the behavior of the authors who contributed to that forum. For instance, language ranging from microaggressions to insults or worse is often observed (Bagga et al., 2021). For this reason, the use of massive web corpora must be done with care. Other formus, like Slack, can similarly be used (Sabei et al., 2022) to observe asynchronous communication. To obtain open-ended synchronous conversations with higher quality than may be expected in an open forum, transcripts of movie and television dialogues are frequently used (M\u00fcller and Volk, 2013; Henderson et al., 2019). There are numerous challenges in creating and using conversational datasets for o\ufb04ine evaluation. One of the key challenges is that the motivation of the participants can greatly in\ufb02uence the dialogues observed. In a wizard-of-oz setting, if the wizard is provided with a particular interface to obtain answers for user requests, this is likely to in\ufb02uence their utterances (Radlinski et al., 2019). If the user is given detailed instructions, especially if these do not align with the person\u2019s actual interests, this again can result in unnatural dialogue (Serban et al., 2018). If several wizard-of-oz datasets are used together for evaluation, they may uncover slight di\ufb00erences in the study setup impacting the conversations (Trippas and Thomas, 2019). Moreover, if users are asked to complete prede\ufb01ned tasks, there is a risk that they do not approach these tasks as someone who actually wants to perform that task (Serban et al., 2018). For example, suppose a user is tasked with purchasing something under a given price. A real user may exhibit certain \ufb02exibility regarding the price, or may ask questions relating to value for money, rather than solely around price \u2013 and examples of realistic behavior around pricing may end up missing from the collected corpus. A second major challenge in evaluation with o\ufb04ine datasets lies in how the datasets are interpreted. Where dialogues are taken to contain correct responses in a particular context, they can su\ufb00er from false negatives: A perfectly capable system may be judged to perform poorly when it is simply performing the task di\ufb00erently (Finch and Choi, 2020; Zhang and Balog, 2020; Sekuli\u0107 et al., 2022). Draft Version 1.2 \f122 Evaluating CIS Systems 7.2.2 Single-Step Datasets As a step towards fully conversational systems, a number of challenges have been proposed to address the necessary sub-tasks. Here we refer to them as single-step datasets, as the focus is on a single step within the many that a conversational system must perform. We note that they do not focus on single dialogue turns (as is the case with Conversational QA datasets), but even more fundamental steps of information processing. One recent example is generating the natural text from structured information to describe a particular search result, as the conversational equivalent of search snippet generation (Turpin et al., 2007). 
For instance, suppose a conversational agent needs to explain a speci\ufb01c restaurant to a user, showing how it satis\ufb01es their request. The agent may possess rich structured information about the restaurant \u2013 its name, address, the type of food o\ufb00ered, pricing information, and other key attributes. However, just presenting these facets of information to the user may not be suitable. The End-to-End NLG Challenge (Du\u0161ek et al., 2018) produced a dataset mapping a set of attributes to natural language descriptions, allowing a challenge for generating text from structured information \u2013 a critical single step of many CIS systems. A second example where single-step datasets are used is for applications where generating full text is unnecessary. This common task treats conversational information seeking as the ranking of possible (existing) responses that an agent could give at a particular time. For instance, Yang et al. (2018a) described datasets derived from transcripts of past technical support dialogues: They assume that for any given user utterance, the system should select from previous agent utterances (as most technical support problems are not novel). Such specialized single-step datasets will address this single-turn ranking problem. As a third example, when an agent asks a question, it must be able to interpret the user\u2019s answers. Taking the seemingly simple case of yes/no questions, a user may answer indirectly. For instance, if an agent asks if a user would be interested in an evening activity, the user may say \u201cI\u2019d prefer to go to bed\u201d rather than simply \u201cno\u201d. The Circa dataset (Louis et al., 2020) was developed to contain natural questions and answers to train and evaluate reliable answer interpretation by CIS Draft Version 1.2 \f7.2. O\ufb04ine Evaluation 123 systems. The approach used multiple phases of crowdworker tasks \ufb01rst to develop natural questions and then, in turn, natural answers while attempting to minimize bias and maximize the diversity and naturalness of answers. 7.2.3 Simulated Users A recent alternative to static conversational datasets is relying on simulators (Ie et al., 2019; Aliannejadi et al., 2021a; Salle et al., 2021; Erbacher et al., 2022). For instance, Zhang and Balog (2020) argued that a simulator \u201cshould enable to compute an automatic assessment of the agent such that it is predictive of its performance with real users\u201d. In this way, rather than evaluating with a \ufb01xed dataset, an agent could be assessed dynamically against a (\ufb01xed) simulator to obtain the bene\ufb01ts of e\ufb00ective o\ufb04ine evaluation. As another example, Sekuli\u0107 et al. (2022) develop a simulator capable of answering clarifying questions posed by a CIS system. Both these recent works showed a high correlation between simulation-based evaluation and an online evaluation approach. Simulation also addresses challenges in \ufb01xed datasets, particularly relating to user privacy (Slokom, 2018; Hawking et al., 2020). Although long studied in information seeking in general, this is a relatively new methodology in the context of CIS. As such, it has been the subject of two recent workshops (Balog et al., 2022; Ekstrand et al., 2021). These identi\ufb01ed particular open challenges: Developing increasingly realistic user simulators, and making simulators easier to share. 
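To make the simulation idea concrete, the following is a minimal, self-contained Python sketch of the kind of loop a simulation-based offline evaluation runs: a simulated user holds a hidden preference profile, answers the agent's clarifying questions from that profile, and the agent is scored by whether its final recommendation matches the user's hidden target. All names here (SimulatedUser, OneQuestionAgent, the attribute keys) are illustrative inventions for this sketch, not the trained simulators cited above.

```python
from dataclasses import dataclass

@dataclass
class SimulatedUser:
    """Toy simulated user with a hidden preference profile."""
    preferences: dict   # e.g., {"cuisine": "thai", "target": "Thai Palace"}
    fallback: str = "I don't mind."

    def answer(self, clarifying_question: str) -> str:
        # Answer with the preferred value of the first attribute the question mentions.
        question = clarifying_question.lower()
        for attribute, value in self.preferences.items():
            if attribute in question:
                return value
        return self.fallback

class OneQuestionAgent:
    """Trivial agent: asks a single clarifying question, then recommends."""
    def __init__(self, catalogue: dict):
        self.catalogue = catalogue  # maps an answer (e.g., a cuisine) to an item

    def converse(self, user: SimulatedUser) -> str:
        reply = user.answer("Which cuisine are you looking for?")
        return self.catalogue.get(reply, next(iter(self.catalogue.values())))

def evaluate_agent(agent, users) -> float:
    """Fraction of simulated users whose hidden target the agent recommends."""
    hits = sum(agent.converse(u) == u.preferences.get("target") for u in users)
    return hits / len(users)

catalogue = {"thai": "Thai Palace", "italian": "Casa Mia"}
users = [SimulatedUser({"cuisine": "thai", "target": "Thai Palace"}),
         SimulatedUser({"cuisine": "italian", "target": "Casa Mia"})]
print(evaluate_agent(OneQuestionAgent(catalogue), users))  # 1.0 in this toy setup
```

A realistic simulator would of course generate natural language and model noise, patience, and reformulation; the point of the sketch is only the evaluation loop in which a fixed simulator stands in for real users.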
It was observed that one particularly pertinent open question is "how realistic simulators can be, or indeed should be", noting that simulations need only correlate well with other approaches (Balog et al., 2022). For instance, Zhang et al. (2022) considered how to design simulators that reformulate their utterances when a conversational agent fails to understand them, similarly to how humans do. As such, the general problem of evaluating and validating simulators is itself an open area, one that must be addressed to ensure that simulation-based evaluation is valid. 7.2.4 Datasets Beyond the Text Several authors have considered evaluating CIS tasks beyond simply the text of interactions between a user and a CIS system. Typically this involves additional annotation of the conversational dialogue to indicate relevant aspects, although it can also involve other content modalities in addition to the conversation. One example is the annotation of the high-level role of individual utterances. This may be at the level of sentences within a conversation (annotated as to whether they are asking a question, sharing an opinion, thanking, and so forth) (Yu and Yu, 2021), or may be at the level of the high-level structure of conversations, as in the case of sub-goal or subtask prediction. Alternatively, user-centric metrics may be annotated, such as indicators of customer frustration at specific points in customer service conversations (Oraby et al., 2017). Note that these evaluation annotations are in contrast to, and complementary to, datasets which have been annotated to investigate how interactions between the user and CIS system are structured (Vakulenko et al., 2021; Trippas et al., 2020). A key challenge in such datasets is ensuring that the (indirect) labels produced by raters agree with the (direct) opinion of actual participants. Promisingly, Fu et al. (2022) recently studied this question and found that it is possible to collect labels where there is a fair agreement between direct and indirect assessments, at least in terms of user satisfaction. A fundamentally different type of CIS dataset involves multiple modalities. The conversation may include text, images, or gestures to illustrate the user's need in a recommendation setting (Nie et al., 2019; Deldjoo et al., 2021), or even include navigation within a virtual or physical environment as part of the conversational task (Ku et al., 2020). 7.3 Online Evaluation In contrast to offline evaluation, CIS systems may also be evaluated online: deploying a system that real users interact with, dynamically obtaining user utterances and the system's responses. Online evaluation allows systems to be evaluated much more robustly, as the consequences of earlier system actions can be seen in how users respond, which in turn determines what options the system has and how these are handled. In this way, online evaluations are much more predictive of real-world system performance, and are more likely to identify limitations in current solutions. Online evaluation can be done in one of two ways: (1) a lab or crowdsourcing study, or (2) a real-world study.
7.3.1 Lab or Crowdsourced Studies It is often desirable to evaluate components of a system that is not end-to-end complete (such as when developing speci\ufb01c aspects of a CIS system), or where it is necessary to control certain conditions (such as when performance for speci\ufb01c use cases is of particular interest). In this situation, paid users or volunteers are often employed. For instance, Christakopoulou et al. (2016) studied di\ufb00erent approaches for eliciting user preferences in a restaurant recommendation setting. As the authors\u2019 goal was to assess how well di\ufb00erent ways of asking questions e\ufb03ciently established users\u2019 interests, the authors chose to perform a lab study. Participants were presented with preference questions that a conversational system might want to ask. The results were used to inform algorithms for learning about users interests. This type of evaluation was appropriate as the information could not be collected through an o\ufb04ine corpus (as rating data in o\ufb04ine studies is usually incomplete), nor in a real-world system (as preference elicitation studied here is but one part of the overall CIS recommendation challenge). Similarly, Aliannejadi et al. (2019) introduced a crowdsourced approach for evaluating clari\ufb01cation question selection. They started with a variety of queries, crowdsourced a collection of possible clarifying questions, then collected possible answers to these questions. Despite simplifying assumptions, the approach allowed a clarifying question selection model to be evaluated based on the retrieval performance, Draft Version 1.2 \f126 Evaluating CIS Systems giving possible answers to the system\u2019s potential questions. For the same task, Zamani et al. (2020b) provided guidelines for manual annotation of clarifying questions and their candidate answers based on their \ufb02uency, grammar, usefulness for clari\ufb01cation, comprehensiveness, coverage, understandability, diversity, coherency, and so forth. Evaluating a di\ufb00erent aspect of CIS behavior, Balog and Radlinski (2020) studied the role of explanations in recommendation tasks. As one may expect explanations of results presented to be part of CIS, the authors focused on assessing what constitutes a valuable explanation. Using a non-conversational approach, crowdworkers were \ufb01rst asked to express their preferences in a given domain. They were then presented with recommendations along with explanations. These explanations were assessed using a focused questionnaire addressing di\ufb00erent reactions the participants may have to the explanations. As another example, Jiang et al. (2015) recruited participants to complete speci\ufb01c tasks using a well established CIS agent, including speci\ufb01c search tasks. After the tasks were complete, participants were asked to answer speci\ufb01c questions about their experiences. Based on the answers to these questions and a record of the participants\u2019 interactions with the CIS system, the authors developed an automated approach for predicting satisfaction and natural language understanding. As these examples show, controlled studies can allow investigation of the performance of particular aspects of CIS. A detailed treatment of designing user studies for interactive IR systems is presented by Kelly (2009). 7.3.2 Real-World Studies When a CIS system is complete and a fully realistic evaluation of the users\u2019 overall experience is desired, a real-world study is the gold standard. 
This involves observing actual natural interactions between users and the CIS system, particularly with users motivated by relevant information needs. The key difference between such studies and the lab or crowdsourced studies described in Section 7.3.1 above is that of motivation. Specifically, in real-world studies the user comes with their own rich needs (which may or may not be clear to the user from the start), and they may be satisfied or dissatisfied with any aspect of a CIS system. They may choose to engage with a system, or simply leave if some aspect of performance is poor, or perhaps just become distracted by something outside the system designers' control. Given sufficient scale, the conclusions of such an evaluation are most likely to generalize to other users with other needs and in other contexts. The key consideration is that while, on one hand, users bringing their own information needs leads to more realistic interactions, on the other hand such an evaluation depends on actual interactions, with only limited feedback usually available. As an example of such a study, Park et al. (2020) presented a study of a commercial CIS agent where, for some intents (such as asking for weather), the agent asked for user feedback. In particular, the agent asked users "Did I answer your question?". Responses to this question were used to assess the quality of the end-to-end CIS system. A similar approach is used in the Alexa Prize Challenge (Ram et al., 2018). Here, real users may request to interact with a conversational system. At the end of the interaction, the user is asked to rate their experience. Such online evaluation can assess the quality of the conversational abilities of the system according to predetermined criteria (here, user-declared satisfaction and level of engagement based on time spent). 7.4 Metrics Having considered evaluation approaches, here we briefly discuss an essential aspect of CIS evaluation separately, namely that of metrics. While a complete treatment of metrics suitable for conversational information seeking is beyond our scope, we provide a high-level overview of the metric types used in different cases, and some of the considerations that are required when determining the right ones. We refer the reader to Liu et al. (2021a) for a more extended treatment of conversational systems' single-turn and multi-turn metrics. 7.4.1 Metrics for Individual Steps At individual steps, it is possible to evaluate whether the system understood a user's utterance, whether the search system respected a constraint, or whether a system utterance was fluent, among other things. Most often, these aspects can be measured with metrics that can be computed offline. As an example, we take conversational question answering (ConvQA), discussed in depth in Section 5.1. Often used to assess clarification approaches in the NLP community (Rao and Daumé III, 2019), common metrics include BLEU (Papineni et al., 2002), ROUGE (Lin, 2004), and METEOR (Banerjee and Lavie, 2005). At a high level, these metrics measure the similarity between a given string and reference strings. While effective for some applications, these metrics do not correlate highly with user satisfaction in conversational systems (Liu et al., 2016).
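At their core, these string-overlap metrics count shared n-grams between a system utterance and one or more references. The snippet below is a minimal Python sketch of a unigram-overlap F1 score in that spirit; it is deliberately simplified and is not the exact BLEU, ROUGE, or METEOR definition (those add higher-order n-grams, brevity penalties, stemming, and synonym matching), so in practice an established implementation should be used.

```python
from collections import Counter

def unigram_f1(candidate: str, reference: str) -> float:
    """Simplified token-overlap F1 between a system utterance and a reference."""
    cand = candidate.lower().split()
    ref = reference.lower().split()
    if not cand or not ref:
        return 0.0
    # Clipped overlap: each reference token is matched at most as often as it occurs.
    overlap = sum((Counter(cand) & Counter(ref)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(cand)
    recall = overlap / len(ref)
    return 2 * precision * recall / (precision + recall)

# An utterance that conveys the right information with different wording scores
# only moderately, illustrating why surface overlap can disagree with user satisfaction.
print(unigram_f1("sure, the game starts at 8 pm tonight", "it kicks off at 8 pm"))
```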
More recently, machine learned metrics have achieved significantly higher correlation with manual human ratings for such language tasks (Ma et al., 2018; Sellam et al., 2020). When assessing the relevance of recommendations that terminate a conversational exchange, classic information retrieval metrics are used (Croft et al., 2010). For instance, Normalized Discounted Cumulative Gain (nDCG), Mean Reciprocal Rank (MRR), and Precision are often used to assess whether recommended items match the information needs of users given a particular user representation, e.g., (Christakopoulou et al., 2016), whether a system is able to rank possible clarifying questions, e.g., (Aliannejadi et al., 2019), or whether a system accurately provides answers to specific requests, e.g., (Christmann et al., 2019). As with language metrics, such metrics do not necessarily agree with the user experience of an end-to-end system (Jiang and Allan, 2016). As an example of more nuanced refinements of relevance in a conversational setting, consider work by Rashkin et al. (2021). Here, the authors propose a metric that assesses whether a CIS system only presents verifiable information, rather than hallucinated or factually unverifiable information. 7.4.2 Metrics for End-To-End Evaluation An essential characteristic of conversational information seeking systems is the multi-turn nature of conversations. As such, it is vital that evaluation considers an end-to-end interaction. For example, consider a catastrophic failure in the middle of a long conversation, where an agent may lose the state after a user has provided significant information to a CIS system. Kiseleva et al. (2016) showed how one failure in a more extended conversation often leads to dissatisfaction. This can happen even if the vast majority of individual conversational steps are successful. The richness of conversational interactions thus means that CIS systems can be assessed along many different dimensions. Trivially, one may consider whether users were successful at their task (Chuklin et al., 2018; Dubiel et al., 2018) or achieved success quickly (Thomas et al., 2018; Trippas et al., 2017). Despite this, a shorter time to success is not necessarily sufficient. For instance, in a non-conversational recommendation setting, Schnabel et al. (2016) showed that more successful recommendations may be obtained using systems that require more prolonged user interactions, leading to overall higher user satisfaction. In conversational settings, a system may trade off long-term and short-term utility (Radlinski and Craswell, 2017). It is important to note that it is also possible to succeed while leaving users frustrated, as studied by Feild et al. (2010). A particular end-to-end evaluation approach was recently presented by Lipani et al. (2021), based on the flow of different subtopics within a conversation. Two other classes of metrics are critical to consider. The first is trust between the user and a CIS system. For example, Daronnat et al. (2020) studied how trust affects users' satisfaction. Trust usually requires factuality. It has been noted that some modern neural conversational systems can produce utterances that are false (often termed hallucinations). A detailed treatment of hallucination, and references to further work, can be found in Shuster et al. (2021). Trust may also be affected by explanations being incorporated into CIS systems.
Explainable AI is, in general, an extremely active and critical area of study (Adadi and Berrada, 2018). In a conversational recommendation setting, explanations have recently received attention as well; for example, see (Chen et al., 2021b; Balog and Radlinski, 2020). The second critical concept to consider in CIS systems is that of fairness. While often not treated as a key metric of effectiveness, many researchers have recognized this as a principal desirable aspect of AI systems in general and recommendation systems in particular. A CIS system that provides recommendations in the course of a conversation, for instance, may aim to do so in a fair manner. Thus, biases that may be present within the conversational system warrant careful consideration. We refer interested readers to Beutel et al. (2019) and Ge et al. (2021) and their citations for definitions, approaches, and relevant metrics. 7.5 Summary This section presented an overview of key concepts in the evaluation of conversational information seeking systems. We provided an overview of offline as well as online evaluation techniques, discussing common methodologies in both cases. Benefits and drawbacks of the two high-level approaches were discussed. Finally, we provided an overview of common metrics used to evaluate CIS systems, as well as references to broader topics that should be considered when measuring the performance of CIS systems, such as trust and fairness. 8 Conclusions and Open Research Directions This survey aimed to provide an overview of conversational information seeking (CIS) research, summarizing current research and presenting an introduction to researchers new to this area. We addressed CIS from both a user- and system-centred approach, aiming not to single out one view but to provide a holistic overview. CIS could be naively approached as a straightforward pipeline of components such as user input (e.g., automatic speech recognition), which transcribes the spoken query as input, information retrieval, which identifies and retrieves the items relevant to the query, or information visualization, which summarizes and presents the found information to the user. However, many more components are needed to make CIS truly useful in solving diverse information needs, including features that can capture and utilize interaction and preference history, adapt result presentations to the user's need or context, track the conversation flow in long-term representations, and interact with external systems. Indeed, we argue that the interconnectedness of all the CIS building blocks makes them intrinsically interrelated, meaning they should be investigated beyond the sum of the parts. Furthermore, we show that CIS is more than system evaluation, and retrieval effectiveness requires a broad range of techniques. CIS is a new interaction paradigm beyond the basic query-response approach. This means that existing knowledge and assumptions of traditional IR should be challenged, reviewed, and expanded. Furthermore, CIS research aims to investigate and develop systems that users use and perceive as genuinely helpful, which means taking actions as well as returning information. The more users interact with CIS systems across diverse tasks and contexts, the more the use cases and types of support the systems can provide will evolve and advance.
As such, creating more usable CIS systems will help users adopt and adapt conversational and interactive methods to search for and utilize information. Current research often makes simplifying assumptions about user interactions and system capabilities. Given these assumptions, this monograph showed that large-scale pre-trained language models have many applications in developing di\ufb00erent parts of CIS systems that deal with natural language, e.g., conversational search, question answering, preference elicitation, and clari\ufb01cation. However, deciding about the interaction type, modality, initiative, explanation, etc. involves many components that must work cooperatively with such models for optimal understanding and generation. We provided an overview of evaluation methodologies for CIS research. Due to the interactive nature of CIS systems, developing reusable datasets and user simulation strategies for model training and o\ufb04ine evaluation is incredibly important and challenging. Again, most existing benchmarks and evaluation methodologies make numerous simpli\ufb01cations to the CIS tasks. Currently, online evaluation and collecting human annotations are the most robust and reliable approaches for evaluating CIS systems, although simulation is also gaining popularity. It can be challenging to negotiate all the di\ufb00erent components of CIS, being ethical and rigorous in the research while maintaining a vision of an information system that does not hinder access to information. We hope that the overview of the broad range of research topics within CIS re\ufb02ects the various research disciplines that should be part of the conversation studying and developing CIS. Draft Version 1.2 \f8.1. Open Research Directions 133 8.1 Open Research Directions Many open questions and directions of research have been mentioned throughout this monograph. In this section, we bring many of them together with the aim of providing a uni\ufb01ed starting point for researchers and graduate students currently investigating conversational information seeking. While not intended to be exhaustive, we believe these critical areas for future work are particularly likely to have a profound impact on the \ufb01eld. The content of this section can be seen as complementary to directions suggested by the recent Dagstuhl Seminar on Conversational Search (Anand et al., 2020) and the SWIRL 2018 report (Culpepper et al., 2018). Although some of these topics could be grouped under multiple headings, we divide this section into four main topics, (1) modeling and producing conversational interactions, which covers the foundation of conversational systems to understand and produce user-system interactions and the information transfer between them, (2) result presentation with di\ufb00erent interaction modality and devices, (3) types of conversational tasks that are mostly under-explored, and (4) measuring interaction success and evaluation, focusing on interactivity, ethics and privacy in conversational systems, and lastly, looking at evaluation as a more extensive and broader topic than measuring success. 8.1.1 Modeling and Producing Conversational Interactions Interactivity, the process of two or more agents (human or machine) working together, is a crucial characteristic of information seeking. Modeling interactions and deciding the following action or interaction is at the core of CIS research. 
In this context, although much research has been devoted recently to mixed-initiative interactions, most mixed-initiative strategies have not been fully explored. In fact, our understanding of when a system can take the initiative without disrupting the \ufb02ow of natural information seeking conversation needs signi\ufb01cant further exploration. We believe that systems should more accurately identify opportune moments to initiate the conversation, introduce new topics, or support disambiguation. Similarly, the ability for systems Draft Version 1.2 \f134 Conclusions and Open Research Directions to model uncertainty in user needs (including due to the ambiguity of language) requires further study to e\ufb00ectively and e\ufb03ciently clarify needs. We argue that supporting all these interactions will enhance the user experience, enable improved information seeking interactions, and thus positively impact this collaborative process. Natural language understanding, to understand the input from the user (e.g., queries or feedback) needs to be further optimized. This includes the ability of the system to understand complex ideas and concepts from a user\u2019s utterance. Furthermore, understanding short, incomplete, or ambiguous queries is still challenging for existing systems. On top of the aforementioned open research directions for interactions, long-term conversational interactions may need specialized attention. In general, when investigating CIS, it is often assumed that the user is interacting with the system only at the time of information need. However, supporting users in long-term information needs, be it multi-session tasks or the ability for a conversation to be continued and repeated much later, need further research. This implies that the history and memory of conversations may be stored and used in future user-system interactions. Thus, further research needs to be done on how users want this memory to work, including privacy and transparency of what is stored and how the system retrieves and identi\ufb01es relevant past interactions responsibly. 8.1.2 Result Presentation Presenting results that the user can incorporate into their personal \u201cknowledge space\u201d, and how the user interacts with them, can be seen as part of a broader challenge of information transfer. Result presentation has not received commensurate attention in the CIS research community relative to its impact. This includes what information needs to be presented and how. For example, how can result presentations be optimized with personalization? Can CIS systems use the user\u2019s context (e.g., user\u2019s location or search history)? Can particular summarization or visualization techniques present results in a concise and easy-to-understand manner? Furthermore, with the increased interest in multi-modal and Draft Version 1.2 \f8.1. Open Research Directions 135 cross-device CIS, further research on when, how, and on which device users want to receive information is crucial. Questions such as how CIS systems can/should use sensor data to optimize result presentation is an open problem (e.g., if a user is close to a screen, instead of using a smart speaker, should the information be presented visually?). As part of result presentation, further research on interactions between multiple devices will be pivotal. Thus, research on including more user context to predict how users will interact with the available devices is warranted. 
8.1.3 Types of Conversational Information Seeking Tasks Many users will have di\ufb00erent reasons for why they engage in CIS tasks, with these varying based on the time, context and social situation of their information need. Supporting each user\u2019s goals means recognizing these di\ufb00erences. For instance, users interacting with a CIS may choose this search mode to seek advice, look for a detailed summary of a complex topic, or verify a fact. Developing CIS systems that can integrate di\ufb00erent kinds of information seeking tasks and produce a humanlike dialogue needs to be better understood. Further, di\ufb00erent scenarios or settings may require distinct forms of interaction. For instance, searching for information in enterprise settings contrasts with \u201ceveryday\u201d search. Conversations may also be structured di\ufb00erently, depending, for instance, on the number of actors in the CIS process, thus making collaborative CIS an essential topic for further exploration. There are particular challenges for domain-speci\ufb01c CIS systems. Imagine research for a medical-speci\ufb01c system, it may be hard to \ufb01nd researchers with expertise in the particular medical domain and CIS. From a system point of view, it may be challenging to obtain datasets or resources within the medical domain to train and evaluate the CIS systems, this can be because there is hardly any data available or for ethical reasons. Consequently, the lack of data may hinder understanding the speci\ufb01c terminology or language and information seeking tasks. Furthermore, depending on who the end-user is (i.e., a medical professional or a layperson), the system may need to generate di\ufb00erent responses addressing di\ufb00erent levels of the domain-speci\ufb01c language. Draft Version 1.2 \f136" + }, + { + "url": "http://arxiv.org/abs/2006.10174v1", + "title": "MIMICS: A Large-Scale Data Collection for Search Clarification", + "abstract": "Search clarification has recently attracted much attention due to its\napplications in search engines. It has also been recognized as a major\ncomponent in conversational information seeking systems. Despite its\nimportance, the research community still feels the lack of a large-scale data\nfor studying different aspects of search clarification. In this paper, we\nintroduce MIMICS, a collection of search clarification datasets for real web\nsearch queries sampled from the Bing query logs. Each clarification in MIMICS\nis generated by a Bing production algorithm and consists of a clarifying\nquestion and up to five candidate answers. MIMICS contains three datasets: (1)\nMIMICS-Click includes over 400k unique queries, their associated clarification\npanes, and the corresponding aggregated user interaction signals (i.e.,\nclicks). (2) MIMICS-ClickExplore is an exploration data that includes\naggregated user interaction signals for over 60k unique queries, each with\nmultiple clarification panes. (3) MIMICS-Manual includes over 2k unique real\nsearch queries. Each query-clarification pair in this dataset has been manually\nlabeled by at least three trained annotators. 
It contains graded quality labels\nfor the clarifying question, the candidate answer set, and the landing result\npage for each candidate answer.\n MIMICS is publicly available for research purposes, thus enables researchers\nto study a number of tasks related to search clarification, including\nclarification generation and selection, user engagement prediction for\nclarification, click models for clarification, and analyzing user interactions\nwith search clarification.", + "authors": "Hamed Zamani, Gord Lueck, Everest Chen, Rodolfo Quispe, Flint Luu, Nick Craswell", + "published": "2020-06-17", + "updated": "2020-06-17", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.CL", + "cs.LG" + ], + "main_content": "INTRODUCTION Search clarification has recently been recognized as a useful feature for improving user experience in search engines, especially for ambiguous and faceted queries [29]. In addition, it has been identified as a necessary step towards developing mixed-initiative conversational search systems [22, 28]. The reason is that limited bandwidth interfaces used in many conversational systems, such 1MIMICS is available at https://github.com/microsoft/MIMICS. This work was done while Everest Chen was affiliated with Microsoft. Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org. Microsoft, July 2020, Technical Report \u00a9 2020 Association for Computing Machinery. ACM ISBN 978-x-xxxx-xxxx-x/YY/MM...$15.00 https://doi.org/10.1145/nnnnnnn.nnnnnnn Figure 1: An example of clarification pane in Bing. as speech-only and small-screen devices, make it difficult or even impossible for users to go through multiple documents in case of ambiguous or faceted queries. This has recently motivated researchers and practitioners to investigate possible approaches to clarify user information needs by asking a question [1, 29]. Despite the recent progress in search clarification, e.g., [1, 13, 29, 30], the community still feels the lack of a large-scale dataset for search clarification, which is necessary for speeding up the research progress in this domain. To address this issue, we introduce MIMICS,2 a data collection consisting of multiple datasets for search clarification. Each clarification in MIMICS consists of a clarifying question and up to five candidate answers. Figure 1 shows the interface used for clarification in Bing for constructing this data. The first dataset, called MIMICS-Click, includes over 400k unique search queries sampled from the Bing\u2019s query logs, each associated with a single clarification pane. The dataset also includes aggregated user interaction signals, such as the overall user engagement level and conditional clickthrough rate on individual candidate answers. The second dataset, called MIMICS-ClickExplore, contains over 64k queries, each with multiple clarification panes which are the result of multiple exploration and online randomization experiments. This dataset also includes the aggregated user interaction signals. 
The third dataset, on the other hand, is manually labeled by trained annotators. This dataset, which is called MIMICS-Manual, includes graded quality labels for clarifying question, candidate answer set, and the landing result page for each individual answer. The datasets created as part of MIMICS can be used for training and evaluating a variety of tasks related to search clarification, including generating/selecting clarifying questions and candidate answers, re-ranking candidate answers for clarification, click models for search clarification, user engagement prediction for search clarification, and analyzing user interactions with search clarification. This paper also suggests some evaluation methodologies and metrics for these tasks. 2MIMICS stands for the Microsoft\u2019s Mixed-Initiative Conversation Search Data. arXiv:2006.10174v1 [cs.IR] 17 Jun 2020 \f2 RELATED WORK Clarification has been explored in a number of applications, such as speech recognition [26], dialogue systems [4, 12, 20], and community question answering [5, 23, 24]. Recently, it attracted much attention in the information retrieval literature [1, 13, 22, 29, 30]. For instance, Kiesel et al. [15] investigated the impact of voice query clarification on user satisfaction. Their study showed that users like to be prompted for clarification. Simple form of clarification, such as entity disambiguation, has been explored by Coden et al. [8]. They basically ask a \u201cdid you mean A or B?\u201d question to resolve entity ambiguity. Even earlier, Allan [2] organized the HARD Track at TREC 2004 which involved clarification from participants. In more detail, the participants could submit a form containing some humangenerated clarifying questions in addition to their submission run. Recently, Aliannejadi et al. [1] proposed studying clarification in the context of conversational information seeking systems. This was later highlighted as an important aspect of conversational search in the Dagstuhl Seminar on Conversational Search [3]. More recently, Zamani et al. [29] introduced clarification in the context of web search and proposed models for generating clarifying questions and candidate answers for open-domain search queries. In a follow-up study, Zamani et al. [30] analyzed user interactions with clarification panes in Bing and provided insights into user behaviors and click bias in the context of search clarification. Moreover, Hashemi et al. [13] proposed a representation learning model for utilizing user responses to clarification in conversational search systems. Despite the recent progress reviewed above, there is no largescale publicly available resource for search clarification. To the best of our knowledge, Qulac3 [1] is the only public dataset that focuses on search clarification. However, it only contains 200 unique queries borrowed from the TREC Web Track 2009-2012. Therefore, it is not sufficient for training a large number of machine learning models with millions of parameters. In addition, it was constructed through crowdsourcing. Therefore, the clarifications are human generated and user responses to clarifications in real scenarios may differ from the ones in Qulac. There also exist a number of community question answering data and product catalogs with clarifications (e.g., see [24]), however, they are fundamentally different from search clarification. Therefore, this paper provides a unique resource in terms of realisticness, size, diversity, clarification types, user interaction signals, and coverage. 
It is worth noting that a number of datasets related to conversational search has recently been created and released. They include CCPE-M [21], CoQA [25], QuAC [7], MISC [27], and the Conversation Assistance Track data created in TREC 2019 [11]. Although these datasets do not particularly focus on clarification, there might be some connections between them and MIMICS that can be used in future research. In addition, the public query logs, such as the one released by AOL [19], can be used together with MIMICS for further investigations. This also holds for the datasets related to query suggestion and query auto-completion. 3 DATA COLLECTION Bing has recently added a clarification pane to its result pages for some ambiguous and faceted queries. It is located right below the 3https://github.com/aliannejadi/qulac search bar and above the result list. Each clarification pane includes a clarifying question and up to five candidate answers. The user interface for this feature is shown in Figure 1. The clarifying questions and candidate answers have been generated using a number of internal algorithms and machine learning models. They are mainly generated based on users\u2019 past interactions with the search engine (e.g., query reformulation and click), content analysis, and a taxonomy of entity types and relations. For more information on generating clarification panes, we refer the reader to [29] that introduces three rule-based and machine learning models for the task. All the datasets presented in this paper follow the same properties and only demonstrate the queries from the en-US market. In the following subsections, we explain how we created and pre-processed each dataset introduced in the paper. In summary, MIMICS consists of two datasets (MIMICS-Click and MIMICSClickExplore) based on user interactions (i.e., clicks) in Bing and one dataset (MIMICS-Manual) based on manual annotations of clarification panes by multiple trained annotators. 3.1 MIMICS-Click We sub-sampled the queries submitted to Bing in September 2019. We only kept the queries for which a clarification pane was rendered in the search engine result page (SERP). We made efforts in our data sampling to cover a diverse set of query and clarification types in the dataset, therefore, the engagement levels released in the paper by no mean represent the overall clickthrough rates in Bing. For privacy reasons, we followed k-anonymity by only including the queries that have been submitted by at least 40 users in the past year. In addition, the clarification panes were solely generated based on the submitted queries, therefore they do not include session and personalized information. We performed additional filtering steps to preserve the privacy of users using proprietary algorithms. Sensitive and inappropriate contents have automatically been removed from the dataset. To reduce the noise in the click data, we removed the query-clarification pairs with less than 10 impressions. In other words, all the query-clarification pairs released in the dataset have been presented at least 10 times to the Bing users in the mentioned time period (i.e., one month). This resulted in 414,362 unique queries, each associated with exactly one clarification pane. Out of which 71,188 of clarifications have received positive clickthrough rates. The statistics of this dataset is presented in Table 1. The dataset is released in a tab-separated format (TSV). 
Each data point in MIMICS-Click is a query-clarification pair, its impression level (low, medium, or high), its engagement level (between 0 and 10), and the conditional click probability for each individual candidate answer. The engagement level 0 means there was no click on the clarification pane. We used a equal-depth method to divide all the positive clickthrough rates into ten bins (from 1 to 10). The description of each column in the dataset is presented in Table 2. 3.2 MIMICS-ClickExplore Although MIMICS-Click is a invaluable resource for learning to generate clarification and related research problems, it does not allow researchers to study some tasks, such as studying click bias in user interactions with clarification. Therefore, to foster research in these \fTable 1: Statistics of the datasets constructed as part of MIMICS. MIMICS-Click MIMICS-ClickExplore MIMICS-Manual # unique queries 414,362 64,007 2464 # query-clarification pairs 414,362 168,921 2832 # clarifications per query 1 \u00b1 0 2.64 \u00b1 1.11 1.15 \u00b1 0.36 min & max clarifications per query 1 & 1 2 & 89 1 & 3 # candidate answers 2.81 \u00b1 1.06 3.47 \u00b1 1.20 3.06 \u00b1 1.05 min & max # candidate answers 2 & 5 2 & 5 2 & 5 # query-clarification pairs with positive engagement 71,188 89,441 N/A # query-clarification pairs with low/medium/high impressions 264,908 / 105,879 / 43,575 52,071 / 60,907 / 55,943 N/A Table 2: The data format in MIMICS-Click and MIMICS-ClickExplore. Column(s) Type Description query string The query text question string The clarifying question option_1, \u00b7 \u00b7 \u00b7 , option_5 string The candidate answers from left to right. If there is less than five candidate answers, the rest would be empty strings. impression_level string A string associated with the impression level of the corresponding query-clarification pair. Its value is either \u2019low\u2019, \u2019medium\u2019, or \u2019high\u2019. engagement_level integer An integer from 0 to 10 showing the level of total engagement received by the users in terms of clickthrough rate. option_cctr_1, \u00b7 \u00b7 \u00b7 , option_cctr_5 real The conditional click probability on each candidate answer. They must sum to 1, unless the total_ctr is zero. In that case, they all are zero. interesting and practical tasks, we created MIMICS-ClickExplore using some exploration and randomization experiments in September 2019. In more detail, we used the top m clarifications generated by our algorithms and presented them to different sets of users (similar to A/B testing). The user interactions with multiple clarification panes for the same query at the same time period enable comparison of these clarification panes. The difference between these clarification panes can be in the clarifying question, the candidate answer set, the order of candidate answers, or a combination of them. We performed the same filtering approach to address privacy concerns as the one discussed above for MIMICS-Click. Again, we only kept the query-clarification pairs with a minimum impression of 10. The resulted dataset contains 64,007 unique queries and 168,921 query-clarification pairs. Out of which, 89,441 query-clarification pairs received positive engagements. The format of this dataset is the same as MIMICS-Click (see Table 2). Note that the sampling strategies for MIMICS-Click and MIMICS-ClickExplore are different which resulted in significantly more query-clarification pairs with low impressions in MIMICSClick. 
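As an illustration of working with this format, the following is a minimal Python sketch that loads one of the click datasets with pandas and reproduces a simple aggregate (engagement by impression level, cf. Section 4.2). The local file name is an assumption; the column names follow Table 2.

```python
import pandas as pd

# Assumed local file name for the MIMICS-Click release (tab-separated,
# with the columns described in Table 2).
df = pd.read_csv("MIMICS-Click.tsv", sep="\t")

# Share of query-clarification pairs with any positive engagement.
print((df["engagement_level"] > 0).mean())

# Mean and standard deviation of engagement per impression level.
print(df.groupby("impression_level")["engagement_level"].agg(["mean", "std", "count"]))

# Conditional click probabilities over the candidate answers of one pane.
cctr_columns = [f"option_cctr_{i}" for i in range(1, 6)]
print(df.loc[0, ["query", "question"] + cctr_columns])
```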
3.3 MIMICS-Manual Although click provides a strong implicit feedback signal for estimating the quality of models in online services, including search clarification, it does not necessarily reflect all quality aspects. In addition, it can be biased for many reasons. Therefore, a comprehensive study of clarification must include evaluation based on manual human annotations. This has motivated us to create and release MIMICS-Manual based on manual judgements performed by trained annotators. Therefore, we randomly sampled queries from the query logs to collect manual annotations for a set of realistic user queries. The queries satisfy all the privacy concerns reviewed in Section 3.1. We further used the same algorithm to generate one or more clarification pairs for each query. Each query-clarification pair was assigned to at least three annotators. The annotators have been trained to judge clarification panes by attending online meetings, reading comprehensive guidelines, and practicing. In the following, we describe each step in the designed Human Intelligence Task (HIT) for annotating a query-clarification pair. This guideline has been previously used in [29, 30]. 3.3.1 Step I: SERP Review. Similar to Aliannejadi et al. [1], we first asked the annotators to skim and review a few pages of the search results returned by Bing. Since search engines try to diversify the result lists, this would enable the annotators to better understand the scope of the topic and different potential intents behind the submitted query. When completed, the users can move to the next step. 3.3.2 Step II: Annotating the Clarifying Question Quality. In this step, the annotators were asked to assess the quality of the given clarifying question independent of the candidate answers. Therefore, the annotation interface does not show the candidate answers to the annotators at this stage. Each clarifying question is given a label 2 (Good), 1 (Fair), or 0 (Bad). The annotators were given detailed definitions, guidelines, and examples for each of the labels. In summary, the guideline indicates that a Good clarifying question should accurately address and clarify different intents of the query. \fTable 3: The data format in MIMICS-Manual. All the labels in this dataset are either 2 (Good), 1 (Fair), or 0 (Bad). Column(s) Type Description query string The query text question string The clarifying question option_1, \u00b7 \u00b7 \u00b7 , option_5 string The candidate answers from left to right. If there is less than five candidate answers, the rest would be empty strings. question_label integer The label associated with the clarifying question independent of the candidate answers. options_overall_label integer The overall label given to the candidate answer set. option_label_1, \u00b7 \u00b7 \u00b7 , option_label_5 integer The label assigned to each individual candidate answer based on the quality of the landing search result page. Table 4: The statistics of the common clarifying question templates in MIMICS. We only present the templates with at least 100 occurrence in MIMICS-Click and MIMICS-ClickExplore individually. Note that there is no label associated with the first template in MIMICS-Manual. ID Clarifying question template MIMICS-Click MIMICS-ClickExplore MIMICS-Manual Freq. Engagement Freq. Engagement Freq. Question Quality T1 select one to refine your search 395134 0.9285 156870 2.8631 2490 N/A T2 what (do you want|would you like) to know about (.+)? 7136 0.5783 5624 2.9070 158 1.9367 T3 (which|what) (.+) do you mean? 
7483 0.6123 1905 2.6714 76 2.000 T4 (what|which) (.+) are you looking for? 3436 1.7252 2055 5.1990 22 1.6818 T5 what (do you want|would you like) to do with (.+)? 689 1.9637 1833 3.4043 60 2.000 T6 who are you shopping for? 101 1.9604 350 4.3800 7 1.5714 T7 what are you trying to do? 188 3.3777 116 5.8793 3 1.0 Table 5: The average and standard deviation of user engagement levels with respect to different query-clarification impressions. Impression level MIMICS-Click MIMICS-ClickExplore Low 0.9061 \u00b1 2.5227 3.1712 \u00b1 4.2735 Medium 0.9746 \u00b1 2.1249 3.1247 \u00b1 3.3622 High 0.9356 \u00b1 1.6159 2.4119 \u00b1 2.4559 It should be fluent and grammatically correct. If a question fails in satisfying any of these factors but still is an acceptable clarifying question, it should be given a Fair label. Otherwise, a Bad label should be assigned to the question. Note that if a question contains sensitive or inappropriate content, it would have been flagged by the annotators and removed from the dataset. Note that in case of having a generic template instead of clarifying questions (i.e., \u201cselect one to refine your search\u201d), we do not ask the annotators to provide a question quality labels. 3.3.3 Step III: Annotating the Candidate Answer Set Quality. Once the clarifying question is annotated, the candidate answers would appear on the HIT interface. In this step, the annotators were asked to judge the overall quality of the candidate answer set. In summary, the annotation guideline indicates that the candidate answer set should be evaluated based on its usefulness for clarification, comprehensiveness, coverage, understandability, grammar, diversity, and importance order. A clear definition of each of these constraints has been mentioned in the guideline. Note that the annotators have reviewed multiple pages of the result list in Step I and have been expected to know different possible intents of the query. Again, the labels are either 2 (Good), 1 (Fair), or 0 (Bad), and the candidate answers with sensitive or inappropriate contents have been removed from the dataset. If a candidate answer set satisfies all the aforementioned constraints, it should be given a Good label. While, the Fair label should be given to an acceptable candidate answer set that does not satisfy at least one of the constraints. Otherwise, the Bad label should be chosen. Note that since all the defined properties are difficult to satisfy with up to 5 candidate answers, the label Good is rarely chosen for a candidate answer set. 3.3.4 Step IV: Annotating the Landing SERP Quality for Each Individual Candidate Answer. Zamani et al. [29] recently performed a number of user studies related to search clarification. In their interviews, the participants mentioned that the quality of the secondary result page (after clicking on a candidate answer) perceived the usefulness of the clarification pane. Based on this observation, we asked the annotators to evaluate the quality of the secondary result page (or the landing result page) for the individual candidate answers one by one. Therefore, the annotators could click on each individual answer and observe the secondary result page in Bing. Since a SERP may contain multiple direct answers, entity cards, query suggestion, etc. in addition to the list of webpages, adopting ranking metrics based on document relevance, such as mean reciprocal rank (MRR) or normalized discounted cumulative gain (NDCG) [14], is not desired to evaluate the overall SERP quality. 
Therefore, we again asked the annotators to assign a label 2 (Good), 1 (Fair), or 0 (Bad) to each landing SERP. A label Good should be chosen, if the correct answer to all possible information needs behind the selected candidate answer can be easily found in \fTable 6: The average and standard deviation of engagement levels and manual annotation labels per query length. Query MIMICS-Click MIMICS-ClickExplore MIMICS-Manual length Freq. Engagement Freq. Engagement Freq. Question quality Answer set quality Landing page quality 1 52213 0.5158 \u00b1 1.6546 26926 1.9508 \u00b1 2.7098 1028 1.7347 \u00b1 0.4415 1.0418 \u00b1 0.3075 1.9750 \u00b1 0.1251 2 160161 0.7926 \u00b1 2.1548 70621 2.7965 \u00b1 3.3536 942 1.4694 \u00b1 0.4991 1.0085 \u00b1 0.3827 1.9178 \u00b1 0.2881 3 120821 1.0152 \u00b1 2.4573 46070 3.1677 \u00b1 3.5811 555 1.4667 \u00b1 0.4989 0.9333 \u00b1 0.4463 1.8021 \u00b1 0.4816 4 51503 1.2196 \u00b1 2.6980 16798 3.5397 \u00b1 3.7492 199 1.3333 \u00b1 0.4714 0.9698 \u00b1 0.5103 1.8313 \u00b1 0.41986 5 19893 1.4473 \u00b1 2.9078 5755 4.0188 \u00b1 3.8921 75 1.3846 \u00b1 0.4865 1.0267 \u00b1 0.5157 1.7847 \u00b1 0.5291 6 6299 1.5785 \u00b1 3.0318 1806 4.1877 \u00b1 3.9642 15 1.0 \u00b1 0.0 0.8 \u00b1 0.5416 1.7 \u00b1 0.4800 7 2424 1.6634 \u00b1 3.0815 621 4.6715 \u00b1 3.9861 13 1.0 \u00b1 0.0 0.7692 \u00b1 0.4213 1.7692 \u00b1 0.5756 8 823 1.7618 \u00b1 3.1575 264 4.2008 \u00b1 3.9082 3 N/A 1.0 \u00b1 0.0 1.8333 \u00b1 0.2357 9 184 1.9620 \u00b1 3.2959 52 4.1731 \u00b1 3.8467 1 N/A 0.0 \u00b1 0.0 2.0 \u00b1 0.0 10+ 41 2.0732 \u00b1 3.4244 8 4.8750 \u00b1 3.4799 1 N/A 1.0 \u00b1 0.0 2.0 \u00b1 0.0 a prominent location in the page (e.g., an answer box on top of the SERP or the top three retrieved webpages). If the result page is still useful and contain relevant information, but finding the answer is not easy or is not on top of the SERP, the Fair label should be selected. Otherwise, the landing SERP should be considered as Bad. 3.3.5 A Summary of the Collected Data. Each HIT was assigned to at least three annotators. For each labeling task, we used majority voting to aggregate the annotation. In case of disagreements, the HIT was assigned to more annotators. The overall Fleiss\u00e2\u0102\u0179 kappa inter-annotator agreement is 63.23%, which is considered as good. Our annotations resulted in over 2.4k unique queries and over 2.8k query-clarification pairs. The statistics of the dataset is reported in Table 1. The data has been released in a tab-separated file format (TSV). The description of each column in the data is provided in Table 3. 4 DATA ANALYSIS In this section, we provide a comprehensive analysis of the created datasets. 4.1 Question Template Analysis Zamani et al. [29] showed that most search clarifications can be resolved using a small number of question templates. In our first set of analysis, we study the question templates in MIMICS and their corresponding statistics. We only focus on the templates with a minimum frequency of 100 in both MIMICS-Click and MIMICSClickExplore. We compute the average engagement level per clarifying question template for MIMICS-Click and MIMICS-ClickExplore. In addition, we compute the average question quality label per template for MIMICS-Manual that has manual annotations. Note that engagement levels are in the [0, 10] interval, while the manual annotation labels are in [0, 2]. The results are reported in Table 4. The first general template is excluded in our manual annotations. 
According to the results, the last four templates (T4 T7) have led to higher engagements compared to T1, T2, and T3 in both MIMICS-Click and MIMICS-ClickExplore. They are also generally less frequent in the dataset and more specific. In general, the exploration dataset has higher average engagements compared to MIMICS-Click. The reason is that the number of query-clarification pairs with zero engagements in MIMICS-Click are higher than those in MIMICS-ClickExplore (see Table 1). 4.2 Analyzing Engagement Based on Clarification Impression As mentioned in Section 3, MIMICS-Click and MIMICS-ClickExplore contain a three-level impression label per query-clarification pair. The impression level is computed based on the number of times the given query-clarification pair has been presented to users. The impression level should have a correlation with the query frequency. We compute the average and standard deviation of engagements per impression level whose results are reported in Table 5. According to the results, there is a negligible difference between the average engagements across impression levels. Given the engagements range (i.e., [0, 10]), the query-clarification pairs with high impressions in MIMICS-ClickExplore have led to slightly lower average engagements. 4.3 Analysis Based on Query Length In our third analysis, we study user engagements and manual quality labels with respect to query length. To this aim, we compute the query length by simply splitting the query using whitespace characters as delimiters. The results are reported in Table 6. According to the results on MIMICS-Click and MIMICS-ClickExplore, the average engagement increases as the queries get longer. By looking at the data one can realize that longer queries are often natural language questions, while short queries are keyword queries. Surprisingly, this is inconsistent with the manual annotations suggesting that single word queries have higher question quality, answer set quality, and also landing page quality (excluding the rare queries with less than 10 frequency in the dataset). This observation suggests that user engagement with clarification is not necessarily aligned with the clarification quality. The behavior of users who submit longer queries may differ from those who search with keyword queries. 4.4 Analysis Based on the Number of Candidate Answers As pointed out earlier, the number of candidate answers in the data varies between two and five. To demonstrate the impact of \fTable 7: The average and standard deviation of engagement levels and manual annotation labels per number of candidate answers. # answers MIMICS-Click MIMICS-ClickExplore MIMICS-Manual Freq. Engagement Freq. Engagement Freq. Question quality Answer set quality Landing page quality 2 226697 0.9047 \u00b1 2.3160 50474 2.8430 \u00b1 3.3921 1083 1.3164 \u00b1 0.4651 0.9751 \u00b1 0.3775 1.8915 \u00b1 0.3665 3 91840 0.9904 \u00b1 2.4175 38619 3.0592 \u00b1 3.5111 892 1.7513 \u00b1 0.4323 0.9507 \u00b1 0.2954 1.9129 \u00b1 0.3101 4 42752 0.9276 \u00b1 2.3505 29678 2.9157 \u00b1 3.4395 453 1.6292 \u00b1 0.4830 1.0088 \u00b1 0.3816 1.9073 \u00b1 0.2862 5 53073 0.9099 \u00b1 2.3323 50150 2.8354 \u00b1 3.4236 404 1.4741 \u00b1 0.4993 1.1733 \u00b1 0.5401 1.9168 \u00b1 0.2832 the number of candidate answers, we report the average and standard deviation of engagement levels and manual quality labels per number of candidate answers in Table 7. 
According to the results, there is a small difference between the average engagements in the MIMICS-Click and MIMICS-ClickExplore datasets. The clarifications with three candidate answers have led to slightly higher engagement than the rest. This is again contrary to the manual quality labels; the clarifications with three candidate answers have obtained the lowest answer set quality label. On the other hand, the question quality of clarifications with three candidate answers is higher than the others. This highlights that question quality may play a key role in increasing user engagement. 4.5 Analyzing Click Entropy Distribution on Candidate Answers MIMICS-Click and MIMICS-ClickExplore both contain the conditional click probability for each individual answer, i.e., the probability of clicking on each candidate answer assuming that the user interacts with the clarification pane. The entropy of this probability distribution shows how clicks are distributed across candidate answers. Since the entropy range depends on the number of candidate answers, we normalized the entropy values by the maximum entropy for each candidate answer set size. The distributions for MIMICS-Click and MIMICS-ClickExplore are reported in Figures 2 and 3, respectively. Note that for the sake of visualization, these plots do not include clarifications with no click (i.e., engagement level zero) and those with zero entropy. According to the plots, the number of peaks in the entropy distribution is aligned with the number of candidate answers. The entropy values at which the histograms peak suggest that in many cases there is a uniform-like distribution over m out of n candidate answers (for all values of m). Comparing the plots in Figure 2 with those in Figure 3 shows that this finding is consistent across datasets. 5 INTRODUCING RESEARCH PROBLEMS RELATED TO SEARCH CLARIFICATION MIMICS enables researchers to study a number of research problems. In this section, we introduce these tasks and provide high-level suggestions for evaluating them using MIMICS. 5.1 Clarification Generation Clarification generation (including both the clarifying question and the candidate answers) is a core task in search clarification. Generating clarification from passage-level text has been studied in the context of community question answering posts [24]. It has lately attracted much attention in information seeking systems, such as search engines (similar to this study) [29] and recommender systems [32]. Previous work has pointed out the lack of large-scale training data for generating search clarification [1, 29]. MIMICS, especially the click data, provides an excellent resource for training clarification generation models. Evaluating clarification generation models, on the other hand, is difficult. One can use MIMICS for evaluating generated clarifications using metrics such as BLEU [18] and ROUGE [17]. However, we strongly discourage these evaluation methodologies, as they poorly correlate with user satisfaction and clarification quality. Here are our recommendations for evaluating clarification generation models: \u2022 In case of access to production systems with real users, conducting online experiments (e.g., A/B tests) would be a reliable evaluation methodology, and the models can be compared using user engagement measures, such as clickthrough rate. \u2022 Manual annotation of the generated clarifications based on carefully-defined criteria would be an alternative for clarification generation evaluation.
Previously, Zamani et al. [29] used this evaluation methodology. Researchers may adopt the annotation guideline presented in Section 3.3 for designing their crowdsourcing HITs. 5.2 Clarification Selection Since automatic offline evaluation of clarification generation models is difficult, clarification selection (or clarification re-ranking) can be considered an auxiliary task to evaluate the quality of learned representations for clarification. In addition, as pointed out by Aliannejadi et al. [1], information seeking systems can adopt a two-stage process for asking clarification, i.e., generating multiple clarifications and selecting one. Clarification selection has been previously studied in [1, 13, 30]. Researchers can benefit from MIMICS for both training and evaluating clarification selection models. In more detail, MIMICS-ClickExplore contains multiple clarifications per query and can be directly used for evaluating clarification selection (or re-ranking) models. The other two datasets can also be used by drawing negative samples, which can be obtained either randomly or using a baseline model. Ranking metrics, such as NDCG, can be used to evaluate clarification selection models. In addition, since only one clarification is often shown to the users, the average engagement of the selected clarification can also be chosen as an evaluation metric. Refer to [30] for more information.
[Figure 2: The distribution of normalized entropy for the conditional clickthrough rates on candidate answers for the MIMICS-Click dataset; panels (a)-(d) correspond to 2-5 candidate answers. For the sake of clarity and visualization, we exclude the clarifications with no click and those with zero entropy.]
[Figure 3: The distribution of normalized entropy for the conditional clickthrough rates on candidate answers for the MIMICS-ClickExplore dataset; panels (a)-(d) correspond to 2-5 candidate answers. For the sake of clarity and visualization, we exclude the clarifications with no click and those with zero entropy.]
5.3 User Engagement Prediction for Clarification A major task in search clarification is deciding whether to ask a clarifying question, especially in search systems with limited-bandwidth interfaces. This problem can be cast as query performance prediction [6, 10]. In other words, clarification can be asked when the predicted performance for the given query is below a threshold. An alternative to query performance prediction for this task would be user engagement prediction. In more detail, if users enjoy interacting with clarification and find it useful, the system can decide to ask for clarification.
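To make this decision rule concrete, the following minimal Python sketch gates clarification on a predicted engagement score; the engagement model, the feature extractor, and the threshold value are hypothetical placeholders rather than components released with MIMICS.

from dataclasses import dataclass

@dataclass
class ClarificationPane:
    question: str
    answers: list  # two to five candidate answers, as in MIMICS

def should_ask_clarification(query, pane, engagement_model, extract_features, threshold=2.0):
    # engagement_model.predict is assumed to return a score in the [0, 10]
    # range, mirroring the engagement levels in MIMICS-Click/ClickExplore;
    # both the model and the feature extractor are trained elsewhere.
    features = extract_features(query, pane)
    predicted_engagement = engagement_model.predict([features])[0]
    return predicted_engagement >= threshold

The threshold would be tuned on held-out data, trading off how often clarification is shown against the engagement it actually receives.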
Predicting user engagement has been previously studied in various contexts, such as social media and web applications [16, 31]; however, user engagement prediction for clarification is fundamentally different. MIMICS-Click and MIMICS-ClickExplore contain engagement levels in the [0, 10] interval and can therefore be directly used for predicting user engagement. For evaluating user engagement prediction models for clarification, we recommend computing the correlation between the predicted engagements and the observed engagements released in the datasets. Correlation has also been used for evaluating query performance prediction models [6]. Since we only release engagement levels, we suggest using both linear (e.g., Pearson's \u03c1) and rank-based (e.g., Kendall's \u03c4) correlation metrics. In addition, mean squared error or mean absolute error can be used for evaluating user engagement prediction methods. 5.4 Re-ranking Candidate Answers Previous work has shown that the order of candidate answers in clarification matters [30]. MIMICS enables researchers to study the task of re-ranking candidate answers for a given pair of query and clarifying question. Experiments on both click data (MIMICS-Click and MIMICS-ClickExplore) and manual annotations would provide complementary evaluation for the task. For evaluating the candidate answer re-ranking task, the manual annotations per individual answer based on their landing SERP quality can be used as graded relevance judgments, with NDCG as the evaluation metric. For evaluation using the click data, researchers should be careful about presentation bias in the data; refer to [30] for more detail. In summary, the candidate answers with higher ranks and longer text are more likely to attract clicks. This point should be considered prior to using MIMICS-Click and MIMICS-ClickExplore for re-ranking candidate answers. Once this issue is addressed, the conditional click probabilities can be mapped to ordinal relevance labels and typical ranking metrics can be adopted for evaluation. One can also use the cross-entropy between the predicted probability distribution over candidate answers and the actual conditional click distribution. The impression level can also be considered in the metric to compute a gain per query-clarification pair with respect to its impression; in more detail, the clarifications that are presented more often should be assigned higher weights. 5.5 Click Models for Clarification Related to the candidate answer re-ranking task, it is important to design user models for click behavior while interacting with clarification panes. Zamani et al. [30] showed that the existing click models, which have primarily been designed for web search, do not perform as expected for search clarification. The reason is that the assumptions made in web search click models do not hold for search clarification. The MIMICS-ClickExplore dataset contains many clarification pairs for a given query whose only differences are in the order of candidate answers. This allows researchers to train and evaluate click models for search clarification using MIMICS-ClickExplore. The evaluation methodology used in [30] is suggested for evaluating this task. In summary, it is based on predicting the click probability after swapping adjacent candidate answers. This approach was originally used for evaluating click models in web search by Craswell et al. [9]. The cross-entropy would be an appropriate metric in this evaluation setup.
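As a concrete reference for the measures recommended in Sections 5.3-5.5, the sketch below computes the correlation and error metrics for engagement prediction and the cross-entropy between an observed and a predicted conditional click distribution; it assumes the predictions and ground truth are already loaded into arrays and uses NumPy/SciPy as one possible implementation.

import numpy as np
from scipy.stats import pearsonr, kendalltau

def engagement_prediction_metrics(y_true, y_pred):
    # Linear and rank-based correlation plus mean absolute error.
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return {
        'pearson_r': pearsonr(y_true, y_pred)[0],
        'kendall_tau': kendalltau(y_true, y_pred)[0],
        'mae': float(np.mean(np.abs(y_true - y_pred))),
    }

def click_cross_entropy(p_observed, p_predicted, eps=1e-12):
    # Cross-entropy between the observed conditional click distribution
    # over candidate answers and a model's predicted distribution.
    p_observed = np.asarray(p_observed, dtype=float)
    p_predicted = np.clip(np.asarray(p_predicted, dtype=float), eps, 1.0)
    return float(-np.sum(p_observed * np.log(p_predicted)))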
5.6 Analyzing User Behavior in Search Clarification Although this paper provides several analyses based on search clarification quality in terms of both manual judgements and engagement levels, future work can benefit from MIMICS-Click and MIMICS-ClickExplore to conduct more in depth analysis of user behaviors while interacting with search clarification in the context of web search. 6" + }, + { + "url": "http://arxiv.org/abs/2006.00166v1", + "title": "Analyzing and Learning from User Interactions for Search Clarification", + "abstract": "Asking clarifying questions in response to search queries has been recognized\nas a useful technique for revealing the underlying intent of the query.\nClarification has applications in retrieval systems with different interfaces,\nfrom the traditional web search interfaces to the limited bandwidth interfaces\nas in speech-only and small screen devices. Generation and evaluation of\nclarifying questions have been recently studied in the literature. However,\nuser interaction with clarifying questions is relatively unexplored. In this\npaper, we conduct a comprehensive study by analyzing large-scale user\ninteractions with clarifying questions in a major web search engine. In more\ndetail, we analyze the user engagements received by clarifying questions based\non different properties of search queries, clarifying questions, and their\ncandidate answers. We further study click bias in the data, and show that even\nthough reading clarifying questions and candidate answers does not take\nsignificant efforts, there still exist some position and presentation biases in\nthe data. We also propose a model for learning representation for clarifying\nquestions based on the user interaction data as implicit feedback. The model is\nused for re-ranking a number of automatically generated clarifying questions\nfor a given query. Evaluation on both click data and human labeled data\ndemonstrates the high quality of the proposed method.", + "authors": "Hamed Zamani, Bhaskar Mitra, Everest Chen, Gord Lueck, Fernando Diaz, Paul N. Bennett, Nick Craswell, Susan T. Dumais", + "published": "2020-05-30", + "updated": "2020-05-30", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "main_content": "INTRODUCTION Search queries are oftentimes ambiguous or faceted. The information retrieval (IR) community has made significant efforts to effectively address the user information needs for such queries. A general approach for obtaining more accurate query understanding is to utilize contextual information, such as shortand long-term interaction history [5, 23, 26, 45] and situational context [21, 49]. However, contextual features do not always help the system reveal Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org. SIGIR \u201920, July 25\u201330, 2020, Virtual Event, China \u00a9 2020 Association for Computing Machinery. ACM ISBN 978-x-xxxx-xxxx-x/YY/MM...$15.00 https://doi.org/10.1145/nnnnnnn.nnnnnnn the user information needs [38]. 
An alternative solution is diversifying the result list and covering different query intents in the top ranked documents [40]. Although result list diversification has been successfully deployed in modern search engines, it still can be a frustrating experience for the users who have to assess the relevance of multiple documents for satisfying their information needs [2]. On the other hand, in the search scenarios with limited bandwidth user interfaces, presenting a result list containing multiple documents becomes difficult or even impossible [2, 51]. These scenarios include conversational search systems with speech-only or small screen interfaces. To address these shortcomings, (conversational) search engines can clarify the user information needs by asking a question, when there is an uncertainty in the query intent. Although generating plausible clarifying questions for opendomain search queries has been one of a long-standing desires of the IR community [4], it has not been possible until recently. Zamani et al. [51] has recently proposed a neural sequence-to-sequence model that learns to generate clarifying questions in response to open-domain search queries using weak supervision. They showed that clarifying questions can be of significance even for web search engines with the traditional ten blue link interface. Despite the significant progress in exploring clarification in search [2, 51] and related areas [28, 35, 44], the way users interact with such conversational features of search engines is relatively unknown. Analyzing user interactions with clarifying questions would lead to a better understanding of search clarification, and help researchers realize which queries require clarification and which clarifying questions are preferred by users. Based on this motivation, we conduct a large-scale study of user interactions with clarifying questions for millions of unique queries. This study is based on a relatively new feature, called clarification pane, in the Bing search engine that asks a clarifying question in response to some queries. The interface is shown in Figure 1. We analyze user engagements with clarifying questions based on different attributes of the clarification pane. We also study user interactions with clarifying questions for different query properties, such as query length and query type (natural language question or not, ambiguous or faceted, tail or head). We further perform a preliminary study on click bias in clarification panes. Our comprehensive analyses lead to a number of suggestions for improving search clarification. Following our user interaction analyses, we propose a model for learning representations for clarifying questions together with their candidate answers from user interactions as implicit feedback. Our model consists of two major components: Intents Coverage Encoder and Answer Consistency Encoder. The former encodes the intent coverage of the clarification pane, while the latter encodes the plausibility of the clarification pane, i.e., the coherency of the candidate arXiv:2006.00166v1 [cs.IR] 30 May 2020 \fanswers and their consistency with the clarifying questions. Our model is solely designed based on the attention mechanism. We evaluate the model using click data as well as human labeled data. The experiments suggest significant improvements compared to competitive baselines. In summary, the major contributions of this work include: \u2022 Conducting the first large-scale analysis of user interactions with clarification panes in search. 
Our study provides suggestions for the future development of algorithms for search clarification. \u2022 Performing preliminary experiments showing different click biases, including both position and presentation biases, in the user interaction data with clarification. \u2022 Proposing a novel neural model, specifically designed for representation learning for clarifying questions. Our model outperforms competitive baselines for the task of clarifying question selection/re-ranking. 2 RELATED WORK In this section, we review prior work on asking clarifying questions, query suggestion, and click bias estimation. Asking Clarifying Question. Clarifying questions have been found useful in a number of applications, such as speech recognition [42] as well as dialog systems and chat-bots [6, 13, 33]. In community question answering websites, users often use clarifying questions to better understand the question [7, 35, 36]. Kiesel et al. [22] studied the impact of voice query clarification on user satisfaction. They concluded that users like to be prompted for clarification. Coden et al. [11] studied clarifying questions for entity disambiguation mostly in the form of \u201cdid you mean A or B?\u201d. Recently, Aliannejadi et al. [2] suggested an offline evaluation methodology for asking clarifying questions in conversational systems by proposing the Qulac dataset. The importance of clarification has been also discussed by Radlinski and Craswell [34]. In the TREC HARD Track [3], participants could ask clarifying questions by submitting a form in addition to their runs. Most recently, Zamani et al. [51] proposed models for generating clarifying questions for open-domain search queries. In another study, Zamani and Craswell [50] developed a platform for conversational information seeking that supports mixed-initiative interactions, including clarification. In addition, Hashemi et al. [18] introduced a neural model for representing user interactions with clarifying questions in an open-domain setting. Asking clarifying questions about item attributes has been also explored in the context of conversational recommender systems [43]. For instance, Christakopoulou et al. [10] designed a system for preference elicitation in venue recommendation. Zhang et al. [52] automatically extracted facet-value pairs from product reviews and considered them as questions and answers. In contrast to prior work on search clarification, this work focuses on understanding user interactions with clarifying questions in a real system based on log analysis. Query Suggestion and Auto-Completion. Query suggestion techniques [14, 30, 39] are used to suggest useful next queries to the users. They have been successfully implemented in search engines. Query suggestion, although related, is fundamentally different from search clarification. The reason is that candidate answers should clarify the intent behind the current search query. While, in query suggestion, the next search query might be a follow up query that is Table 1: Statistics of the data collected from the user interactions with the clarification pane. Total impressions 74,617,653 # unique query-clarification pairs 12,344,924 # unique queries 5,553,850 # unique queries with multiple clarification panes 2,302,532 Average number of candidate answers 2.99 \u00b1 1.14 often searched after the query. The clarification examples presented in Figure 1 clearly show the differences. The provided candidate answers are not the expected query suggestions. 
Query auto-completion, on the other hand, makes suggestion to complete the current search query [9, 27, 41]. In contrast to query auto-completion, search clarification asks a clarifying question and provides coherent candidate answers which are also consistent with the clarifying question. For more details on the differences between search clarification and query suggestion or auto-completion, we refer the reader to [51]. Click Bias. Click bias in user interactions with search engines has been extensively explored in the literature. It has been shown that users intend to click more on the documents with higher rank positions. There exist different biases, such as position bias [20], presentation bias [48], and trust bias [1]. To address this issue, several user models for simulating user behavior have been proposed, such as the Examination model [37] and the Cascade model [12]. In our clarification interface, the candidate answers are presented horizontally to the users. The answer length is also short, thus multiple answers can be seen at a glance. These unique properties make the click bias in clarification different from document ranking. It is even different from image search, in which the results are shown in a two dimensional grid interface [31, 47]. 3 ANALYZING USER INTERACTIONS WITH CLARIFICATION In this section, we study user interactions with clarifying questions in Bing, a major commercial web search engine. We believe these analyses would lead to better understanding of user interactions and expectations from search clarification, which smooths the path towards further development and improvement of algorithms for generating and selecting clarifying questions. In the following subsections, we first introduce the data we collected from the search logs for our analyses. We further introduce the research questions we study in the analyses and later address these questions one by one. 3.1 Data Collection The search engine asks clarifying questions from users in response to some ambiguous or faceted queries. The user interface for this feature, which is called the clarification pane, is shown in Figure 1. The clarification pane is rendered right below the search bar and on top of the result list. Its location in the result page never changes. The clarification pane consists of a clarifying question and up to five clickable candidate answers. Note that the clarification pane is not triggered for navigational queries. To conduct the analyses, we obtained the clickthrough data for the clarification pane in Bing. For some queries, the data contains multiple clarification panes shown to different set of users. The difference between these clarification \fFigure 1: Few examples of clarification in web search. T1 T2 T3 T4 T5 T6 T7 Question Template 0 1 2 3 4 Rel. Engagement Rate T1: What (would you like | do you want) to know about _____? T2: (Which | What) _____ do you mean? T3: (Which | What) _____ are you looking for? T4: What (would you like | do you want) to do with _____? T5: Who are you shopping for? T6: What are you trying to do? T7: Do you have _____ in mind? Figure 2: Relative engagement rate (compared to the average engagement rate) per question template for the most frequent templates in the data. panes relies on the clarifying question, the candidate answer set, or even the order of candidate answers. For more information on generating clarification panes, we refer the reader to [51]. 
The collected data consists of over 74.6 million clarification pane impressions (i.e., the number of times the clarification pane was shown to users). The data consists of over 5.5 million unique queries. The average number of candidate answers per clarification pane is equal to 2.99. The statistics of the data is reported in Table 1. Note that we only focus on the query-clarification pairs with at least 10 impressions. 3.2 Research Questions In the rest of Section 3, we study the following research questions by analyzing the user interaction data described in Section 3.1. RQ1 Which clarifying questions would lead to higher engagements? (Section 3.3). RQ2 For which search queries do users prefer to use clarification? (Section 3.4) RQ3 How is the impact of clarification on search experience? (Section 3.5) 3.3 Characterizing Clarifications with High Engagement Rate In this subsection, we address RQ1 defined in Section 3.2. To this end, we study the obtained engagement rate (i.e., click rate) by the clarification pane based on different clarification properties, including (1) the clarifying question template, (2) the number of candidate answers, and (3) the conditional click distribution across candidate answers. Table 2: Relative engagement rate (w.r.t. average engagement) for clarification panes per number of answers. # Candidate Answers 2 3 4 5 Relative Engagement Rate 0.95 1.05 1.03 1.03 Min Max Entropy of answer click distribution 0.0 0.5 1.0 1.5 2.0 Rel. Engagement Rate Figure 3: A box plot for the relative engagement rate (compared to the average engagement rate) with respect to the entropy in the conditional answer click distribution. This plot is only computed for clarifications with five options. 3.3.1 Analyzing Clarifying Question Templates. As recently discovered by Zamani et al. [51], most clarification types can be addressed using a set of pre-defined question templates. We identified all question templates used in the data and focused on the most frequent templates. The average engagement rate obtained by each template relative to the overall average engagement rate is presented in Figure 2. The templates are sorted with respect to their reverse frequency in the data. According to the figure, general question templates than can be potentially used for almost all queries, such as \u201cwhat would you like to know about QUERY?\u201d have the higher frequency in the data, while their engagement is relatively low. On the other hand, more specific question templates,1 such as \u201cwhat are you trying to do?\u201d, \u201cwho are you shopping for?\u201d, and \u201cwhich _____ are you looking for?\u201d lead to much higher engagement rates. The relative difference between the engagement rates received by the templates can be as large as 500% (T2 vs. T6). 3.3.2 Analyzing the Number of Candidate Answers. As mentioned earlier in Section 3.1, the number of candidate answers varies between two and five. Table 2 shows the relative engagement rate per number of candidate answers in the clarification pane. According to the results, the clarification panes with only two candidate answers receive a slightly lower engagement rate. The reason could be that the clarification panes with two candidate answers do not always cover all aspects of the submitted query. The clarifying questions with more than two candidate answers generally receive similar engagement rates with each other. Generally speaking, the number of candidate answers is not a strong indicator of user engagement rate. 
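For reference, the relative engagement rate used throughout these analyses (the engagement of a group divided by the overall average engagement) can be computed from an impression log with a few lines of pandas; the column names below are hypothetical and only illustrate the normalization.

import pandas as pd

def relative_engagement_rate(df, group_col, engagement_col='engaged'):
    # df: one row per clarification pane impression, with an engagement
    # signal (e.g., a click indicator) and a grouping attribute such as
    # the question template or the number of candidate answers.
    overall = df[engagement_col].mean()
    per_group = df.groupby(group_col)[engagement_col].mean()
    return per_group / overall  # values above 1 indicate above-average engagement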
1By more specific, we mean the questions that cannot be asked for all queries, as opposed to general templates, like T1, that can be asked in response to any query. \f1 2 3 4 5 6 7 8 9 10+ Query length 0.0 0.5 1.0 1.5 2.0 2.5 Rel. Engagement Rate Figure 4: Relative engagement rate (compared to the average engagement rate) per query length. 3.3.3 Analyzing Answer Click Distribution. Figure 3 plots the relative engagement rate received by the clarification pane with respect to the entropy of conditional click distribution on the candidate answers. In case of no observed click for a clarification pane, we assigned equal conditional click probability to all candidate answers. The box plot is computed for five equal-width bins between the minimum and maximum entropy. In this experiments, we only focus on the clarification panes with exactly five candidate answers. The goal of this analysis is to discuss whether higher click entropy (i.e., closer to the uniform click distributions on the candidate answers) would lead to higher engagement rate. According to the plot, the clarification panes with the highest click entropy lead to the highest average and median engagement rate. The second bin from the left also achieves a relatively high average and median engagement rate. This plot shows that the data points in the minimum entropy bin achieve the lowest average and median engagement rates, however, the increase in answer click entropy dose not always lead to higher engagement rate. The reason is that some clarification panes with high engagement rates contain a dominant answer. As an example, for the clarifying question \u201cWhat version of Windows are you looking for?\u201d, we observe over 10 times more clicks on \u201cWindows 10\u201d compared to the other versions. Note that this may change over time. Analyzing the temporal aspect of click distribution and engagement rate is left for future work. In summary the majority of engagement comes for one of two reasons: (1) high ambiguity in the query with many resolutions (i.e., the high click entropy case); (2) ambiguity but where there is a dominant \u201cassumed\u201d intent by users where they only realize the ambiguity after issuing the query (e.g., the mentioned Windows 10 example). 3.4 Characterizing Queries with High Clarification Engagement We address the second research question (RQ2: For which web search queries, do users prefer to use clarification?) by analyzing the user engagements with the clarification pane based on different query properties, such as query length, query type (natural language questions vs. other queries; ambiguous vs. faceted queries; head vs. torso vs. tail queries), and historical clicks observed for the query. 3.4.1 Analyzing Clarification Engagement Based on Query Length. In the research literature, long queries have often given rise to more challenges in producing quality results. One reason is that longer queries are more likely to be less frequent and among tail queries [17]. We study the engagement rates received by the clarification pane with respect to the query length. The result is shown in Figure 4. Interestingly, as the query length increases, we observe Table 3: Relative engagement rate (compared to the average engagement rate) per query type. 
Query type Relative engagement rate Natural language question 1.58 Other queries 0.96 Faceted queries 1.52 Ambiguous queries 0.70 Tail queries 1.01 Torso queries 1.02 Head queries 0.99 1 2 3 4 5 Position 0.0 0.2 0.4 0.6 Conditional Click Rate Faceted queries Ambiguous queries Figure 5: Conditional click rate per position for ambiguous vs. faceted queries for clarifications with five answers. substantial increase in the average engagement rate. Note that the clarification pane is not shown to the user for navigational queries, thus the data does not contain such queries. 3.4.2 Analyzing Clarification Engagement for Natural Language Questions. According to the first two rows in Table 3, the average engagement rate observed for natural language questions are 64% (relatively) higher than the other queries. Therefore, users who issue natural language questions are more likely to interact with the clarification pane. This observation demonstrates yet another motivation for using clarifying questions in the information seeking systems with natural language user interactions, such as conversational search systems. 3.4.3 Analyzing Clarification Engagement for Ambiguous versus Faceted Queries. Clarifying questions in web search can be useful for revealing the user information needs behind the submitted ambiguous or faceted queries. In Figure 1, few clarification examples are shown. The third example in the figure (right) shows the clarification pane for an ambiguous query, while the other two are faceted queries. The middle part of Table 3 reports the relative engagement rate received by the clarification pane for ambiguous and faceted queries. The category of each query was automatically identified based on the clarifying question and the candidate answers generated in the clarification pane. According to the figure, the clarification pane for faceted queries are approximately 100% more likely to receive a click compared to the ambiguous queries. We plot the conditional click distribution per position for ambiguous and faceted queries in Figure 5. The graph shows that the gap between the first and the second position for ambiguous queries are substantially higher than the gap for faceted queries. This shows that for ambiguous queries, it is more likely that one query intent dominates the user information needs for the query. In fact, this might be one of the reasons that the clarification pane for ambiguous queries receives less engagement, because it is likely that the \f0 -10 10 -20 20 -30 30 -40 40 -50 50 -60 60 -70 70 -80 80 -90 90 -100 100+ # unique clicked URLs 0 1 2 3 4 Rel. Engagement Rate 0.0 -0.1 0.1 -0.2 0.2 -0.3 0.3 -0.4 0.4 -0.5 0.5 -0.6 0.6 -0.7 0.7 -0.8 0.8 -0.9 0.9 -1.0 Normalized URL click entropy 0.0 0.5 1.0 1.5 2.0 2.5 Rel. Engagement Rate Figure 6: A box plot for the relative engagement rate with respect to (a) the number of unique clicked URLs for the query, and (2) the normalized entropy of click distribution on URLs. SERP often covers the most dominant query intent in the top position, thus users skip the clarification pane and directly move to the result list. 3.4.4 Analyzing Clarification Engagement for Head, Torso, and Tail Queries. We use the search traffic to identify the query types. The most frequent queries for a third of search traffic was considered as head queries, the second third as torso, and the rest as tail queries. This results in a small number of high frequency head queries and a large number of low frequency tail queries. 
We further compute the average engagement rate per query types and report the results in the last part of Table 3. According to the results, all query types achieve similar clarification engagement. Note that the data contains the queries that the clarification pane was triggered for, therefore, there should be too many tail queries that the system does not generate a clarifying question for. 3.4.5 Analyzing Clarification Engagement Based on Historical Click Data. We hypothesize that as the number of aspects for the query increases, the necessity for clarification also increases. To study this hypothesis, we measure the number of aspects per query based on the following criteria: \u2022 Using click data on SERP: for each query q in our data, we looked at a historical click logs and counted the number of unique URLs clicked for the query q. \u2022 Since some clicked URLs may be very related and do not represent different aspects, we follow the approach used in [24] and computed the click distribution entropy normalized by the maximum entropy as an indicator of aspect diversity for the query. The detailed description of click data used in this analysis is presented in Section 5.1.5. The results are plotted in Figure 6 and show that as the number of unique clicked URLs increases the relative engagement rate (both average and median) increases. This is also generally the case when the entropy of click distribution increases. Generally speaking, the unique number of clicked URLs and the click entropy are good indicators of user engagement with clarifying questions. 3.5 Analyzing Clarification Impact and Quality In Sections 3.3 and 3.4, we analyze user interactions with clarification panes in web search. In the next set of analysis, we study the impact of clarification on search experience (i.e., RQ3 in Section 3.2). Since SERP contains multiple elements, such as the result list, the entity card, and the answer box, one cannot simply compute the satisfying click ratio as a full indicator of search satisfaction. Hassan Table 4: The human labels for the clarification panes. Label % Good % Fair % Bad Overall label 6.4% 86.5% 7.1% Landing page label 89.1% 6.6% 4.3% et al. [19] shows that measuring user satisfaction can go beyond clicks and for example query reformulation can be used as a signal for user satisfaction. Therefore, because there are multiple SERP elements that can satisfy user satisfaction, we instead focus on dissatisfaction. Clicking on the result list with a small dwell time (i.e., unsatisfying clicks) or reformulating a query with a similar query within a time interval that is short enough (such as five minutes) implies dissatisfaction [19]. We measured dissatisfaction for the sessions in which users interact with clarification, and observed 16.6% less dissatisfaction compared to the overall dissatisfaction of the search engine. Note that there are many queries for which the clarification pane is not shown. Therefore, this relative number is not a completely representative comparison, however it gives us some idea on the overall impact of clarification on search quality. Since clicking on a candidate answer in clarification leads to a new query and a new SERP, A/B testing for measuring the impact of clarification in search could be also quite challenging here. Some of these challenges have been discussed by Machmouchi and Buscher [25]. A comprehensive study of user satisfaction while interacting with clarifying questions is left for future work. 
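As a point of reference for the normalized click entropy used as an aspect-diversity signal in Section 3.4.5 above, the following sketch shows one way to compute it from historical click counts; the normalization by the maximum achievable entropy follows the description in the text, and the function name is illustrative.

import numpy as np

def normalized_click_entropy(click_counts):
    # Entropy of the click distribution over unique clicked URLs (or over
    # candidate answers), divided by the maximum entropy for that number
    # of outcomes so that values are comparable across queries.
    counts = np.asarray(click_counts, dtype=float)
    counts = counts[counts > 0]
    if counts.size <= 1:
        return 0.0
    p = counts / counts.sum()
    entropy = -np.sum(p * np.log2(p))
    return float(entropy / np.log2(counts.size))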
We also observe that in 7.30% of the interactions with the clarification pane, users click on multiple candidate answers. This suggests that in many of these cases, the users would like to explore different candidate answers provided by the system. In other words, this observation shows that there is a promise in using clarification with candidate answers for exploratory search. Another approach to measure the impact of search clarification is measuring search quality using human annotations. To do so, we sampled 2000 unique queries from the search logs and asked three trained annotators to provide labels for each query-clarification pair. Following [2, 51], we first asked the trained annotators to first skim multiple pages of search results for the query to have a sense on different possible intents of the query. We then asked them to provide the following labels for each clarification pane: \u2022 Overall label: the overall label is given to the whole clarification pane in terms of its usefulness for clarification, comprehensiveness, coverage, understandability, grammar, diversity, and importance order. In summary, they are asked to assign a Good label, if all the mentioned criteria are met. While, the Fair label should be assigned to an acceptable candidate answer set that does not satisfy at least one of the above criteria. Otherwise, the Bad label should be chosen. \u2022 Landing page quality: the search quality of the secondary SERP obtained by clicking on each candidate answer. A secondary SERP is considered as Good, if the answer to all possible information needs behind the the selected answer can be easily found in a prominent location in the page (e.g., an answer box on top of the page or the top three documents) and the retrieved information correctly satisfies the possible information needs. If the result page is still useful but finding the answer is not easy, the Fair label should be chosen. Otherwise, the landing page is Bad. \fA detailed description of each label with multiple examples is provided to the annotators. In some rare cases (less than 2%), there is no agreement between the annotators (i.e., no label with more than 1 voter). In such cases, we dropped the query-clarification pair from the data. The overall Fleiss\u2019 kappa inter-annotator agreement is 72.15%, which is considered as good. The results for human annotations are shown in Table 4. According to the table, the majority of secondary search results (i.e., landing page) after clicking on each individual option are labeled as Good, so the query intent was addressed in a prominent location of the SERP. For the overall label, most annotators tend to choose Fair as the label. Note that Fair still meets some high standards due to the description provided to the annotators. The reason is that they could mostly argue that there is an intent that is not covered by the clarification pane, and thus it should not get a Good label. 4 EXPLORING CLICK BIAS In the last section, we study the engagement rates received by the clarification pane in web search. In this section, we extend our analysis to the interactions with individual candidate answers. Such analysis would be useful for developing effective models for re-ranking candidate answers or even replacing them. However, implicit feedback could be biased for a number of reasons, such as presentation. Figure 5 shows that for both query types, the conditional click probability decreases by the increase in the candidate answer position. 
Note that the candidate answers are presented horizontally in the interface and the first position means the far left candidate answer in Figure 1. This observation might be due to the fact that the clarification pane sorts candidate answers based on their popularity and relevance. On the other hand, this could be also due to position and presentation biases in user behaviors. This section provides a preliminary analysis of bias in the click data observed on each candidate answer. In the experiments designed for this section, we followed the process used by Craswell et al. [12] for studying position bias in web search. In more detail, we created a data set D whose instances are in the form of (q,C,C\u2032), where q is a query while C and C\u2032 are two difference clarification panes for q. We make sure that the clarifying question and the candidate answer set in both C and C\u2032 are the same. The only different between C and C\u2032 is that two adjacent candidate answers are swapped. Therefore, as suggested in [12], this data allows us to focus on the click distribution on two adjacent candidate answers where their contents and their relevance do not change, while their positions change. This resulted in 46, 573 unique queries and 132, 981 data points in our data. To study click bias in D, we first solely focus on the position. To do so, for each triplet (q,C,C\u2032) \u2208D, assume that the candidate answer in position i is swapped with the one in position i + 1. In other words, Ci = C\u2032 i+1 and Ci+1 = C\u2032 i, where the subscripts show the position of candidate answer (note that \u2200j , i,i + 1 : Cj = C\u2032 j). We then construct the following two-dimensional data points: < click rate for Ci, click rate for C\u2032 i+1 > < click rate for C\u2032 i, click rate for Ci+1 > These pairs show what would be the click rate on the same candidate answer if it ranks higher for only one position. We repeat this process for all the data points in D. The scatter plots for the Table 5: Percentage of points that would receive higher click rate if moved to a higher position (i.e., % points above the diagonal in Figure 7). Note that the distance from diagonal is visualized by the line fitted on the data in Figure 7. # candidate answers 1 \u21942 2 \u21943 3 \u21944 4 \u21945 2 56.34% 3 56.17% 57.89% 4 47.28% 57.63% 55.62% 5 48.50% 52.32% 53.54% 49.77% created data points in a log odds space (log_odds(p) = log( p 1\u2212p )) are shown in Figure 7. Note that in a perfect scenario, all points should be on the diagonal in the figures. However, this perfect scenario never happens in practice. We also fit a line (i.e., the solid line) to the data points in each scatter plot to better demonstrate the distribution of data points in this space. As shown in the figure, the slope of the line generally gets closer to the diagonal as the number of options increases. The reason is that as the number of options increases, the click bias in the lower positions are far less than the bias in the higher positions and this influences the overall click bias. We also compute the percentage of points above the diagonal in each setting. This shows for what percentage of data points, the same answer with a higher positions would attract more clicks. The result is reported in Table 5. Each column in the table shows the position of swapped adjacent answers. The closer the percentage to 50%, the less likely there is a click bias. 
Moreover, all the percentages are typically expected to be higher than or equal to 50%, which means options with higher ranks (left) are more likely to be clicked. However, our observation in Table 5 is different. As shown in the table, when the number of answers are 4 or 5, the percentage of points above the diagonal is lower than 50% for the 1 \u21942 setting (this also happens for 4 \u21945 when the number of candidate answers is 5, but it is close to 50%). The reason for such observation is that position (i.e., rank) is not the only variable that influences click bias. The size of candidate answers also varies as the candidate answer content gets longer (e.g., see Figure 1). Proper visualization of the click bias considering all of these variables is difficult. Therefore, to study the influence of each variable on click distribution, we train a logistic regression for click prediction. This is similar to the technique used by Yue et al. [48] to study click bias based on different result presentations in web search (e.g., the number of bold terms in the snippets). Therefore, for each triplet (q,C,C\u2032) \u2208D, the goal is to predict the click rate for the swapped candidate answers in C\u2032 given the observation we had from C. We use the following features for the logistic regression model: \u2022 CTR_L: The click rate observed for candidate answer Ci. \u2022 CTR_R: The click rate observed for candidate answer Ci+1. \u2022 SIZE_DIFF: The relative size difference between the candidate answers Ci and Ci+1. In other words, this feature is equal to (size(Ci) \u2212size(Ci+1))/(size(Ci) + size(Ci+1)). \u2022 OFFSET: The offset of the candidate answer Ci. For the first candidate answer, the offset is equal to zero. We train two logistic regressions to predict the following labels: \u2022 L: The click rate for candidate answer C\u2032 i. \u2022 R: The click rate for candidate answer C\u2032 i+1. \fFigure 7: Log odds scatter plot for the click rates of the same candidate answer on the lower position (x axis) and the higher position (y axis) when swapping adjacent candidate answers. CTR_L CTR_R SIZE_DIFF OFFSET R L # candidate answers = 2 0.06 0.04 0.02 0.00 0.02 0.04 0.06 CTR_L CTR_R SIZE_DIFF OFFSET R L # candidate answers = 3 0.02 0.01 0.00 0.01 0.02 CTR_L CTR_R SIZE_DIFF OFFSET R L # candidate answers = 4 0.04 0.02 0.00 0.02 0.04 CTR_L CTR_R SIZE_DIFF OFFSET R L # candidate answers = 5 0.010 0.005 0.000 0.005 0.010 Figure 8: Feature weights learned by logistic regression for predicting click rate when two adjacent candidate answers are swapped. The figure should be viewed in color. Table 6: Cross entropy for click rate estimation models. Lower cross entropy indicates more accurate click rate estimation. Model 2 options 3 options 4 options 5 options Best Possible 0.0216 \u00b1 0.0058 0.0100 \u00b1 0.0040 0.0097 \u00b1 0.0049 0.0053 \u00b1 0.0012 Blind click (relevance independent) 0.1193 \u00b1 0.0294 0.0604 \u00b1 0.0275 0.0561 \u00b1 0.0330 0.0283 \u00b1 0.0064 Baseline (no click bias) 0.1105 \u00b1 0.0264 0.0578 \u00b1 0.0254 0.0539 \u00b1 0.0329 0.0272 \u00b1 0.0064 Examination 0.1084 \u00b1 0.0237 0.0544 \u00b1 0.0186 0.0517 \u00b1 0.0260 0.0275 \u00b1 0.0093 Cascade 0.1063 \u00b1 0.0145 0.0551 \u00b1 0.0174 0.0510 \u00b1 0.0189 0.0273 \u00b1 0.0090 Logistic regression 0.0482 \u00b1 0.0058 0.0336 \u00b1 0.0055 0.0333 \u00b1 0.0064 0.0264 \u00b1 0.0012 Note that the candidate answer C\u2032 i (or C\u2032 i+1) is in position i + 1 (or i) inC. 
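To make the setup concrete, the sketch below fits a logistic model to fractional click rates with exactly these four features; it is a minimal gradient-descent implementation for illustration, not the exact procedure used in the paper, and the construction of the feature matrix is assumed to happen elsewhere.

import numpy as np

def fit_click_rate_model(X, y, lr=0.1, epochs=2000):
    # X: rows of [CTR_L, CTR_R, SIZE_DIFF, OFFSET]; y: observed click rates
    # in [0, 1] for the swapped pane (label L or R). A logistic model
    # sigma(Xw + b) is fit with the cross-entropy loss, which remains valid
    # for fractional targets.
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted click rate
        grad = p - y                            # gradient of the loss w.r.t. the logit
        w -= lr * (X.T @ grad) / len(y)
        b -= lr * grad.mean()
    return w, b

The signs of the learned weights can then be inspected in the same way as the feature weights reported in Figure 8.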
We perform 10 fold cross-validation for training the logistic regression model. The learned feature weights were consistent across folds. The average weights are shown in Figure 8. In all the plots, CTR_L gets a positive weight for the label R and CTR_R also gets a positive weight for the label L. This shows that the click rate on the same candidate answer in the reverse order is a positive signal for click prediction, which is expected. The weights for two candidate answers shows that the size difference of candidate answers are also very effective in predicting the click bias. As the number of answers increases, the influence of size difference decreases, while the influence of offset increases. The size difference for the label R always gets a negative weight, while this feature gets a positive weight for label L. This is again expected, showing that if we replace the left candidate answer with a larger size answer, the click rate on L would increase, and at the same time the click rate on R would decrease. In other words, the candidate answer size is a strong signal for predicting click rate. The offset has a negative weight for both labels L and R. This suggests that the further the candidate answers from the left, the less likely to observe a click. Note that when the number of candidate answers is two, the offset for all examples is equal to zero and thus it has no effect. To show that this simple logistic regression predicts the click rate accurately, we compare this model against some simple baselines. The results are reported in Table 6. Following Craswell et al. [12], we use cross entropy between the true and the predicted click rates as the evaluation metric. The results show that a baseline model that assumes there is no click bias has a much higher cross entropy than the best possible cross entropy (i.e., the entropy of the true labels). The Examination model [37] and the Cascade model [12] are user models borrowed from the web search literature. The Examination model assumes each rank has a certain probability of being examined by the user. The Cascade model, on the other hand, assumes that the user views search results from top to bottom, deciding whether to click before moving to the next. Therefore, it also models a skip probability. The assumptions made by both of these models (and many other click models) may not hold in our scenario, where the answers are presented horizontally and their length is small and many of them can be examined by the user at a glance. The results also suggest that these models do not predict the click rate much better than the baseline which assumes there is no click bias. The logistic regression model, however, achieves a much lower cross entropy. Note that the goal of this section is providing some insights into the click bias in the data, and not proposing effective user models for click estimation. We believe that this preliminary click bias analysis provides some insights into how bias is the user interactions with individual candidate answers. Deeper analyses, for example based on mouse movement and eye-tracking, can shed light on the user click behaviors with clarifying questions and can lead to accurate user models for click estimation and debiasing the data. 5 IMPROVING CLARIFICATION USING USER INTERACTION DATA A fundamental task in search clarification is re-ranking and selecting the clarifying questions generated by different models under different assumptions. A few clarifying question generation models are presented in [51]. 
Based on the analyses presented in Section 3, \fTEXTENCODER \ud835\udc5e # \ud835\udc4e\u0b35 # \ud835\udc56\u0bdd TEXTENCODER \ud835\udc5e # \ud835\udc4e\u0b36 # \ud835\udc56\u0bdd TEXTENCODER \ud835\udc5e # \ud835\udc4e\u0bc4 # \ud835\udc56\u0bdd \u2026 Transformer \u2026 \ud835\udc79\ud835\udfcf \ud835\udfcf \ud835\udc79\ud835\udfd0 \ud835\udfcf \ud835\udc79\ud835\udc72 \ud835\udfcf (a) The Individual Intent Encoder model. Individual Intent Encoder \ud835\udc5e, \ud835\udc34, \ud835\udc56\u0b35 Individual Intent Encoder \ud835\udc5e, \ud835\udc34, \ud835\udc56\u0b36 Individual Intent Encoder \ud835\udc5e, \ud835\udc34, \ud835\udc56\u0be1 \u2026 Transformers \ud835\udc5d(\ud835\udc56\u0bdd|\ud835\udc5e) Intent Frequency Attention Point-wise Feed Forward TEXTENCODER \ud835\udc4e\u0b35 # \ud835\udc52\u0b35 TEXTENCODER \ud835\udc4e\u0bc4 # \ud835\udc52\u0bc4 \u2026 Transformers TEXTENCODER \ud835\udc5e\u2217 Label Prediction Answer Consistency Encoder Intents Coverage Encoder \u2026 \ud835\udc79\ud835\udfcf \ud835\udfd0 \ud835\udc79\ud835\udfd0 \ud835\udfd0 \ud835\udc79\ud835\udc8f \ud835\udfd0 \ud835\udc79\ud835\udfd1 \ud835\udc79\ud835\udfd2 \ud835\udc79\ud835\udc70\ud835\udc6a\ud835\udc6c \ud835\udc79\ud835\udc68\ud835\udc6a\ud835\udc6c (b) The RLC architecture. Figure 9: The neural network architecture for RLC. Same color indicates shared parameters. we introduce the following features for re-ranking clarification panes in response to a query: (1) question template (a categorical feature), (2) query length, (3) query types (see Table 3), (4) the number of candidate answers, (5) the number of unique clicked URLs, and (6) the URL normalized click entropy. A number of these features are query-specific. To measure how much the clarification pane clarifies different query intents, we can use the Clarification Estimation model presented in [51]. However, some aspects of clarification (e.g., candidate answer coherency) is missing or is not effectively addressed in this feature. In the following, we propose an end to end neural model to fill these gaps. The model is mainly trained based on user interaction data and further fine-tuned using a small set of human labeled data. Let us first introduce our notation. Let T denote a training set containing triplets of (q,C, L), where q is a unique query, C = [c1,c2, \u00b7 \u00b7 \u00b7 ,cm] is a set of m clarification panes for the query, and L = [l1,l2, \u00b7 \u00b7 \u00b7 ,lm] is the labels associated with the clarification panes. Each clarification pane cj includes a clarifying question q\u2217 and a list of K candidate answers A = [a1,a2, \u00b7 \u00b7 \u00b7 ,aK], where K = 5 in our setting. Additionally, let Iq denote the intent set for the query q with n intents, whose jth element is a pair (ij,wj), where denotes an intent (ij) and its weight (wj). Note that the query intent set is often unknown to a system, but there exist few approaches for estimating the intent set based on query logs and click data. We later explain how we built the intent set Iq for our experiments (See Section 5.1.5). The goal is to train a representation learning model for each query-clarification pair. This model can be used for selecting or re-ranking clarification panes. 5.1 Representation Learning for Clarification We design our neural model based on the following assumptions: Assumption 1. A good clarification pane should clarify different intents of the query, in particular the most frequent intents. Assumption 2. 
The candidate answers in a good clarification pane should be coherent and also consistent with the clarifying question. Assumption 1 is indirectly related to the analysis done in Figure 6, which shows that queries with more unique clicked URLs would lead to higher engagement rates. This shows that covering a wide range of intents is an important factor in clarifying questions, which leads us to the first assumption. Given these assumptions, our model, called RLC,2 is built based on two major components: 2stands for Representation Learning for Clarification. Table 7: The training and test data used in our experiments. Data # training queries # test queries # clarifications per query Click data 137,392 3925 6.2 Labeled data 1848 122 10 Intents Coverage Encoder and Answers Consistency Encoder. The architecture of RLC is depicted in Figure 9. 5.1.1 Intents Coverage Encoder. This component learns a highdimensional vector representing the intent coverage of the candidate answer set. We first create K\u00d7n triplets (q,ak,ij) for 1 \u2264k \u2264K and 1 \u2264j \u2264n. For each of these triplets, we create a sequence query answer intent with some boundary tokens and feed the sequence to a text encoder network for obtaining the representation R(1) kj = TextEncoder(q,ak,ij). See Section 5.1.4 for more information on TextEncoder. Next, we would like to see whether each intent is covered by the candidate answer set. Therefore, we concatenate all the representations R(1) kj for all 1 \u2264k \u2264K and feed the obtained vector to a Transformer encoder, which consists of multiple Transformer layers [46]. The self-attention mechanism in Transformer helps the model learn a representation for the coverage of the jth intent by the answer set. This results in n representations R(2) j , one per query intent. Different query intents may be related, especially since they are automatically estimated using some algorithms. Therefore, we apply a Transformer Encoder layer on top of all individual intent representations, whose self-attention mechanism would lead to learning accurate representations for related intents. This layer gives us R(3) j for each intent ij. In addition, some intents are more common than the others. According to Assumption 1, we expect the model to particularly cover those common intents. Therefore, we use the intent weights as attentions for intent coverage representation. Formally, R(4) j = wj \u00cd j\u2032 wj\u2032 R(3) j . This layer is followed by two point-wise feed-forward layers to adjust the representation space and add non-linearity. This component returns the intent coverage encoding R(ICE). 5.1.2 Answers Consistency Encoder. This component focuses on the clarifying question and its answer set. Answer entity types are found useful for generating clarifying questions [51]. Therefore, in this component, we first learn a representation for each candidate \fTable 8: Experimental results for re-ranking clarification panes for a query. The superscripts 1/2/3 indicate statistically significant improvements compared to Clarification Estimation/BERT/LambdaMART without RLC, respectively. Method Click Data Labeled Data Landing Pages Quality Eng. Rate Impr. 
answer ak based on the answer text and its entity type (denoted as ek), if one exists, concatenated using a separation token and fed into the TextEncoder. We also feed the clarifying question to the TextEncoder. This results in K + 1 representations. We further apply a Transformer encoder whose self-attention mechanism helps the model identify coherent and consistent answers. In other words, the attention weights from each candidate answer to the others, as well as to the question, help the model observe the similarity of answers and their entity types. Using entity types increases generalization, and entity similarity better represents answer coherency. R^(ACE) is the output of this component. 5.1.3 Label Prediction. For the label prediction sub-network, we simply concatenate R^(ICE) and R^(ACE) and feed the obtained vector to a feed-forward network with two layers. The output dimensionality of this component is 1, which indicates the final score for the given query-clarification pair. 5.1.4 TextEncoder. As mentioned above, each major component in the network starts with a TextEncoder. There are several approaches for implementing this component. In this paper, we use BERT [15], a Transformer-based network pre-trained on a masked language modeling task. BERT has recently led to significant improvements in several NLP and IR tasks [15, 29, 32]. We use BERT-base, which consists of 12 layers, 768 representation dimensions, 12 attention heads, and 110M parameters (the pre-trained models can be found at https://github.com/google-research/bert). The BERT parameters are fine-tuned in our end-to-end training. The components with the same color in Figure 9 share parameters. Note that the TextEncoder functions with different colors still share the embedding layer (i.e., the first layer), while their attention weight matrices are different and learned for the specific input type. 5.1.5 The Intent Set Iq. We use two datasets for estimating the intents of each query; therefore, there are two Intents Coverage Encoders whose outputs are concatenated. The first one is the query reformulation data and the second one is click data on documents. These two datasets were obtained from the Bing query logs, randomly sub-sampled from the data collected over a 2-year period in the EN-US market. The query reformulation data is a set of triplets (q, q′, w), where w is the frequency of the q → q′ query reformulation in the same session. We use the reformulations in which q′ contains q as an estimation for query intent. A similar assumption has been made in [51]. From the click data, we use the titles of the clicked URLs as an additional source for estimating query intents. We only kept the query reformulations and clicks with a minimum frequency of 2. 5.2 Training We train our model using a pair-wise loss function. For two clarification panes for the same query, we get the scores from RLC and use the softmax operator to convert the scores to probabilities. We use the binary cross-entropy loss function for training, i.e., the label for the clarification pane with the higher engagement rate is 1.
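As a concrete illustration of this pair-wise objective, the following is a minimal sketch in PyTorch, assuming the RLC scorer is available as a differentiable black box; the function and tensor names are illustrative and not part of the described system.

```python
import torch
import torch.nn.functional as F

def pairwise_clarification_loss(score_high: torch.Tensor,
                                score_low: torch.Tensor) -> torch.Tensor:
    """Pair-wise loss over two clarification panes of the same query.

    score_high / score_low are the RLC scores for the pane with the higher /
    lower engagement rate. The two scores are turned into probabilities with
    softmax and trained with binary cross-entropy, i.e., the label of the
    higher-engagement pane is 1, which reduces to the negative log of its
    softmax probability.
    """
    logits = torch.stack([score_high, score_low], dim=-1)  # [batch, 2]
    log_probs = F.log_softmax(logits, dim=-1)
    return -log_probs[..., 0].mean()

# Toy usage with random scores standing in for the RLC outputs.
s_high = torch.randn(8, requires_grad=True)
s_low = torch.randn(8, requires_grad=True)
loss = pairwise_clarification_loss(s_high, s_low)
loss.backward()
```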
We further fine-tune the model using a small set of human-labeled data. We optimize the network parameters using Adam with L2 weight decay, learning rate warm-up for the first 5000 steps, and linear decay of the learning rate. The learning rate was set to 10^-5. In the following, we introduce our datasets: Clarification Click Data: From the data described earlier in Table 1, we kept clarifying questions with at least 10 impressions and at least two different clarification panes that have different engagement rates, i.e., click rates. We split the data randomly into train and test based on the queries. For more details, see Table 7. Clarification Labeled Data: We obtained an overall label for clarification and the secondary search result page (landing page) quality labels using the instructions mentioned in Section 3.5. We split the data into train and test sets such that no query is shared between the sets. The statistics of this data are also reported in Table 7. Note that in the labeled data we re-rank 10 clarifying questions per query. If the number of labeled clarifying questions is less than 10, we randomly add negative samples with label 0 from the clarifying questions of other queries. Entity Type Data: For answer entity types, we used an open information extraction toolkit, i.e., ReVerb [16], to extract "is a" relations from a large-scale corpus (over 35 petabytes of search snippets). We only kept the relations with a confidence of at least 96%. This results in over 27 million relations for over 20 million unique phrases. The data contains over 6 million entity types. 5.3 Clarification Re-Ranking Results We first trained the model using 90% of the training set and used the remaining 10% for hyper-parameter tuning of all models, including the baselines. Once the hyper-parameters were selected, we trained the final model on the whole training set and computed the results on the test set. The results for the proposed method and some baselines are reported in Table 8. Table 8: Experimental results for re-ranking clarification panes for a query, reporting the engagement rate improvement on the click data; nDCG@1, nDCG@3, and nDCG@5 on the labeled data; and the %Bad, %Fair, and %Good landing page quality. The superscripts 1/2/3 indicate statistically significant improvements compared to Clarification Estimation/BERT/LambdaMART without RLC, respectively. Clarification Estimation [51]: –, 0.8173, 0.9356, 0.9348, 11.68%, 13.24%, 75.08%. BERT [15]: 25.96%^1, 0.8515^1, 0.9449, 0.9425, 10.52%, 17.24%, 72.24%. LambdaMART w/o RLC: 67.27%^{1,2}, 0.9001^{1,2}, 0.9584^1, 0.9565^1, 5.21%, 19.45%, 75.34%. RLC: 92.41%^{1,2,3}, 0.9312^{1,2,3}, 0.9721^{1,2,3}, 0.9702^{1,2,3}, 5.63%, 12.33%, 82.04%. LambdaMART w/ RLC: 106.18%^{1,2,3}, 0.9410^{1,2,3}, 0.9822^{1,2,3}, 0.9767^{1,2,3}, 4.94%, 10.21%, 84.85%. For the click data, we re-rank the clarification panes, select the first one, and report the engagement rate. We finally compute the average engagement rate across queries. The engagement rates are reported relative to the performance of Clarification Estimation [51]. The BERT model uses all the inputs we used in RLC, i.e., the query, the clarification pane, and the estimated intents. All of these inputs are concatenated using separation tokens and fed to BERT-base with different segment embeddings. LambdaMART [8] w/o RLC uses all the features described earlier in Section 5 plus the BERT-base output. The results show that the proposed method outperforms all the baselines. According to the paired t-test with Bonferroni correction, the improvements are statistically significant (p-value < 0.05). The best model (i.e., LambdaMART w/ RLC) achieves an nDCG@1 of 0.9410." + }, + { + "url": "http://arxiv.org/abs/1912.08904v1", + "title": "Macaw: An Extensible Conversational Information Seeking Platform", + "abstract": "Conversational information seeking (CIS) has been recognized as a major\nemerging research area in information retrieval. Such research will require\ndata and tools, to allow the implementation and study of conversational\nsystems. This paper introduces Macaw, an open-source framework with a modular\narchitecture for CIS research.
Macaw supports multi-turn, multi-modal, and\nmixed-initiative interactions, and enables research for tasks such as document\nretrieval, question answering, recommendation, and structured data exploration.\nIt has a modular design to encourage the study of new CIS algorithms, which can\nbe evaluated in batch mode. It can also integrate with a user interface, which\nallows user studies and data collection in an interactive mode, where the back\nend can be fully algorithmic or a wizard of oz setup. Macaw is distributed\nunder the MIT License.", + "authors": "Hamed Zamani, Nick Craswell", + "published": "2019-12-18", + "updated": "2019-12-18", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.CL", + "cs.HC" + ], + "main_content": "INTRODUCTION The rapid growth in speech and small screen interfaces, particularly on mobile devices, has significantly influenced the way users interact with intelligent systems to satisfy their information needs. The growing interest in personal digital assistants, such as Amazon Alexa, Apple Siri, Google Assistant, and Microsoft Cortana, demonstrates the willingness of users to employ conversational interactions [10]. As a result, conversational information seeking (CIS) has been recognized as a major emerging research area in the Third Strategic Workshop on Information Retrieval (SWIRL 2018) [4].2 Research progress in CIS relies on the availability of resources to the community. There have been recent efforts on providing data 1Macaw is available at https://github.com/microsoft/macaw. 2https://sites.google.com/view/swirl3/ Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org. Pre-Print, Microsoft, AI & Research \u00a9 2019 Association for Computing Machinery. ACM ISBN 978-x-xxxx-xxxx-x/YY/MM...$15.00 https://doi.org/10.1145/nnnnnnn.nnnnnnn (a) Multi-modal interactions. (b) Multi-turn interactions. Figure 1: Example screenshots of the Macaw interface on mobile devices using Telegram bots. Macaw supports multimodal and multi-turn interactions. for various CIS tasks, such as the TREC 2019 Conversational Assistance Track (CAsT),3 MISC [14], Qulac [1], CoQA [11], QuAC [3], SCS [15], and CCPE-M [9]. In addition, Dalton et al. [5] have implemented a demonstration for conversational movie recommendation based on Google\u2019s DialogFlow. Despite all of these resources, the community still feels the lack of a suitable platform for developing CIS systems. We believe that providing such platform will speed up the progress in conversational information seeking research. Therefore, we developed a general framework for supporting CIS research. The framework is called Macaw. This paper describes the high-level architecture of Macaw, the supported functionality, and our future vision. Researchers working on various CIS tasks should be able to take advantage of Macaw in their projects. 
Macaw is designed based on a modular architecture to support different information seeking tasks, including conversational 3http://www.treccast.ai/ arXiv:1912.08904v1 [cs.IR] 18 Dec 2019 \fsearch, conversational question answering, conversational recommendation, and conversational natural language interface to structured and semi-structured data. Each interaction in Macaw (from both user and system) is a Message object, thus a conversation is a list of Messages. Macaw consists of multiple actions, each action is a module that can satisfy the information needs of users for some requests. For example, search and question answering can be two actions in Macaw. Even multiple search algorithms can be also seen as multiple actions. Each action can produce multiple outputs (e.g., multiple retrieved documents). For every user interaction, Macaw runs all actions in parallel. The actions\u2019 outputs produced within a predefined time interval (i.e., an interaction timeout constant) are then post-processed. Macaw can choose one or combine multiple of these outputs and prepare an output Message object as the user\u2019s response. The modular design of Macaw makes it relatively easy to configure a different user interface or add a new one. The current implementation of Macaw supports a command line interface as well as mobile, desktop, and web apps. In more detail, Macaw\u2019s interface can be a Telegram bot, which supports a wide range of devices and operating systems (see Figure 1). This allows Macaw to support multi-modal interactions, such as text, speech, image, click, etc. A number of APIs for automatic speech recognition and generation have been employed to support speech interactions. Note that the Macaw\u2019s architecture and implementation allows mixed-initiative interactions. The research community can benefit from Macaw for the following purposes: \u2022 Developing algorithms, tools, and techniques for CIS. \u2022 Studying user interactions with CIS systems. \u2022 Performing CIS studies based on an intermediary person and wizard of oz. \u2022 Preparing quick demonstration for a developed CIS model. 2 MACAW ARCHITECTURE Macaw has a modular design, with the goal of making it easy to configure and add new modules such as a different user interface or different retrieval module. The overall setup also follows a Model-View-Controller (MVC) like architecture. The design decisions have been made to smooth the Macaw\u2019s adoptions and extensions. Macaw is implemented in Python, thus machine learning models implemented using PyTorch,4 Scikit-learn,5 or TensorFlow6 can be easily integrated into Macaw. The high-level overview of Macaw is depicted in Figure 2. The user interacts with the interface and the interface produces a Message object from the current interaction of user. The interaction can be in multi-modal form, such as text, speech, image, and click. Macaw stores all interactions in an \u201cInteraction Database\u201d. For every interaction, Macaw looks for most recent user-system interactions (including the system\u2019s responses) to create a list of Messages, called the conversation list. It is then dispatched to multiple information seeking (and related) actions. The actions run in parallel, and each should respond within a pre-defined time interval. 
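As a rough, hypothetical illustration of this message-and-action flow (this is not Macaw's actual API; the Message fields, the action registry, and the fallback response are assumptions made for the sketch), a dispatcher could be organized as follows; output selection, discussed next, is reduced here to a simple priority rule.

```python
import concurrent.futures
from dataclasses import dataclass, field
from typing import Any, Callable, Dict, List

@dataclass
class Message:
    """A single user or system interaction (text, speech, image, click, ...)."""
    user_id: str
    text: str
    modality: str = "text"
    attachments: Dict[str, Any] = field(default_factory=dict)

def dispatch(conversation: List[Message],
             actions: Dict[str, Callable[[List[Message]], List[Message]]],
             timeout: float = 2.0) -> Message:
    """Run all registered actions in parallel on the conversation list and
    keep only the outputs produced within the interaction timeout."""
    outputs: Dict[str, List[Message]] = {}
    with concurrent.futures.ThreadPoolExecutor() as pool:
        futures = {pool.submit(act, conversation): name
                   for name, act in actions.items()}
        done, _ = concurrent.futures.wait(futures, timeout=timeout)
        for fut in done:
            try:
                outputs[futures[fut]] = fut.result()
            except Exception:
                pass  # a failing action simply contributes no output
        # Note: on exit the executor still waits for stragglers; a real
        # system would cancel them instead.
    # Toy output selection: prefer a question-answering answer, then retrieval.
    for name in ("question_answering", "retrieval"):
        if outputs.get(name):
            return outputs[name][0]
    return Message(user_id="system", text="Sorry, I could not find an answer.")
```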
The output selection component selects 4https://pytorch.org/ 5https://scikit-learn.org/ 6http://tensorflow.org/ from (or potentially combines) the outputs generated by different actions and creates a Message object as the system\u2019s response. This message is logged into the interaction database and is sent to the interface to be presented to the user. Again, the response message can be multi-modal and include text, speech, link, list of options, etc. Macaw also supports Wizard of Oz studies or intermediary-based information seeking studies. The architecture of Macaw for such setup is presented in Figure 3. As shown in the figure, the seeker interacts with a real conversational interface that supports multimodal and mixed-initiative interactions in multiple devices. The intermediary (or the wizard) receives the seeker\u2019s message and performs different information seeking actions with Macaw. All seeker-intermediary and intermediary-system interactions will be logged for further analysis. This setup can simulate an ideal CIS system and thus is useful for collecting high-quality data from real users for CIS research. 3 RETRIEVAL AND QUESTION ANSWERING IN MACAW The overview of retrieval and question answering actions in Macaw is shown in Figure 4. These actions consist of the following components: \u2022 Co-Reference Resolution: To support multi-turn interactions, it is sometimes necessary to use co-reference resolution techniques for effective retrieval. In Macaw, we identify all the coreferences from the last request of user to the conversation history. The same co-reference resolution outputs can be used for different query generation components. This can be a generic or action-specific component. \u2022 Query Generation: This component generates a query based on the past user-system interactions. The query generation component may take advantage of co-reference resolution for query expansion or re-writing. \u2022 Retrieval Model: This is the core ranking component that retrieves documents or passages from a large collection. Macaw can retrieve documents from an arbitrary document collection using the Indri python interface [6, 12].7 We also provide the support for web search using the Bing Web Search API.8 Macaw also allows multi-stage document re-ranking. \u2022 Result Generation: The retrieved documents can be too long to be presented using some interfaces. Result generation is basically a post-processing step ran on the retrieved result list. In case of question answering, it can employ answer selection or generation techniques, such as machine reading comprehension models. For example, Macaw features the DrQA model [2] for question answering. These components are implemented in a generic form, so researchers can easily replace them with their own favorite algorithms. 7Indri [12] is an open-source search engine originally implemented to support language models in information retrieval as part of the Lemur Project (http://lemurproject. org/). It features a wide range of retrieval models. For more information visit http: //lemurproject.org/indri.php. 8https://azure.microsoft.com/en-us/services/cognitive-services/ bing-web-search-api/ \fInterface Conversation Retrieval Request Dispatcher Action 1 Output Selection Output Presentation List of Request Messages Action 2 Action N Interaction DB UI Specific Request Message Response Message Model Specific Figure 2: The high-level architecture of Macaw for developing conversation information seeking systems. 
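The retrieval and question answering action described above can be summarized as a short pipeline. The sketch below is schematic rather than Macaw's real implementation: the four component callables are placeholders for whatever co-reference resolver, query generator, retriever (e.g., Indri or the Bing Web Search API), and result generator are plugged in.

```python
from typing import Callable, List, Sequence

def retrieval_qa_action(conversation: List[str],
                        coref_resolver: Callable[[List[str]], List[str]],
                        query_generator: Callable[[List[str]], str],
                        retriever: Callable[..., Sequence[str]],
                        result_generator: Callable[[str, Sequence[str]], str],
                        top_k: int = 10) -> str:
    """Schematic retrieval/QA action: resolve co-references against the
    conversation history, generate a query, retrieve documents, and
    post-process the result list (e.g., with a reading-comprehension model)."""
    resolved = coref_resolver(conversation)   # e.g., map "it" back to an earlier entity
    query = query_generator(resolved)         # rewrite/expand the last user request
    docs = retriever(query, top_k=top_k)      # document or passage retrieval
    return result_generator(query, docs)      # answer selection or generation
```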
Interface Interface Request Dispatcher Action 1 Output Selection Action 2 Action N Response Message Seeker (User) Intermediary (Wizard) Figure 3: The high-level architecture of Macaw for user studies. In this architecture, user interacts with a human intermediary who is an expert of the system and can interact with the system to address the user\u2019s information need. 4 USER INTERFACES We have implemented the following interfaces for Macaw: \u2022 File IO: This interface is designed for experimental purposes, such as evaluating the performance of a conversational search technique on a dataset with multiple queries. This is not an interactive interface. \u2022 Standard IO: This interactive command line interface is designed for development purposes to interact with the system, see the logs, and debug or improve the system. \u2022 Telegram: This interactive interface is designed for interaction with real users (see Figure 1). Telegram9 is a popular instant messaging service whose client-side code is open-source. We 9https://telegram.org/ \fCo-reference Resolution Query Generation Retrieval Model Document Collection Result Generation Figure 4: The overview of retrieval and question answering in Macaw. have implemented a Telegram bot that can be used with different devices (personal computers, tablets, and mobile phones) and different operating systems (Android, iOS, Linux, Mac OS, and Windows). This interface allows multi-modal interactions (text, speech, click, image). It can be also used for speech-only interactions. For speech recognition and generation, Macaw relies on online APIs, e.g., the services provided by Google Cloud and Microsoft Azure. In addition, there exist multiple popular groups and channels in Telegram, which allows further integration of social networks with conversational systems. For example, see the Naseri and Zamani\u2019s study on news popularity in Telegram [8]. Similar to the other modules, one can easily extend Macaw using other appropriate user interfaces. 5 LIMITATIONS AND FUTURE WORK The current implementation of Macaw lacks the following actions. We intend to incrementally improve Macaw by supporting more actions and even more advanced techniques for the developed actions. \u2022 Clarification and Preference Elicitation: Asking clarifying questions has been recently recognized as a necessary component in a conversational system [1, 9]. The authors are not aware of a published solution for generating clarifying questions using public resources. Therefore, Macaw does not currently support clarification. \u2022 Explanation: Despite its importance, result list explanation is also a relatively less explored topic. We intend to extend Macaw with result list explanation as soon as we find a stable and mature solution. \u2022 Recommendation: In our first release, we focus on conversational search and question answering tasks. We intend to provide support for conversational recommendation, e.g., [7, 13, 18], and joint search and recommendation, e.g., [16, 17], in the future. \u2022 Natural Language Interface: Macaw can potentially support access to structured data, such as knowledge graph. We would like to ease conversational natural language interface to structured and semi-structured data in our future releases. 6 CONTRIBUTION Macaw is distributed under the MIT License. We welcome contributions and suggestions. 
Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com. This project has adopted the Microsoft Open Source Code of Conduct. When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA. 7" + }, + { + "url": "http://arxiv.org/abs/1807.05631v1", + "title": "Joint Modeling and Optimization of Search and Recommendation", + "abstract": "Despite the somewhat different techniques used in developing search engines\nand recommender systems, they both follow the same goal: helping people to get\nthe information they need at the right time. Due to this common goal, search\nand recommendation models can potentially benefit from each other. The recent\nadvances in neural network technologies make them effective and easily\nextendable for various tasks, including retrieval and recommendation. This\nraises the possibility of jointly modeling and optimizing search ranking and\nrecommendation algorithms, with potential benefits to both. In this paper, we\npresent theoretical and practical reasons to motivate joint modeling of search\nand recommendation as a research direction. We propose a general framework that\nsimultaneously learns a retrieval model and a recommendation model by\noptimizing a joint loss function. Our preliminary results on a dataset of\nproduct data indicate that the proposed joint modeling substantially\noutperforms the retrieval and recommendation models trained independently. We\nlist a number of future directions for this line of research that can\npotentially lead to development of state-of-the-art search and recommendation\nmodels.", + "authors": "Hamed Zamani, W. Bruce Croft", + "published": "2018-07-15", + "updated": "2018-07-15", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "main_content": "INTRODUCTION A quarter century has passed since Belkin and Croft [3] discussed the similarity and unique challenges of information retrieval (IR) and information filtering (IF) systems. They concluded that their underlying goals are essentially equivalent, and thus they are two sides of the same coin. This is why content-based filtering approaches, especially those deal with unstructured data, employ several techniques initially developed for IR tasks, e.g., see [13, 14, 20, 30]. With Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s). DESIRES 2018, August 2018, Bertinoro, Italy \u00a9 2018 Copyright held by the owner/author(s). Users Items Recommendation Engine Search Engine search query Recommendation List Search Result List Figure 1: An example of joint search (without personalization) and recommendation systems where items are shared, e.g., in e-commerce websites. The intuition behind joint modeling of search and recommendation is making use of training data from both sides to learn more accurate item representations. 
the growth of collaborative filtering approaches, IR and recommender system (RecSys) have become two separate fields with a little overlap between the two communities. Nevertheless, IR models and evaluation methodologies are still common in recommender systems. For instance, common IR evaluation metrics such as mean average precision (MAP) and normalized discounted cumulative gain (NDCG) [9] are frequently used by the RecSys community [22]. IR models such as learning to rank approaches are also popular in the RecSys literature [10]. Costa and Roda [4] formulated recommender systems as an IR task. The language modeling framework for information retrieval [19] and relevance models [12] have been also adapted for the collaborative filtering task [17, 24, 25]. On the other hand, RecSys techniques have been also used in a number of IR tasks. For instance, Zamani et al. [27] cast the query expansion task to a recommendation problem, and used a collaborative filtering approach to design a pseudo-relevance feedback model. In this paper, we revisit the Belkin and Croft\u2019s insights to relate these two fields once again. We believe that search engines and recommender systems seek the same goal: arXiv:1807.05631v1 [cs.IR] 15 Jul 2018 \fHelping people get the information they need at the right time. Therefore, from an abstract point of view, joint modeling and optimization of search engines and recommender systems, if possible, could potentially benefit both systems. Successful implementation of such joint modeling could close the gap between the IR and RecSys communities. Moreover, joint optimization of search and recommendation is an interesting and feasible direction from the application point of view. For example, in e-commerce websites, such as Amazon1 and eBay2, users use the search functionality to find the products relevant to their information needs, and the recommendation engine recommends them the products that are likely to address their needs. This makes both search and recommendation the two major components in e-commerce websites. As depicted in Figure 1, they share the same set of products (and potentially users in case of personalized search), and thus the user interactions with both search engine and recommender system can be used to improve the performance in both retrieval and recommendation. Note that this is not only limited to the e-commerce websites; any service that provides both search and recommendation functionalities can benefit from such joint modeling and optimization. This includes media streaming services, such as Netflix and Spotify, media sharing services, such as YouTube, academic publishers, and news agencies. Deep learning approaches have recently shown state-of-the-art performance in various retrieval [5, 6, 16, 29] and recommendation tasks [2, 8]. Recently, Ai et al. [1] and Zhang et al. [31] showed that using multiple sources of information is useful in both product search and recommendation, which was made possible by neural models in both applications. These neural retrieval and recommendation models can be combined and trained jointly, which is the focus of this paper. We propose a general framework, called JSR,3 to jointly model and train search engines and recommender systems. As the first step towards implementing the JSR framework, we use simple fully-connected neural networks to investigate the promise of such joint modeling. We evaluate our models using Amazon\u2019s product dataset. 
Our experiments suggest that joint modeling can lead to substantial improvements in both retrieval and recommendation performance, compared to the models trained separately. We show that joint modeling can also lead to higher generalization by preventing the model to overfit on the training data. The observed substantial improvements suggest this research direction as a new promising avenue in the IR and RecSys literature. We finish by describing potential outcomes for this research direction. 2 THE JOINT SEARCH-RECOMMENDATION FRAMEWORK In this section, we describe our simple framework for joint modeling and optimization of search engines and recommender systems, called JSR. The purpose of JSR is to take advantage of both search and recommendation training data in order to improve the performance in both tasks. This can be achieved by learning joint representations and simultaneous optimization. In the following 1https://www.amazon.com/ 2https://www.ebay.com/ 3JSR stands for the joint search and recommendation framework. subsections, we simplify and formalize the task and further introduce the JSR framework. 2.1 Problem Statement Given a set of retrieval training data (e.g., a set of relevant and non-relevant query-item pairs) and a set of recommendation training data (e.g., a set of user-item-rating triples), the task is to train a retrieval model and a recommender system, jointly. Formally, assume that I = {i1,i2, \u00b7 \u00b7 \u00b7 ,ik } is a set of k items. Let DIR = {(q1,R1,R1), (q2,R2,R2), \u00b7 \u00b7 \u00b7 , (qn,Rn,Rn)} be a set of retrieval data, where Ri \u2286I and Ri \u2286I respectively denote the set of relevant and non-relevant items for the query qi. Hence, Ri \u2229Ri = \u2205. Also, let DRS = {(u1, I1), (u2, I2), \u00b7 \u00b7 \u00b7 , (um, Im)} be a set of recommendation data where Ii \u2286I denotes the set of items favored (e.g., purchased) by the user ui.4 Assume that DIR is split to two disjoint subsets Dtrain IR and Dtest IR by query, i.e., there is no query overlap between these two subsets. Also, assume that DRS is split to two disjoint subsets Dtrain RS and Dtest RS , such that both subsets include all users and Dtrain RS contains a random subset of purchased items by each user and Dtest RS contains the remaining items. This means that there is no user-item overlap between Dtrain RS and Dtest RS . Note that although the training data for search ranking differs from the data used for training a recommender system, they both share the same set of items. The task is to train a retrieval model MIR and a recommendation model MRS on the training sets Dtrain IR and Dtrain RS . The models MIR and MRS will be respectively evaluated based on the retrieval performance on the test queries in Dtest IR and the recommendation performance based on predicting the favorite (e.g., purchased) items for each user in the test set Dtest RS . Note that MIR and MRS may share some parameters. 2.2 The JSR Framework JSR is a general framework for jointly modeling search and recommendation and consists of two major components: a retrieval component and a recommendation component. The retrieval component computes the retrieval score for an item i given a query q and a query context cq. The query context may include the user profile, long-term search history, session information, or situational context such as location. The recommendation component computes a recommendation score for an item i given a user u and a user context cu. 
The user context may consist of the recent user's activities, the user's mood, situational context, etc. Figure 2 depicts a high-level overview of the JSR framework. Formally, the JSR framework calculates the following two scores: retrieval score = ψ(ϕQ(q, cq), ϕI(i)) (1) and recommendation score = ψ′(ϕ′U(u, cu), ϕ′I(i)) (2), where ψ and ψ′ are the matching functions, and ϕQ, ϕI, ϕ′U, and ϕ′I are the representation learning functions. In the following subsection, we describe how we implement these functions using fully-connected feed-forward networks. This framework can be further implemented using more sophisticated and state-of-the-art search and recommendation network architectures. Figure 2: Overview of the JSR Framework. JSR learns a retrieval model and a recommendation model based on a shared set of items and a joint loss function. Note that the items are shared by both search and recommendation systems, thus they can benefit from an underlying shared representation for each item. (The set of favored items used by the recommendation component can be simply generalized to numeric ratings, as well.) For simplicity, we do not consider context in the initial framework described here. Independent from the way each component is implemented, we train the JSR framework by minimizing a joint loss function L that is equal to the sum of the retrieval loss and the recommendation loss, as follows: L(b, b′) = LIR(b) + LRS(b′) (3), where b and b′ are two mini-batches containing training data for search and recommendation, respectively. We train both search and recommendation models using pairwise training. Therefore, each training instance for the retrieval model is a query qj from D^train_IR, a positive item ij sampled from Rj, and a negative item īj sampled from the non-relevant set R̄j. LIR(b) is a binary cross-entropy loss function (i.e., equivalent to the negative log-likelihood): LIR(b) = −Σ_{j=1..|b|} log p(ij > īj | qj) = −Σ_{j=1..|b|} log [ exp(ψ(ϕQ(qj), ϕI(ij))) / (exp(ψ(ϕQ(qj), ϕI(ij))) + exp(ψ(ϕQ(qj), ϕI(īj)))) ]. The recommendation loss is defined similarly; for each user uj, we draw a positive sample ij from the user's favorite items (i.e., Ij in D^train_RS), and a random negative sample īj from I. LRS(b′) is also defined as a binary cross-entropy loss function: LRS(b′) = −Σ_{j=1..|b′|} log p(ij > īj | uj) = −Σ_{j=1..|b′|} log [ exp(ψ′(ϕ′U(uj), ϕ′I(ij))) / (exp(ψ′(ϕ′U(uj), ϕ′I(ij))) + exp(ψ′(ϕ′U(uj), ϕ′I(īj)))) ]. In summary, the search and recommendation components in the JSR framework are modeled as two distinct functions that may share some parameters. They are optimized via a joint loss function that minimizes pairwise error in both retrieval and recommendation, simultaneously. 2.3 Implementation of JSR Since the purpose of this paper is only to show the potential importance of joint modeling and optimization of search and recommendation models, we simply use fully-connected feed-forward networks to implement the components of the JSR framework.
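A minimal sketch of this joint pairwise objective (Equation 3), assuming the matching networks already produce scalar scores for the sampled positive and negative items; the function and argument names are illustrative.

```python
import torch
import torch.nn.functional as F

def pairwise_nll(pos_scores: torch.Tensor, neg_scores: torch.Tensor) -> torch.Tensor:
    """-log p(i > i_neg), with the probability given by a softmax over the two
    scores; this is the binary cross-entropy form used for both L_IR and L_RS."""
    logits = torch.stack([pos_scores, neg_scores], dim=-1)  # [batch, 2]
    return -F.log_softmax(logits, dim=-1)[..., 0].mean()

def jsr_joint_loss(ir_pos: torch.Tensor, ir_neg: torch.Tensor,
                   rs_pos: torch.Tensor, rs_neg: torch.Tensor) -> torch.Tensor:
    """Joint objective L(b, b') = L_IR(b) + L_RS(b'), computed over a retrieval
    mini-batch b and a recommendation mini-batch b'."""
    return pairwise_nll(ir_pos, ir_neg) + pairwise_nll(rs_pos, rs_neg)
```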
The performance of more sophisticated search and recommendation models will be investigated in the future. As mentioned earlier in Section 2.2, we do not consider query and user contexts in our experiments. We model the query representation function \u03d5Q as a fully-connected network with a single hidden layer. The weighted average of embedding vectors for individual query terms is fed to this network. In other words, \u00cd t \u2208q c W(t) \u00b7 E(t) is the input of the query representation network, where W : V \u2192R maps each term in the vocabulary set V to a global real-valued weight and E : V \u2192Rd maps each term to a d-dimensional embedding vector. Note that the matrices W and E are optimized as part of the model at the training time. c W(t) is just a normalized weight computed using a softmax function as exp(W(t)) \u00cd t\u2032\u2208q exp(W(t\u2032)). This simple yet effective bagof-words representation has been previously used in [5, 26] for the ad-hoc retrieval and query performance prediction tasks. The item representation functions \u03d5I and \u03d5\u2032 I are also implemented similarly. The matrices W and E are shared by all of these functions for transferring knowledge among the retrieval and recommendation components. The user representation function \u03d5\u2032 U is simply implemented as a look-up table that returns the corresponding row of a user embedding matrix U : U \u2192Rd\u2032 that maps each user to a d\u2032-dimensional \fTable 1: Statistics for the three product categories used in our experiments. The data is extracted from Amazon\u2019s product data. Category # reviews # items # users # queries Electronics 1,689,188 63,001 192,403 989 Kindle Store 989,618 61,934 68,223 4,603 Cell Phones and Accessories 194,439 10,429 27,879 165 dense vector. The model learns appropriate user representations based on the items they previously rated (or favored) in the training data. The matching functions \u03c8 and \u03c8 \u2032 are implemented as two layer fully-connected networks. The input of\u03c8 is \u03d5Q \u25e6\u03d5I where \u25e6denotes the Hadamard product. Similarly, \u03d5\u2032 U \u25e6\u03d5\u2032 I is fed to the \u03c8 \u2032 network. This enforces the outputs of \u03d5Q and \u03d5I as well as \u03d5\u2032 U and \u03d5\u2032 I to have equal dimensionalities. Note that both \u03c8 and \u03c8 \u2032 each returns a single real-valued score. These matching functions are similar to those used in [16, 29] for web search. In each network, we use ReLU as the activation function in the hidden layers and sigmoid as the output activation function. We also use dropout in all hidden layers to prevent overfitting. 3 PRELIMINARY EXPERIMENTS In this section, we present a set of preliminary results that provide insights into the advantages of jointly modeling and optimizing search engines and recommender systems. Note that to fully understand the value of the proposed framework, large-scale and detailed evaluation and analysis are required and will be done in future work. In the following, we first introduce our data for training and evaluating both search and recommendation components. We further review our experimental setup and evaluation metrics, which are followed by the preliminary results and analysis. 3.1 Data Experiment design for the search-recommendation joint modeling task is challenging, since there is no public data available for both tasks with a shared set of items. 
To evaluate our models, we used the Amazon product dataset5 [7, 15], consisting of millions of users and products, as well as rich meta-data information including user reviews, product categories, and product descriptions. The data only contains the users and items with at least five associated reviews. In our experiments, we used three subsets of this dataset associated with the following categories: Electronics, Kindle Store, and Cell Phones & Accessories. The first two are large-scale datasets covering common product types, while the last one is a small dataset suitable for evaluating the models in a scenario where data is limited. Recommendation Data: In the Amazon website, users can only submit reviews for the products that they have already purchased. Therefore, from each review we can infer that the user who wrote it has purchased the corresponding item. This results in a set of purchased (user, item) pairs for constructing the set DRS (see Section 2.1) that can be used for training and evaluating a recommender system. 5http://jmcauley.ucsd.edu/data/amazon/ Retrieval Data: The Amazon product data does not contain search queries, thus cannot be directly used for evaluating retrieval models. As Rowley [21] investigated, directed product search queries contain either a producer\u2019s name, a brand, or a set of terms describing the product category. Following this observation, Van Gysel et al. [23] proposed to automatically generate queries based on the product categories. To be exact, for each item in a category c, a query q is generated based on the terms in the category hierarchy of c. Then, all the items within that category are marked as relevant for the query q. The detailed description of the query generation process can be found in [1]. A set of random negative items are also sampled as non-relevant items to construct DIR (see Section 2.1) for training. 3.2 Experimental Setup We cleaned up the data by removing non-alphanumerical characters and stopwords from queries and reviews. Similar to previous work [1], the content of reviews for each item i were concatenated to represent the item. We implemented our model using TensorFlow.6 In all experiments, the network parameters were optimized using Adam optimizer [11]. Hyper-parameters were optimized using grid search based on the loss value obtained on a validation set (the model was trained on 90% of the training set and the remaining 10% was used for validation). The learning rate was selected from {1E \u2212 5, 5E \u22124, 1E \u22124, 5E \u22124, 1E \u22123}. The batch sizes for both search and recommendation (see |b| and |b\u2032| in Section 2.2) were selected from {32, 64, 128, 256}. The dropout keep probability was selected from {0.5, 0.8, 1.0}. The word and user embedding dimensionalities were set to 200 and the word embedding matrix was initialized by the GloVe vectors [18] trained on Wikipedia 2014 and Gigawords 5.7 3.3 Evaluation Metrics To evaluate the retrieval model, we use mean average precision (MAP) of the top 100 retrieved items and normalized discounted cumulative gain (NDCG) of the top 10 retrieved items (NDCG@10). To evaluate the recommendation performance, we use NDCG, hit ratio (Hit), and recall. The cut-off for all recommendation metrics is 10. Hit ratio is defined as the ratio of users that are recommended at least one relevant item. 3.4 Results and Discussion Table 2 reports the retrieval performance for an individual retrieval model and the one jointly learned with a recommendation model. 
The results on three categories of the Amazon product dataset 6https://www.tensorflow.org/ 7The pre-trained vectors are accessible via https://nlp.stanford.edu/projects/glove/. \fTable 2: Retrieval performance of the model trained independently or jointly with a recommendation model. The superscript \u2217indicates that the improvements are statistically significant, at the 0.05 level using the paired two-tailed t-test. Method Electronics Kindle Store Cell Phones MAP NDCG@10 MAP NDCG@10 MAP NDCG@10 Individual Training 0.243 0.283 0.031 0.028 0.073 0.086 Joint Training 0.317* 0.388* 0.149* 0.126* 0.130* 0.204* Table 3: Recommendation performance of the model trained independently or jointly with a retrieval model. The superscript \u2217indicates that the improvements are statistically significant, at the 0.05 level using the paired two-tailed t-test. Method Electronics Kindle Store Cell Phones NDCG Hit Recall NDCG Hit Recall NDCG Hit Recall Individual Training 0.143 0.318 0.075 0.047 0.136 0.021 0.038 0.108 0.014 Joint Training 0.197* 0.343* 0.092* 0.063* 0.187* 0.034* 0.062* 0.160* 0.034* demonstrate that the jointly learned model significantly outperforms the individually trained model, in all cases. Note that the network architecture in both models is the same and the only difference is the way that they were trained, i.e., individual training vs. co-training with the recommendation component. We followed the same procedure to optimize the hyper-parameters for both models to have a fair comparison. The results reported in Table 3 also show that the recommendation model jointly learned with a retrieval model significantly outperforms the one trained individually with the same recommendation training data. In summary, joint modeling and optimization of search and recommendation offers substantial improvements in both search ranking and recommendation tasks. This indicates the potential in joint modeling of these two highly correlated applications. It is important to fully understand the reasons behind such improvements. To this aim, Figure 3 plots the recommendation loss curves on the Cell Phones & Accessories training data for two recommendation models, one trained individually and the other one trained jointly with the retrieval model. Although the individually learned model underperforms the joint model (see Table 3), its recommendation loss on the training data is less (see Figure 3). Similar observation can be made from the retrieval loss curves, which are omitted due to the space constraints. It can be inferred that the individually learned model overfits on the training data. Therefore, joint training can be also used as a means to improve generalization by prevention from overfitting. Example. Here, we provide an example to intuitively justify the superior performance of the proposed joint modeling. Assume that a query \u201ciphone accessories\u201d is submitted. Relevant products include various types iPhone accessories including headphones, phone cases, screen protectors, etc. However, the description and the reviews of most of these items do not match with the term \u201caccessories\u201d. This results in poor retrieval performance for a retrieval model trained individually. On the other hand, from the recommendation training data, users who bought iPhones, they also bought different types of iPhone accessories. Therefore, the representations learned for these items, e.g., headphones, phone cases, and screen protectors, are close in a jointly trained model. 
Thus, the retrieval 0 10000 20000 30000 40000 50000 Training steps 0.0 0.2 0.4 0.6 RecSys loss Independent training Joint training Figure 3: The loss curves for both independent and joint training of the recommendation model on the training data. performance for the query \u201ciphone accessories\u201d improves, when joint training is employed. The recommender system can also benefit from the joint modeling. For example, to a user who bought a cell phone, few headphones that have been previously purchased together with this phone by other users have been recommended. From the retrieval training data, all the headphones are relevant to the query \u201cheadphones\u201d and thus, close representations are learned for all the headphones. This results in recommending the headphones that have not been necessarily purchased by the users together with that phone. This results in substantial improvements in the recall and the overall performance achieved by the recommendation model. 4" + }, + { + "url": "http://arxiv.org/abs/1806.04815v1", + "title": "Towards Theoretical Understanding of Weak Supervision for Information Retrieval", + "abstract": "Neural network approaches have recently shown to be effective in several\ninformation retrieval (IR) tasks. However, neural approaches often require\nlarge volumes of training data to perform effectively, which is not always\navailable. To mitigate the shortage of labeled data, training neural IR models\nwith weak supervision has been recently proposed and received considerable\nattention in the literature. In weak supervision, an existing model\nautomatically generates labels for a large set of unlabeled data, and a machine\nlearning model is further trained on the generated \"weak\" data. Surprisingly,\nit has been shown in prior art that the trained neural model can outperform the\nweak labeler by a significant margin. Although these obtained improvements have\nbeen intuitively justified in previous work, the literature still lacks\ntheoretical justification for the observed empirical findings. In this position\npaper, we propose to theoretically study weak supervision, in particular for IR\ntasks, e.g., learning to rank. We briefly review a set of our recent\ntheoretical findings that shed light on learning from weakly supervised data,\nand provide guidelines on how train learning to rank models with weak\nsupervision.", + "authors": "Hamed Zamani, W. Bruce Croft", + "published": "2018-06-13", + "updated": "2018-06-13", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "main_content": "INTRODUCTION Neural network models have recently shown promising results in a number of information retrieval (IR) tasks, including ad-hoc retrieval [7], answer sentence retrieval [15], and context-aware ranking [16]. Neural approaches often require a large volume of training data to perform e\ufb00ectively. Although large-scale relevance signals, e.g., clickthrough data, are available for a few IR tasks, e.g., web search, this data is not available for many real-world problems and domains. Moreover, academia and smaller companies also suffer from lack of access to large-scale labeled data or implicit user feedback. This is critical for \ufb01elds, such as information retrieval, that have been developed based on extensive and accurate evaluations. The aforementioned limitations call for developing e\ufb00ective learning approaches to mitigate the shortage of training data. 
In this line of research, weak supervision has been proposed to train neural models for information retrieval tasks, such as learning to rank documents in the context of ad-hoc retrieval [5] and learning relevance-based word embedding [17]. The substantial improvements achieved by weakly supervised IR models have recently attracted the attention of the IR community [1, 4, 9, 10, 12, 14, 19]. Although the obtained improvements have been intuitively well justified in previous work [5, 17], to the best of our knowledge, no theoretical justification has been proposed to support the empirical findings from weak supervision in information retrieval. To close the gap between theory and practice, thorough theoretical analysis is required. We believe that theoretical understanding of learning from weakly supervised data could potentially provide guidelines on how to design and train effective models using weak supervision. In this position paper, we review our recent theoretical findings that shed light on learning from weakly supervised data for information retrieval. Refer to [18] for more details. 2 WEAK SUPERVISION FOR IR In this section, we formalize learning from weakly supervised data and further briefly describe the different ways that we can benefit from this learning strategy in the context of information retrieval. In typical supervised learning problems, we are given a training set T = {(x1, y1), (x2, y2), · · · , (xm, ym)} with m elements, where xi is the feature vector(s) for the ith training instance and yi denotes the corresponding true label(s). In classification and regression, xi is a vector containing the features for the corresponding item; in contrast, in learning to rank, xi is a list of n feature vectors representing n items in a rank list. In weak supervision, however, the true labels (i.e., the yi's) are unknown, which is similar to typical unsupervised learning problems. Weak supervision assumes that a pseudo labeler (or "weak" labeler) is available that can generate labels for all the feature vectors in T. This results in a weak supervision training set T̂ = {(x1, ŷ1), (x2, ŷ2), · · · , (xm, ŷm)}, where the labels ŷi are generated using the weak labeler. Learning a model M from T̂ is called weak supervision. If the weak labeler is an unsupervised model, then M is also unsupervised. Weak supervision can be used with one of the following goals: • Improving effectiveness: Dehghani et al. [5] showed that a simple pairwise neural ranking model trained on the labels generated by BM25 as the weak labeler can outperform the weak labeler by a significant margin. More recently, Zamani et al.
[19] studied the problem of learning from multiple weak supervision labels to achieve state-of-the-art results on the query performance prediction task. These studies suggest that weak supervision can be used to improve effectiveness. • Bypassing the need for external resources: Zamani and Croft [17] proposed to learn relevance-based word embedding based on the relevance distributions generated by the Lavrenko and Croft relevance models [8]. They show that the learned word embedding vectors are not only better representations than general-purpose word embeddings, e.g., word2vec [11], but can also perform on par with the relevance models for the query expansion task without the need for the top retrieved documents (i.e., pseudo-relevant documents) at testing time. This shows that the huge number of parameters in neural networks is capable of memorizing useful information that can be captured from external resources, and that weak supervision can be employed as an approach for getting rid of these resources at testing time. • Improving efficiency: Recently, Cohen et al. [2] showed that expensive regression forest learning to rank models, e.g., LambdaMART, can be replaced by simple feed-forward networks. The network is trained in a weak supervision setting, where the learning to rank model plays the role of the weak labeler. The authors demonstrated that speed-ups of up to 10x (on CPU) and 100-1000x (on GPU) can be obtained compared to a state-of-the-art implementation of regression forest learning to rank models, with no measurable loss in effectiveness. This recent work suggests that weak supervision can be used to obtain more efficient models. • Learning from private data: Dehghani et al. [3] proposed to share a model trained on sensitive private data, instead of sharing the data itself. Although this should be done with caution due to various membership attacks (see [13]), the shared model can be used as the weak labeler. 3 OUR THEORETICAL FINDINGS We believe that the following theoretical questions are research-worthy and that answering them sheds light on learning from weak supervision. • Why and how can a weakly supervised model outperform the weak labeler? • What properties should weakly supervised models have to perform effectively? Our recent work [18] studies weak supervision for information retrieval with a focus on learning to rank, and models weak supervision as a noisy channel that introduces some noise to the true labels. Motivated by the symmetry condition defined for classification [6], we define symmetric ranking loss functions as follows. Definition 1 (Symmetric Ranking Loss Function). A ranking loss function L(·, ·) is symmetric if it satisfies the constraint Σ_{y∈Y} L(M(x), y) = c for all x and all M, where c is a constant number and Y is a finite and discrete output space. In the case of binary relevance judgments, the output space Y is {0, 1}^n for a rank list of n items. Based on the risk minimization framework, we proved that symmetric ranking loss functions are noise tolerant under the uniform weak supervision noisy channel assumption. On the other hand, with a non-uniformity assumption, we find an upper bound for the risk function of the model trained on the weak supervision data.
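To make the symmetry condition of Definition 1 more tangible, the following is a small numerical probe over the binary-relevance output space {0, 1}^n; the pairwise hinge loss at the end is only an illustrative input to the checker, not a loss the paper analyzes or endorses.

```python
import itertools
import numpy as np

def is_symmetric_ranking_loss(loss_fn, n_items: int, n_trials: int = 20,
                              tol: float = 1e-6) -> bool:
    """Numerically check the symmetry condition: for random score vectors M(x),
    the sum of loss_fn(scores, y) over all y in {0, 1}^n should be constant."""
    label_space = [np.array(y, dtype=float)
                   for y in itertools.product([0, 1], repeat=n_items)]
    rng = np.random.default_rng(0)
    sums = []
    for _ in range(n_trials):
        scores = rng.normal(size=n_items)
        sums.append(sum(loss_fn(scores, y) for y in label_space))
    return float(np.ptp(sums)) < tol  # spread of the sums across trials

# Illustrative candidate loss fed to the checker (not necessarily symmetric).
def pairwise_hinge(scores, labels, margin=1.0):
    loss = 0.0
    for i in range(len(scores)):
        for j in range(len(scores)):
            if labels[i] > labels[j]:
                loss += max(0.0, margin - (scores[i] - scores[j]))
    return loss

print(is_symmetric_ranking_loss(pairwise_hinge, n_items=3))
```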
Our theorems provide insights into how and why training models on weakly supervised data can perform well and even outperform the weak labeler. They also introduce some guidelines on what loss functions to use while training on weakly supervised data. We also studied how learning from multiple weak supervision signals can improve the performance and found an information theoretic lower bound for the number of independent weak labelers required to guarantee an arbitrary maximum error probability of \u03f5. More information can be found in [18]. 4" + }, + { + "url": "http://arxiv.org/abs/1711.09174v1", + "title": "Neural Ranking Models with Multiple Document Fields", + "abstract": "Deep neural networks have recently shown promise in the ad-hoc retrieval\ntask. However, such models have often been based on one field of the document,\nfor example considering document title only or document body only. Since in\npractice documents typically have multiple fields, and given that non-neural\nranking models such as BM25F have been developed to take advantage of document\nstructure, this paper investigates how neural models can deal with multiple\ndocument fields. We introduce a model that can consume short text fields such\nas document title and long text fields such as document body. It can also\nhandle multi-instance fields with variable number of instances, for example\nwhere each document has zero or more instances of incoming anchor text. Since\nfields vary in coverage and quality, we introduce a masking method to handle\nmissing field instances, as well as a field-level dropout method to avoid\nrelying too much on any one field. As in the studies of non-neural field\nweighting, we find it is better for the ranker to score the whole document\njointly, rather than generate a per-field score and aggregate. We find that\ndifferent document fields may match different aspects of the query and\ntherefore benefit from comparing with separate representations of the query\ntext. The combination of techniques introduced here leads to a neural ranker\nthat can take advantage of full document structure, including multiple instance\nand missing instance data, of variable length. The techniques significantly\nenhance the performance of the ranker, and also outperform a learning to rank\nbaseline with hand-crafted features.", + "authors": "Hamed Zamani, Bhaskar Mitra, Xia Song, Nick Craswell, Saurabh Tiwary", + "published": "2017-11-25", + "updated": "2017-11-25", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "main_content": "INTRODUCTION Deep neural networks have shown impressive performance in many machine learning tasks, including information retrieval models for ranking documents [5, 7, 8, 15, 23, 27]. Tese deep neural ranking models (NRMs) ofen consider a single source of document description, such as document title [8, 23] or body text [5, 15]. However, in many retrieval scenarios, additional sources of document descriptions may be available. For instance in web search, each document consists of text felds specifed by the document\u2019s HTML tags, such as title and body, as well as external sources of meta-information, such as the anchor text from incoming hyperlinks or the query text for which the document has been previously viewed. Learning a document representation suitable for retrieval tasks can be challenging when multiple document felds should be considered. 
Tese challenges primarily stem from the distinct properties of these diverse feld types: (i) while the body of a web page is ofen long, the content of many other felds, such as title, are typically only a few terms in length, (ii) while some felds (e.g., body) contain a single instance of text, other felds may contain bags of multiple short texts (e.g., anchor text), (iii) multi-instance felds generally contain variable number of instances, e.g., zero or more instances of incoming anchor text for a given document, (iv) some felds, such as URL, may not contain natural language text, and fnally (v) felds vary in coverage and accuracy, for example a feld that memorizes past queries that led to a click on the document may provide a very useful (high-accuracy) ranking signal [1], but the coverage of that feld may be relatively low because not every document has been clicked before. Each of these challenges increases the complexity of the representation learning task for documents with multiple felds. However, multiple felds associated with each document may contain complementary information that has motivated us to learn representation for documents by considering multiple felds in order to improve the retrieval performance. In this paper, we propose NRM-F1, a general framework for learning multiple-feld document representation for ad-hoc retrieval. NRM-F is designed to address the aforementioned challenges. More specifcally, NRM-F can handle multiple felds, both with single and multiple instances. In NRM-F, although the neural network parameters are shared among multiple instances of the same feld, they are distinct across felds. Tis enables NRM-F to uniquely model the content of each feld based on its specifc characteristics. 1Te naming is inspired by BM25F [21]. arXiv:1711.09174v1 [cs.IR] 25 Nov 2017 \fWe employ the same topology for the sub-networks corresponding to the diferent felds. However, there are a number of controlling hyper-parameters that determine the exact sub-network confguration for each feld. We introduce feld-level masking to better cope with variable length inputs, i.e., felds with variable number of text instances. We also propose a novel feld-level dropout technique that efectively regularizes the network and prevents it from over-dependence on high-accuracy felds, such as clicked queries. Given the intuition that diferent felds may match diferent aspects of the query, our model learns diferent query representations corresponding to diferent document felds. We evaluate our models in the context of web search, using the queries sampled from the Bing\u2019s search logs. We study fve felds in our experiments: title (single short text), body (single long text), URL (single short text, but not in a natural language), anchor texts (multiple short texts), and clicked queries (multiple short texts providing a ranking signal with relatively high accuracy). We consider this efective and diverse set of felds to make our fndings more likely to generalize to other combinations of document felds. In this work, we study the following research hypotheses: H1 Te ad-hoc retrieval performance of NRM-F improves as we incorporate multiple document felds. H2 NRM-F performs beter than competitive baselines, such as term matching and learning to rank. H3 Learning a multiple-feld document representation is superior to scoring based on individual feld representations and summing. H4 Learning per-feld query representations performs beter than learning a single query representation. 
H5 Te additional techniques of feld-level masking and feld-level dropout yield additional performance improvements. Our experiments validate all these hypotheses, and investigate the efectiveness of our overall NRM-F framework. 2 RELATED WORK 2.1 Retrieval with Multiple Fields Information retrieval tasks may involve semi-structured data, meaning that the text of each document is divided into sections. Given a sufciently fne-grained structure, some past research has studied the retrieval of the particular sections that best satisfy the user\u2019s query, such as in the INEX XML retrieval initiative [6, 12]. In web search it is more typical to consider coarse-grained sections such as title and body, also referred to as felds, and use them to generate features in a document ranking task. Using evidence from structure to improve document retrieval is well studied in information retrieval. Wilkinson [26] proposed a number of hypotheses about how to combine section-level and document-level evidence. For example, taking the maximum section score, or a weighted sum of section scores, and then potentially combining with a document-level score. Robertson et al. [21] further proposed BM25F, an extension to the original BM25 model [22], arguing that the linear combination of feld-level scores is \u201cdangerous\u201d, because it bypasses the careful balance across query terms in the BM25 model. Te BM25F solution is to frst combine frequency information across felds on a per-term basis, then compute a retrieval score using the balanced BM25 approach. Tere are a number of alternative approaches to BM25F for the multiple-feld document retrieval task. For instance, Piwowarski and Gallinari [19] proposed a model based on Bayesian networks for retrieving semi-structured documents. Myaeng et al. [16] extended the InQery retrieval system to semi-structured documents. Svore and Burges [25] proposed a supervised approach, called LambdaBM25, that learns a BM25-like retrieval model based on the LambdaRank algorithm [3]. LambdaBM25 can also consider multiple document felds, without resorting to a linear combination of perfeld scores. Dealing with multiple document felds without a linear combination was also studied by Ogilvie and Callan [18], who proposed and tested various combinations for a known-item search task, using a language modeling framework. Kim et al. [9] proposed a probabilistic model for the task of XML retrieval. Later on, Kim and Crof [10] introduced a model based on relevance feedback for estimating the weight of each document feld. 2.2 Neural Networks for Ranking Several recent studies have applied deep neural network methods to various information retrieval applications, including question answering [29], click models [2], ad-hoc retrieval [5, 15, 27], and context-aware ranking [30]. Neural ranking models can be partitioned into early and late combination models [5]. Tey can also be categorized based on whether they focus on lexical matching or learning text representations for semantic matching [15]. Te early combination models are designed based on the interactions between query and document as the networks\u2019 input. For instance, the deep relevance matching model [7] gets histogrambased features as input, representing the interactions between query and document. DeepMatch [13] is another example that maps the input to a sequence of terms and computes the matching score using a feed-forward network. 
Te local component of the duet model in [15] and the neural ranking models proposed in [5, 27] are the other examples for early combination models. Te late combination models, on the other hand, separately learn a representation for query and document and then compute the relevance score using a matching function applied on the learned representations. DSSM [8] is an example of late combination models that learns representations using feed-forward networks and then uses cosine similarity as the matching function. DSSM was further extended by making use of convolutional neural networks, called CDSSM [23]. Te distributed component of the duet model [15] also uses a similar architecture for learning document representation. We refer the reader to [14] that provides an overview of various (deep) neural ranking models. In all of the aforementioned work, each document is assumed to be a single instance of text (i.e., single feld). However, documents ofen exist in a semi-structured format. In this paper, we focus on late combination models and propose a neural ranking model that takes multiple felds of document into account. Given the hypothesis provided in [15], our neural model can be further enriched by making use of lexical matching in addition to distributed matching. We leave the study of lexical matching for the future and focus on document representation learning. \f3 THE NRM-F FRAMEWORK In this section, we frst provide our motivation for studying the task of representation learning for documents with multiple felds, and formalize the task. We then introduce a high-level overview of our framework, and further describe how we implement each component of the proposed framework. We fnally explain how we optimize our neural ranking model. 3.1 Motivation and Problem Statement In many retrieval scenarios, there exist various sources of textual information (felds) associated with each documentd. In web search in particular, these sources of information can be partitioned into three categories. Te frst category includes the information provided by the structure and the content of document d itself. Diferent elements of the web page specifed by the HTML tags, e.g., title, header, keyword, and body, as well as the URL are examples of felds of this type. Te second category includes the information provided by the other documents for representing d. For instance, when there is a hyperlink from documentd\u2032 tod, the corresponding anchor text may provide useful description of d. Te third category contains information that we can infer from interactions between the retrieval system and its users. For instance in web search, when a user clicks on the document d for a query q, the text of query q can be used to describe d. Svore and Burges [25] refer to these last two categories as popularity felds. Tere are several previous studies showing that diferent felds may contain complementary information [21, 25]. Terefore, incorporating multiple felds can lead to more accurate document representation and beter retrieval performance. For example, clicked queries are highly efective for the retrieval tasks [1, 25, 28]. A number of prior studies [21, 25], have also investigated the usefulness of anchor texts for web search. However, for fresh or less popular documents that may not have enough anchor or clicked query text associated with them, the body text provides important description of the document. 
Similarly, the URL field may be useful for matching when the query expresses an explicit or implicit intent for a specific domain. These complementary and diverse sources of textual descriptions have motivated us to study representation learning for ad-hoc retrieval by incorporating multiple fields. The unique properties of these diverse document fields, however, make it challenging to model them within the same neural architecture. For example, the vocabulary and the language structure of clicked queries may be distinct from those of the body text, and in turn both may be distinct from the URL field. The document body text may contain thousands of terms, while the text in other fields may be only a few terms in length. Finally, a key challenge also stems from the fact that a number of fields consist of multiple instances. For example, there are multiple anchor texts for each document d, and multiple queries can be found that previously led users to click on document d. A neural ranking model that considers these fields for document ranking must handle a variable number of text instances per document field. To formulate the task, let $F_d = \{F_1, F_2, \cdots, F_k\}$ denote a set of fields associated with the document $d$. Each field $F_i$ consists of a set of instances $\{f_{i1}, f_{i2}, \cdots, f_{im_i}\}$ where $m_i$ denotes the number of instances in the field $F_i$. The task is to learn a function $\Phi_D(F_d)$ whose output is a representation for document $d$, suitable for the ad-hoc retrieval task. [Figure 1: A neural ranking model architecture that consists of three major components: query representation, document representation, and matching network.] 3.2 High-Level Overview of the Framework In this paper, as shown in Figure 1, we focus on a late-combination and representation-focused neural ranking model. This architecture consists of three major components: document representation ($\Phi_D$), query representation ($\Phi_Q$), and the matching network ($\Psi$), which takes both representations and computes the retrieval score (i.e., score $= \Psi(\Phi_Q, \Phi_D)$). In this section, we describe the high-level architecture used for the document representation network, which is the focus of the paper. Sections 3.7 and 3.8 review how the query representation and matching network components are respectively implemented. To learn a multiple-field document representation (i.e., $\Phi_D$), the framework first learns a representation for each individual instance in a field. The framework then aggregates these learned vector representations to represent the field as a whole. It finally aggregates all the field-specific representations for the document. This framework is visualized in Figure 2. To formally describe our framework, the document representation learning function $\Phi_D$ can be calculated as: $\Phi_D(F_d) = \Lambda_D(\Phi_{F_1}(F_1), \Phi_{F_2}(F_2), \cdots, \Phi_{F_k}(F_k))$ (1) where $\Phi_{F_i}$ denotes the representation learning function for the field $F_i$. Note that the representation learning functions differ for different fields, since the fields have their own unique characteristics and need their own specific functions. $\Lambda_D$ aggregates the representations learned for all the fields. Each $\Phi_{F_i}$ is also calculated as: $\Phi_{F_i}(F_i) = \Lambda_{F_i}(\Phi_{f_i}(f_{i1}), \Phi_{f_i}(f_{i2}), \cdots, \Phi_{f_i}(f_{im_i}))$ (2) where $\Phi_{f_i}$ denotes the representation learning function for each instance of the ith field (e.g., each anchor text).
Note that $\Phi_{f_i}$ is the same function for all the instances of a given field. The function $\Lambda_{F_i}$ aggregates the representations of all instances in the ith field. To summarize, our document representation framework consists of three major components: learning a representation for an instance of each field (i.e., $\Phi_{f_i}$), field-level aggregation (i.e., $\Lambda_{F_i}$), and document-level aggregation (i.e., $\Lambda_D$). Sections 3.3 and 3.4 describe how we define or learn these functions. [Figure 2: The high-level architecture of our document representation framework. In this architecture, aggregating field-level representations using $\Lambda_D$ produces the document representation $\Phi_D$. The representation for the ith field is computed by aggregating ($\Lambda_{F_i}$) the representations learned for the instances of the field using $\Phi_{f_i}$.] [Figure 3: Instance-level representation learning network. This model embeds the character n-gram representation of each word $w_i$, which is followed by two 1-D convolutional layers. The outputs of the second set of convolutional operations are pooled and then fed to a fully-connected layer to compute the final representation for each instance of a field.] 3.3 Instance-Level Representation Learning In this subsection, we describe our neural architecture for learning representations of individual text instances in a document field. In particular, we explain how the functions $\Phi_{f_i}$ are implemented. As pointed out in Section 3.1, each field has its own unique characteristics. One approach would be to use different neural architectures for different fields. However, in the interest of proposing a general framework, we choose an architecture that can be used for all fields, but the exact configurations are controlled by a set of hyper-parameters specified per field. These hyper-parameters are selected for each field individually based on a validation set. Figure 3 shows the design of the per-instance model architecture. In our architecture, each term is represented using a character n-gram hashing vector, introduced by Huang et al. [8]. These are extremely sparse vectors whose dimensions correspond to all possible character n-grams. Therefore, we represent the input layer of our network using sparse tensors, which is memory-efficient and also improves the efficiency of the model. Similar to [8, 23], n was set to 3 in our experiments, which causes a limited number of term hash collisions.
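As a concrete illustration of the character tri-gram input described above, the following sketch (our own, not the authors' implementation) builds a sparse tri-gram count vector for a single word; the boundary marker, the hashing-trick dimensionality, and the helper names are assumptions made only to keep the example self-contained, whereas the paper indexes the full set of roughly 50k tri-grams directly.

```python
from collections import Counter

def char_ngrams(word, n=3):
    """Character n-grams of a word, padded with a boundary marker '#'."""
    padded = f"#{word}#"
    return [padded[i:i + n] for i in range(len(padded) - n + 1)]

def trigram_hashing_vector(word, dim=50000):
    """Sparse tri-gram representation as {index: count}, using the hashing trick
    as an approximation of direct tri-gram indexing."""
    return dict(Counter(hash(g) % dim for g in char_ngrams(word)))

print(char_ngrams("web"))             # ['#we', 'web', 'eb#']
print(trigram_hashing_vector("web"))  # e.g., {31842: 1, 4911: 1, 27063: 1}
```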
We use the character n-gram representation for the following reasons: (1) it can represent out of vocabulary terms, and (2) the number of all possible tri-grams is much lower than the term-level vocabulary size which signifcantly reduces the number of parameters needed to be learned by the network. We use a linear embedding layer to map a character n-gram representation to a dense low-dimensional representation by multiplying the sparse input tensor for each term and an embedding matrix E \u2208RN \u00d7l where N is total number of possible n-grams and l denotes the embedding dimensionality. Te output of this layer for each word is normalized to prevent over-weighting long words, and represents relevance-based word embedding [32]. Inspired by C-DSSM [23] and the Duet Model [15], this layer is followed by a one-dimensional convolution layer. Te aim of this layer is to capture the dependency between terms. We further use an additional convolution layer whose window size is set to be larger for the body feld to capture sentence-level representations, and smaller for the short texts felds. We pool the output of the second convolution layer which is followed by a fully-connected layer to compute the fnal representation for an instance of a given feld. Te choice of max-pooling and average-pooling is a hyper-parameter in our model. In this network, we use dropout [24] to avoid over-fting. 3.4 Aggregating Representations As shown in Figure 2, NRM-F consists of two sets of aggregation components: \u039bFi and \u039bD. \u039bFi aggregates the representations learned for the instances of a specifc feld. For each multi-instance feld Fi, we select a bag of at most Mi instances, and use zero padding when less than Mi instances are available. \u039bFi averages the individual representations learned for the instances of the feld Fi. Te component \u039bD aims at aggregating the representations learned for diferent felds. To be able to learn diferent query representations for each feld, \u039bD only concatenates the input vectors to be served in the matching function explained in Section 3.8. \f3.5 Field-Level Masking Te number of instances in multi-instance felds, such as anchor text, varies across documents. As shown in Table 1, a signifcant number of documents may not contain any anchor text or clicked queries. To deal with such cases, as mentioned in Section 3.4, we use zero padding. Although padding is a popular approach and has been previously used in neural ranking models [5, 15, 23], it sufers from a major drawback: by doing padding, the network assumes that a part of the input vector is zero; however padding represents missing values. In the extreme case, assume that there is no available anchor text for a given document; therefore, the input for the anchor text feld is all zero. Te gradients, however, are not zero (because of the bias parameters). Tis means that the backpropagation algorithm updates the weights for the sub-network corresponding to the anchor text feld; which is not desirable\u2014we do not want to update the weights when the input data is missing. Tis becomes crucial when there are many missing values in training data, similar to our task. To tackle this problem, we propose a simple approach, called feld-level masking. Let Ri \u2208RMi\u00d7Di denote the representation learned for the ith feld (i.e., the output of \u039bFi ) where Mi and Di respectively represent the maximum number of instances (fxed value) and the dimensionality for instance representation. 
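A minimal NumPy sketch of the two aggregation steps (our own illustration; the instance vectors are random placeholders standing in for the learned $\Phi_{f_i}$ outputs): $\Lambda_{F_i}$ averages zero-padded instance representations and $\Lambda_D$ concatenates the per-field vectors.

```python
import numpy as np

def aggregate_field(instance_reprs, max_instances):
    """Lambda_F: zero-pad to a fixed bag size and average the instance vectors.
    Note that dividing by the full bag size penalizes fields with few instances;
    the field-level masking of Section 3.5 corrects this."""
    dim = instance_reprs.shape[1]
    padded = np.zeros((max_instances, dim))
    k = min(len(instance_reprs), max_instances)
    padded[:k] = instance_reprs[:k]
    return padded.mean(axis=0)

def aggregate_document(field_reprs):
    """Lambda_D: concatenate per-field representations, so each field can later
    be matched against its own query representation."""
    return np.concatenate(field_reprs)

rng = np.random.default_rng(0)
anchor_reprs = rng.normal(size=(3, 8))   # 3 anchor-text instances, 8-dim each
title_reprs = rng.normal(size=(1, 8))    # a single title instance
doc_repr = aggregate_document([
    aggregate_field(anchor_reprs, max_instances=5),
    aggregate_field(title_reprs, max_instances=1),
])
print(doc_repr.shape)  # (16,)
```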
We generate a binary masking matrix Bi \u2208BMi\u00d7Di whose rows are all zero or all one, showing whether each feld instance exists or is missing. In masking, we use Ri \u25e6Bi (i.e., element-wise multiplication) as the representation for Fi. We multiply the representations for existing feld instances by one (means no change) and those for the missing instances by zero. Tis not only results in zero representation for missing values, but also forces the gradients to become zero. Terefore, the back-propagation algorithm does not update the weights for the sub-networks corresponding to missing values. Te masking matrix is also useful for computing the average in \u039bFi (see Section 3.4). Averaging is a common approach for aggregating diferent representations, such as average word embedding for query representation [31] and neural ranking models [5]. However, in case of variable length inputs, averaging penalizes short inputs which are padded by zero. To address this issue, we can compute the exact average vector by summing the inputs and dividing them by the summation over the masking matrix. Note that the masking technique should be applied at both training and testing times. 3.6 Field-Level Dropout As widely known and also demonstrated in our experiments, clicked queries is an efective feld for representing documents in the retrieval task [1, 25]. When such a high-accuracy feld is available, there is a risk that the network relies on that feld, and pays less attention to learning proper representations for the other felds. Tis can lead to poor performance of the model when the high-accuracy feld is absent (low coverage). Although we use dropout in our neural ranking model (see Section 3.3), it is not sufcient for the task of document representation learning with multiple document felds, in particular when at least a dominant input feld exists. To regularize the network in such cases, we propose a simple feld-level dropout technique\u2014randomly dropping all the units corresponding to a feld. In other words, we may randomly drop, say, the clicked queries feld or the body feld at training time to prevent the neural ranking model from overdependence on any single feld. Tis approach is back-propagation friendly (all the proofs presented in [24] are applicable to the feldlevel dropout). Field-level dropout contains k hyper-parameters, where k denotes the total number of felds and each parameter controls the probability of keeping the corresponding feld. Note that dropout only happens at the training time and all the units are kept at the validation and test times. 3.7 Qery Representation Since in this paper we focus on the ad-hoc retrieval task, the only available information for the query is the query text. Terefore, to represent the query (i.e., \u03a6Q), we use the same network architecture as the one used for each instance of a document feld (see Section 3.3). Note that diferent document felds may match with diferent aspects of a query. Terefore, the output dimensionality of the query representation network is equal to the sum of the dimensions for all felds\u2019 representations. In other words, NRM-F learns diferent representations of the query for each document feld. 3.8 Matching Network In this subsection, we describe how we compute the retrieval score given the output of query representation and document representation networks (i.e., the function \u03a8). 
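The sketch below is our own NumPy rendering of field-level masking and field-level dropout (the shapes, keep probabilities, and the omission of the usual rescaling of kept units are simplifications, not the paper's TensorFlow implementation).

```python
import numpy as np

def masked_field_average(instance_reprs, num_present):
    """Field-level masking: zero out missing instances and divide by the number of
    instances that actually exist, not the padded bag size; gradients for the
    missing rows are exactly zero."""
    max_instances, _ = instance_reprs.shape
    mask = np.zeros((max_instances, 1))
    mask[:num_present] = 1.0
    return (instance_reprs * mask).sum(axis=0) / max(num_present, 1)

def field_level_dropout(field_reprs, keep_probs, rng, training=True):
    """Field-level dropout: at training time, randomly zero out all units of an
    entire field (rescaling of kept fields is omitted here for simplicity)."""
    if not training:
        return field_reprs
    return [r if rng.random() < p else np.zeros_like(r)
            for r, p in zip(field_reprs, keep_probs)]

rng = np.random.default_rng(0)
clicked = rng.normal(size=(5, 8))
clicked[2:] = 0.0                     # only 2 of the 5 clicked-query slots are present
fields = [masked_field_average(clicked, num_present=2), rng.normal(size=8)]
fields = field_level_dropout(fields, keep_probs=[0.8, 1.0], rng=rng)
```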
To do so, we compute the Hadamard product of the representations, which is the element-wise product of two matrices with the same dimensionality. We then use a fully-connected neural network with a single non-linear hidden layer to compute the final retrieval score. We avoid computing a dot product or cosine similarity, which would reduce the contribution of each field to a single score, forcing us to combine them linearly; this is less effective, as demonstrated by Robertson et al. [21] and our results in Section 4.3. 3.9 Training We use a pairwise setting to train the designed neural ranking model. Let $T = \{(q_1, d_{11}, d_{12}, y_{11}, y_{12}), (q_2, d_{21}, d_{22}, y_{21}, y_{22}), \cdots, (q_n, d_{n1}, d_{n2}, y_{n1}, y_{n2})\}$ be a set of $n$ training instances. Each training instance consists of a query $q_i$, two documents $d_{i1}$ and $d_{i2}$, as well as their corresponding labels $y_{i1}$ and $y_{i2}$. We use a cross-entropy loss function to train neural ranking models: $\mathcal{L} = -\frac{1}{|T|} \sum_{i=1}^{|T|} \left( \frac{g(y_{i1})}{g(y_{i1}) + g(y_{i2})} \log p_{i1} + \frac{g(y_{i2})}{g(y_{i1}) + g(y_{i2})} \log(1 - p_{i1}) \right)$ where $g(\cdot)$ is a gain function. We use an exponential gain function, the same as the one used in calculating NDCG. $p_{i1}$ is the estimated probability of $d_{i1}$ being more relevant than $d_{i2}$, and is calculated via softmax on the predicted scores: $p_{i1} = \exp(\hat{y}_{i1}) / (\exp(\hat{y}_{i1}) + \exp(\hat{y}_{i2}))$, where $\hat{y}_{i1}$ and $\hat{y}_{i2}$ denote the estimated scores for $d_{i1}$ and $d_{i2}$, respectively. 4 EXPERIMENTS 4.1 Data To evaluate our models, we randomly sampled ∼140k queries from the Bing search logs for the English United States market over a one-year period. For each query, the documents returned by the Bing production ranker, in addition to those retrieved by a diverse set of experiments, were labelled by human judges on a five-point scale: perfect, excellent, good, fair, and bad. [Table 1: Statistics and characteristics of the document fields used in our experiments. Title: single instance, 100% coverage, short text. URL: single instance, 100% coverage, short text but not in a natural language. Body: single instance, 100% coverage, long text. Anchor texts: multiple instances, 61% coverage, short texts with relatively low coverage. Clicked queries: multiple instances, 73% coverage, short texts with relatively low coverage; a high-accuracy field.] In total, the data consists of ∼3.8 million query-document pairs, which were randomly partitioned into three sets (80% for training, 10% for validation, and 10% for testing) such that no distinct query appears in more than one set. Similar to [8, 15, 17], we evaluate all models under the telescoping setting by re-ranking the candidate documents for each query. Since our neural ranking model is a pairwise learning to rank model, for each query we generate all possible <q, d1, d2> triples such that the relevance labels for d1 and d2 are different with respect to q. To avoid biasing towards the queries with many documents, at most 50 triples per query were sampled for training based on a uniform distribution over all possible label pairs. The contents of web pages were retrieved from the Bing web index and were parsed using a proprietary HTML parser. We made sure that all the documents in our data contain title and body. All texts were normalized by lower-casing and removing non-alphanumerical characters. The URLs were split using a simple proprietary approach. We set the maximum length of 20, 10, 1000, 10, and 10 for title, URL, body, anchor text, and clicked query, respectively.
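Returning to the pairwise objective of Section 3.9, the following is our own minimal rendering of the per-pair loss; the gain $g(y) = 2^y - 1$ is an assumption (the paper only states that an NDCG-style exponential gain is used), and the snippet is illustrative rather than the production training code.

```python
import numpy as np

def exp_gain(label):
    """Exponential gain as used in NDCG (assumed form): 2^label - 1."""
    return 2.0 ** label - 1.0

def pairwise_cross_entropy(score1, score2, label1, label2):
    """Cross-entropy between the gain-derived preference and the softmax of the
    two predicted scores; training pairs always have different labels, so the
    gain normalizer is positive."""
    g1, g2 = exp_gain(label1), exp_gain(label2)
    target = g1 / (g1 + g2)                              # P(d1 preferred over d2)
    p1 = np.exp(score1) / (np.exp(score1) + np.exp(score2))
    return -(target * np.log(p1) + (1.0 - target) * np.log(1.0 - p1))

print(pairwise_cross_entropy(score1=2.1, score2=0.3, label1=3, label2=1))
```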
We used at most 5 anchor texts and at most 5 clicked queries per document.2 Tey were selected based on a simple countbased functions; means that the most common anchor texts and clicked queries for each document were selected. Te statistics of our data for each feld is reported in Table 1. 4.2 Experimental Setup All the models were implemented using TensorFlow3. We used Adam optimizer [11] to train our models. Te learning rate was selected from [1e \u22123, 5e \u22124, 1e \u22124, 5e \u22125, 1e \u22125]. We set the batch size to 64 and tuned the hyper-parameters based on the loss values obtained on the validation set. We selected the layer sizes from {100, 300, 500} and the convolution window sizes from {1, 3, 10, 20, 50} for long texts (i.e., body) and from {1, 3, 5, 10} for short texts (i.e., the other felds). Te convolution strides were selected from {1, \u230aws/4\u230b, \u230aws/2\u230b,ws} where ws denotes the convolution window size. Te keep probability parameters for both conventional and feld-level dropouts were selected from {0.5, 0.8, 1.0}. As explained in Section 3.3, the input layer of the networks uses tri-gram hashing with \u223c50k dimensions, i.e, all possible character tri-grams with alphanumerical characters plus a dummy character for the start and the end of each word. Te tri-gram embedding dimensionality (i.e., the frst layer) was set to 300. Tis embedding 2Te maximum number of instances per feld can be set to a much larger value. Since the network parameters for instances of each feld are shared and the inputs are represented as sparse tensors, increasing the maximum number of instances would have a minor memory efect. 3htp://tensorfow.org/ Table 2: Performance of the proposed framework with different felds. Te superscript + shows signifcant improvements for the models with two felds compared to the ones with each of the felds, individually. Te superscript * denotes signifcant improvements over all the other models. Field(s) NDCG@1 NDCG@10 Title 0.4226 0.5883 URL 0.4366 0.5865 Body 0.4115 0.5850 Anchor texts 0.4386 0.5933 Clicked queries 0.4661 0.6116 Title + URL 0.4425+ 0.6065+ Title + Body 0.4316+ 0.6098+ Title + Anchor texts 0.4507+ 0.6062+ Title + Clicked queries 0.4680 0.6180+ All 0.4906* 0.6380* matrix is shared among all felds. Following [8, 15, 23], we used tanh as the activation function for all hidden layers. We use NDCG at two diferent ranking levels (NDCG@1 and NDCG@10) to evaluate the models. Te signifcance diferences between models are determined using the paired t-test at a 95% confdence level (p value < 0.05). 4.3 Experimental Results In this subsection, we empirically address the hypotheses mentioned in Section 1. H1: The ad-hoc retrieval performance of NRM-F improves as we incorporate multiple document felds. In this set of experiments, we address our frst hypothesis (H1) by evaluating our model with each single feld individually, with feld pairs with title, and fnally with all the felds together. Te results are reported in Table 2. Although title, URL, and body have much higher coverage compared to anchor texts and clicked queries (see Table 1), the performances achieved by anchor texts and clicked queries are superior to the other felds.4 Incorporating clicked queries demonstrates the highest performance. Pairing Title with any of the other feld \u201cX\u201d leads to a beter performance compared to Title and \u201cX\u201d, individually. Tese improvements are statistically signifcant, except for NDCG@1 in Title+Clicked queries. 
Te reason is that clicked queries are very efective for web search, especially for the frst retrieved document. Adding title to clicked queries, however, significantly improves the search quality for the top 10 documents. Te NRM-F model with all felds achieves the highest performance with 4We randomly shufed the documents with equal retrieval scores for a query. Tis process was repeated for 10 times and the average performance is reported. \fTable 3: Comparison of the proposed model with baselines for a single feld (Title or Body). Te superscripts denote signifcant improvements over the models specifed by the ID column. ID Model Title Body NDCG@1 NDCG@10 NDCG@1 NDCG@10 1 BM25 0.4039 0.5752 0.3957 0.5693 2 LTR 0.4122 0.5861 0.3996 0.5792 3 DSSM 0.4112 0.5858 0.3961 0.5713 4 C-DSSM 0.4148 0.5874 0.3957 0.5695 5 Duet (distributed) 0.4164 0.5877 0.4066 0.5788 6 NRM-F Single Field 0.422612345 0.5883123 0.411512345 0.585012345 statistically signifcant margins. Tis suggests that the proposed framework is able to learn a more accurate document representation for the ad-hoc retrieval task by considering multiple document felds; thus the hypothesis H1 is validated. H2: NRM-F performs beter than competitive baselines, such as term matching and learning to rank. To demonstrate that the proposed instance-level representation model performs reasonably well for both short and long texts, we frst evaluate our models against a set of baselines using a single feld, title only and body only. We consider the following baselines: BM25 [22], a state-ofthe-art learning to rank model with hand-crafed features (LTR), DSSM [8], C-DSSM [23], and the distributed part5 of the duet model proposed by Mitra et al. [15]. Te LTR baseline uses an internal advanced implementation of the LambdaMART algorithm [4] that has been used in the production. We used the features that have been typically extracted from query and document texts. Indeed, from those listed in [20], we used all the features that can be extracted from query and title/body. To have a fair comparison, we trained all the models using the same training data and pairwise seting.6 Te hyper-parameters in all the models, including the baselines, were optimized for Title and Body, separately. Due to the memory constraints, the C-DSSM and Duet cannot use \u223c50k tri-grams for the word hashing phase (only for Body). Terefore, as suggested in [15], we use top 2k popular n-grams for these models. Note that since our model use sparse tensors for word hashing, it is memory-efcient and does not have the same issue.7 Te results for Title as an example of short text and Body as an example of long text are reported in Table 3. According to this table, the proposed method outperforms all the baselines for both Title and Body. Te improvements are statistically signifcant in nearly all cases. Tis demonstrates the potential of our model to be used for both short and long texts. Te improvements are higher for Body, which makes our model even more suitable for long text. Tis experiment suggests that our instance-level representation model performs reasonably well. 5To have a fair comparison, we only consider the distributed part of the model. Note that all the listed neural models, including NRM-F, can be further enriched by using lexical matching, similar to the local part of the duet model. 6Te original DSSM and C-DSSM models use binary labels (click data) and random negative sampling for training; however, as suggested by Mitra et al. 
[15] using explicit judgments leads to a beter performance compared to random negative sampling 7C-DSSM and Duet perform convolution on top of word hashing layer; thus, the word hashing phase cannot be implemented using sparse tensors (at least not supported by deep learning libraries, such as TensorFlow and CNTK). Table 4: Performance of the proposed framework with all felds compared to baselines. Te superscript * denotes signifcant improvements over all the other models. Model NDCG@1 NDCG@10 BM25-Field Concatenation 0.4281 0.5953 BM25F 0.4431 0.6020 LTR 0.4888 0.6341 NRM-Field Concatenation 0.4582 0.6110 NRM-Score Aggregation-Ind. Training 0.4729 0.6229 NRM-Score Aggregation-Co-training 0.4743 0.6279 NRM-F -Single Qery Representation 0.4846 0.6345 NRM-F 0.4906* 0.6380* To evaluate our model with multiple instances, we consider the following baselines: (1) BM25 by concatenating all the felds, (2) BM25F [21] which has been widely used for ad-hoc retrieval with multiple document felds, (3) a learning to rank (LTR) model with hand-crafed features extracted from all the felds, and (4) our neural ranking model with concatenation of all felds as a single input text (i.e., NRM Field Concatenation). Similar to the last experiments, for LTR we consider all the typical features that can be extracted from text inputs (among those listed in [20] for the LETOR dataset). Te features were extracted for all the felds. Te learning algorithm for LTR is the same as the one used in the previous experiment. All models were trained on the same training set, and their hyper-parameters were tuned on the same validation set. As shown in Table 4, NRM-F signifcantly outperforms all the baselines. Tis suggests that NRM-F not only eliminates the handcrafed feature engineering for ad-hoc retrieval, but also learns an accurate document representation that leads to higher retrieval performance. Te results also validate our second hypothesis. H3: Learning a multiple-feld document representation is superior to scoring based on individual feld representations and summing. A simple approach for coping with multiple document felds is to calculate the matching score for the query and each of the document felds and then aggregate the scores. We tried two score aggregation methods, one learns a neural ranking model for each document feld individually and then linearly interpolates their scores. Although the other one also interpolates the scores obtained by diferent felds, the neural networks for diferent felds are co-trained together. Te results in Table 4 show that co-training leads to a beter performance compared to isolated training of the \fTable 5: Investigating the efectiveness of feld-level masking and dropout. Te superscripts denote signifcant improvements over the models specifed by the ID column. ID Model All felds All felds except clicked queries NDCG@1 NDCG@10 NDCG@1 NDCG@10 1 NRM-F (no masking, no dropout) 0.4818 0.6327 0.4577 0.6152 2 NRM-F with masking 0.48561 0.63531 0.46021 0.61741 3 NRM-F with masking & dropout 0.490612 0.638012 0.46131 0.61811 Table 6: Performance analysis based on query length, dividing the test queries into three evenly-sized groups. Model Short queries Medium-length queries Long queries NDCG@1 NDCG@10 NDCG@1 NDCG@10 NDCG@1 NDCG@10 LTR 0.5040 0.6470 0.4753 0.6332 0.4799 0.6162 NRM-F 0.5132 0.6584 0.4846 0.6355 0.4723 0.6186 model for diferent felds, which is expected. Te results also suggest that NRM-F performs beter than neural ranking models with score aggregation. 
The improvements are statistically significant. Therefore, this experiment validates our third hypothesis. H4: Learning per-field query representations performs better than learning a single query representation. As mentioned in Section 3.7, we believe that different aspects of the query can match different fields, and thus different query representations are needed for different fields. Our empirical results in Table 4 also validate this hypothesis by showing that NRM-F provides superior performance in comparison with exactly the same neural ranking model, but with a single query representation for the different fields. H5: The additional techniques of field-level masking and field-level dropout yield additional performance improvements. To study this hypothesis, we report the results for the following models: (1) our neural ranking model with no field-level masking and dropout, (2) our model with only field-level masking, and eventually (3) our model with both field-level masking and dropout. Note that all the models use conventional dropout [11]. Table 5 reports the results for all fields and for all fields except clicked queries. According to this table, field-level masking is useful to cope with multi-instance fields and significantly improves the performance. The model with both field-level masking and dropout achieves the highest performance; however, the field-level dropout technique is significantly helpful when at least one of the fields is dominant (i.e., a high-accuracy field like clicked queries). 4.4 Additional Analysis Learning curve. It has always been important to know how much data is needed to train the model. We plot the learning curve for our NRM-F model with all fields in Figure 4. The performance is reported in terms of NDCG@10 on the test set. According to this figure, we need approximately two million training instances to reach relatively stable performance. [Figure 4: Learning curve demonstrating the performance of NRM-F in terms of NDCG@10 with respect to the size of the training set.] Analysis by query length. In this analysis, we uniformly split the test queries into three buckets based on their query length. Therefore, the number of queries in the buckets is approximately equal. The first bucket includes the shortest and the last one includes the longest queries. The results for NRM-F and the LTR baseline with all fields (the one used in Table 4) are reported in Table 6. According to this table, our improvements over the LTR baseline generally decrease as the query length increases. In other words, NRM-F performs relatively better for shorter queries. The reason is that long queries are often rare, and thus it is likely that models based on representation learning work much better for shorter queries. On the other hand, the experiment is in a telescoping setting with anchor texts and clicked queries. Therefore, the additional terms and synonyms provided by, say, clicked queries empower the LTR method that uses term matching features. In addition, for long queries in a telescoping setting, ignoring a query term is relatively unlikely to harm the results.
5" + }, + { + "url": "http://arxiv.org/abs/1705.03556v2", + "title": "Relevance-based Word Embedding", + "abstract": "Learning a high-dimensional dense representation for vocabulary terms, also\nknown as a word embedding, has recently attracted much attention in natural\nlanguage processing and information retrieval tasks. The embedding vectors are\ntypically learned based on term proximity in a large corpus. This means that\nthe objective in well-known word embedding algorithms, e.g., word2vec, is to\naccurately predict adjacent word(s) for a given word or context. However, this\nobjective is not necessarily equivalent to the goal of many information\nretrieval (IR) tasks. The primary objective in various IR tasks is to capture\nrelevance instead of term proximity, syntactic, or even semantic similarity.\nThis is the motivation for developing unsupervised relevance-based word\nembedding models that learn word representations based on query-document\nrelevance information. In this paper, we propose two learning models with\ndifferent objective functions; one learns a relevance distribution over the\nvocabulary set for each query, and the other classifies each term as belonging\nto the relevant or non-relevant class for each query. To train our models, we\nused over six million unique queries and the top ranked documents retrieved in\nresponse to each query, which are assumed to be relevant to the query. We\nextrinsically evaluate our learned word representation models using two IR\ntasks: query expansion and query classification. Both query expansion\nexperiments on four TREC collections and query classification experiments on\nthe KDD Cup 2005 dataset suggest that the relevance-based word embedding models\nsignificantly outperform state-of-the-art proximity-based embedding models,\nsuch as word2vec and GloVe.", + "authors": "Hamed Zamani, W. Bruce Croft", + "published": "2017-05-09", + "updated": "2017-07-16", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.CL", + "cs.LG", + "cs.NE" + ], + "main_content": "INTRODUCTION Representation learning is a long-standing problem in natural language processing (NLP) and information retrieval (IR). Te main motivation is to abstract away from the surface forms of a piece of text, e.g., words, sentences, and documents, in order to alleviate sparsity and learn meaningful similarities, e.g., semantic or syntactic similarities, between two diferent pieces of text. Learning representations for words as the atomic components of a language, also known as word embedding, has recently atracted much atention in the NLP and IR communities. A popular model for learning word representation is neural network-based language models. For instance, the word2vec model proposed by Mikolov et al. [24] is an embedding model that learns word vectors via a neural network with a single hidden layer. Continuous bag of words (CBOW) and skip-gram are two implementations of the word2vec model. Another successful trend in learning semantic word representations is employing global matrix factorization over word-word matrices. GloVe [28] is an example of such methods. A theoretical relation has been discovered between embedding models based on neural network and matrix factorization in [21]. Tese models have been demonstrated to be efective in a number of IR tasks, including query expansion [11, 17, 40], query classifcation [23, 41], short text similarity [15], and document model estimation [2, 31]. 
Te aforementioned embedding models are typically trained based on term proximity in a large corpus. For instance, the word2vec model\u2019s objective is to predict adjacent word(s) given a word or context, i.e., a context window around the target word. Tis idea aims to capture semantic and syntactic similarities between terms, since semantically/syntactically similar words ofen share similar contexts. However, this objective is not necessarily equivalent to the main objective of many IR tasks. Te primary objective in many IR methods is to model the notion of relevance [20, 34, 43]. In this paper, we revisit the underlying assumption of typical word embedding methods, as follows: Te objective is to predict the words observed in the documents relevant to a particular information need. Tis objective has been previously considered for developing relevance models [20], a state-of-the-art (pseudo-) relevance feedback approach. Relevance models try to optimize this objective given a set of relevant documents for a given query as the indicator of user\u2019s information need. In the absence of relevance information, the top ranked documents retrieved in response to the query are assumed to be relevant. Terefore, relevance models, and in general all pseudo-relevance feedback models, use an online seting to obtain training data: retrieving documents for the query and arXiv:1705.03556v2 [cs.IR] 16 Jul 2017 \fthen using the top retrieved documents in order to estimate the relevance distribution. Although relevance models have been proved to be efective in many IR tasks [19, 20], having a retrieval run for each query to obtain the training data for estimating the relevance distribution is not always practical in real-world search engines. We, in this paper, optimize a similar objective in an ofine seting, which enables us to predict the relevance distribution without any retrieval runs during the test time. To do so, we consider the top retrieved documents for millions of training queries as a training set and learn embedding vectors for each term in order to predict the words observed in the top retrieved documents for each query. We develop two relevance-based word embedding models. Te frst one, the relevance likelihood maximization model (RLM), aims to model the relevance distribution over the vocabulary terms for each query, while the second one, the relevance posterior estimation model (RPE), classifes each term as relevant or non-relevant to each query. We provide efcient learning algorithms to train these models on large amounts of training data. Note that our models are unsupervised and the training data is generated automatically. To evaluate our models, we performed two sets of extrinsic evaluations. In the frst set, we focus on the query expansion task for ad-hoc retrieval. In this set of experiments, we consider four TREC collections, including two newswire collections (AP and Robust) and two large-scale web collections (GOV2 and ClueWeb09 Cat. B). Our results suggest that the relevance-based embedding models outperform state-of-the-art word embedding algorithms. Te RLM model shows beter performance compared to RPE in the context of query expansion, since the goal is to estimate the probability of each term given a query and this distribution is not directly learned by the RPE model. In the second set of experiments, we focus on the query classifcation task using the KDD Cup 2005 [22] dataset. 
In this extrinsic evaluation, the relevance-based embedding models again perform beter than the baselines. Interestingly, the query classifcation results demonstrate that the RPE model outperforms the RLM model, for the reason that in this task, unlike the query expansion task, the goal is to compute the similarity between two query vectors, and RPE can learn more accurate embedding vectors with less training data. 2 RELATED WORK Learning a semantic representation for text has been studied for many years. Latent semantic indexing (LSI) [8] can be considered as early work in this area that tries to map each text to a semantic space using singular value decomposition (SVD), a well-known matrix factorization algorithm. Subsequently, Clinchant and Perronnin [5] proposed Fisher Vector (FV), a document representation framework based on continuous word embeddings, which aggregates a non-linear mapping of word vectors into a document-level representation. However, a number of popular IR models, such as BM25 and language models, ofen signifcantly outperform the models that are based on semantic similarities. Recently, extremely efcient word embedding algorithms have been proposed to model semantic similarly between words. Word embedding, also known as distributed representation of words, refers to a set of machine learning algorithms that learn high-dimensional real-valued dense vector representation \u00ae w \u2208Rd for each vocabulary term w, where d denotes the embedding dimensionality. GloVe [28] and word2vec [24] are two well-known word embedding algorithms that learn embedding vectors based on the same idea, but using diferent machine learning techniques. Te idea is that the words that ofen appear in similar contexts are similar to each other. To do so, these algorithms try to accurately predict the adjacent word(s) given a word or a context (i.e., a few words appeared in the same context window). Recently, Rekabsaz et al. [30] proposed to exploit global context in word embeddings in order to avoid topic shifing. Word embedding representations can be also learned as a set of parameters in an end-to-end neural network model. For instance, Zamani et al. [39] trained a context-aware ranking model in which the embedding vectors of frequent n-grams are learned using click data. More recently, Dehghani et al. [9] trained neural ranking models with weak supervision data (i.e., a set of noisy training data automatically generated by an existing unsupervised model) that learn word representations in an end-to-end ranking scenario. Word embedding vectors have been successfully employed in several NLP and IR tasks. Kusner et al. [16] proposed word mover\u2019s distance (WMD), a function for calculating semantic distance between two documents, which measures the minimum traveling distance from the embedded vectors of individual words in one document to the other one. Zhou et al. [47] introduced an embeddingbased method for question retrieval in the context of community question answering. Vuli\u00b4 c and Moens [37] proposed a model to learn bilingual word embedding vectors from document-aligned comparable corpora. Zheng and Callan [46] presented a supervised embedding-based technique to re-weight terms in the existing IR models, e.g., BM25. Based on the well-defned structure of language modeling framework in information retrieval, a number of methods have been introduced to employ word embedding vectors within this framework in order to improve the performance in IR tasks. 
For instance, Zamani and Crof [40] presented a set of embedding-based query language models using the query expansion and pseudo-relevance feedback techniques that beneft from the word embedding vectors. Qery expansion using word embedding has been also studied in [11, 17, 35]. All of these approaches are based on word embeddings learned based on term proximity information. PhraseFinder [14] is an early work using term proximity information for query expansion. Mapping vocabulary terms to HAL space, a low-dimensional space compared to vocabulary size, has been used in [4] for query modeling. As is widely known in the information retrieval literature [11, 38], there is a big diference between the unigram distribution of words on sub-topics of a collection and the unigram distribution estimated from the whole collection. Given this phenomenon, Diaz et al. [11] recently proposed to train word embedding vectors on the top retrieved documents for each query. However, this model, called local embedding, is not always practical in real-word applications, since the embedding vectors need to be trained during the query time. Furthermore, the objective function in local embedding is based on term proximity in pseudo-relevant documents. In this paper, we propose two models for learning word embedding vectors, that are specifcally designed for information retrieval needs. All the aforementioned tasks in this section can potentially beneft from the vectors learned by the proposed models. \f3 RELEVANCE-BASED EMBEDDING Typical word embedding algorithms, such as word2vec [24] and GloVe [28], learn high-dimensional real-valued embedding vectors based on the proximity of terms in a training corpus, i.e., cooccurrence of terms in the same context window. Although these approaches could be useful for learning the embedding vectors that can capture semantic and syntactic similarities between vocabulary terms and have shown to be useful in many NLP and IR tasks, there is a large gap between their learning objective (i.e., term proximity) and what is needed in many information retrieval tasks. For example, consider the query expansion task and assume that a user submited the query \u201cdangerous vehicles\u201d. One of the most similar terms to this query based on the typical word embedding algorithms (e.g., word2vec and GloVe) is \u201csafe\u201d, and thus it would get a high weight in the expanded query model. Te reason is that the words \u201cdangerous\u201d and \u201csafe\u201d ofen share similar contexts. However, expanding the query with the word \u201csafe\u201d could lead to poor retrieval performance, since it changes the meaning and the intent of the query. Tis example together with many others have motivated us to revisit the objective used in the learning process of word embedding algorithms in order to obtain the word vectors that beter match with the needs in IR tasks. Te primary objective in many IR tasks is to model the notion of relevance. Several approaches, such as the relevance models proposed by Lavrenko and Crof [20], have been proposed to model relevance. Given the successes achieved by these models, we propose to learn word embedding vectors based on an objective that maters in information retrieval. Te objective is to accurately predict the terms that are observed in a set of relevant documents to a particular information need. 
In the following subsections, we first describe our neural network architecture, and then explain how to build a training set for learning relevance-based word embeddings. We further introduce two models, relevance likelihood maximization (RLM) and relevance posterior estimation (RPE), with different objectives using the described neural network. 3.1 Neural Network Architecture We use a simple yet effective feed-forward neural network with a single linear hidden layer. The architecture of our neural network is shown in Figure 1. [Figure 1: The relevance-based word embedding architecture. The objective is to learn a d-dimensional distributed representation for words based on the notion of relevance, instead of term proximity. N denotes the total number of vocabulary terms.] The input of the model is a sparse query vector $\vec{q}_s$ of length $N$, where $N$ denotes the total number of vocabulary terms. This vector can be obtained by a projection function given the vectors corresponding to individual query terms. In this paper, we simply consider the average as the projection function. Hence, $\vec{q}_s = \frac{1}{|q|} \sum_{w \in q} \vec{e}_w$, where $\vec{e}_w$ and $|q|$ denote the one-hot vector representation of term $w$ and the query length, respectively. The hidden layer in this network maps the given query sparse vector to a query embedding vector $\vec{q}$, as follows: $\vec{q} = \vec{q}_s \times W_Q$ (1) where $W_Q \in \mathbb{R}^{N \times d}$ is a weight matrix for estimating query embedding vectors and $d$ denotes the embedding dimensionality. The output layer of the network is a fully-connected layer given by: $\sigma(\vec{q} \times W^w + b^w)$ (2) where $W^w \in \mathbb{R}^{d \times N}$ and $b^w \in \mathbb{R}^{1 \times N}$ are the weight and the bias matrices for estimating the probability of each term. $\sigma$ is the activation function, which is discussed in Sections 3.3 and 3.4. To summarize, our network contains two sets of embedding parameters, $W_Q$ and $W^w$. The former aims to map the query into the "query embedding space", while the latter is used to estimate the weights of individual terms. 3.2 Modeling Relevance for Training Relevance feedback has been shown to be highly effective in improving retrieval performance [7, 32]. In relevance feedback, a set of documents relevant to a given query is considered for estimating accurate query models. Since explicit relevance signals for a given query are not always available, pseudo-relevance feedback (PRF) assumes that the top retrieved documents in response to the given query are relevant to the query and uses these documents in order to estimate better query models. The effectiveness of PRF in various retrieval scenarios indicates that useful information can be captured from the top retrieved documents [19, 20, 44]. In this paper, we make use of this well-known assumption to train our model. It should be noted that there is a significant difference between PRF and the proposed models: in PRF, the feedback model is estimated from the top retrieved documents of the given query in an online setting. In other words, PRF retrieves the documents for the initial query and then estimates the feedback model using the top retrieved documents. In this paper, we propose to train the model in an offline setting. Moving from the online to the offline setting leads to substantial improvements in efficiency, because an extra retrieval run is not needed in the offline setting.
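A minimal NumPy sketch of this forward pass, written for illustration (the vocabulary size, dimensionality, initialization, and the softmax activation of the RLM variant are placeholder assumptions rather than the paper's exact configuration):

```python
import numpy as np

N, d = 10000, 300                           # vocabulary size and embedding dimensionality
rng = np.random.default_rng(0)
W_Q = rng.normal(scale=0.01, size=(N, d))   # query-side embedding parameters
W_w = rng.normal(scale=0.01, size=(d, N))   # word-side embedding parameters
b_w = np.zeros(N)

def forward(query_term_ids):
    """Average of one-hot query term vectors -> hidden query embedding -> output layer."""
    q_s = np.zeros(N)
    q_s[query_term_ids] = 1.0 / len(query_term_ids)   # sparse average of one-hot vectors
    q = q_s @ W_Q                                     # Eq. (1): query embedding
    logits = q @ W_w + b_w                            # Eq. (2) before the activation
    z = logits - logits.max()
    return np.exp(z) / np.exp(z).sum()                # softmax (the RLM choice of sigma)

p = forward([42, 7])     # hypothetical term ids for a two-term query
print(p.shape, p.sum())  # (10000,) 1.0
```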
To learn a model in an offline setting, we consider a fixed-length dense vector for each vocabulary term and estimate these vectors based on the information extracted from the top retrieved documents for large numbers of training queries. Note that our models are unsupervised. However, if explicit relevance data is available, such as click data, without loss of generality, both the explicit or implicit relevant documents can be considered for training our models. We leave studying the vectors learned based on supervised signals for future work.

To formally describe our training data, let $T = \{(q_1, R_1), (q_2, R_2), \cdots, (q_m, R_m)\}$ be a training set with m training queries. The i-th element of this set is a pair of query $q_i$ and the corresponding pseudo-relevance feedback distribution. These distributions are estimated based on the top-k retrieved documents (in our experiments, we set k to 10) for each query. The distributions can be estimated using any PRF model, such as those proposed in [20, 36, 42, 44]. In this paper, we only focus on the relevance model [20], a state-of-the-art PRF model, that estimates the relevance distribution as:

$p(w|R_i) \propto \sum_{d \in F_i} p(w|d) \prod_{w' \in q_i} p(w'|d)$ (3)

where $F_i$ denotes a set of top retrieved documents for query $q_i$. Note that the probability of terms that do not appear in the top retrieved documents is equal to zero.

3.3 Relevance Likelihood Maximization Model
In this model, the goal is to learn the relevance distribution R. Given a set of training data, we aim to find a set of parameters $\theta_R$ in order to maximize the likelihood of generating relevance model probabilities for the whole training set. The likelihood function is defined as follows:

$\prod_{i=1}^{m} \prod_{w \in V_i} \hat{p}(w|q_i;\theta_R)^{p(w|R_i)}$ (4)

where $\hat{p}$ is the relevance distribution that can be obtained given the learning parameters $\theta_R$ and $p(w|R_i)$ denotes the relevance model distribution estimated for the i-th query in the training set (see Section 3.2 for more detail). $V_i$ denotes a subset of vocabulary terms that appeared in the top ranked documents retrieved for the query $q_i$. The reason for iterating over the terms that appeared in this set instead of the whole vocabulary set V is that the probability $p(w|R_i)$ is equal to zero for all terms $w \in V - V_i$. In this method, we model the probability distribution $\hat{p}$ using the softmax function (i.e., the function $\sigma$ in Equation (2)) as follows (footnote 1):

$\hat{p}(w|q;\theta_R) = \frac{\exp(\vec{w}^T \vec{q})}{\sum_{w' \in V} \exp(\vec{w}'^T \vec{q})}$ (5)

where $\vec{w}$ denotes the learned embedding vector for term w and $\vec{q}$ is the query vector coming from the output of the hidden layer in our network (see Section 3.1). According to the softmax modeling and the log-likelihood function, we have the following objective:

$\arg\max_{\theta_R} \sum_{i=1}^{m} \sum_{w \in V_i} p(w|R_i) \left( \log \exp(\vec{w}^T \vec{q}_i) - \log \sum_{w' \in V} \exp(\vec{w}'^T \vec{q}_i) \right)$ (6)

Computing this objective function and its derivatives would be computationally expensive (due to the presence of the normalization factor $\sum_{w' \in V} \exp(\vec{w}'^T \vec{q})$ in the objective function). Since all the word embedding vectors as well as the query vector are changed during the optimization process, we cannot simply omit the normalization term as is done in [41] for estimating query embedding vectors based on pre-trained word embedding vectors.

Footnote 1: For simplicity, we drop the bias term in these equations.
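Before turning to how this expensive normalization is handled, here is a minimal sketch of how the relevance model distribution of Equation (3) could be computed from the top-k documents of a training query. The Dirichlet smoothing and toy documents are illustrative choices, not the paper's exact preprocessing:

```python
from collections import Counter

def lm(doc_tokens, mu=1500, collection_prob=None):
    """Dirichlet-smoothed unigram document language model p(w|d)."""
    counts, dlen = Counter(doc_tokens), len(doc_tokens)
    def p(w):
        cp = collection_prob.get(w, 1e-9) if collection_prob else 1e-9
        return (counts[w] + mu * cp) / (dlen + mu)
    return p

def relevance_model(query_tokens, topk_docs, collection_prob=None):
    """Equation (3): p(w|R) is proportional to sum_d p(w|d) * prod_{w' in q} p(w'|d)."""
    scores = Counter()
    for doc in topk_docs:
        p_d = lm(doc, collection_prob=collection_prob)
        query_lik = 1.0
        for w_prime in query_tokens:
            query_lik *= p_d(w_prime)
        for w in set(doc):                 # terms outside the top documents keep zero mass
            scores[w] += p_d(w) * query_lik
    total = sum(scores.values()) or 1.0
    return {w: s / total for w, s in scores.items()}

docs = [["tibet", "protest", "lhasa", "monk"], ["tibetan", "protest", "independence"]]
print(relevance_model(["tibet", "protesters"], docs))
```

During training, however, Equation (6) still requires the softmax normalization over the entire vocabulary at every update.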
To make the computations more tractable, we consider a hierarchical approximation of the softmax function, which was introduced by Morin and Bengio [26] in the context of neural network language models and then successfully employed by Mikolov et al. [24] in the word2vec model. The hierarchical softmax approximation uses a binary tree structure to represent the vocabulary terms, where each leaf corresponds to a unique word. There exists a unique path from the root to each leaf, and this path is used for estimating the probability of the word represented by the leaf. Therefore, the complexity of calculating softmax probabilities goes down from O(|V|) to O(log(|V|)), which is the height of the tree. This leads to a huge improvement in computational complexity. We refer the reader to [25, 26] for the details of calculating the hierarchical softmax approximation.

3.4 Relevance Posterior Estimation Model
As an alternative to maximum likelihood estimation, we can estimate the relevance posterior probability. In the context of pseudo-relevance feedback, Zhai and Lafferty [44] assumed that the language model of the top retrieved documents is estimated based on a mixture model. In other words, it is assumed that there are two language models for the feedback set: the relevance language model (footnote 2) and a background noisy language model. They used an expectation-maximization algorithm to estimate the relevance language model. In this model, we make use of this assumption in order to cast the problem of estimating the relevance distribution R as a classification task: given a pair of word w and query q, does w come from the relevance distribution of the query q? Instead of p(w|R), this model estimates $p(R = 1|w, q;\theta_R)$, where R is a Boolean variable and R = 1 means that the given term-query pair (w, q) comes from the relevance distribution R. $\theta_R$ is a set of parameters that is learned during the training phase. Therefore, the problem is cast as a binary classification task that can be modeled by logistic regression (which means the function $\sigma$ in Equation (2) is the sigmoid function):

$\hat{p}(R = 1|\vec{w}, \vec{q};\theta_R) = \frac{1}{1 + e^{-\vec{w}^T \vec{q}}}$ (7)

where $\vec{w}$ is the relevance-based word embedding vector for term w. Similar to the previous model, $\vec{q}$ is the output of the hidden layer of the network, representing the query embedding vector. In order to address this binary classification problem, we consider a cross-entropy loss function. In theory, for each training query, our model should learn to model relevance for the terms appearing in the corresponding pseudo-relevant set and non-relevance for all the other vocabulary terms, which could be impractical due to the large number of vocabulary terms. Similar to [24], we propose to use noise contrastive estimation (NCE) [12], which hypothesizes that we can achieve a good model by only differentiating the data from noise via a logistic regression model. The main concept in NCE is similar to those proposed in the divergence from randomness model [3] and the divergence minimization feedback model [44].

Footnote 2: The phrase "topical language model" was used in the original work [44]. We call it "relevance language model" to have consistent definitions in both of our models.
Based on the NCE hypothesis, we define the following negative cross-entropy objective function for training our model:

$\arg\max_{\theta_R} \sum_{i=1}^{m} \left[ \sum_{j=1}^{\eta^+} \mathbb{E}_{w_j \sim p(w|R_i)} \left[ \log \hat{p}(R = 1|\vec{w}_j, \vec{q}_i;\theta_R) \right] + \sum_{j=1}^{\eta^-} \mathbb{E}_{w_j \sim p_n(w)} \left[ \log \hat{p}(R = 0|\vec{w}_j, \vec{q}_i;\theta_R) \right] \right]$ (8)

where $p_n(w)$ denotes a noise distribution and $\eta = (\eta^+, \eta^-)$ is a pair of hyper-parameters that control the number of positive and negative instances per query, respectively. We can easily calculate $\hat{p}(R = 0|\vec{w}_j, \vec{q}_i) = 1 - \hat{p}(R = 1|\vec{w}_j, \vec{q}_i)$. The noise distribution $p_n(w)$ can be estimated using a function of the unigram distribution U(w) in the whole training set. Similar to [24], we use $p_n(w) \propto U(w)^{3/4}$, which has been empirically shown to work effectively for negative sampling.

It is notable that although this model learns embedding vectors for both queries and words, it is not obvious how to calculate the probability of each term given a query, because Equation (7) only gives us a classification probability and we cannot simply use the Bayes rule here (since not all probability components are known). This model can perform well when computing the similarity between two terms or two queries, but not between a query and a term. However, we can use the model presented in [41] to estimate the query model using the word embedding vectors (not the ones learned for query vectors) and then calculate the similarity between a query and a term.

4 EXPERIMENTS
In this section, we first describe how we train the relevance-based word embedding models. We further extrinsically evaluate the learned embeddings using two IR tasks: query expansion and query classification. Note that the main aim here is to compare the proposed models with the existing word embedding algorithms, not with the state-of-the-art query expansion and query classification models.

4.1 Training
In order to train relevance-based word embeddings, we obtained millions of unique queries from the publicly available AOL query logs [27]. This dataset contains a sample of web search queries from real users submitted to the AOL search engine within a three-month period from March 1, 2006 to May 31, 2006. We only used query strings; no session or click information was obtained from this dataset. We filtered out the navigational queries containing URL substrings, i.e., "http", "www.", ".com", ".net", ".org", ".edu". All non-alphanumeric characters were removed from all queries. Applying all these constraints leads to over 6 million unique queries as our training query set. To estimate the relevance model distributions in the training set, we considered the top 10 retrieved documents in a target collection in response to each query, using the Galago (footnote 3) implementation of the query likelihood retrieval model [29] with Dirichlet prior smoothing ($\mu = 1500$) [45].

We implemented and trained our models using TensorFlow (footnote 4). The networks are trained with the stochastic gradient descent optimizer using the back-propagation algorithm [33] to compute the gradients. All model hyper-parameters were tuned on the training set (the hyper-parameters with the smallest training loss value were selected).

Footnote 3: http://www.lemurproject.org/galago.php
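A minimal sketch of the sampling and loss computation implied by Equation (8) is shown below; the array shapes and sample sizes are illustrative, and the gradient updates are omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

def noise_distribution(unigram_counts, power=0.75):
    """p_n(w) proportional to U(w)^(3/4), the negative-sampling noise distribution."""
    probs = np.asarray(unigram_counts, dtype=float) ** power
    return probs / probs.sum()

def sample_pairs(relevance_probs, p_n, eta_pos=100, eta_neg=1000):
    """Draw eta+ positive terms from p(w|R_i) and eta- negative terms from p_n(w)."""
    vocab = np.arange(len(p_n))
    pos = rng.choice(vocab, size=eta_pos, p=relevance_probs)
    neg = rng.choice(vocab, size=eta_neg, p=p_n)
    return pos, neg

def nce_loss(q_vec, term_vecs, pos, neg):
    """Negative of the per-query objective in Eq. (8); term_vecs has one row per vocabulary term."""
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    p_pos = sigmoid(term_vecs[pos] @ q_vec)      # p(R=1 | w, q) for sampled relevant terms
    p_neg = sigmoid(-(term_vecs[neg] @ q_vec))   # p(R=0 | w, q) = 1 - p(R=1 | w, q) for noise terms
    return -(np.log(p_pos + 1e-12).sum() + np.log(p_neg + 1e-12).sum())
```

The sampling sizes $\eta^+$ and $\eta^-$ are among the hyper-parameters tuned on the training queries.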
For each model, the learning rate and the batch size were selected from [0.001, 0.01, 0.1, 1] and [64, 128, 256], respectively. For RPE, we also tuned the number of positive and negative instances (i.e., $\eta^+$ and $\eta^-$). The value of $\eta^+$ was swept over [20, 50, 100, 200] and the parameter $\eta^-$ was selected from [$5\eta^+$, $10\eta^+$, $20\eta^+$]. As suggested in [40], in all the experiments (unless otherwise stated) the embedding dimensionality was set to 300 for all models, including the baselines.

4.2 Evaluation via Query Expansion
In this subsection, we evaluate the embedding models in the context of query expansion for the ad-hoc retrieval task. In the following, we first describe the retrieval collections used in our experiments. We further explain our experimental setup as well as the evaluation metrics. We finally report and discuss the query expansion results.

4.2.1 Data. We use four standard test collections in our experiments. The first two collections (AP and Robust) consist of thousands of news articles and are considered homogeneous collections. AP and Robust were previously used in the TREC 1-3 Ad-Hoc Track and the TREC 2004 Robust Track, respectively. The other two collections (GOV2 and ClueWeb) are large-scale web collections containing heterogeneous documents. GOV2 consists of the ".gov" domain web pages, crawled in 2004. ClueWeb (i.e., ClueWeb09 Category B) is a common web crawl collection that only contains English web pages. GOV2 and ClueWeb were previously used in the TREC 2004-2006 Terabyte Track and the TREC 2009-2012 Web Track, respectively. The statistics of these collections as well as the corresponding TREC topics are reported in Table 1. We only used the title of topics as queries.

Table 1: Collections statistics.
ID | collection | queries (title only) | #docs | avg doc length | #qrels
AP | Associated Press 88-89 | TREC 1-3 Ad-Hoc Track, topics 51-200 | 165k | 287 | 15,838
Robust | TREC Disks 4 & 5 minus Congressional Record | TREC 2004 Robust Track, topics 301-450 & 601-700 | 528k | 254 | 17,412
GOV2 | 2004 crawl of .gov domains | TREC 2004-2006 Terabyte Track, topics 701-850 | 25m | 648 | 26,917
ClueWeb | ClueWeb 09 Category B | TREC 2009-2012 Web Track, topics 1-200 | 50m | 1506 | 18,771

4.2.2 Experimental Setup. We cleaned the ClueWeb collection by filtering out the spam documents. The spam filtering phase was done using the Waterloo spam scorer (footnote 5) [6] with a threshold of 60%. Stopwords were removed from all collections using the standard INQUERY stopword list and no stemming was performed. For the purpose of query expansion, we consider the language modeling framework [29] and estimate a query language model based on a given set of word embedding vectors. The expanded query language model $p(w|\theta^*_q)$ is estimated as:

$p(w|\theta^*_q) = \alpha\, p_{ML}(w|q) + (1 - \alpha)\, p(\vec{w}|\vec{q})$ (9)

where $p_{ML}(w|q)$ denotes the maximum likelihood estimation of the original query and $\alpha$ is a free hyper-parameter that controls the weight of the original query model in the expanded model. The probability $p(\vec{w}|\vec{q})$ is calculated based on the trained word embedding vectors. In our first model, this probability can be estimated using Equation (5); while in the second model, we should simply use the Bayes rule given Equation (7) to estimate this probability. However, since we do not have any information about the probability of each term given a query, we use the uniform distribution. For other word embedding models (i.e., word2vec and GloVe), we use the standard method described in [11]. For all the models, we ignore the terms whose embedding vectors are not available. We retrieve the documents for the expanded query language model using the KL-divergence formula [18] with Dirichlet prior smoothing ($\mu = 1500$) [45]. All the retrieval experiments were carried out using the Galago toolkit [7]. In all the experiments, the parameters $\alpha$ (the linear interpolation coefficient) and m (the number of expansion terms) were set using 2-fold cross-validation over the queries in each collection. We selected the parameter $\alpha$ from {0.1, ..., 0.9} and the parameter m from {10, 20, ..., 100}.

Footnote 4: http://tensorflow.org/
Footnote 5: http://plg.uwaterloo.ca/~gvcormac/clueweb09spam/

4.2.3 Evaluation Metrics. To evaluate the effectiveness of query expansion models, we report three standard evaluation metrics: mean average precision (MAP) of the top ranked 1000 documents, precision of the top 20 retrieved documents (P@20), and normalized discounted cumulative gain [13] calculated for the top 20 retrieved documents (nDCG@20). Statistically significant differences of MAP, P@20, and nDCG@20 values based on the two-tailed paired t-test are computed at a 95% confidence level (i.e., p-value < 0.05).

4.2.4 Results and Discussion. To evaluate our models, we consider the following baselines: (i) the standard maximum likelihood estimation (MLE) of the query model without query expansion, (ii) two sets of embedding vectors (one trained on Google News as a large external corpus and one trained on the target retrieval collection) learned by the word2vec model (footnote 6) [24], and (iii) two sets of embedding vectors (one trained on Wikipedia 2004 plus Gigaword 5 as a large external corpus (footnote 7) and the other on the target retrieval collection) learned by the GloVe model [28]. Table 2 reports the results achieved by the proposed models and the baselines.

Table 2: Evaluating relevance-based word embeddings in the context of query expansion. The superscripts 0/1/2/3/4 denote that the MAP improvements over MLE/word2vec-external/word2vec-target/GloVe-external/GloVe-target are statistically significant. The highest value in each row is marked in bold.
Collection | Metric | MLE | word2vec (external) | word2vec (target) | GloVe (external) | GloVe (target) | RLM | RPE
AP | MAP | 0.2197 | 0.2399 | 0.2420 | 0.2319 | 0.2389 | 0.2580^{01234} | 0.2543^{01234}
AP | P@20 | 0.3503 | 0.3688 | 0.3738 | 0.3581 | 0.3631 | 0.3886^{01234} | 0.3812^{034}
AP | NDCG@20 | 0.3924 | 0.4030 | 0.4181 | 0.4025 | 0.4098 | 0.4242^{01234} | 0.4226^{01234}
Robust | MAP | 0.2149 | 0.2218 | 0.2215 | 0.2209 | 0.2172 | 0.2450^{01234} | 0.2372^{01234}
Robust | P@20 | 0.3319 | 0.3357 | 0.3337 | 0.3345 | 0.3281 | 0.3476^{01234} | 0.3409^{024}
Robust | NDCG@20 | 0.3863 | 0.3918 | 0.3881 | 0.3918 | 0.3844 | 0.3982^{01234} | 0.3955^{0}
GOV2 | MAP | 0.2702 | 0.2740 | 0.2723 | 0.2718 | 0.2709 | 0.2867^{01234} | 0.2855^{01234}
GOV2 | P@20 | 0.5132 | 0.5257 | 0.5172 | 0.5186 | 0.5128 | 0.5367^{01234} | 0.5358^{01234}
GOV2 | NDCG@20 | 0.4482 | 0.4571 | 0.4509 | 0.4539 | 0.4485 | 0.4576^{0234} | 0.4557^{024}
ClueWeb | MAP | 0.1028 | 0.1033 | 0.1033 | 0.1029 | 0.1026 | 0.1066^{01234} | 0.1031
ClueWeb | P@20 | 0.3025 | 0.3040 | 0.3053 | 0.3033 | 0.3048 | 0.3073 | 0.3030
ClueWeb | NDCG@20 | 0.2237 | 0.2235 | 0.2252 | 0.2244 | 0.2244 | 0.2273^{01} | 0.2241

According to this table, all the query expansion models outperform the MLE baseline in nearly all cases, which indicates the effectiveness of employing high-dimensional word representations for query expansion. Similar observations have been made in [11, 17, 40, 41]. According to the results, although word2vec performs slightly better than GloVe, no significant differences can be observed between their performances.
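For reference, a minimal sketch of how the expanded query model of Equation (9) can be assembled from an embedding-based term distribution is given below; the softmax scoring follows Equation (5), while the vocabulary handling and renormalization details are illustrative rather than the exact Galago pipeline:

```python
import numpy as np

def expanded_query_lm(query_terms, term_vecs, vocab, alpha=0.5, m=10):
    """Eq. (9): p(w|theta*_q) = alpha * p_ML(w|q) + (1 - alpha) * p(w|q), keeping m expansion terms."""
    p_ml = {w: query_terms.count(w) / len(query_terms) for w in set(query_terms)}
    in_vocab = [w for w in query_terms if w in vocab]    # terms without embeddings are ignored
    # Assumes at least one query term has an embedding vector.
    q_vec = np.mean([term_vecs[vocab[w]] for w in in_vocab], axis=0)
    scores = term_vecs @ q_vec
    p_emb = np.exp(scores - scores.max())
    p_emb /= p_emb.sum()                                 # softmax over the vocabulary (Eq. (5))
    id2w = {i: w for w, i in vocab.items()}
    expanded = {id2w[i]: (1 - alpha) * p_emb[i] for i in np.argsort(-p_emb)[:m]}
    for w, p in p_ml.items():                            # interpolate with the original query model
        expanded[w] = expanded.get(w, 0.0) + alpha * p
    total = sum(expanded.values())
    return {w: p / total for w, p in expanded.items()}
```

The expanded model is then used to retrieve documents with the KL-divergence ranking function, as described above.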
According to Table 2, both relevance-based embedding models outperform all the baselines in all the collections, which shows the importance of taking relevance into account for training embedding vectors. Tese improvements are ofen statistically signifcant compared to all the baselines. Te relevance likelihood maximization model (RLM) performs beter than the relevance posterior estimation model (RPE) in all cases and the reason is related to their objective function. RLM learns the relevance distribution for all terms, while RPE learns the classifcation probability of being relevance for vocabulary terms (see Equations (5) and (7)). 6We use the CBOW implementation of the word2vec model. Te skip-gram model also performs similarly. 7Available at htp://nlp.stanford.edu/projects/glove/. \fTable 3: Top 10 expansion terms obtained by the word2vec and the relevance-based word embedding models for two sample queries \u201cindian american museum\u201d and \u201ctibet protesters\u201d. query: \u201cindian american museum\u201d query: \u201ctibet protesters\u201d word2vec Rel.-based Embedding word2vec Rel.-based Embedding external target RLM RPE external target RLM RPE history powwows chumash heye demonstrators tibetan tibetan tibetan art smithsonian heye collection protestors lhasa lama tibetans culture afro artifacts chumash tibetan demonstrators tibetans lama british mesoamerica smithsonian smithsonian protests tibetans lhasa independence heritage smithsonians collection york tibetans marchers dalai lhasa society native washington new protest lhasas independence dalai states heye institution apa activists jokhang protest open contemporary hopi york native protesting demonstrations open protest part mayas native americans lhasa dissidents zone zone united cimam apa history demonstrations barkhor followers jokhang To get a sense of what is learned by each of the embedding models8, in Table 3 we report the top 10 expansion terms for two sample queries from the Robust collection. According to this table, the terms added to the query by the word2vec model are syntactically or semantically related to individual query terms, which is expected. For the query \u201cindian american museum\u201d as an example, the terms \u201chistory\u201d, \u201cart\u201d, and \u201cculture\u201d are related to the query term \u201cmuseum\u201d, while the terms \u201cunited\u201d and \u201cstates\u201d are related to the query term \u201camerican\u201d. In contrast, looking at the expansion terms obtained by the relevance-based word embeddings, we can see that some relevant terms to the whole query were selected. For instance, \u201cchumash\u201d (a group of native americans)9, \u201cheye\u201d (the national museum of the American Indian in New York), \u201csmithsonian\u201d (the national museum of the American Indian in Washington DC), and \u201capa\u201d (the American Psychological Association that actively promotes American Indian museums). A similar observation can be made for the other sample query (i.e., \u201ctibet protesters\u201d). For example, the word \u201cindependence\u201d is related to the whole query that was only selected by the relevance-based word embedding models, while the terms \u201cprotestors\u201d, \u201cprotests\u201d, \u201cprotest\u201d, and \u201cprotesting\u201d that are syntactically similar to the query term \u201cprotesters\u201d were considered by the word2vec model. We believe that these diferences are due to the learning objective of the models. 
Interestingly, the expansion terms added to each query by the two relevance-based models look very similar, but according to Table 2, their performances are quite different. The reason is related to the weights given to each term by the two models. The weights given to the expansion terms by RPE are very close to each other because its objective is to just classify each term and all of these terms are classified with a high probability as "relevant".

In the next set of experiments, we consider the methods that use the top retrieved documents for query expansion: the relevance model (RM3) [1, 20] as a state-of-the-art pseudo-relevance feedback model, and the local embedding approach recently proposed by Diaz et al. [11] with the general idea of training word embedding models on the top ranked documents retrieved in response to a given query. Similar to [11], we use the word2vec model to train word embedding vectors on the top 1000 documents. The results are reported in Table 4. In this table, ERM refers to the embedding-based relevance model recently proposed by Zamani and Croft [40] in order to make use of semantic similarities estimated based on the word embedding vectors in a pseudo-relevance feedback scenario.

Table 4: Evaluating relevance-based word embedding in pseudo-relevance feedback scenario. The superscripts 1/2/3 denote that the MAP improvements over RM3/Local Embedding/ERM with Local Embedding are statistically significant. The highest value in each row is marked in bold.
Collection | Metric | RM3 | Local Emb. | ERM (Local Emb.) | ERM (RLM)
AP | MAP | 0.2927 | 0.2412 | 0.3047 | 0.3119^{12}
AP | P@20 | 0.4034 | 0.3742 | 0.4105 | 0.4233^{12}
AP | NDCG@20 | 0.4368 | 0.4173 | 0.4411 | 0.4495^{123}
Robust | MAP | 0.2593 | 0.2235 | 0.2643 | 0.2761^{123}
Robust | P@20 | 0.3486 | 0.3366 | 0.3498 | 0.3605^{123}
Robust | NDCG@20 | 0.4011 | 0.3868 | 0.4080 | 0.4173^{123}
GOV2 | MAP | 0.2863 | 0.2748 | 0.2924 | 0.2986^{123}
GOV2 | P@20 | 0.5318 | 0.5271 | 0.5379 | 0.5417^{12}
GOV2 | NDCG@20 | 0.4503 | 0.4576 | 0.4584 | 0.4603^{123}
ClueWeb | MAP | 0.1079 | 0.1041 | 0.1094 | 0.1121^{12}
ClueWeb | P@20 | 0.3111 | 0.3062 | 0.3145 | 0.3168
ClueWeb | NDCG@20 | 0.2309 | 0.2261 | 0.2328 | 0.2360^{2}

According to Table 4, the ERM model that uses the relevance-based word embedding (RLM, footnote 10) outperforms all the other methods. These improvements are statistically significant in most cases. By comparing the results obtained by local embedding and those reported in Table 2, it can be observed that there are no substantial differences between the results for local embedding and word2vec. This is similar to what is reported by Diaz et al. [11] when the embedding vectors are trained on the top documents in the target collection, similar to our setting. Note that the relevance-based model was also trained on the target collection.

Footnote 8: For the sake of space, we only report the expanded terms estimated by the word2vec model and the proposed models.
Footnote 9: See https://en.wikipedia.org/wiki/Chumash_people
Footnote 10: For the sake of space, we only consider RLM, which shows better performance compared to RPE in query expansion.

Figure 2: Sensitivity of RLM to (a) the number of expansion terms and (b) the interpolation coefficient ($\alpha$), in terms of MAP, on the AP, Robust, GOV2, and ClueWeb collections.
Figure 3: Sensitivity of RLM to the dimension of embedding vectors, in terms of MAP.
An interesting observation from Tables 2 and 4 is that the RLM performance (without using pseudo-relevant documents) in Robust and GOV2 is very close to the RM3 performance, and is slightly better in the GOV2 collection. Note that RM3 needs two retrieval runs (footnote 11) and uses top retrieved documents, while RLM only needs one retrieval run. This is an important issue in many real-world applications, since efficiency constraints do not always allow two retrieval runs per query.

Footnote 11: Diaz [10] showed that for precision-oriented tasks, the second retrieval run can be restricted to the initial rank list for improving the efficiency of PRF models. However, for recall-oriented metrics, e.g., MAP, the second retrieval helps a lot.

Parameter Sensitivity. In the next set of experiments, we study the sensitivity of RLM, as the best performing word embedding model in Table 2, to the expansion parameters. Figure 2a plots the sensitivity of RLM to the number of expansion terms where the parameter $\alpha$ is set to 0.5. According to this figure, in both newswire collections, the method shows its best performance when the queries are expanded with only 10 words. In the GOV2 collection, 15 words are needed for the method to show its best performance. Figure 2b plots the sensitivity of the methods to the interpolation coefficient $\alpha$ (see Equation 9) where the number of expansion terms is set to 10. According to the curves corresponding to AP and Robust, the original query language model needs to be interpolated with the model estimated using relevance-based word embeddings with equal weights (i.e., $\alpha = 0.5$). This shows the quality of the distribution estimated via the learned embedding vectors. In the GOV2 collection, a higher weight should be given to the original query model, which indicates that the original query plays a key role in achieving good retrieval performance in this collection.

We also study the performance of RLM, as the best performing word embedding model for query expansion, with respect to the embedding dimensionality. The results are shown in Figure 3, where the query expansion performance generally improves as we increase the embedding dimensionality. The performances become stable when the dimension is larger than 300. This experiment suggests that 400 dimensions would be enough for the relevance-based embedding model.

Due to the large number of parameters in the neural networks, they can require large amounts of training data to achieve good performance. In the next set of experiments, we study how much training data is needed for training our best model. The results are plotted in Figure 4. According to this figure, by increasing the number of training queries from one million to four million queries, the performance significantly increases, and becomes more stable after four million queries.

Figure 4: The performance of RLM with respect to different amounts of training data (training queries), in terms of MAP.

4.3 Evaluation via Query Classification
In this subsection, we evaluate the proposed embedding models in the context of query classification. In this task, each query is assigned to a number of labels (categories) which are pre-defined, and a few training queries are available for each label. This is a supervised multi-label classification task with little training data.

4.3.1 Data. We consider the dataset that was introduced in KDD Cup 2005 [22] for the internet user search query categorization task and was previously used in [41] for evaluating query embedding vectors. This dataset contains 800 web queries submitted by real users, randomly collected from the MSN search logs. The queries do not contain "junk" text or non-English terms. The queries were labelled by three human editors. 67 categories were pre-defined and up to 5 labels were selected for each query by each editor.

4.3.2 Experimental Setup. In our experiments, we performed 5-fold cross-validation over the queries and the reported results are the average of those obtained over the test folds. In all experiments, the spelling errors in queries were corrected in a pre-processing phase, the stopwords were removed from queries (using the INQUERY stopword list), and no stemming was performed. To classify each query, we consider a very simple kNN-based approach proposed in [41]. We first compute the probability of each category/label given each query q and then select the top t categories with the highest probabilities. The probability $p(C_i|q)$ is computed as follows:

$p(C_i|q) = \frac{\delta(\vec{C}_i, \vec{q})}{\sum_j \delta(\vec{C}_j, \vec{q})} \propto \delta(\vec{C}_i, \vec{q})$ (10)

where $C_i$ denotes the i-th category. $\vec{C}_i$ is the centroid vector of all query embedding vectors with the label of $C_i$ in the training set. We ignore the query terms whose embedding vectors are not available. The number of labels assigned to each query was tuned on the training set from {1, 2, 3, 4, 5}. In the query classification experiments, we trained the relevance-based word embeddings using Robust as the collection.

4.3.3 Evaluation Metrics. We consider two evaluation metrics that were also used in KDD Cup 2005 [22]: precision and F1-measure. Since the labels assigned by the three human editors differ in some cases, all the label sets should be taken into account. These metrics are computed in the same way as described in [22] for evaluating the KDD Cup 2005 submitted runs. Statistically significant differences are determined using the two-tailed paired t-test computed at a 95% confidence level (p-value < 0.05).

4.3.4 Results and Discussion. We compare our models against the word2vec and GloVe methods trained on the external collections that are described in the query expansion experiments. The results are reported in Table 5, where the relevance-based embedding models significantly outperform the baselines in terms of both metrics.

Table 5: Evaluating embedding algorithms via query classification. The superscripts 1/2 denote that the improvements over word2vec/GloVe are significant. The highest value in each column is marked in bold.
Method | Precision | F1-measure
word2vec | 0.3712 | 0.4008
GloVe | 0.3643 | 0.3912
Rel.-based Embedding RLM | 0.3943^{12} | 0.4267^{12}
Rel.-based Embedding RPE | 0.3961^{12} | 0.4294^{12}

Figure 5: Sensitivity of the relevance-based embedding models to the embedding dimensionality, in terms of F1-measure.
Figure 6: The performance of the relevance-based embedding models with respect to different amounts of training data (training queries), in terms of F1-measure.

An interesting observation here is that, contrary to the query expansion experiments, RPE performs better than RLM in query classification.
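A minimal sketch of this centroid-based classifier is shown below; cosine similarity is used as the $\delta$ function here, which is one reasonable instantiation rather than necessarily the exact choice made in [41]:

```python
import numpy as np

def train_centroids(train_query_vecs, train_labels, num_categories):
    """Centroid of the training-query embedding vectors for each category."""
    d = train_query_vecs.shape[1]
    centroids = np.zeros((num_categories, d))
    counts = np.zeros(num_categories)
    for vec, labels in zip(train_query_vecs, train_labels):
        for c in labels:
            centroids[c] += vec
            counts[c] += 1
    return centroids / np.maximum(counts, 1)[:, None]

def classify(query_vec, centroids, t=3):
    """Equation (10): score each category by delta(C_i, q), here cosine similarity, and keep the top t."""
    sims = centroids @ query_vec / (
        np.linalg.norm(centroids, axis=1) * np.linalg.norm(query_vec) + 1e-12)
    return np.argsort(-sims)[:t]
```

As noted above, RPE scores slightly higher than RLM on this task, which is the opposite of the query expansion results.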
Te reason is that in query expansion the weight of each term is considered in order to generate the expanded query language model. Terefore, in addition to the order of terms, their weights should be also efective for improving the retrieval performance with query expansion. In query classifcation, we only assign a few categories to each query, and thus as long as the order of categories is correct, the similarity values between the queries and the categories do not mater. In the next set of experiments, we study the performance of our relevance-based word embedding models with respect to the embedding dimensionality. Te results are ploted in Figure 5. According to this fgure, the performance is generally improved by increasing the embedding dimensionality, and becomes stable when the dimension is greater than 400. Tis is similar to our observation in the query expansion experiments. We also study the amount of data needed for training our models in Figure 6. According to this fgure, at least 4 million queries are needed in order to learn accurate relevance-based word embeddings. It can be seen from Figure 6 that RLM needs more training data compared to RPE in order to perform well, because by increasing the amount of training data the learning curves of these two models get closer. \f5" + }, + { + "url": "http://arxiv.org/abs/1501.07467v1", + "title": "Regression and Learning to Rank Aggregation for User Engagement Evaluation", + "abstract": "User engagement refers to the amount of interaction an instance (e.g., tweet,\nnews, and forum post) achieves. Ranking the items in social media websites\nbased on the amount of user participation in them, can be used in different\napplications, such as recommender systems. In this paper, we consider a tweet\ncontaining a rating for a movie as an instance and focus on ranking the\ninstances of each user based on their engagement, i.e., the total number of\nretweets and favorites it will gain.\n For this task, we define several features which can be extracted from the\nmeta-data of each tweet. The features are partitioned into three categories:\nuser-based, movie-based, and tweet-based. We show that in order to obtain good\nresults, features from all categories should be considered. We exploit\nregression and learning to rank methods to rank the tweets and propose to\naggregate the results of regression and learning to rank methods to achieve\nbetter performance. We have run our experiments on an extended version of\nMovieTweeting dataset provided by ACM RecSys Challenge 2014. The results show\nthat learning to rank approach outperforms most of the regression models and\nthe combination can improve the performance significantly.", + "authors": "Hamed Zamani, Azadeh Shakery, Pooya Moradi", + "published": "2015-01-29", + "updated": "2015-01-29", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.LG", + "H.2.8; J.4" + ], + "main_content": "INTRODUCTION Twitter is an online social information network which has become tremendously popular in the past few years [19]. Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for pro\ufb01t or commercial advantage and that copies bear this notice and the full citation on the \ufb01rst page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. 
To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior speci\ufb01c permission and/or a fee. Request permissions from permissions@acm.org. RecSysChallenge\u201914, October 10, 2014, Foster City, CA, USA. Copyright 20XX ACM X-XXXXX-XX-X/XX/XX ...$15.00. Millions of users are sharing rich information using social media sites, such as Twitter, which can be used by social recommender systems [12]. Item providers often let users express their opinion about an item in social networks. For instance, users can give a rating to each movie in Internet Movie Database (IMDb) website1 and also share it in Twitter. This intensi\ufb01es the importance of considering social media sites for recommendation and information \ufb01ltering systems [31]. Product rating prediction is a traditional recommender system problem which has been studied extensively in the literature [10, 23, 24]. One important issue in recommender systems is the engagement which can be gained by the users\u2019 comments/opinions. When users share their comments on di\ufb00erent items, the amount of user interactions achieved by each comment can be used to improve the quality of recommender systems. In this paper, we focus on ranking these comments by their engagements. We focus on movie ratings tweeted by IMDb users in Twitter. Hereafter, we use the word \u201cengagement\u201d as the user interaction which is expressed by adding up the number of retweets and favorites a tweet has gained. Our purpose is to rank the tweets of each user, each containing a rating for a movie in IMDb, by their engagements. For this task, we \ufb01rst extract several features from the tweets. The features are categorized into three groups: userbased, movie-based, and tweet-based. It should be noted that the content of the tweets are hidden and there is no textual feature among our de\ufb01ned features. Then, we propose two di\ufb00erent supervised approaches in order to rank the tweets. The \ufb01rst approach tires to predict the tweets engagements globally. In other words, although our purpose is to sort the tweets of each user, we consider tweets of all the users together and then try to predict the tweets engagements. We can then extract the sorted list of each user from the global ranked list. Therefore, we \ufb01t regression models to predict the engagement of each tweet. In the second approach, for each user, we rank the tweets by their engagement without predicting the engagements. To this aim, we use learning to rank approach which is extensively exploited in information retrieval, natural language processing, and recommender systems. Learning to rank methods rank the tweets for each user. In contrary to regression models which try to predict the engagements by considering all the tweets together, learning to rank methods emphasize on maximizing an objective function for each user. According to the di\ufb00erent points of view of regression and learning to 1http://imdb.com \frank methods, we further propose to aggregate the results obtained by di\ufb00erent regression and learning to rank methods to improve the performance. In the experiments, we use an extended version of MovieTweetings dataset [9] provided by ACM RecSys Challenge 2014 and report the results of a number of state-of-the-art regression and learning to rank methods, separately. We further discuss the aggregation of the results of these two approaches. 
The experimental results show that although the results of regression methods are not so impressive, aggregation of regression and learning to rank methods improves the results signi\ufb01cantly. 2. RELATED WORK The problem of engagement prediction or online participation has been studied from di\ufb00erent points of view in news websites, social networks, and discussion forums. Several machine learning algorithms have been used in the literature for this task. To address the problem of engagement prediction, several features have been proposed for training a model. Suh et al. [28] have provided an analysis on the factors impacting the number of retweets. They have concluded that hashtags, number of followers, number of followees, and the account age play important roles in increasing the probability of the tweets to be retweeted. Zaman et al. [34] have trained a probabilistic collaborative \ufb01ltering model to predict the future retweets using the history of the previous ones. Linear models have been used in some other studies to predict the popularity of videos on YouTube by observing their popularity after regular periods [29]. Petrovic et al. [26] have proposed a passive-aggressive algorithm to predict whether a tweet will be retweeted or not. Recognizing popular messages is also one of the similar problems which is used for breaking news detection and personalized tweet/content recommendation. Hong et al. [13] have formulated this task as a classi\ufb01cation problem by exploiting content-based features, temporal information, meta-data of messages, and the users social graph. Predicting the extent to which a news is going to be breaking or how many comments a news is going to gain is one of the engagement prediction problems. Tatar et al. [30] have analyzed a news dataset to address this problem. They have focused on sorting the articles based on their future popularity and they have proposed to use linear regression for this task. It is worth noting that ranking instances is one of the problems which has been extensively studied in information retrieval, natural language processing, and machine learning \ufb01elds [21]. To solve a similar problem, Uysal and Croft [31] have proposed \u201cCoordinate Ascent learning to rank\u201d algorithm to rank tweets for a user in a way that tweets which are more likely to be retweeted come on top. They have also worked on ranking users for a tweet in a way that the higher the rank, the more likely the given tweet will be retweeted. Several learning to rank algorithms have been proposed in the literature. Moreover, there are some supervised and unsupervised ensemble methods to aggregate di\ufb00erent rankings, such as Borda Count [2] and Cranking [20]. Previous studies show that in many cases, ranking aggregation methods outperform single ranking methods [8, 21]. 3. METHODOLOGY In general, our idea is to extract a number of features for each tweet and then try to learn machine learning based models on the training data. Then, for each user in test data, we apply the learned model to rank his/her tweets based on their engagements. In this section, we \ufb01rst introduce the features, and then we propose some machine learning approaches to rank the tweets based on their engagements. We also try to aggregate the results of these di\ufb00erent techniques to improve the performance. In the following subsections, we explain our methodology in details. 3.1 Features Each tweet contains the opinion of a user about a speci\ufb01c movie. 
We partition the features extracted from each tweet into three di\ufb00erent categories: user-based, movie-based, and tweet-based features. Overall, we extract several features from each tweet T tweeted by user U about movie M. Userbased features give us some information about the user who has tweeted his/her opinion about a speci\ufb01c movie. These features are not tweet-speci\ufb01c and they are equal for all tweets of each user. The total number of followers of U is an example of user-based features. Movie-based features only include information about movie M, e.g., the total number of tweets about movie M. Tweet-based features contain speci\ufb01c information of tweet T. This information may also contain the opinion of user U about movie M. The time and language of a tweet are two examples of tweet-based features. The name and description of the extracted features are shown in Table 1. These features are extracted for each tweet T. We specify the category of the features and also their type; \u201cN\u201d, \u201cC\u201d, and \u201cB\u201d are used for numerical, categorical, and boolean types, respectively. It should be noted that the feature values are normalized using z-score normalization method. We also perform feature selection to improve the performance and also to analyse the e\ufb00ectiveness of the proposed features. We exploit backward elimination for feature selection. The bolded features in Table 1 are those that are retained after performing feature selection. We discuss the selected features in Subsection 4.1 3.2 Machine Learning Techniques for User Engagement Ranking In this subsection, we propose two di\ufb00erent learning based approaches to rank the tweets of each user based on their engagements. The \ufb01rst approach is predicting the engagement of tweets, globally. In other words, for predicting the engagement of tweets of a user, we consider the tweets of all users for training the model and not only the tweets of the user. To this aim, we use regression models to predict the engagement of each tweet. The next approach is to rank the tweets for each user without predicting their engagements. We exploit learning to rank methods to rank the tweets of each user, which focus on ranking the tweets of each user individually and try to maximize a given objective function for each user. Finally, we propose a supervised method to aggregate the regression and learning to rank results using supervised Kemeny approach [1]. In the following, we explain our proposed methods in details. \fTable 1: Extracted features from each tweet T tweeted by user U about movie M Cat. Feature Name Type Description User-based Number of followers N The total number of users who are following user U in Twitter. Number of followees N The total number of users who are followed by user U in Twitter. Number of tweets N The total number of tweets written by user U. Number of IMDb tweets N The total number of tweets tweeted by user U using IMBD about di\ufb00erent movies. Average of ratings N The average of ratings provided by user U about di\ufb00erent movies in IMDb. Number of liked tweets N The total number of tweets which are liked by user U. Number of lists N The total number of Twitter lists which user U is involved in. Tweeting frequency N The frequency of tweets written by user U in each day. Attracting followers frequency N The frequency of attracting followers per day. 
This feature is calculated by dividing the total number of followers by the membership age of user U in Twitter in terms of number of days. Following frequency N The frequency of following di\ufb00erent users by user U per day. Like frequency N The frequency of liking tweets by user U per day. Followers/Followees N The total number of followers of user U divided by the total number of his/her followees. FollowersFollowees N The di\ufb00erence between the total number of followers and followees of user U. Movie-based Number of tweets about M N The total number of tweets tweeted using IMDb about movie M. This feature shows how much movie M is rated by di\ufb00erent users around the world in IMDb. Average rating of M N The average of ratings reported by di\ufb00erent users for movie M. Tweet-based Rate N The rating provided by user U for movie M. This rating is a positive integer up to 10. Mention count N The total number of people who are mentioned in tweet T. Number of hash-tags N The total number of hash-tags used in tweet T. Tweet age N The age of tweet T in terms of number of days. Membership age until now N The number of days from when user U registered in Twitter until when tweet T is tweeted. opinion di\ufb00erence N The di\ufb00erence between the rate tweeted by user U for movie M and the average of rates given by di\ufb00erent users about movie M. Hour of tweet C The hour when tweet T is tweeted. This feature is an integer between 0 and 23. Day of tweet C The day of week which tweet T is tweeted. Time of tweet C The part of the day that tweet T is tweeted. We have partitioned each day into four parts. Holidays or not B This feature give us whether tweet T is tweeted on holidays or not. Same language or not B This feature illustrates whether tweet T is tweeted in the same language as the default language of user U or not. English or not B This feature tells us whether tweet T is tweeted in English or not. \f3.2.1 Regression To rank the tweets of each user based on their possible engagements, we can \ufb01rst predict the engagement of each tweet and then sort the tweets by their predicted values. To predict the engagements, we propose to train regression models by using the features de\ufb01ned in Subsection 3.1 as the features and the engagements as the labels. Then, we apply the learned model on the same extracted features from the test set. To create the regression model, we exploit Extremely Randomized Trees (also known as Extra-Trees) [11], Bayesian Ridge Regression [22], and Stochastic Gradient Descent Regression (SGDR) [4]. Extra-Trees are tree-based ensemble regression methods which are successfully used in several tasks. In Extra-Trees, when a tree is built, the node splitting step is done randomly by choosing the best split among a random subset of features. The results of all trees are combined by averaging the individual predictions. SGDR is a generalized linear regression model that tries to \ufb01t a linear model by minimizing a regularized empirical loss function using gradient descent technique. 3.2.2 Learning to Rank Instead of predicting the exact engagements, we can rank the tweets directly, without predicting the engagements of each tweet. Learning to Rank (LTR) methods are machine learning techniques which try to solve ranking problems [21]. LTR methods have been widely used in many di\ufb00erent areas such as information retrieval, natural language processing, and recommender systems [16, 21]. 
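Before turning to the specific learning to rank algorithms, here is a minimal scikit-learn sketch of the regression approach from Section 3.2.1: a single global regressor is fit on all training tweets and each user's test tweets are then sorted by the predicted engagement. The feature matrices, labels, and model settings are illustrative:

```python
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.linear_model import BayesianRidge, SGDRegressor

def rank_tweets_per_user(X_train, y_train, X_test, user_ids_test, model=None):
    """Fit a global engagement regressor, then sort each user's test tweets by prediction."""
    model = model or ExtraTreesRegressor(n_estimators=200, random_state=0)
    model.fit(X_train, y_train)                  # y = retweets + favorites
    preds = model.predict(X_test)
    rankings = {}
    for uid in np.unique(user_ids_test):
        idx = np.where(user_ids_test == uid)[0]
        rankings[uid] = idx[np.argsort(-preds[idx])]   # tweet indices, most engaging first
    return rankings

# The other regressors mentioned above can be swapped in the same way, e.g.:
# rank_tweets_per_user(X_train, y_train, X_test, users, model=BayesianRidge())
# rank_tweets_per_user(X_train, y_train, X_test, users, model=SGDRegressor(max_iter=1000))
```

Learning to rank, described next, instead optimizes the per-user ordering directly.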
LTR methods train a ranking model and use the learned model to rank the instances using several features which are extracted from each instance. To build our LTR model, we consider a number of ranking algorithms which are among state-of-the-art in many test collections: ListNet [7], RankingSVM [15], AdaRank [33], RankNet [6], LambdaRank [5], and ListMLE [32]. ListNet is a probabilistic listwise approach to solve ranking problems, which exploits a parameterized Plackett-Luce model to compute di\ufb00erent permutations. Ranking SVM is a pairwise ranking approach which uses SVM classi\ufb01er in its core computations. The basic idea behind AdaRank is constructing some weak rankers and combining them linearly to achieve a better performance. Although, Ranking SVM creates a ranking model by minimizing the classi\ufb01cation error on instance pairs, AdaRank tries to minimize the loss function which is directly de\ufb01ned as an evaluation measure (such as NDCG@10). RankNet is one of the pairwise methods that adopts cross entropy as the loss function. RankNet employs a three layered neural network with a single output node to compare each pairs. LambdaRank is one of the ranking algorithms inspired by RankNet which uses Gradient Descent approach to optimize the evaluation measure. Similar to ListNet, ListMLE is a probabilistic listwise approach to rank instances by maximizing a logarithmic loss function. 3.2.3 Aggregating Regression and Learning to Rank Outputs According to the aforementioned facts, regression and learning to rank techniques take two di\ufb00erent points of view into consideration and their results might be totally di\ufb00erent. Therefore, by aggregating their results, the performance can potentially be increased. To aggregate all the mentioned regression and learning to rank results, we use supervised Kemeny approach [1]. Kemeny optimal aggregation [17] tries to minimize total number of pairwise disagreements between the \ufb01nal ranking and the outputs of all base rankers. In other words, if r1, r2, ..., rn represent the outputs of n di\ufb00erent rankers, the \ufb01nal ranking r\u2217is computed as: r\u2217= arg max r { n X i=1 k(r, ri)} where k(\u03b1, \u03b2) is the Kendall tau distance [18] measured as: |(i, j) : i < j, \u03b1i > \u03b1j \u2227\u03b2i < \u03b2j| where \u03b1i denotes the ith position of ranking \u03b1. While in Kemeny optimal aggregation all the rankers have the same importance, supervised Kemeny approach assumes that there is a weight for each ranker. In more details, in supervised Kemeny instead of counting the number of disagreements, we use the following equation to compute the \ufb01nal ranking: r\u2217= arg max r { n X i=1 k(r, ri) \u2217wi} where wi denotes the weight of ith ranker. To \ufb01nd the weight of each ranker, we propose to perform a Randomized Search [3]. To this aim, we perform cross validation over training data and \ufb01nd the optimal weight for each ranker. 4. EXPERIMENTS In the experiments, we consider an extended version of MovieTweetings dataset [9] which is provided by ACM RecSys Challenge 2014 [27].2 The dataset contains movie ratings which are automatically tweeted by the users of IMDb iOS application. The reported results throughout this work are those obtained on the test set. The evaluation measure is the mean of normalized discounted cumulative gain [14] computed for top 10 tweets of each user. We call it NDCG@10, hereafter. 
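The supervised Kemeny aggregation of Section 3.2.3 can be sketched as follows for a single user's tweet list: the Kendall tau distance counts pairwise disagreements, and the aggregate ranking minimizes their weighted sum (equivalently, maximizes weighted agreement). The brute-force search over permutations is only meant for short per-user lists, and the randomized search for the ranker weights is omitted:

```python
from itertools import permutations

def kendall_tau_distance(r_a, r_b):
    """Number of item pairs ordered differently by the two rankings."""
    pos_a = {item: i for i, item in enumerate(r_a)}
    pos_b = {item: i for i, item in enumerate(r_b)}
    items = list(pos_a)
    disagreements = 0
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            x, y = items[i], items[j]
            if (pos_a[x] - pos_a[y]) * (pos_b[x] - pos_b[y]) < 0:
                disagreements += 1
    return disagreements

def weighted_kemeny(rankings, weights):
    """Pick the permutation minimizing the weighted sum of pairwise disagreements."""
    items = rankings[0]
    best, best_cost = None, float("inf")
    for perm in permutations(items):           # exhaustive; feasible only for short lists
        cost = sum(w * kendall_tau_distance(perm, r) for r, w in zip(rankings, weights))
        if cost < best_cost:
            best, best_cost = perm, cost
    return list(best)

base_rankings = [["t1", "t2", "t3"], ["t2", "t1", "t3"], ["t1", "t3", "t2"]]
print(weighted_kemeny(base_rankings, weights=[0.5, 0.3, 0.2]))   # ['t1', 't2', 't3']
```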
In our experiments, we used Scikit-learn library [25] for all the regression and feature selection algorithms. To select the parameters of the learning methods, we performed hyper-parameter optimization using Randomized Search [3] with 5-fold cross validation. For the learning to rank algorithms except AdaRank, we exploited an open source package, named ToyBox-Ranking3. For AdaRank, we used the software developed in Microsoft Research [33].4 4.1 Experimental Results and Discussion In this subsection, we report and discuss the results of di\ufb00erent regression and learning to rank methods. We also provide the results obtained by aggregating the regression and learning to rank results using the supervised Kemeny approach. To show the impact of feature selection, we report the results of regression and learning to rank methods both before and after feature selection. As mentioned before, the bolded features in Table 1 are those retained after performing backward elimination method. The selected features are 2http://2014.recsyschallenge.com/ 3https://github.com/y-tag/cpp-ToyBox-Ranking 4http://goo.gl/xycK0h \fTable 2: Regression results with and without feature selection NDCG@10 REG method REG w/ FS REG w/o FS XT 0.7441384724 0.7863435909 BRR 0.7541443109 0.7759180414 SGDR 0.7507494314 0.8168741812 Table 3: Learning to rank results with and without feature selection NDCG@10 LTR method LTR w/ FS LTR w/o FS ListNet 0.8243394623 0.8190048552 RankingSVM 0.8225893034 0.8169257071 AdaRank 0.8182340058 0.8153622186 RankNet 0.8223464432 0.8169752826 LambdaRank 0.8209622031 0.8126243442 ListMLE 0.8217342257 0.8174866943 di\ufb00used among all the three feature categories. This shows the importance of using a combination of di\ufb00erent kinds of features in this problem. The selected user-based features show how active and popular the user is in Twitter. Interestingly, all the boolean features are selected and none of the categorical features are retained. The reason may be that the values of the boolean features are constant and the difference between them are not a continuous value. So it may be easier and more e\ufb03cient to use these features. Moreover, for the categorical features, we assign a number to each possible category and the arithmetic di\ufb00erence between these numbers is not informative. Table 2 shows the results obtained by di\ufb00erent regression algorithms, in terms of NDCG@10. In Table 2, \u201cXT\u201d, \u201cBRR\u201d, and \u201cSGDR\u201d respectively denote Extremely Randomized Trees, Bayesian Ridge Regression, and Stochastic Gradient Descent Regression. The results reported in Table 2 demonstrate that feature selection does not help with regression algorithms. In other words, after performing the feature selection, the results of regression models are dropped dramatically. This shows that backward elimination is not su\ufb03cient for regression models. According to Table 2, there is a considerable di\ufb00erence between the results achieved by di\ufb00erent regression models. Table 3 shows the results of using several learning to rank methods. The results also include NDCG@10 before and after applying feature selection. The results reported in Table 3 emphasize on the importance of using feature selection in learning to rank methods; since after performing feature selection, the results are improved. Therefore, backward elimination method works well for LTR methods. Table 3 demonstrates that ListNet performs better than the other LTR methods. 
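The randomized hyper-parameter search with 5-fold cross-validation mentioned at the beginning of this subsection can be set up with scikit-learn roughly as follows; the parameter ranges and scoring choice are illustrative:

```python
from scipy.stats import randint, uniform
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.model_selection import RandomizedSearchCV

param_distributions = {
    "n_estimators": randint(100, 500),
    "max_depth": randint(3, 20),
    "max_features": uniform(0.3, 0.7),
}
search = RandomizedSearchCV(
    ExtraTreesRegressor(random_state=0),
    param_distributions=param_distributions,
    n_iter=30,
    cv=5,                               # 5-fold cross-validation
    scoring="neg_mean_squared_error",
    random_state=0,
)
# search.fit(X_train, y_train)          # X_train, y_train: feature matrix and engagement labels
# best_model = search.best_estimator_
```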
Comparing the results of Table 2 and Table 3 shows that all the learning to rank methods outperform all the regression models. Table 4: Ranking aggregation results NDCG@10 LTRs 0.8242044953 REGs 0.8063031984 LTRs+REGs 0.8261454943 Table 4 represents the results obtained by aggregating the mentioned regression and learning to rank results using supervised Kemeny approach. To show the importance of considering both regression and learning to rank methods together, we also report the results achieved by aggregating all the LTR methods and all the regression methods, separately. Table 4 indicates that although most of the results of regression models are far lower than the LTR methods, their aggregation improves the results. It shows that aggregating regression and learning to rank methods achieves better results in comparison with aggregating only LTR methods or regression models. To show that this improvement is significant, we performed 10-fold cross validation over the training data and conducted a statistical signi\ufb01cant test (t-test) on the improvements of LTRs+REGs over the other methods. The results show that the improvement achieved by LTRs+REGs is statistically signi\ufb01cant (p \u2212value < 0.01). 5." + } + ], + "Nick Craswell": [ + { + "url": "http://arxiv.org/abs/2105.04021v1", + "title": "MS MARCO: Benchmarking Ranking Models in the Large-Data Regime", + "abstract": "Evaluation efforts such as TREC, CLEF, NTCIR and FIRE, alongside public\nleaderboard such as MS MARCO, are intended to encourage research and track our\nprogress, addressing big questions in our field. However, the goal is not\nsimply to identify which run is \"best\", achieving the top score. The goal is to\nmove the field forward by developing new robust techniques, that work in many\ndifferent settings, and are adopted in research and practice. This paper uses\nthe MS MARCO and TREC Deep Learning Track as our case study, comparing it to\nthe case of TREC ad hoc ranking in the 1990s. We show how the design of the\nevaluation effort can encourage or discourage certain outcomes, and raising\nquestions about internal and external validity of results. We provide some\nanalysis of certain pitfalls, and a statement of best practices for avoiding\nsuch pitfalls. We summarize the progress of the effort so far, and describe our\ndesired end state of \"robust usefulness\", along with steps that might be\nrequired to get us there.", + "authors": "Nick Craswell, Bhaskar Mitra, Emine Yilmaz, Daniel Campos, Jimmy Lin", + "published": "2021-05-09", + "updated": "2021-05-09", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.AI", + "cs.LG" + ], + "main_content": "INTRODUCTION MS MARCO is a series of datasets, the first of which released in 2016, aiming to help academic researchers explore information access in the large-data regime [3]. The MS MARCO datasets have been a boon for neural IR researchers to support their explorations of ever arXiv:2105.04021v1 [cs.IR] 9 May 2021 \flarger and richer models with an insatiable appetite for more (supervised) training data. Over the past few years, the datasets have been used in tasks ranging from keyphrase extraction to question answering to text ranking. Of these tasks, the passage ranking and document ranking tasks have received the most attention from the research community; both are associated with competitive leaderboards1 and the TREC Deep Learning Track [13\u201315]. 
They are standard ad hoc retrieval tasks, with the major difference being the length of the documents that are retrieved: the passage ranking task works with paragraph-length segments of text, while the document ranking task works with full-length web pages. Figure 1 summarizes both leaderboards, passage on the left and document on the right. The x-axes represent time, from the introduction of the leaderboards until early 2021. Each point represents a submission: the x-axis plots the date of submission and the y-axis plots the official metric (MRR@10 for passage and MRR@100 for document). Circles in red represent the (current and former) state of the art (SOTA) runs, i.e., a top-scoring run on the leaderboard, beginning with the first submission that beat organizer-supplied baselines. On the left panel of Figure 1, for the passage leaderboard, the large jump in the SOTA in January 2019 represents the work of Nogueira and Cho [54], which is the first known application of pretrained transformers to a ranking task. This is considered by many to be a watershed moment in IR, as it ushered in a new era of research dominated by the use of pretrained transformer models. Runs whose description contains the word "BERT" are shown in orange in the left panel. From the multitude of orange points, we can see the immediate dominance of BERT-based techniques right after its introduction; this is likely even an under-estimate, since there are many ranking models based on pretrained transformer models that do not have BERT in their name (e.g., ELECTRA, T5, etc.). We did not repeat the same coloring in the document leaderboard because, based on our observations, BERT has become so ingrained that its name is nowadays omitted from the model descriptions. Prior to the advent of MS MARCO, deep neural methods in IR were largely being benchmarked on proprietary datasets (e.g., [34, 51, 87]), non-English datasets (e.g., [16, 82]), synthetic datasets (e.g., [52, 72]), or under weak supervision settings (e.g., [19, 87]). This made it difficult for the community to compare these emerging methods against each other, as well as against well-tuned traditional IR methods, which led to concerns [41] in the IR community as to whether "real progress" was being made. After the release of the MS MARCO dataset, some of these neural methods (e.g., [16, 51]) reproduced their claimed improvements over traditional methods on the public leaderboard. BERT put any remaining concerns to rest, as can be seen not only in the initial big jump in effectiveness but also in the continued upward progress of the SOTA, in both the document and passage ranking leaderboards. The effectiveness of BERT was widely reproduced and shown to be a robust finding, leading Lin [42] to later retract their criticisms. The MS MARCO datasets have been instrumental in driving this progress because they enabled all researchers (not only those in industry) to examine neural techniques in the large-data regime. (Footnote 1: http://msmarco.org) The impact of data is shown in Figure 2, taken from Nogueira et al. [56]. [Figure 2: Effectiveness of BERT-base trained with different numbers of training instances (note the log scale in the x-axis). Results report means and 95% confidence intervals over five trials. Taken from Nogueira et al. [56].] The figure shows the effectiveness of BERT-base as a
reranker trained with different numbers of training instances (note the log scale in the \ud835\udc65-axis). Results report means and 95% confidence intervals over five trials. As expected, the more the data, the better the effectiveness. As pointed out by some researchers [44], to a large extent, the rapid progress made in the IR community would not have been possible without MS MARCO. So what is the state of the field at present? We can summarize as follows: (1) the MS MARCO datasets have enabled large-data exploration of neural models, and (2) from the leaderboards, it appears that progress continues unabated. But is the \u201cSOTA\u201d progress meaningful? Is MRR a good metric? Are all the top runs tied, with an exhausted leaderboard? Have we seen multiple submission and overfitting? If we change the test data slightly, as a test of external validity, do our findings hold up? Are these easy to deply, with a standard playbook? We describe what is required to make more progress, towards having many evaluation with internal validity, external validity and robust usefulness. 2 REQUIREMENTS TO ADVANCE THE STATE OF THE ART This section outlines some steps that are required to make a valid and useful contribution to the state of the art in ad hoc ranking. Valid because we are sure it is an improvement. Useful because the improvement is easy to deploy in many different real-world applications. We first describe an older improvement, where significantly better rankers such as BM25 were developed using TREC data in the 1990s. We then consider the same criteria for BERT-style rankers using the MS MARCO and TREC Deep Learning Track data. This is a checkpoint on our progress so far, it motivates some of our analysis in this paper and identifies important future work. 2.1 BM25 and TREC data New data can move the field forward. For example, TREC [76] introduced test collections starting in 1991 led to a new generation \fof ranking functions. The test collections did not have a large set of training queries, encouraging the development of ranking functions that work well in a small training data regime. The number of query topics used in each evaluation was 50. Compared to previous evaluation efforts, TREC documents were longer, and they varied in length, writing style, level of editing and vocabulary [32]. By the third year of the effort, this led to the development of new ranking functions that dealt with variation in document length significantly better than previous ranking functions, including Okapi BM25 [61]. Today the \u201cOkapi at TREC-3\u201d paper has 2,420 citations in Google Scholar and searching for that string seems to mostly give papers about information retrieval (checking a few) with an estimated 15,700 results. BM25 was developed just before the appearance of the first Web search engines, but was found to work well on Web documents and was also commonly used in learning to rank data sets, many of which used web data [45]. Papers might use BM25 features in a learning to rank data set without mentioning BM25, but it still had impact. Many real-world information retrieval systems implement BM25 and it has most likely been evaluated on many proprietary data sets, not just with TREC-style evaluation, but also with online tests such as interleaving and A/B tests [33]. Internal validity. With each study, there is a risk that the conclusions we draw are not reliable. Here we focus on statistical and mathematical correctness of the study [7]. 
A study can be under powered, meaning that we can not draw finer-grained conclusions. For example, we can identify that BM25 is significantly better than a plain tf-idf implementation, but it may be statistically indistinguishable from other modern BM25-like functions. Multiple testing and selective publication can harm the internal validity of our studies [15]. Statistical significance tests tell us how likely our findings may hold up on a new sample of data from the same distribution. However, if we run multiple tests on the same sample, and selectively report the best outcomes on that sample (and the bad outcomes may be rejected even if reported in a paper), the chances of that result holding on a new sample are reduced. The best practice for avoiding multiple testing is to avoid reuse of the data, such as an online A/B test, where each new test is on live data, without reuse. Submitting to evaluation efforts such as TREC also avoids reuse, since each year generates a new set of single-shot submissions on a new set of queries. Public leaderboards are a bit worse, since they allow multiple submission to the same dataset. We will discuss methods of reducing the harm and we will analyze the extent of the problem in MS MARCO leaderboards. The most harmful case for multiple testing is with reusable test collections, which allow unlimited iteration on a test set with no public registration of what was done. There have been some claims that the field has a problem with this kind of validity [2] although that paper was not questioning that BM25 was an improvement, but rather questioning whether subsequent studies improved on methods such as BM25 from the 1990s. If IR metrics are not on an interval scale, as was argued recently by Ferrante et al. [23], Fuhr [29], this is also an internal validity problem. If commonly-used metrics such as Mean Average Precision (MAP) and Normalized Discounted Cumulative Gain (NDCG) are not on an interval scale, then reporting the mean of the metric and doing a statistical test on difference of means is not valid. Many forms of evaluation used on BM25 did calculate mean metrics with ttests. However, a model that has been very widely deployed such as BM25 has also been tested in online interleaving and A/B tests with large numbers of users, which may not have the same problems. There is also evidence that sufficiently powered online experiments of this sort can agree with a TREC-style NDCG metric [57]. External validity. A study could be internally valid, with statistical tests that indicate how well the results will hold up on a new identically-distributed sample of data, but still lack external validity. Here we focus on slight changes in the data distribution, such as moving to a slightly different document distribution, query distribution or relevance judging scheme. Zobel and Moffat [94] evaluated many BM25-style rankers on six different data distributions (which they called domains), coming from two different document collections, each with title, narrative or full queries. Their finding was that there was no clear best method with \u201csuccess in one domain was a poor predictor for success in another\u201d. The TREC finding from the 1990s, that BM25-like rankers improved on pre-TREC rankers, has good evidence of external validity. BM25 has been tested on many datasets in industry and academia, on public and private datasets, with TREC-style evaluation and presumably with online metrics. 
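Since BM25 anchors so much of this discussion, a minimal sketch of an Okapi-style BM25 scoring function is included below. The parameter values, tokenization and toy corpus are illustrative assumptions, not the exact configuration evaluated at TREC.

```python
import math
from collections import Counter

def bm25_score(query_terms, doc_terms, doc_freqs, num_docs, avg_doc_len,
               k1=0.9, b=0.4):
    """Score one document for one query with an Okapi BM25-style function."""
    tf = Counter(doc_terms)
    score = 0.0
    for term in query_terms:
        df = doc_freqs.get(term, 0)
        if df == 0:
            continue  # term never appears in the collection
        idf = math.log(1.0 + (num_docs - df + 0.5) / (df + 0.5))
        numer = tf[term] * (k1 + 1.0)
        denom = tf[term] + k1 * (1.0 - b + b * len(doc_terms) / avg_doc_len)
        score += idf * numer / denom
    return score

if __name__ == "__main__":
    docs = ["okapi bm25 ranking function".split(),
            "test collections for ad hoc retrieval".split()]
    doc_freqs = Counter(t for d in docs for t in set(d))
    avg_len = sum(len(d) for d in docs) / len(docs)
    query = "bm25 ranking".split()
    for i, d in enumerate(docs):
        print(i, round(bm25_score(query, d, doc_freqs, len(docs), avg_len), 3))
```

The small number of free parameters (k1 and b), and the fact that reasonable defaults work well untuned, is part of why the method has been so easy to deploy and reuse.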
It has been selected as a powerful feature many times, by many different machine learned rankers, on many different data distributions. It would be incorrect to say that the improved performance identified in TREC-3 only held up on TREC-3 data or datasets with identical distributions. Robust usefulness. BM25 is not only valuable on many different settings, but it is useful, robust and easy to deploy. It has a small number of free parameters, but if these are not tuned then the performance is still good. BM25 can be included in an IR system without needing extra training data, without needing a PhD (or PhD student) to carry out finetuning. The chances of BM25 giving very bad results in a new setting are low. 2.2 BERT-style rankers and MS MARCO data In the case of MS MARCO, the main difference from TREC data is the presence of large training data, with hundreds of thousands of training queries. This encourages the development of rankers that can work well in the large-data regime, such as BERT-style rankers. Have these rankers been evaluated with internal and external validity, in a way that is robustly useful when deployed? Let us assess how far we are from this goal. Internal validity. Multiple testing is a problem in our field, we discourage multiple submission in several ways. We have experiments in the TREC Deep Learning Track, where there is a single-shot submission each year, which is the gold standard for avoiding data reuse. We then retire the data as a reusable test collection, which is the worst case here, very vulnerable to multiple testing and the tests that do not show a gain may not be written up as papers and/or may not be accepted. We also have a leaderboard, which allows multiple submission, but we discourage multiple submissions. First, we limit how frequently each group submits. Second, every submission is public, so we can see which groups seem to be p-hacking and slowly overfitting to the test data through multiple submission. Third, with each submission we have a small number of queries \fthat are not used for the leaderboard metric. We will analyze the extent to which the evaluation on these held out queries diverges from the evaluation using the queries in the leaderboard, which could happen if participants are iterating on their submissions and using the numbers on the public leaderboard as their guide. The other threat to internal validity is whether we can find repeatable and valid differences between leaderboard runs. Perhaps the top runs are all statistically indistinguishable, and after a while we should stop the evaluation. Perhaps due to the questions in the field about the interval scale, we shouldn\u2019t be using the mean and t-test approach that many papers in the field use. We will analyze the reliability of our leaderboard under different statistical tests and also use bootstrapping to analyze its reliability. External validity. Eventually, if BERT-style rankers are widely adopted, they will be evaluated in many different settings using many different metrics. However, when we first saw good leaderboard results from ML-heavy approaches, we were suspicious that the improvements would only hold true if the training and test data were independent identically distributed (IID) samples form the same distribution. 
For example, there could be quirks of the MS MARCO sparse labeling that pretrained transformer models can learn, giving good performance on MS MARCO sparse labels in the test set, but the improvements would vanish if we relabeled the data with a slightly different judging scheme. In that case, the results would be specific to the setup of our study, lacking external validity. We could only claim a real improvement if we think real users have exactly the same quirks as the MS MARCO labels. We test this in two ways. First, we set up the TREC experiment with a slight data mismatch between the train and test data. Specifically, NIST judging selects queries that have the right level of difficulty (not too easy nor too hard) and, instead of roughly one positive result per query as in MS MARCO, TREC judges label many documents per query on a 4-point relevance scale. In the DL track, we found that training on the sparse labels does allow a big improvement on the test set, despite the slightly different data distributions [13, 14]. Second, in the document ranking leaderboard of MS MARCO we included some queries that are not used in the public leaderboard. This allows us to do a private leaderboard analysis, in this case on the 45 TREC 2020 queries, using NIST labels (as well as the sparse labels on the same queries). These are small steps to ensure that the BERT-style rankers will perform well in many applications; this gets confirmed over time in industry and academia with tests on many proprietary and public datasets. Robust usefulness. We survey all the different ways people are using BERT-style rankers. We discuss our concerns about whether we have really established a playbook yet, making it easy for a non-PhD to deploy this kind of ranker in a new application in a way that truly works better than previous rankers.

3 MS MARCO LEADERBOARD VALIDITY ANALYSIS
To test the validity of our leaderboard, we first analyze its ability to distinguish different runs using a variety of parametric and nonparametric statistical tests. We also use bootstrapping to analyze the leaderboard stability, which in some cases can indicate that the top-ranked result was lucky (Table 4 of [8]). We also use bootstrapping analysis to test external validity, using our private leaderboard. These 45 TREC-2020 queries were part of every submitted run, but did not contribute to the public leaderboard numbers. We analyze whether the leaderboard conclusions generalize to these held-out queries, using sparse MS MARCO labels and also using TREC labels. Finally, since we are concerned about multiple submission, we analyze the leaderboard with respect to multiple submissions from the same group, to see if they seem to be benefitting from these submissions and whether their movement on the private leaderboard is different from that on the public leaderboard.

Table 1: Passage ranking leaderboard bootstrap analysis (rank under bootstrapping).
Leaderboard run | Rank 1 | Rank 2 | Rank 3 | Rank 4 | Rank 5
1st | 72.7% | 25.4% | 1.9% | 0% | 0%
2nd | 24.2% | 62.5% | 13.3% | 0% | 0%
3rd | 3.1% | 12.1% | 83.9% | 0.8% | 0.1%
4th | 0% | 0% | 0.6% | 47.0% | 27.1%
5th | 0% | 0% | 0.2% | 34.5% | 34.0%

Table 2: Document ranking leaderboard bootstrap analysis (rank under bootstrapping).
Leaderboard run | Rank 1 | Rank 2 | Rank 3 | Rank 4 | Rank 5
1st | 91.2% | 7.4% | 1.4% | 0% | 0%
2nd | 6.8% | 61.7% | 21.1% | 8.6% | 1.4%
3rd | 1.6% | 22.7% | 36.8% | 20.2% | 12.2%
4th | 0.4% | 5.4% | 17.7% | 27.0% | 25.1%
5th | 0% | 0.5% | 15.9% | 21.2% | 22.9%

3.1 Public leaderboard stability
We analyze overall leaderboard stability using bootstrapping, similar to previous work by Caruana et al.
[8], which avoids running many pairwise statistical tests. For each leaderboard we run 1000 bootstrapping trials, comparing the top-ranked runs, the most recent runs and baseline runs. Each bootstrapping trial samples a queryset of the same size as the original queryset, with replacement. Our first question is whether the leaderboard's top ranks are stable under bootstrapping. If we saw that many top runs had a similar chance of being top-ranked, we might conclude that the leaderboard is exhausted. It is even possible to find that the top run on the leaderboard was lucky, and some other run has more appearances at the top under bootstrapping [8]. Tables 1 and 2 show the top-5 stability of the passage and document leaderboards, respectively. It is not the case that the top ranks are all tied. The 1st ranked run on each leaderboard never drops below position 3 in any of the 1000 trials. The tables show some indication that lower ranked results are less certain, for example the 5th ranked run has less than 50% chance of being in position five. This can happen when two runs are similar. In the document leaderboard, the 5th and 6th run have expected ranks of 5.1 and 5.4 over 1000 trials.

[Figure 3: Full results of document leaderboard bootstrap. Runs 1-5 show the same results as Table 2.]
[Figure 4: Rank positions of three leaderboard runs under bootstrapping. Metrics are MRR and NDCG@10. The querysets are the 5,793 Public leaderboard queries and the 45 Private leaderboard queries from TREC-2020. The Private queries can be evaluated with sparse MS MARCO labels or comprehensive TREC labels.]

The overall stability of the document leaderboard under bootstrapping is shown in Figure 3. There are some runs with similar performance lower down in the ranking, having very similar rank distributions. The official baseline at 38th position was ranked 38th in all 1000 bootstrapping trials. Overall it was very unlikely under bootstrapping that a lower-ranked run would overtake a top-ranked run, leading us to conclude that the leaderboard is quite stable. The top-ranked run is not there by luck.

3.2 Private leaderboard
It is possible for a leaderboard to be stable, as in our bootstrapping analysis, but still have overfitting due to multiple submission. One way of detecting this is to have a private leaderboard, where each of the submissions can be tested on a held-out dataset. If participants are using the public leaderboard to overfit to the test queries, we would see their performance increase on the public query set, and decrease on the private query set. To allow this sort of testing, we included some additional queries that were run by every participant in the document leaderboard. Here we use the 45 TREC 2020 queries for our analysis. Including our earlier bootstrapping on the Public leaderboard, we now have full bootstrapping analysis with 1000 trials on six alternatives: the metric is MRR or NDCG@10, the query set is Public or Private, and relevance labels on the Private queries are sparse MS MARCO labels or comprehensive TREC labels. Instead of showing the full bootstrapping results for all six combinations, we summarize three key runs and their performance under bootstrapping. Figure 4 shows this analysis.
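The bootstrap procedure behind Tables 1 and 2 and Figures 3 and 4 is straightforward to reproduce. The sketch below is a minimal version that assumes per-query scores for each run are already available as a runs-by-queries matrix; the array used in the example is random toy data, not actual leaderboard submissions.

```python
import numpy as np

def bootstrap_rank_counts(per_query_scores: np.ndarray, trials: int = 1000,
                          seed: int = 0) -> np.ndarray:
    """per_query_scores: shape (num_runs, num_queries), e.g. per-query RR or NDCG@10.
    Returns counts[r, k] = how often run r landed at rank k (0 = best) over the trials."""
    rng = np.random.default_rng(seed)
    num_runs, num_queries = per_query_scores.shape
    counts = np.zeros((num_runs, num_runs), dtype=int)
    for _ in range(trials):
        sample = rng.integers(0, num_queries, size=num_queries)  # sample with replacement
        means = per_query_scores[:, sample].mean(axis=1)
        order = np.argsort(-means)  # best run first
        for rank, run in enumerate(order):
            counts[run, rank] += 1
    return counts

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    toy = rng.random((5, 43))  # 5 runs, 43 queries of toy per-query scores
    print(bootstrap_rank_counts(toy, trials=1000) / 1000.0)
```

Rows of the resulting matrix correspond to runs and columns to their rank distribution, which is how the percentages in Tables 1 and 2 are read.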
The top run on the public leaderboard has its ranks more spread out on the smaller Private leaderboard queries, since there are 45 rather than 5793 queries. It is overtaken by other runs not only on certain bootstrap trials, but other runs even have a better expected rank. For the MS MARCO labels, the top leaderboard run is ranked fourth in expectation for MRR and third in expectation for NDCG@10. For the TREC labels, the top run is fifth and sixth. To explain why we saw greater rank for the TREC labels, we note that some runs submitted to the leaderboard were also submitted to TREC and may have used the TREC 2019 labels for training. The TREC run we highlight in the figure is the ranker from University of Waterloo, that achieved the best NDCG@10 at TREC. This could be seen as a lack of external validity, that the top run on MS MARCO labels is not as highly-ranked on TREC labels. It could also be seen as an indication that extra adaptation and training for the target domain is useful. Overall the official baseline run is significantly worse than our top run and TREC run, under all six conditions. 3.3 Multiple submission To avoid overfitting to our eval queries, the MS MARCO leaderboards have rules limiting multiple submission. We allow each participating group to submit no more than two runs per month, and no more than one run with very small changes such as hyperparameter tuning or random seeds. This makes it more difficult for participants to try minor variations until they get lucky with a higher leaderboard submission. We also track all the submissions, so if a group is submitting many runs we can analyze what they submitted and how often. We believe this makes it much more difficult for participants to overfit, compared to a reusable test collection which allows unlimited iteration with no public record. We already presented detailed bootstrapping results for the document leaderboard. Now, we consider the institution that submitted \fthe top run, and whether there is a risk of overfitting. On the document leaderboard, grouped by institution, the three institutions with the most submissions had 12, 11 and 7 submissions. However, the top run came from an institution with only two submissions. Also, on Private MS MARCO evaluation (Figure 4) the top run is still in the top five runs in expectation. If it were only there by overfitting, we do not think it would be in the top five of forty. There is still a risk of cross-group overfitting, as groups learn about \u201cwhat works\u201d some of what they learn may be about this samlple of test queries. For example, different groups can share code and ideas. They can converge to a common solution, with many groups submitting slight variations, and one group can get lucky. Through this process of code sharing they can also form an ensemble of promising approaches, and since the promising approaches were selected using the evaluation set, this is another form of overfitting. In future we will use the MS MARCO judgments on the private leaderboard to monitor for overfitting. 4 IR METRICS AND THE INTERVAL SCALE Commonly used information retrieval metrics, including the ones we employed in our leaderboard evaluation such as NDCG [36] and MRR [12], have recently been criticized by Ferrante et al. [24, 25] for not being interval-scale, which would imply that computing their mean values across different queries is not meaningful. 
Instead, they argue that most IR metrics tend to be in ordinal scale, implying that we should be using the median metric value as opposed to the mean when we aggregate these values across different queries. This ignited a debate in the IR community with Fuhr [28] arguing that it is, therefore, not meaningful to compute the mean of MRR and ERR metrics over multiple relevance topics. Sakai [65] subsequently disagreed citing that this line of reasoning would render many other IR metrics inappropriate and that many of these metrics are practically useful, even if not theoretically justified. More recently, Ferrante et al. [23] have furthered the argument made by Fuhr [28] to point out that indeed for many well-known and commonly used IR metrics it is inappropriate to compute their average. Because the MS MARCO labels are binary and sparse, we chose to report MRR as our primary metric on the leaderboard. Similarly at TREC, the Deep Learning track has focused on NDCG, NCG [62], MRR, and MAP [93]. So, the validity of these metrics is an important consideration in the context of benchmarking on MS MARCO. Our position in this paper is that Ferrante et al. [23] have raised a valid issue and indeed there is no reason to assume that metrics like MRR, MAP, and NDCG are on an interval scale. However, we do not fully agree with their theoretical argument and recommendations, and present an alternative viewpoint here. 4.1 Preliminaries The theoretical argument presented by Ferrante et al. [23] is grounded in the representational theory of measurement [38] which views measurement as the process of mapping real world entities to numbers such that some entity attributes are represented faithfully as numerical properties. Before we analyze their argument, we define a few preliminary concepts and notations for our reader. We adopt the same notation as Ferrante et al. [23] here for consistency. Definition 1 (Relational structure). A relational structure is an ordered pair A = \u27e8\ud835\udc34, \ud835\udc45\ud835\udc34\u27e9where \ud835\udc34is a domain set and \ud835\udc45\ud835\udc34is a set of relations on \ud835\udc34. If the \ud835\udc34is a set of entities then we refer to it as an empirical relational structure. In contrast, in case of numerical or symbolic relational structure \ud835\udc34is a set of numbers. Definition 2 (Homomorphism). Given two relational structures A1 and A2, a homomorphism M : A1 \u2192A2 is a mapping M : \u27e8\ud835\udc40, \ud835\udc40\ud835\udc45\u27e9such that, i. The function \ud835\udc40maps \ud835\udc341 to \ud835\udc40(\ud835\udc341) \u2286\ud835\udc342 ii. The function \ud835\udc40\ud835\udc45maps \ud835\udc45\ud835\udc341 to \ud835\udc40(\ud835\udc45\ud835\udc341) \u2286\ud835\udc45\ud835\udc342, such that \u2200\ud835\udc5f\u2208\ud835\udc45\ud835\udc341, \ud835\udc5fand \ud835\udc40(\ud835\udc45\ud835\udc341) have the same arity iii. \u2200\ud835\udc5f\u2208\ud835\udc45\ud835\udc341, if the relation \ud835\udc5fholds between some elements from the domain set \ud835\udc341 then the image relation \ud835\udc40(\ud835\udc45\ud835\udc341) should also hold for the corresponding image elements in \ud835\udc342. Note that we use homomorphism instead of isomorphism because \ud835\udc40is typically not a one-to-one mapping. Definition 3 (Measurement). A measurement (scale) is the homomorphism M : E \u2192N that maps from the empricial relation structure \ud835\udc38to the numerical relational structure \ud835\udc41. 
The mapping of an element \ud835\udc52\u2208\ud835\udc38to a number \ud835\udc5b\u2208\ud835\udc41is called a measure. Definition 4 (Difference structure). An empirical relational structure E = \u27e8\ud835\udc38, \u2aaf\u27e9is a difference structure if \u2200\ud835\udc4e,\ud835\udc4f\u2208\ud835\udc38it defines a difference \u0394\ud835\udc4e\ud835\udc4fand satisfies the following axioms: i. \u2aafis a weak order\u2014i.e., \u2aafis a binary relation on \ud835\udc38\u00d7 \ud835\udc38such that \u2200\ud835\udc4e,\ud835\udc4f,\ud835\udc50\u2208\ud835\udc38it satisfies: (a) \ud835\udc4e\u2aaf\ud835\udc4for \ud835\udc4f\u2aaf\ud835\udc4e, and (b) \ud835\udc4e\u2aaf\ud835\udc4f and \ud835\udc4f\u2aaf\ud835\udc50=\u21d2\ud835\udc4e\u2aaf\ud835\udc50. ii. \u2200\ud835\udc4e,\ud835\udc4f,\ud835\udc50,\ud835\udc51\u2208\ud835\udc38, \u0394\ud835\udc4e\ud835\udc4f\u2aaf\u0394\ud835\udc50\ud835\udc51=\u21d2\u0394\ud835\udc51\ud835\udc50\u2aaf\u0394\ud835\udc4f\ud835\udc4e iii. \u2200\ud835\udc4e1,\ud835\udc4f1,\ud835\udc501,\ud835\udc4e2,\ud835\udc4f2,\ud835\udc502 \u2208\ud835\udc38, \u0394\ud835\udc4e1\ud835\udc4f1 \u2aaf\u0394\ud835\udc4e2\ud835\udc4f2 and \u0394\ud835\udc4f1\ud835\udc501 \u2aaf\u0394\ud835\udc4f2\ud835\udc502 =\u21d2 \u0394\ud835\udc4e1\ud835\udc501 \u2aaf\u0394\ud835\udc4e2\ud835\udc502 iv. \u2200\ud835\udc4e,\ud835\udc4f,\ud835\udc50,\ud835\udc51\u2208\ud835\udc38, if \u0394\ud835\udc4e\ud835\udc4e\u2aaf\u0394\ud835\udc50\ud835\udc51\u2aaf\u0394\ud835\udc4e\ud835\udc4f, then there exists \ud835\udc65,\ud835\udc66\u2208\ud835\udc38 such that \u0394\ud835\udc4e\ud835\udc65\u223c\u0394\ud835\udc50\ud835\udc51\u223c\u0394\ud835\udc66\ud835\udc4f(Solvability Condition) According to the representation theorem for difference structures, if there is a difference structure on the empirical set \ud835\udc38then there must exist an interval scale \ud835\udc40. 4.2 Analysis of argument by Ferrante et al. Having covered the preliminaries, we now take a closer look at the argument that Ferrante et al. [23] make in their work. They define a domain set over search result page (SERP) states, where each SERP state is a unique rank-ordered list of relevance grades. For example, under this notation a SERP with three documents with a relevant document at rank two and nonrelevant documents at rank one and three corresponds to a SERP state denoted by the tuple (0, 1, 0). For example, if we consider the universe of all SERPs with exactly three documents and binary relevance grades, then the domain set \ud835\udc38over all possible SERP states is \ud835\udc46= {(1, 1, 1), (1, 1, 0), (1, 0, 1), (1, 0, 0), (0, 1, 1), (0, 1, 0), (0, 0, 1), (0, 0, 0)}. Ferrante et al. [23] argue that if we can define a difference structure over S = \u27e8\ud835\udc46, \u2aaf\u27e9then it would imply the existence of a corresponding interval scale. However, for S to satisfy the Solvability Condition requires the metric to be equi-spaced between any two neighboring items in a partial ordering of the domain set \ud835\udc46. For example, Table 3 shows that MRR can take four discrete values [1.00, 0.50, 0.33, 0.00] in context of the same example scenario with SERPs of fixed length three and binary \fTable 3: A tabular representation of the domain set \ud835\udc46of all SERPs with exactly three results with binary relevance grades and corresponding MRR values. A C D B \ud835\udc60\u2208\ud835\udc46 (1,1,1) (1,1,0) (1,0,1) (1,0,0) (0,1,1) (0,1,0) (0,0,1) (0,0,0) RR 1.00 1.00 1.00 1.00 0.50 0.50 0.33 0.00 relevance grades. 
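To make the content of Table 3 concrete, the following short enumeration lists all SERP states under the stated assumptions (exactly three results with binary grades) and shows that reciprocal rank over this domain can only take the values 1, 1/2, 1/3 and 0, which is the gap structure the argument turns on. The snippet is purely illustrative.

```python
from itertools import product

def reciprocal_rank(grades):
    """RR of a ranked list of binary relevance grades."""
    for rank, g in enumerate(grades, start=1):
        if g == 1:
            return 1.0 / rank
    return 0.0

if __name__ == "__main__":
    states = list(product([1, 0], repeat=3))  # all SERP states of length three
    for s in states:
        print(s, round(reciprocal_rank(s), 2))
    values = sorted({round(reciprocal_rank(s), 2) for s in states}, reverse=True)
    print("realizable RR values:", values)  # [1.0, 0.5, 0.33, 0.0]
```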
If we consider the four specific SERP states labeled A\u2013D, we observe that the Solvability Condition is violated because the presence of \u0394\ud835\udc36\ud835\udc37= 0.17 implies there should exist \ud835\udc4b,\ud835\udc4c\u2208\ud835\udc46, such that \u0394\ud835\udc34\ud835\udc4b= \u0394\ud835\udc4c\ud835\udc35= \u0394\ud835\udc36\ud835\udc37= 0.17. The key argument that Ferrante et al. [23] make is that MRR is not interval-scale because values of 0.17 and 0.83 are not realizable under this setting. However, our position is that relevance metrics like MRR and NDCG are fundamentally not measurements over SERP states, but instead they measure user perceived relevance of the SERPs. Hence, the difference structure should not be applied on the domain set \ud835\udc46 of all possible SERP states, but instead on the domain set \ud835\udc48of all possible user-perceived relevance states. We argue that an appropriate IR metric should be equi-spaced relative to user perception of relevance such that a change in 0.1 in the metric at any point on the scale (e.g., 0.3 \u21920.4 vs. 0.75 \u21920.85) should correspond to same difference in user-perceived relevance. In other words, it is irrelevant if a three-document SERP cannot realize a MRR value of 0.17 as long as we believe that there exists some user-perceived relevance state that corresponds to that value of the metric. Now, there is no reason to believe that IR metrics without further calibration would be equi-spaced on the scale of user-perceived relevance. We therefore agree with Ferrante et al. [23] that computing mean of many IR metrics may be inappropriate. But the difference between their argument and ours points to different recommendations for addressing these concerns. To remedy the situation, Ferrante et al. [23] propose ranked versions of common IR metrics that are equi-spaced over \ud835\udc38. While the mean value of these ranked-metrics may be more meaningful from the viewpoint of representational theory of measurement [38], it is possible, if not likely, that it reduces the correspondence of the metric to user-perceived relevance. By our criteria, the better approach is to conduct lab studies and online studies with real users, to understand their preferences and how this reveals their underlying notion of utility, so we can develop metrics that are on an interval scale in user value. 4.3 Reliability of statistical tests In this section we analyse the effect of IR metrics not being intervalscale on evaluation outcomes in practice. Apart from the mean not being very meaningful when aggregating metrics that are not in interval scale across different queries, Ferrante et al. [23] have also raised concerns about the reliability of using some of the commonly used significance tests such as t-test or the Wilcoxon Signed Rank, which require that the values are in interval scale. They have instead argued that sign test or the Wilcoxon Rank Sum test should be used with ordinal measurements, such as most top heavy IR metrics. Aforementioned issues raised by Ferrante et al. [23] could also raise questions regarding the reliability of the evaluation results obtained through leaderboards like MS MARCO. Previous work [35, Table 4: Agreement rates for different significance tests across 100 different query set splits for different task and metric combinations. (a) document ranking using MRR Sign T. WX RS WX SR t-test Sign T. Med. WX RS Med WX SR Med agree 93.3% 92.1% 91.2% 91.7% 80.2% 81.2% 80.7% part. 
agree 3% 3% 3% 3% 16.1% 16.1% 16.1% disagree 3.7% 4.9% 5.8% 5.3% 3.7% 2.7% 3.2% perc. signif. 95.0% 79.7% 92.7% 79.8% 95.0% 79.7% 92.7% (b) passage ranking using MRR Sign T. WX RS WX SR t-test Sign T. Med. WX RS Med WX SR Med agree 92.8% 91.6% 90.8% 90.8% 80.6% 81.4% 81.2% part. agree 3.3% 3.3% 3.3% 3.3% 15.5% 15.5% 15.5% disagree 3.9% 5.1% 5.9% 5.9% 3.9% 3.1% 3.3% perc. signif. 94.6% 79.8% 92.8% 79.9% 94.6% 79.8% 92.8% (c) document ranking using NDCG Sign T. WX RS WX SR t-test Sign T. Med. WX RS Med WX SR Med agree 94.9% 92.2% 91.0% 91.8% 85.2% 84.2% 83.3% part. agree 1.8% 7.7 % 8.4 % 8.1% 1.8% 10.4% 6.2% disagree 3.2% 0.1% 0.6% 0.1% 13.0 5.4% 10.5% perc. signif. 97.9% 81.1% 93.3% 81.4% 97.9% 81.1% 93.3% 67] has indicated that violation of certain assumptions by some significance tests, in particular, the normality assumption for the t-test, do not have a big effect on the conclusions reached using such tests in practice. This raises the question as to how much effect the metric not being in an interval scale could affect the reliability of evaluation results obtained using different significance tests, or using different aggregation methods (i.e., mean vs. median). In order to answer this question, we adopt a similar method as the one used by Buckley and Voorhees [6] for evaluating evaluation stability. We divide our query set into two random subsets and for each pair of systems submitted to the leaderboard, we evaluate as to whether the conclusions reached based on the evaluation results obtained using the two subsets agree with each other. We repeat this process 100 times, generating 100 random splits and compute the agreement rates across the different subsets. If the evaluation results are reliable, we expect the results to be robust to the changes in the query sample and hence, the agreement rates should be high. When we compare evaluation results across the two subsets, we use the following definition for agreement, partial agreement and disagreement. Evaluation results in the two subsets: \u2022 Agree with each other if the two subsets agree as to which system is better, and the difference is: (i) statistically significant according to both subsets, or (ii) not significantly different according to both subsets, \u2022 Partially agree with each other if (i) the two subsets agree as to which system is better, and the difference is significant according the one subset but not significant according to the other, or (ii) the two subsets disagree as to which system is better, but the difference is not statistically significant according to both sides, \f\u2022 Disagree with each other if the two subsets disagree as to which system is better, and the difference is: (i) statistically significant according to both subsets, or (ii) statistically significant according the one subset but not significant according to the other We report the results of this experiment using MRR as the metric for the document and passage ranking tasks in Table 4a and 4b, respectively. Each colum in the tables shows the agreement rates obtained when a different significance test is used in evaluation, focusing on the sign test (Sign T), Wilcoxon Rank-Sum test (WX RS), Wilcoxon Signed-Rank test (WX SR), and t-test as the significance tests. 
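A sketch of the split-half comparison underlying this analysis is given below. It assumes per-query scores for a pair of systems are available as arrays, uses SciPy for the Wilcoxon and t-tests, implements the sign test with a binomial test (scipy.stats.binomtest, available in recent SciPy releases), and leaves the agree / partially agree / disagree bookkeeping over the 100 splits to the caller. The alpha level and the toy data are assumptions for illustration only.

```python
import numpy as np
from scipy import stats

def sign_test_pvalue(a: np.ndarray, b: np.ndarray) -> float:
    """Two-sided sign test on paired per-query scores (ties dropped)."""
    diffs = a - b
    diffs = diffs[diffs != 0]
    if len(diffs) == 0:
        return 1.0
    return stats.binomtest(int((diffs > 0).sum()), n=len(diffs), p=0.5).pvalue

def compare_on_split(a: np.ndarray, b: np.ndarray, alpha: float = 0.05) -> dict:
    """Which system wins on this query subset, and which tests call it significant."""
    return {
        "winner": "A" if a.mean() > b.mean() else "B",
        "sign": sign_test_pvalue(a, b) < alpha,
        "wilcoxon_rank_sum": stats.ranksums(a, b).pvalue < alpha,
        "wilcoxon_signed_rank": stats.wilcoxon(a, b).pvalue < alpha,
        "t_test": stats.ttest_rel(a, b).pvalue < alpha,
    }

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    sys_a = rng.beta(5, 2, size=200)                   # toy per-query scores, system A
    sys_b = sys_a - rng.normal(0.02, 0.05, size=200)   # slightly worse system B
    split = rng.permutation(200)
    half1, half2 = split[:100], split[100:]
    print(compare_on_split(sys_a[half1], sys_b[half1]))
    print(compare_on_split(sys_a[half2], sys_b[half2]))
```

Repeating this over many random splits and comparing the two halves' verdicts for each system pair yields the agreement, partial agreement and disagreement rates reported in Table 4.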
Since previous work has argued for using the median instead of the mean when the metrics are in ordinal scale, we also report the agreement rates for the significance tests that do not have the interval scale requirement (sign test and Wilcoxon Rank-Sum test), when median is used to compute the aggregate performance across different queries. While the Wilcoxon Signed-Rank test does require the metrics to be interval-scale Ferrante et al. [23], since the null hypothesis for this test is that the median (as opposed to the mean) of the differences is zero, we also report the results for this test when median is used for aggregation. The last three columns in the tables show the agreement rates when median (Med.) is used for aggregation instead of the mean. As seen in the tables, when mean is used for aggregation, the agreement rates for all four significance tests are above 90%. This is true both for significance tests that require interval measurements (t-test and Wilcoxon Signed-Rank test) and also for those that can be used with ordinal measurements (Sign Test and Wilcoxon Rank-Sum test). One potential reason for the high agreement rates could be caused by a test not being very powerful and hence mostly predicting differences as not statistically significant. Hence, we also report the fraction of pairs of systems that were deemed as significantly different by at least one of the two split sets using a particular significance test, which is reported in the last row of the tables. It can be seen that the agreement rates are not really correlated with the percentage of pairs a test identifies as significantly different. When median is used instead of the mean, agreement rates drop significantly and consistently across the three significance tests. Our results suggest that, even though the most commonly used IR metrics are not on interval scale, reliability of evaluation results obtained are not widely affected by this. In fact, unlike what was recommended before, using mean instead of the median seems to result in more reliable evaluation results, possibly caused by mean being a more discriminatory statistic than the median. Our results seem consistent across both document and passage ranking tasks. In Table 4c we show the results for the document ranking task when NDCG@10 is used as the evaluation metric. As expected, NDCG@10 results in higher agreement rates consistently for all significance tests when compared to MRR, even though the difference is not very big. Similar results were observed for the passage ranking task. Overall, our results suggest that even though most commonly used IR metrics such as MRR and NDCG@10 may not be in interval scale, evaluation results obtained in practice seem not to be highly affected and results obtained using benchmarks such as MS MARCO seem to be mostly reliable. 5 ON TRANSFER LEARNING FROM MS MARCO TO OTHER IR BENCHMARKS The primary motivation behind curating the MS MARCO ranking datasets was to answer the question \u201cHow much better can our IR systems be if we had access to millions of positively labeled query-document pairs?\u201d It is exciting to witness the large jumps in performance metrics on this benchmark from the development of new ranking models that can adequately leverage the provided large training datasets. 
However, if the benefits of MS MARCO\u2019s large training data is limited to its own test sets and access to domain-specific large training datasets is only limited to large forprofit private institutions\u2014e.g., major commercial search engines\u2014 then the creation of such benchmarks only serves to outsource research and development of models to the academic community that ironically the academic community then cannot operationalize for their own scenarios. To avoid this undesirable dynamic, it is important to also study whether the large training dataset from MS MARCO can bring about meaningful improvements from transfer learning to other IR benchmarks and tasks. As noted earlier, a successful application of transfer learning from the MS MARCO dataset has been for the TREC Deep Learning track. An initial test set of 200 queries is sampled from the MS MARCO distribution, but then NIST selects a subset of queries to judge which are neither too difficult nor too easy, then apply a 4-point labeling scheme to results pooled from submitted runs. As Craswell et al. [13, 14] have reported, several pretraining-based deep models finetuned on the MS MARCO training data achieve significant improvements over traditional IR methods in this setting. Transfer learning from MS MARCO to other ad hoc retrieval benchmarks have also been attempted with promising early success. Yilmaz et al. [85] finetune a BERT-based [21] model on MS MARCO, TREC CAR [22] and TREC Microblog [43] datasets and evaluate them on three TREC newswire collections: Robust04 [74], Core17 [1], and Core18. They find that finetuning on MS MARCO alone achieves mixed results on these benchmarks, but finetuning on MS MARCO followed by further finetuning on the TREC Microblog dataset achieves state-of-the-art performance on all three test sets. Since then, the combination of finetuning on MS MARCO followed by on TREC Microblog dataset has also achieved stateof-the-art results on the English subtask of the NTCIR15 WWW-3 task [66, 68]. Recently, Nogueira et al. [55] adapted T5 [58], a pretrained sequence-to-sequence model, by finetuning only on MS MARCO to significantly improve over the previous state-of-the-art results reported by Yilmaz et al. [85] on Robust04. Similar strategies of finetuning on MS MARCO and evaluating on Robust04, GOV2 [10], and ClueWeb [11] have been employed in other recent studies [30, 37, 40, 91, 92], sometimes in combination with weak supervision [71, 90]. Additionally, Ma et al. [47] have employed the document collection in MS MARCO for pretraining before evaluating on these other standard IR benchmarks. An interesting implication of the large size of the MS MARCO training dataset is that it allows for further filtering to generate new domain-specific training datasets that may be adequately large to finetune deep models specializing in a given domain. This is particularly interesting when due to time sensitivity or resource constraints it is infeasible to curate a domain-specific training dataset \ffrom scratch. Such a scenario emerged in 2020, when in response to the COVID-19 pandemic, the body of academic literature on Coronavirus grew significantly which in turn posed a difficult challenge for the information retrieval community to quickly devise better methods for searching over this growing scientific corpus. 
This prompted the creation of the Covid-19 Open Research Dataset (CORD-19) [81] and the TREC-COVID [60, 73] benchmarking effort on one hand, and a flurry of new research and development of IR systems specializing on this task [9, 69, 80] on the other. In particular, MacAvaney et al. [48, 49] created Med-MARCO, a subset of the MS MARCO dataset that are related to medical questions. Subsequently, several groups benchmarking on TREC-COVID employed this subset for model training [46, 83, 88, 89], while others explored finetuning on the full MS MARCO for this task [5, 40, 46, 53, 63, 71]. In a meta-analysis of participating runs in the TREC-COVID challenge, Chen and Hersh [9] found the use of MS MARCO dataset for finetuning to be associated with higher retrieval performance. Similar to Med-MARCO [48, 49], Hamzei et al. [31] studies place-related subset of the MS MARCO dataset. Another interesting case study in this context is the application of MS MARCO to conversational search where it has been useful for both creation of new benchmarks [17, 18, 59] and model training [26, 39, 50, 70, 77\u201379, 84, 86]. The adoption of MS MARCO in so many transfer learning settings is encouraging, and while it may be premature to draw parallels between its impact on the IR community and what ImageNet [20, 64] did for computer vision research, the current trends definitely bode well for MS MARCO\u2019s potential role in the future of IR research. 6 ROBUST USEFULNESS AND EXTERNALITIES For rankers based on pretrained transformers to become a standard solution in research and industry, we need to show that they can be easily be deployed in new settings. Section 5 indicated that models can work well in a new target domain, but this may involve domainspecific data and multiple stages of finetuning. Future research could work on developing a \u201cplay book\u201d for ranker deployment, with the goal of simplifying the process and decreasing the chances of problems or failure. This could include development of selftuning rankers that can learn from the corpus and/or usage logs when deployed. It could also include the development of a generalpurpose ranker, that works reasonably well in a new application with no additional finetuning. Considering issues of deployment raises the common adage \u201cyou can\u2019t improve what you don\u2019t measure\u201d. Data sets and evaluation efforts have an incentive structure, that encourages work towards exactly what is measured, creating blind spots in other areas. MS MARCO not only serves to compare existing IR methods but also plays the Pied Piper guiding a significant section of the community down specific lanes of research. As the curator of such benchmarks, it is therefore crucial that we critically reflect on where we are going and also importantly where we are choosing not to invest. For example, the availability of a large training dataset directly incentivizes new methods that can take advantage of millions of labeled query-document pairs. Excitement in the large-data work means we may see too few submissions in our benchmarking of methods in the small-data regime, even though such approaches have advantages of efficiency and robustness. Similarly, MS MARCO is English-only, reducing our likelihood of seeing related advances in non-English and cross-language IR. 
Both the MS MARCO leaderboard and the TREC Deep Learning track focuses singularly on measuring the relevance quality of retrieved documents and passages, without any consideration for other critical aspects such as efficiency or cost of deployment. This could for example lead the community towards building new models that are frustratingly hard for others, with limited compute resources, to further optimize or deploy. That could again create a divide between what we focus on as a community and what is practically useful. The scenario may be even more serious if we were to consider the potential social harms\u2014specifically on those who belong to historically marginalized communities\u2014and ecological costs of large language models [4], exactly the type of technology that MS MARCO and TREC Deep Learning track may encourage us to work on. As we develop these new benchmarks, the responsibility rests squarely on our own shoulders to think broadly and have open and inclusive conversations about the impact of leading a large section of our community down a given path. 7" + }, + { + "url": "http://arxiv.org/abs/2104.09399v1", + "title": "TREC Deep Learning Track: Reusable Test Collections in the Large Data Regime", + "abstract": "The TREC Deep Learning (DL) Track studies ad hoc search in the large data\nregime, meaning that a large set of human-labeled training data is available.\nResults so far indicate that the best models with large data may be deep neural\nnetworks. This paper supports the reuse of the TREC DL test collections in\nthree ways. First we describe the data sets in detail, documenting clearly and\nin one place some details that are otherwise scattered in track guidelines,\noverview papers and in our associated MS MARCO leaderboard pages. We intend\nthis description to make it easy for newcomers to use the TREC DL data. Second,\nbecause there is some risk of iteration and selection bias when reusing a data\nset, we describe the best practices for writing a paper using TREC DL data,\nwithout overfitting. We provide some illustrative analysis. Finally we address\na number of issues around the TREC DL data, including an analysis of\nreusability.", + "authors": "Nick Craswell, Bhaskar Mitra, Emine Yilmaz, Daniel Campos, Ellen M. Voorhees, Ian Soboroff", + "published": "2021-04-19", + "updated": "2021-04-19", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.AI", + "cs.LG" + ], + "main_content": "INTRODUCTION The Text Retrieval Conference (TREC) [24] is an evaluation effort in the information retrieval research community that studies multiple search scenarios, called tracks. The TREC Deep Learning (DL) Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org. SIGIR \u201921, July, 2021, Online. \u00a9 2018 Association for Computing Machinery. 
Track [8, 9] studies a common search scenario where a new query comes in to search an existing corpus of text, and the goal is to produce a ranking where the most relevant search results are at the top. The distinctive aspect of the DL track is that it uses a large set of human-labeled training data. Results so far have indicated that ranking models using deep learning benefit from the large data, returning more relevant results than other types of models. There are two tasks in TREC DL. One is passage retrieval, from a corpus of passages of text, modeling a question answering scenario where the system retrieves a short answer for the user's query. The other task is document retrieval, modeling a scenario where the user wants to see more content about their query, retrieving a ranked list of documents for the user to consider. Each year at TREC, the two tasks each evaluate a new set of test queries, producing two reusable test collections. If a research paper is studying document retrieval and implements a baseline ranker and a new ranker, these two rankers can be compared on the reusable test sets. The standard approach is to report the results on each test set separately and use statistical tests to see if the new approach is better than the baseline approach. This paper describes the reusable test collections built as part of the DL Track, bringing together and summarizing descriptions that previously existed only in DL Track guidelines, the overview papers and in our Github repositories. Then, since it is possible to overuse a test data set, we describe the best practices for using the test sets. We present a case study of making decisions using the dev set, despite it having a different labeling scheme from the test sets, showing it is predictive of test set performance without overfitting to test data. Since we focus on reuse, we also present a reusability analysis of the test sets, indicating that judgments are sufficiently complete to evaluate a new ranker without any need for additional relevance judgments. We finish by pointing out some limitations of the current setup, for example it focuses only on our two tasks and only one language.

[Figure 1: NDCG@10 results for TREC submissions, broken down by run type, with panels (a) Document retrieval task (2019), (b) Passage retrieval task (2019), (c) Document retrieval task (2020), (d) Passage retrieval task (2020). BERT-style "nnlm" runs performed best in both tasks in both years, with non-BERT "nn" runs and non-neural "trad" runs having relatively lower performance.]

2 TASK DESCRIPTION
The Deep Learning track has two tasks: Document retrieval and passage retrieval. Document retrieval task. The first task focuses on document retrieval, with two subtasks: (i) Full retrieval and (ii) Top-100 reranking. In the full retrieval subtask, the retrieval system is expected to rank documents based on their relevance to the query, where documents can be retrieved from the full document collection provided.
This subtask models the end-to-end retrieval scenario. In the reranking subtask, the benchmark provides an initial ranking of 100 documents, giving all ranking methods the same starting point. This is a common scenario in many real-world retrieval systems that employ a telescoping architecture [15, 25]. The reranking subtask allows participants to focus on learning an effective relevance estimator, without the need for implementing an end-to-end retrieval system. It also makes different reranking approaches more comparable, because they all rerank the same set of 100 candidates. The initial top-100 rankings were retrieved using Indri [23] on the full corpus with Krovetz stemming and stopwords eliminated. Passage retrieval task. Similar to the document retrieval task, the passage retrieval task includes (i) Full retrieval and (ii) Top-1000 reranking. In the full retrieval subtask, given a query, the retrieval system is expected to retrieve a ranked list of passages from the full collection based on their estimated likelihood of containing an answer to the question. In the top-1000 reranking subtask, 1000 passages per query are provided, giving all methods the same starting point. The sets of 1000 were generated based on BM25 retrieval with no stemming as applied to the full collection. All ranking models are expected to rerank the 1000 passages based on their estimated likelihood of containing an answer to the query. In this subtask, we can compare different reranking methods based on the same initial set of 1000 candidates, with the same rationale as described for the document reranking subtask. 3 TEST COLLECTION DATA A test collection comprises a corpus, test queries and test relevance judgments [24]. We have a document corpus and a passage corpus, each with two test sets so far and a large training set, as summarized in Table 1. We now describe the data in more detail. 3.1 Training Data Sets Both tasks have large training sets based on human relevance assessments, derived from MS MARCO [3]. These are sparse, with no negative labels and often only one positive label per query, analogous to some real-world training data such as click logs. In the case of passage retrieval, the positive label indicates that the passage contains an answer to a query. In the case of document retrieval, we transferred the passage-level label to the corresponding source document that contained the passage. We do this under the assumption that a document with a relevant passage is a relevant document, although we note that our document snapshot was generated at a different time from the passage data set, so there can be some mismatch. Despite this, machine learning models trained with these labels seem to benefit from using the labels, when evaluated using NIST\u2019s non-sparse, non-transferred labels. This suggests the transferred document labels are meaningful for our TREC task. 3.2 Test Sets The test collections were constructed using shallow pooling across all runs submitted to the respective task, with additional documents \fTREC Deep Learning Track: Reusable Test Collections in the Large Data Regime SIGIR \u201921, July, 2021, Online. selected to be judged via the HiCAL classifier [2]. The test set of 200 queries was the same for both tasks. A subset of those queries were selected to be judged at NIST based on effectiveness as scored using the MARCO sparse judgments\u2014queries with a median MRR across document retrieval submissions of greater than 0.5 or of 0.0 were excluded from the evaluation set. 
Some evaluation-set candidate queries were later eliminated because their number of relevant documents fell outside the range that would create a robust test collection. In the end, the evaluation set contained 43 queries for both tasks in 2019, though they were different sets of queries. The same judgment process was used in 2020, which resulted in evaluation sets of 45 and 54 queries for the document and passage retrieval tasks, respectively. For the document retrieval task, judgments were on a four-point scale of Irrelevant, Relevant, Highly Relevant, and Perfectly Relevant. For measures that use binary judgments, all but Irrelevant are counted as relevant. Passage retrieval judgments were also collected on a four-point scale: Irrelevant, Related, Highly Relevant, and Perfectly Relevant. In this case, Related means the passage is on-topic but does not actually answer the question; hence only Highly and Perfectly Relevant are treated as relevant for binary measures. When reusing the test sets, there can be a significant problem if the new ranker retrieves documents that are relevant but unjudged. This makes it difficult to correctly estimate the quality of the new model. Since reusability is a focus in this paper, we provide some more in-depth reusability analysis in Section 5. 3.3 ORCAS The 2020 edition of the track also released a large scale click data set for the document retrieval task. The ORCAS data [7] is constructed from the logs of a major search engine. The data can be used in a variety of ways, for example as additional training data (almost 50 times larger than the main training set) or as a document field in addition to the title, URL and body text fields available in the original training data. However, we do not describe the ORCAS data in detail here and instead point the interested reader to the ORCAS website (https://microsoft.github.io/msmarco/ORCAS).
Table 1: Summary of statistics on TREC 2020 Deep Learning Track data sets.
Data              | Document task (records) | Passage task (records)
Corpus            | 3,213,835               | 8,841,823
Train queries     | 367,013                 | 502,939
Train qrels       | 384,597                 | 532,761
Dev queries       | 5,193                   | 6,980
Dev qrels         | 5,478                   | 7,437
2019 TREC queries | 200 -> 43               | 200 -> 43
2019 TREC qrels   | 16,258                  | 9,260
2020 TREC queries | 200 -> 45               | 200 -> 54
2020 TREC qrels   | 9,098                   | 11,386
3.4 TREC runs The TREC Deep Learning Track had 15 and 25 participating groups, with a total of 75 and 123 runs submitted across both tasks, in 2019 and 2020, respectively. Since runs are blind submissions that are finalized before any relevance labels are available, they provide a comparison of ranking approaches with no chance of overfitting to the test judgments. Based on submission surveys with each run, we divided the runs into three categories: (1) nnlm: if the run employs large scale pre-trained neural language models, such as BERT [10] or XLNet [26]; (2) nn: if the run employs some form of neural network based approach, e.g., Duet [17, 18] or using word embeddings [12], but does not fall into the "nnlm" category; (3) trad: if the run exclusively uses traditional IR methods like BM25 [22] and RM3 [1]. The overall result (Figure 1), in both tasks in both years, was that the best "nnlm" runs outperformed the other types of run. This effect was more pronounced in passage retrieval, since the chances of vocabulary mismatch between query and result are greater if the search result is a shorter text.
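As a concrete illustration of the binary mappings described in Section 3.2, the following sketch binarizes graded qrels and computes a simple binary measure. The dictionary layout and helper names are assumptions for illustration; in practice trec_eval or similar tooling would normally be used.

```python
# Thresholds mirroring the binary mappings described in Section 3.2.
DOC_BINARY_THRESHOLD = 1      # document task: grades 1, 2 and 3 count as relevant
PASSAGE_BINARY_THRESHOLD = 2  # passage task: only grades 2 and 3 count as relevant

def binarize_qrels(graded_qrels, threshold):
    """graded_qrels: {query_id: {doc_id: grade in 0..3}} -> {query_id: set of relevant ids}."""
    return {
        qid: {doc_id for doc_id, grade in judged.items() if grade >= threshold}
        for qid, judged in graded_qrels.items()
    }

def precision_at_k(ranked_doc_ids, relevant_doc_ids, k=10):
    return sum(1 for d in ranked_doc_ids[:k] if d in relevant_doc_ids) / k

# Example: for the passage task a grade-1 ("Related") passage does not count.
qrels = {"q1": {"p1": 3, "p2": 1, "p3": 2, "p4": 0}}
binary = binarize_qrels(qrels, PASSAGE_BINARY_THRESHOLD)
print(precision_at_k(["p1", "p2", "p3", "p4"], binary["q1"], k=4))  # prints 0.5
```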
The track overview papers [8, 9] present a number of more detailed breakdowns, for example by subtask. We note that NIST makes the runs available, so they become part of the DL Track resources. Past TREC runs could be studied as baselines. 3.5 Reusing the Test Collections Table 1 provides descriptive statistics for the data set. More details about the data sets, including directions for download, are available on the TREC 2020 Deep Learning Track website (https://microsoft.github.io/msmarco/TREC-Deep-Learning). Interested readers are also encouraged to refer to Bajaj et al. [3] for details on the original MS MARCO data set. Broadly speaking, the data set contains four separate sets: train, dev, TREC 2019 test, and TREC 2020 test. The relevance judgments in the train and dev sets are binary, while for the TREC 2019 and 2020 test sets they are on a four-point scale as described in Section 3.2. A typical use of this data set would involve training a model on the large training corpus, using the dev set (or a subset of it) to make decisions about hyperparameter choices and early stopping, and evaluating and reporting the model performance on the TREC 2019 and 2020 test sets. However, multiple evaluations of different model variants on the TREC test sets can lead to problematic outcomes, such as overfitting on these query sets and false positive results on model performance. To avoid this, we strongly recommend not iterating over the TREC sets. Instead, a more reasonable experiment protocol is to use the dev set to validate and iterate on new architectures and other proposed changes. Only after the final model has been selected for publication should it be evaluated against the TREC test sets to generate the final numbers that can be reported in a publication or in other forums. Section 4 provides evidence that supports the validity of conclusions reached via such an experimentation protocol. Figure 2: Training curves (x-axis: training snapshot; curves: MS MARCO dev RR, TREC 2019 NDCG@10 and TREC 2020 NDCG@10, with rankers A, B and C marked). Using 32 snapshots of the training process, we have 32 candidate rankers. Rankers A, B and C are optimal on dev RR, 2019 NDCG@10 and 2020 NDCG@10 respectively. The correct ranker to select is A, then report results of ranker A on the 2019 and 2020 test queries. 3.6 Code To make it easier to work with this benchmark, we open-source a PyTorch [21] implementation of the Conformer-Kernel model [19, 20] (https://github.com/bmitra-msft/TREC-Deep-Learning-Quick-Start). The code automates the download of all the required data files for the document retrieval task. It also implements the training and evaluation of a relatively efficient deep neural baseline on this benchmark, under both the rerank and the fullrank settings. This code is provided as-is, primarily for the convenience of those working with this data set for the first time, and it is not compulsory to use this codebase in the context of this benchmark. 4 VALIDITY OF RESULTS The easy reusability of TREC test collections is extremely important, but can also cause problems. The important advantage of reusability is that we can test a new hypothesis on several years of TREC data without waiting years for new iterations of TREC. If the same ranking approach gives positive results on multiple test sets, we are more confident that it is a real improvement, rather than a lucky one that was positive on a single test.
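The experiment protocol recommended above (select on the dev set, then evaluate on the TREC test sets once) can be written down in a few lines. A minimal sketch, assuming per-checkpoint metric values are held in parallel lists; the names below are illustrative, not part of the released code.

```python
def select_checkpoint(dev_rr, trec2019_ndcg10, trec2020_ndcg10):
    """Choose the checkpoint with the best dev-set RR, then report its TREC scores."""
    best = max(range(len(dev_rr)), key=lambda i: dev_rr[i])  # "ranker A" in the case study
    return {
        "checkpoint": best,
        "trec2019_ndcg10": trec2019_ndcg10[best],  # valid numbers to report
        "trec2020_ndcg10": trec2020_ndcg10[best],  # valid numbers to report
        # dev_rr[best] was used to make the selection, so it should not be
        # reported as if it were an untouched test set.
    }
```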
The negative side of reusability is related to selection bias, harming the validity of results. In one scenario, the researcher tries their hypothesis on reusable test collections, the results are negative and their paper is rejected. This means that published papers are biased towards positive results. Some positive results will be false positives, where the researcher was lucky. In another scenario, the researcher gets a negative result, so they try another configuration or variant of their hypothesis. With enough iteration and multiple testing, they get results good enough to publish in a paper and the paper is accepted. With multiple testing, the chances of getting lucky are increased, and the chances of their result being correct are reduced. Ideally we should make no decisions using the test sets, much less a series of iterations. We should make decisions on the dev set. Figure 3: Results vary, even in a single trial of training a model. Select your rankers using the dev set, which in this case is ranker A, and report the results on TREC 2019 and TREC 2020. Selecting your ranker based on the TREC sets (B and C) is not acceptable. (Panels: MS MARCO dev RR vs. TREC 2019 NDCG@10, MS MARCO dev RR vs. TREC 2020 NDCG@10, and TREC 2019 NDCG@10 vs. TREC 2020 NDCG@10, with rankers A, B and C marked.) To illustrate this, we present a case study. It is a single training run of the Conformer-Kernel approach described in Section 3.5. We use default parameters, but take a checkpoint of the ranker training every 1024 samples instead of 4096, to get 32 checkpoints during training (Figure 2). Each checkpoint is a ranker that could be presented in a paper. The rankers do not differ in an interesting way, since they all come from the same training setup, and the only difference is the stopping point of the training. We recommend choosing the ranker with the best dev set performance, which here is the 31st checkpoint. That ranker is indicated by a vertical dotted line and the letter A in the figure. Since we used the dev set to make a decision, the dev set numbers are not a valid test to report in the paper. Whenever we have several options and choose the best-performing option on a test set, that test set is no longer valid. The valid numbers to report are the TREC 2019 and TREC 2020 NDCG@10 results for ranker A, which can be read by following the dotted line from A to the two other curves. By the same token we should not publish results of the 27th or 28th iteration, which are rankers B and C, unless we want to choose results on one TREC test set and report results on the other set. Figure 3 shows the same 32 rankers, but focusing on the variability and correlation of the different metrics. The metrics are largely correlated, but between checkpoints there is also random variation. So even within one trial of ranker training, if we allow cherrypicking of the best checkpoint, we can get a meaningless positive result. We note that ranker A is near-optimal on both the TREC collections, so the dev set is working well if our goal is to choose a good checkpoint. We also note that rankers B and C are quite different from each other on the TREC 2019 data, and the difference is statistically significant.
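Such a per-query significance test can be as simple as a paired test over per-query scores. The sketch below assumes per-query NDCG@10 values (for example, exported from trec_eval) are already available as dictionaries; a paired t-test is one common choice, not the only valid one, and the function names are ours.

```python
from scipy import stats

def compare_runs(per_query_baseline, per_query_new):
    """Paired comparison of two rankers on the same set of test queries.

    per_query_*: {query_id: NDCG@10}, keyed by the same query ids.
    Returns the mean per-query difference and a two-sided paired t-test p-value.
    """
    qids = sorted(per_query_baseline)
    base = [per_query_baseline[q] for q in qids]
    new = [per_query_new[q] for q in qids]
    _, p_value = stats.ttest_rel(new, base)
    mean_diff = sum(n - b for n, b in zip(new, base)) / len(qids)
    return mean_diff, p_value
```

Running the same comparison independently on the 2019 and 2020 test sets, and only claiming an improvement when both agree, follows the recommendation above.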
There is no sensible procedure that would lead to the comparison of rankers B and C in a paper. Instead, this illustrates that there is enough random variation in a single training run that we can find a completely meaningless \u2018significant\u2019 (\ud835\udc5d= 0.00837) difference between two rankers. The general principle is to avoid making decisions on the test set. Throughout the process of experimentation for a paper it is better to make multiple use of the dev set, and only try methods on the test set when they are finalized. Since making decisions on the same test set can lead to unreliable conclusions, another approach that could be possible is to divide the dev set into two or three parts. One set could be used for choosing the checkpoint as in our case study. Another set could be used to compare which neural network architecture is better. Becuase we split the data, the architecture question can be asked independently of the checkpoint decision. In the case of a third set it would even be possible to publish dev set results, since we have a third set that was never used for choosing between checkpoints or choosing architectures. To this end, we plan to publish some standard splits of the dev set, to enable publication of dev results on the same held out test set. Rather than comparing a particular machine learned ranker to a baseline ranker, our goal should be to compare an overall machine learning approach for ranking to a baseline approach. The significant variation we see in a single training run (Figure 3) reminds us that we can better understand the performance of a learning approach by running multiple training trials with different random seeds. Considering the performance across trials increases the chances of finding a true difference between machine learning approaches, by decreasing the chances that results are dominated by a lucky or unlucky random seed. Ganjisaffar et al. [11] provide one demonstration of this approach being applied in information retrieval. Overall, the best ranker on a test set is lucky [6]. Even selecting the best baseline from a set of simple \u201ctrad\u201d rankers can seem like overfitting, since the best ranker varies depending on the test data [28]. Instead we should strive to identify ranking approaches that work well across a variety of test sets, without making any design decisions using these particular test sets. This maximizes the chances that the models will perform well when deployed, on an independent future sample of test queries. 5 REUSABILITY ANALYSIS One of the goals of the track is to build general-purpose, reusable test collections at acceptable cost. In this context, general-purpose means a collection reliably ranks runs for a wide spectrum of evaluation measures, including recall-focused measures. Reusable means that runs that did not participate in the collection building process can be reliably ranked by the collection. In this section we support the claim that the DL track collections are reusable. Leave-Out-Uniques (LOU) tests [5, 27] are a traditional way of analyzing the reusability of a collection. In these tests, the relevant documents retrieved by only one participating team are removed from the qrels files and all runs are then evaluated using the reduced qrels. The reduced qrels are the qrels that would have resulted had the team not participated in the collection building process, and thus their submitted runs represent new runs with respect to the reduced qrels. 
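As a concrete illustration of the classic reduction step just described, the sketch below removes one team's uniquely retrieved relevant documents, using depth-10 pools. The data layout is assumed for illustration, and, as explained next, the DL track collections actually require a modified procedure rather than this standard test.

```python
def reduced_qrels_without_team(qrels, runs_by_team, team, depth=10):
    """Build reduced qrels that drop `team`'s uniquely retrieved relevant docs.

    qrels:        {query_id: {doc_id: grade}}, where grade > 0 means relevant
    runs_by_team: {team_name: [{query_id: [ranked doc_ids]}, ...]}
    """
    def pooled_docs(team_name):
        docs = {}
        for run in runs_by_team[team_name]:
            for qid, ranking in run.items():
                docs.setdefault(qid, set()).update(ranking[:depth])
        return docs

    own = pooled_docs(team)
    others = {}
    for other in runs_by_team:
        if other == team:
            continue
        for qid, docs in pooled_docs(other).items():
            others.setdefault(qid, set()).update(docs)

    reduced = {}
    for qid, judged in qrels.items():
        unique = own.get(qid, set()) - others.get(qid, set())
        reduced[qid] = {d: g for d, g in judged.items()
                        if not (g > 0 and d in unique)}
    return reduced
```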
New runs ranking essentially the same in both the original and reduced collections supports a claim that the collection is reusable. However, a standard LOU test does not work for the collections built in the DL track because of the way the judgment sets were constructed. We used the HiCAL system [2] to select the set of documents to be judged for each query once depth-10 pools were judged. HiCAL uses the current set of judgments to build a relevance model and then selects the unjudged document most likely to be relevant as the next document to judge. HiCAL does not depend on runs as the source of documents, so the concept of uniquely retrieved relevant documents no longer applies. A given team\u2019s unique relevant documents can be removed from the depth-10 pools in the first stage, but then the HiCAL process must be activated as it may select the removed documents to be judged. Since the HiCAL process is not deterministic (ties are broken randomly) and depends on the particular set of documents seen so far, the HiCAL process must be simulated multiple times using the original qrels\u2019 judgments for a fair test. The simulations proceeded as follows, where the entire process was performed separately for the each DL track task. The original depth-10 pools were fed to HiCAL for each of ten trials, where each trial used a separate initial seed for the random number generator. Within each trial, we tracked the documents encountered by HiCAL, creating a trace of the first 2500 documents encountered per query. Any unjudged documents encountered by HiCAL were treated as not relevant. We created a qrels file from each trace by taking a prefix of the trace of length equal to the number of documents judged in the original qrels per query. This resulted in 10 qrels files that could have been the result of the official judgment process of the track (modulo the unjudged documents would have been judged). \fSIGIR \u201921, July, 2021, Online. Craswell, Mitra, et al. a) Document retrieval task, MAP b) Document retrieval task, Precision@10 Figure 4: Heatmap of ranks obtained by a run over 120 trials for the 2019 document task when using MAP (left) or Precision@10 (right) as the evaluation measure. The darker a plotted point, the more times the run was ranked at that position. To obtain the effect of the LOU test, for each team in turn we omit that team\u2019s uniquely retrieved relevant documents from the depth-10 pools. This pool is fed to the HiCAL process for each of ten trials where the random number seed for a given trial is the same as in the all-teams simulation. As before, we create a trace of the documents that were encountered by HiCAL, and create a qrels file by taking a prefix of the trace of length equal to the number of documents judged in the official qrels. All runs are evaluated using this trial\u2019s qrels, and the system ranking induced by it for a given evaluation measure is compared to the ranking induced by the official qrels. Figure 4 shows the results of this modified LOU test for the TREC 2019 document task (the passage task results exhibited even less variability). The figure shows a heat map of the number of times a run was ranked at a given position over all 120 simulation trials\u2014 10 trials using the full set of runs to form the pools plus 10 trials for each of 11 teams when omitting a team\u2019s uniquely retrieved relevant from the initial pools. 
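The comparison step at the end of each simulation trial can be sketched as follows, assuming a per-run score (for example MAP or Precision@10) has already been computed under both the official and the trial qrels. The Kendall correlation and per-run rank shifts are two simple ways to summarize agreement, and the heat map in Figure 4 aggregates such per-trial ranks; the helper names here are illustrative.

```python
from scipy.stats import kendalltau

def system_ranking(scores):
    """scores: {run_name: metric value} -> {run_name: rank}, rank 1 = best run."""
    ordered = sorted(scores, key=scores.get, reverse=True)
    return {run: rank for rank, run in enumerate(ordered, start=1)}

def compare_to_official(official_scores, trial_scores):
    """Compare the run ranking induced by one trial's qrels with the official ranking."""
    official = system_ranking(official_scores)
    trial = system_ranking(trial_scores)
    runs = sorted(official)
    tau, _ = kendalltau([official[r] for r in runs], [trial[r] for r in runs])
    rank_shift = {r: trial[r] - official[r] for r in runs}
    return tau, rank_shift
```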
The ranks are plotted on the x-axis and the runs on the y-axis where they are sorted by their position in the ranking by the official qrels, using either MAP or Precision@10 as the evaluation measure. The darker a plotted point the more times the run was ranked at that position. The figure makes it clear that a most runs have a single dominant rank. When a run does change ranks, it moves by a modest amount. 6 LIMITATIONS The overall motivation for the TREC Deep Learning Track is to allow benchmarking of machine learning based retrieval approaches [13, 14, 16] against traditional IR methods, in a setting where large training data is available. Improving over state-of-the-art on this benchmark may involve designing new neural architectures that, in addition to large scale training data, may also require specialized hardware, e.g., GPUs, to scalably train. Many such deep models that learn from large text data are known to encode problematic societal biases from the corpus or incur significant ecological costs from computationally intensive training [4]. Any use of this benchmark for model development should seriously consider such negative externalities and follow strict ethical guidelines. Alternatively, we also encourage the use of this benchmark for development and study of new methods that may not necessitate large scale training. The large training data provided may also be used for training models for other tasks. The queries in this benchmark correspond to real user queries from Bing\u2019s search logs. However, they are restricted to include only English queries that can be answered by a short passage. This restricts the development of models that may target non-English languages or other user intents\u2014e.g., transactional search. In spite of these limitations, we believe the TREC DL benchmark is a critical resource for model development and evaluation in the IR community. 7" + }, + { + "url": "http://arxiv.org/abs/2102.07662v1", + "title": "Overview of the TREC 2020 deep learning track", + "abstract": "This is the second year of the TREC Deep Learning Track, with the goal of\nstudying ad hoc ranking in the large training data regime. We again have a\ndocument retrieval task and a passage retrieval task, each with hundreds of\nthousands of human-labeled training queries. We evaluate using single-shot\nTREC-style evaluation, to give us a picture of which ranking methods work best\nwhen large data is available, with much more comprehensive relevance labeling\non the small number of test queries. This year we have further evidence that\nrankers with BERT-style pretraining outperform other rankers in the large data\nregime.", + "authors": "Nick Craswell, Bhaskar Mitra, Emine Yilmaz, Daniel Campos", + "published": "2021-02-15", + "updated": "2021-02-15", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.AI", + "cs.CL", + "cs.LG" + ], + "main_content": "Introduction Deep learning methods, where a computational model learns an intricate representation of a large-scale dataset, yielded dramatic performance improvements in speech recognition and computer vision [LeCun et al., 2015]. When we have seen such improvements, a common factor is the availability of large-scale training data [Deng et al., 2009, Bellemare et al., 2013]. For ad hoc ranking in information retrieval, which is a core problem in the \ufb01eld, we did not initially see dramatic improvements in performance from deep learning methods. 
This led to questions about whether deep learning methods were helping at all [Yang et al., 2019a]. If large training data sets are a factor, one explanation for this could be that the training sets were too small. The TREC Deep Learning Track, and associated MS MARCO leaderboards [Bajaj et al., 2016], have introduced human-labeled training sets that were previously unavailable. The main goal is to study information retrieval in the large training data regime, to see which retrieval methods work best. The two tasks, document retrieval and passage retrieval, each have hundreds of thousands of human-labeled training queries. The training labels are sparse, with often only one positive example per query. Unlike the MS MARCO leaderboards, which evaluate using the same kind of sparse labels, the evaluation at TREC uses much more comprehensive relevance labeling. Each year of TREC evaluation evaluates on a new set of test queries, where participants submit before the test labels have even been generated, so the TREC results are the gold standard for avoiding multiple testing and over\ufb01tting. However, the comprehensive relevance labeling also generates a reusable test collections, allowing reuse of the dataset in future studies, although people should be careful to avoid over\ufb01tting and overiteration. The main goals of the Deep Learning Track in 2020 have been: 1) To provide large reusable training datasets with associated large scale click dataset for training deep learning and traditional ranking methods in a large training data regime, 2) To construct reusable test collections for evaluating quality of deep learning and traditional ranking methods, 3) To perform a rigorous blind single-shot evaluation, where test labels don\u2019t even exist until after all runs are submitted, to compare different ranking methods, and 4) To study this in both a traditional TREC setup with end-to-end retrieval and in a re-ranking setup that matches how some models may be deployed in practice. 2 Task description The track has two tasks: Document retrieval and passage retrieval. Participants were allowed to submit up to three runs per task, although this was not strictly enforced. Submissions to both tasks used the same set of 200 test queries. arXiv:2102.07662v1 [cs.IR] 15 Feb 2021 \fIn the pooling and judging process, NIST chose a subset of the queries for judging, based on budget constraints and with the goal of \ufb01nding a suf\ufb01ciently comprehensive set of relevance judgments to make the test collection reusable. This led to a judged test set of 45 queries for document retrieval and 54 queries for passage retrieval. The document queries are not a subset of the passage queries. When submitting each run, participants indicated what external data, pretrained models and other resources were used, as well as information on what style of model was used. Below we provide more detailed information about the document retrieval and passage retrieval tasks, as well as the datasets provided as part of these tasks. 2.1 Document retrieval task The \ufb01rst task focuses on document retrieval, with two subtasks: (i) Full retrieval and (ii) top-100 reranking. In the full retrieval subtask, the runs are expected to rank documents based on their relevance to the query, where documents can be retrieved from the full document collection provided. This subtask models the end-to-end retrieval scenario. 
In the reranking subtask, participants were provided with an initial ranking of 100 documents, giving all participants the same starting point. This is a common scenario in many real-world retrieval systems that employ a telescoping architecture [Matveeva et al., 2006, Wang et al., 2011]. The reranking subtask allows participants to focus on learning an effective relevance estimator, without the need for implementing an end-to-end retrieval system. It also makes the reranking runs more comparable, because they all rerank the same set of 100 candidates. The initial top-100 rankings were retrieved using Indri [Strohman et al., 2005] on the full corpus with Krovetz stemming and stopwords eliminated. Judgments are on a four-point scale: [3] Perfectly relevant: Document is dedicated to the query, it is worthy of being a top result in a search engine. [2] Highly relevant: The content of this document provides substantial information on the query. [1] Relevant: Document provides some information relevant to the query, which may be minimal. [0] Irrelevant: Document does not provide any useful information about the query. For metrics that binarize the judgment scale, we map document judgment levels 3,2,1 to relevant and map document judgment level 0 to irrelevant. 2.2 Passage retrieval task Similar to the document retrieval task, the passage retrieval task includes (i) a full retrieval and (ii) a top-1000 reranking tasks. In the full retrieval subtask, given a query, the participants were expected to retrieve a ranked list of passages from the full collection based on their estimated likelihood of containing an answer to the question. Participants could submit up to 1000 passages per query for this end-to-end retrieval task. In the top-1000 reranking subtask, 1000 passages per query were provided to participants, giving all participants the same starting point. The sets of 1000 were generated based on BM25 retrieval with no stemming as applied to the full collection. Participants were expected to rerank the 1000 passages based on their estimated likelihood of containing an answer to the query. In this subtask, we can compare different reranking methods based on the same initial set of 1000 candidates, with the same rationale as described for the document reranking subtask. Judgments are on a four-point scale: [3] Perfectly relevant: The passage is dedicated to the query and contains the exact answer. [2] Highly relevant: The passage has some answer for the query, but the answer may be a bit unclear, or hidden amongst extraneous information. [1] Related: The passage seems related to the query but does not answer it. [0] Irrelevant: The passage has nothing to do with the query. For metrics that binarize the judgment scale, we map passage judgment levels 3,2 to relevant and map document judgment levels 1,0 to irrelevant. 2 \fTable 1: Summary of statistics on TREC 2020 Deep Learning Track datasets. Document task Passage task Data Number of records Number of records Corpus 3, 213, 835 8, 841, 823 Train queries 367, 013 502, 939 Train qrels 384, 597 532, 761 Dev queries 5, 193 6, 980 Dev qrels 5, 478 7, 437 2019 TREC queries 200 \u219243 200 \u219243 2019 TREC qrels 16, 258 9, 260 2020 TREC queries 200 \u219245 200 \u219254 2020 TREC qrels 9, 098 11, 386 Table 2: Summary of ORCAS data. Each record in the main \ufb01le (orcas.tsv) indicates a click between a query (Q) and a URL (U), also listing a query ID (QID) and the corresponding TREC document ID (DID). 
The run \ufb01le is the top-100 using Indri query likelihood, for use as negative samples during training. Filename Number of records Data in each record orcas.tsv 18.8M QID Q DID U orcas-doctrain-qrels.tsv 18.8M QID DID orcas-doctrain-queries.tsv 10.4M QID Q orcas-doctrain-top100 983M QID DID score 3 Datasets Both tasks have large training sets based on human relevance assessments, derived from MS MARCO. These are sparse, with no negative labels and often only one positive label per query, analogous to some real-world training data such as click logs. In the case of passage retrieval, the positive label indicates that the passage contains an answer to a query. In the case of document retrieval, we transferred the passage-level label to the corresponding source document that contained the passage. We do this under the assumption that a document with a relevant passage is a relevant document, although we note that our document snapshot was generated at a different time from the passage dataset, so there can be some mismatch. Despite this, machine learning models trained with these labels seem to bene\ufb01t from using the labels, when evaluated using NIST\u2019s non-sparse, non-transferred labels. This suggests the transferred document labels are meaningful for our TREC task. This year for the document retrieval task, we also release a large scale click dataset, The ORCAS data, constructed from the logs of a major search engine [Craswell et al., 2020]. The data could be used in a variety of ways, for example as additional training data (almost 50 times larger than the main training set) or as a document \ufb01eld in addition to title, URL and body text \ufb01elds available in the original training data. For each task there is a corresponding MS MARCO leaderboard, using the same corpus and sparse training data, but using sparse data for evaluation as well, instead of the NIST test sets. We analyze the agreement between the two types of test in Section 4. Table 1 and Table 2 provide descriptive statistics for the dataset derived from MS MARCO and the ORCAS dataset, respectively. More details about the datasets\u2014including directions for download\u2014is available on the TREC 2020 Deep Learning Track website1. Interested readers are also encouraged to refer to [Bajaj et al., 2016] for details on the original MS MARCO dataset. 1https://microsoft.github.io/TREC-2020-Deep-Learning 3 \fTable 3: Summary of statistics of runs for the two retrieval tasks at the TREC 2020 Deep Learning Track. Document retrieval Passage retrieval Number of groups 14 14 Number of total runs 64 59 Number of runs w/ category: nnlm 27 43 Number of runs w/ category: nn 11 2 Number of runs w/ category: trad 26 14 Number of runs w/ category: rerank 19 18 Number of runs w/ category: fullrank 45 41 0.4 0.5 0.6 0.7 0.8 0.9 NDCG@10 best nnlm run best nn run best trad run nnlm nn trad (a) Document retrieval task 0.4 0.5 0.6 0.7 0.8 0.9 NDCG@10 best nnlm run best nn run best trad run nnlm nn trad (b) Passage retrieval task Figure 1: NDCG@10 results, broken down by run type. Runs of type \u201cnnlm\u201d, meaning they use language models such as BERT, performed best on both tasks. Other neural network models \u201cnn\u201d and non-neural models \u201ctrad\u201d had relatively lower performance this year. More iterations of evaluation and analysis would be needed to determine if this is a general result, but it is a strong start for the argument that deep learning methods may take over from traditional methods in IR applications. 
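Before turning to results, the two uses of the ORCAS data mentioned above (extra training data or an extra document field) can be illustrated with a short sketch. It assumes the click file is tab-separated with the column order given in Table 2, and the field-concatenation step is just one possible design; none of this is the track's official tooling.

```python
from collections import defaultdict

def load_orcas_clicks(path="orcas.tsv"):
    """Stream the ORCAS click file, assumed to hold one click per line with
    tab-separated fields in the order listed in Table 2: QID, query, DID, URL."""
    queries_per_doc = defaultdict(set)
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip("\n").split("\t")
            if len(parts) != 4:
                continue  # skip anything that does not match the expected layout
            qid, query, did, url = parts
            queries_per_doc[did].add(query)
    return queries_per_doc

def add_clicked_queries_field(documents, queries_per_doc, max_queries=50):
    """One way to use the clicks as an extra document field: concatenate the
    clicked queries (capped at an arbitrary max_queries) alongside the
    title, URL and body fields before indexing.

    documents: {doc_id: {"title": ..., "url": ..., "body": ...}}
    """
    enriched = {}
    for doc_id, fields in documents.items():
        clicked = sorted(queries_per_doc.get(doc_id, set()))[:max_queries]
        enriched[doc_id] = {**fields, "clicked_queries": " ".join(clicked)}
    return enriched
```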
4 Results and analysis Submitted runs The TREC 2020 Deep Learning Track had 25 participating groups, with a total of 123 runs submitted across both tasks. Based run submission surveys, we manually classify each run into one of three categories: \u2022 nnlm: if the run employs large scale pre-trained neural language models, such as BERT [Devlin et al., 2018] or XLNet [Yang et al., 2019b] \u2022 nn: if the run employs some form of neural network based approach\u2014e.g., Duet [Mitra et al., 2017, Mitra and Craswell, 2019] or using word embeddings [Joulin et al., 2016]\u2014but does not fall into the \u201cnnlm\u201d category \u2022 trad: if the run exclusively uses traditional IR methods like BM25 [Robertson et al., 2009] and RM3 [AbdulJaleel et al., 2004]. We placed 70 (57%) runs in the \u201cnnlm\u201d category, 13 (10%) in the \u201cnn\u201d category, and the remaining 40 (33%) in the \u201ctrad\u201d category. In 2019, 33 (44%) runs were in the \u201cnnlm\u201d category, 20 (27%) in the \u201cnn\u201d category, and the remaining 22 (29%) in the \u201ctrad\u201d category. While there was a signi\ufb01cant increase in the total number of runs submitted compared to last year, we observed a signi\ufb01cant reduction in the fraction of runs in the \u201cnn\u201d category. We further categorize runs based on subtask: \u2022 rerank: if the run reranks the provided top-k candidates, or \u2022 fullrank: if the run employs their own phase 1 retrieval system. We \ufb01nd that only 37 (30%) submissions fall under the \u201crerank\u201d category\u2014while the remaining 86 (70%) are \u201cfullrank\u201d. Table 3 breaks down the submissions by category and task. 4 \fOverall results Our main metric in both tasks is Normalized Discounted Cumulative Gain (NDCG)\u2014speci\ufb01cally, NDCG@10, since it makes use of our 4-level judgments and focuses on the \ufb01rst results that users will see. To get a picture of the ranking quality outside the top-10 we also report Average Precision (AP), although this binarizes the judgments. For comparison to the MS MARCO leaderboard, which often only has one relevant judgment per query, we report the Reciprocal Rank (RR) of the \ufb01rst relevant document on the NIST judgments, and also using the sparse leaderboard judgments. Some of our evaluation is concerned with the quality of the top-k results, where k = 100 for the document task and k = 1000 for the passage task. We want to consider the quality of the top-k set without considering how they are ranked, so we can see whether improving the set-based quality is correlated with an improvement in NDCG@10. Although we could use Recall@k as a metric here, it binarizes the judgments, so we instead use Normalized Cumulative Gain (NCG@k) [Rosset et al., 2018]. NCG is not supported in trec_eval. For trec_eval metrics that are correlated, see Recall@k and NDCG@k. The overall results are presented in Table 4 for document retrieval and Table 5 for passage retrieval. These tables include multiple metrics and run categories, which we now use in our analysis. Neural vs. traditional methods. The \ufb01rst question we investigated as part of the track is which ranking methods work best in the large-data regime. We summarize NDCG@10 results by run type in Figure 1. For document retrieval runs (Figure 1a) the best \u201ctrad\u201d run is outperformed by \u201cnn\u201d and \u201cnnlm\u201d runs by several percentage points, with \u201cnnlm\u201d also having an advantage over \u201cnn\u201d. 
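Since NCG@k is not supported in trec_eval, a minimal formulation is worth writing down before continuing. The sketch below normalizes the cumulative gain of the top k results by the ideal cumulative gain over the judged documents for that query; the exact gain mapping used by the track may differ, so treat this as illustrative only.

```python
def ncg_at_k(ranked_doc_ids, graded_qrels, k):
    """Normalized Cumulative Gain at depth k (no rank discounting).

    ranked_doc_ids: ranked doc ids returned for one query
    graded_qrels:   {doc_id: graded relevance label} for that query
    """
    gain = sum(graded_qrels.get(d, 0) for d in ranked_doc_ids[:k])
    ideal = sum(sorted(graded_qrels.values(), reverse=True)[:k])
    return gain / ideal if ideal > 0 else 0.0
```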
We saw a similar pattern in our 2019 results. This year we encouraged submission of a variety of \u201ctrad\u201d runs from different participating groups, to give \u201ctrad\u201d more chances to outperform other run types. The best performing run of each category is indicated, with the best \u201cnnlm\u201d and \u201cnn\u201d models outperforming the best \u201ctrad\u201d model by 23% and 11% respectively. For passage retrieval runs (Figure 1b) the gap between the best \u201cnnlm\u201d and \u201cnn\u201d runs and the best \u201ctrad\u201d run is larger, at 42% and 17% respectively. One explanation for this could be that vocabulary mismatch between queries and relevant results is greater in short text, so neural methods that can overcome such mismatch have a relatively greater advantage in passage retrieval. Another explanation could be that there is already a public leaderboard, albeit without test labels from NIST, for the passage task. (We did not launch the document ranking leaderboard until after our 2020 TREC submission deadline.) In passage ranking, some TREC participants may have submitted neural models multiple times to the public leaderboard, so are relatively more experienced working with the passage dataset than the document dataset. In query-level win-loss analysis for the document retrieval task (Figure 2) the best \u201cnnlm\u201d model outperforms the best \u201ctrad\u201d run on 38 out of the 45 test queries (i.e., 84%). Passage retrieval shows a similar pattern in Figure 3. Similar to last year\u2019s data, neither task has a large class of queries where the \u201cnnlm\u201d model performs worse. End-to-end retrieval vs. reranking. Our datasets include top-k candidate result lists, with 100 candidates per query for document retrieval and 1000 candidates per query for passage retrieval. Runs that simply rerank the provided candidates are \u201crerank\u201d runs, whereas runs that perform end-to-end retrieval against the corpus, with millions of potential results, are \u201cfullrank\u201d runs. We would expect that a \u201cfullrank\u201d run should be able to \ufb01nd a greater number of relevant candidates than we provided, achieving higher NCG@k. A multi-stage \u201cfullrank\u201d run should also be able to optimize the stages jointly, such that early stages produce candidates that later stages are good at handling. According to Figure 4, \u201cfullrank\u201d did not achieve much better NDCG@10 performance than \u201crerank\u201d runs. In fact, for the passage retrieval task, the top two runs are of type \u201crerank\u201d. While it was possible for \u201cfullrank\u201d to achieve better NCG@k, it was also possible to make NCG@k worse, and achieving signi\ufb01cantly higher NCG@k does not seem necessary to achieve good NDCG@10. Speci\ufb01cally, for the document retrieval task, the best \u201cfullrank\u201d run achieves 5% higher NDCG@10 over the best \u201crerank\u2019 run; whereas for the passage retrieval task, the best \u201cfullrank\u201d run performs slightly worse (0.3% lower NDCG@10) compared to the best \u201crerank\u2019 run. Similar to our observations from Deep Learning Track 2019, we are not yet seeing a strong advantage of \u201cfullrank\u201d over \u201crerank\u201d. 
However, we hope that as the body of literature on neural methods for phase 1 retrieval (e.g., [Boytsov et al., 2016, Zamani et al., 2018, Mitra et al., 2019, Nogueira et al., 2019]) grows, we would see a larger number of runs with deep learning as an ingredient for phase 1 in future editions of this TREC track. Effect of ORCAS data Based on the descriptions provided, ORCAS data seems to have been used by six of the runs (ndrm3-orc-full, ndrm3-orc-re, uogTrBaseL17, uogTrBaseQL17o, uogTr31oR, relemb_mlm_0_2). Most runs seem to be make use of the ORCAS data as a \ufb01eld, with some runs using the data as an additional training dataset as well. 5 \fTable 4: Document retrieval runs. RR (MS) is based on MS MARCO labels. All other metrics are based on NIST labels. Rows are sorted by NDCG@10. run group subtask neural RR (MS) RR NDCG@10 NCG@100 AP d_d2q_duo h2oloo fullrank nnlm 0.4451 0.9476 0.6934 0.7718 0.5422 d_d2q_rm3_duo h2oloo fullrank nnlm 0.4541 0.9476 0.6900 0.7769 0.5427 d_rm3_duo h2oloo fullrank nnlm 0.4547 0.9476 0.6794 0.7498 0.5270 ICIP_run1 ICIP rerank nnlm 0.3898 0.9630 0.6623 0.6283 0.4333 ICIP_run3 ICIP rerank nnlm 0.4479 0.9667 0.6528 0.6283 0.4360 fr_doc_roberta BITEM fullrank nnlm 0.3943 0.9365 0.6404 0.6806 0.4423 ICIP_run2 ICIP rerank nnlm 0.4081 0.9407 0.6322 0.6283 0.4206 roberta-large BITEM rerank nnlm 0.3782 0.9185 0.6295 0.6283 0.4199 bcai_bertb_docv bcai fullrank nnlm 0.4102 0.9259 0.6278 0.6604 0.4308 ndrm3-orc-full MSAI fullrank nn 0.4369 0.9444 0.6249 0.6764 0.4280 ndrm3-orc-re MSAI rerank nn 0.4451 0.9241 0.6217 0.6283 0.4194 ndrm3-full MSAI fullrank nn 0.4213 0.9333 0.6162 0.6626 0.4069 ndrm3-re MSAI rerank nn 0.4258 0.9333 0.6162 0.6283 0.4122 ndrm1-re MSAI rerank nn 0.4427 0.9333 0.6161 0.6283 0.4150 mpii_run2 mpii rerank nnlm 0.3228 0.8833 0.6135 0.6283 0.4205 bigIR-DTH-T5-R QU rerank nnlm 0.3235 0.9119 0.6031 0.6283 0.3936 mpii_run1 mpii rerank nnlm 0.3503 0.9000 0.6017 0.6283 0.4030 ndrm1-full MSAI fullrank nn 0.4350 0.9333 0.5991 0.6280 0.3858 uob_runid3 UoB rerank nnlm 0.3294 0.9259 0.5949 0.6283 0.3948 bigIR-DTH-T5-F QU fullrank nnlm 0.3184 0.8916 0.5907 0.6669 0.4259 d_d2q_bm25 anserini fullrank nnlm 0.3338 0.9369 0.5885 0.6752 0.4230 TUW-TKL-2k TU_Vienna rerank nn 0.3683 0.9296 0.5852 0.6283 0.3810 bigIR-DH-T5-R QU rerank nnlm 0.2877 0.8889 0.5846 0.6283 0.3842 uob_runid2 UoB rerank nnlm 0.3534 0.9100 0.5830 0.6283 0.3976 uogTrQCBMP UoGTr fullrank nnlm 0.3521 0.8722 0.5791 0.6034 0.3752 uob_runid1 UoB rerank nnlm 0.3124 0.8852 0.5781 0.6283 0.3786 TUW-TKL-4k TU_Vienna rerank nn 0.4097 0.9185 0.5749 0.6283 0.3749 bigIR-DH-T5-F QU fullrank nnlm 0.2704 0.8902 0.5734 0.6669 0.4177 bl_bcai_mult\ufb02d bl_bcai fullrank trad 0.2622 0.9195 0.5629 0.6299 0.3829 indri-sdmf RMIT fullrank trad 0.3431 0.8796 0.5597 0.6908 0.3974 bcai_classic bcai fullrank trad 0.3082 0.8648 0.5557 0.6420 0.3906 longformer_1 USI rerank nnlm 0.3614 0.8889 0.5520 0.6283 0.3503 uogTr31oR UoGTr fullrank nnlm 0.3257 0.8926 0.5476 0.5496 0.3468 rterrier-expC2 bl_rmit fullrank trad 0.3122 0.8259 0.5475 0.6442 0.3805 bigIR-DT-T5-R QU rerank nnlm 0.2293 0.9407 0.5455 0.6283 0.3373 uogTrT20 UoGTr fullrank nnlm 0.3787 0.8711 0.5453 0.5354 0.3692 RMIT_DFRee RMIT fullrank trad 0.2984 0.8756 0.5431 0.6979 0.4087 rmit_indri-fdm bl_rmit fullrank trad 0.2779 0.8481 0.5416 0.6812 0.3859 d_d2q_bm25rm3 anserini fullrank nnlm 0.2314 0.8147 0.5407 0.6831 0.4228 rindri-bm25 bl_rmit fullrank trad 0.3302 0.8572 0.5394 0.6503 0.3773 bigIR-DT-T5-F QU fullrank nnlm 0.2349 0.9060 0.5390 0.6669 0.3619 
bl_bcai_model1 bl_bcai fullrank trad 0.2901 0.8358 0.5378 0.6390 0.3774 bl_bcai_prox bl_bcai fullrank trad 0.2763 0.8164 0.5364 0.6405 0.3766 terrier-jskls bl_rmit fullrank trad 0.3190 0.8204 0.5342 0.6761 0.4008 rmit_indri-sdm bl_rmit fullrank trad 0.2702 0.8470 0.5328 0.6733 0.3780 rterrier-t\ufb01df bl_rmit fullrank trad 0.2869 0.8241 0.5317 0.6410 0.3734 BIT-run2 BIT.UA fullrank nn 0.2687 0.8611 0.5283 0.6061 0.3466 RMIT_DPH RMIT fullrank trad 0.3117 0.8278 0.5280 0.6531 0.3879 d_bm25 anserini fullrank trad 0.2814 0.8521 0.5271 0.6453 0.3791 d_bm25rm3 anserini fullrank trad 0.2645 0.8541 0.5248 0.6632 0.4006 BIT-run1 BIT.UA fullrank nn 0.3045 0.8389 0.5239 0.6061 0.3466 rterrier-dph bl_rmit fullrank trad 0.3033 0.8267 0.5226 0.6634 0.3884 rterrier-t\ufb01df2 bl_rmit fullrank trad 0.3010 0.8407 0.5219 0.6287 0.3607 uogTrBaseQL17o bl_uogTr fullrank trad 0.4233 0.8276 0.5203 0.6028 0.3529 uogTrBaseL17o bl_uogTr fullrank trad 0.3870 0.7980 0.5120 0.5501 0.3248 rterrier-dph_sd bl_rmit fullrank trad 0.3243 0.8296 0.5110 0.6650 0.3784 BIT-run3 BIT.UA fullrank nn 0.2696 0.8296 0.5063 0.6072 0.3267 uogTrBaseDPHQ bl_uogTr fullrank trad 0.3459 0.8052 0.5052 0.6041 0.3461 uogTrBaseQL16 bl_uogTr fullrank trad 0.3321 0.7930 0.4998 0.6030 0.3436 uogTrBaseL16 bl_uogTr fullrank trad 0.3062 0.8219 0.4964 0.5495 0.3248 uogTrBaseDPH bl_uogTr fullrank trad 0.3179 0.8415 0.4871 0.5490 0.3070 nlm-bm25-prf-2 NLM fullrank trad 0.2732 0.8099 0.4705 0.5218 0.2912 nlm-bm25-prf-1 NLM fullrank trad 0.2390 0.8086 0.4675 0.4958 0.2720 mpii_run3 mpii rerank nnlm 0.1499 0.6388 0.3286 0.6283 0.2587 6 \fTable 5: Passage retrieval runs. RR (MS) is based on MS MARCO labels. All other metrics are based on NIST labels. run group subtask neural RR (MS) RR NDCG@10 NCG@1000 AP pash_r3 PASH rerank nnlm 0.3678 0.9147 0.8031 0.7056 0.5445 pash_r2 PASH rerank nnlm 0.3677 0.9023 0.8011 0.7056 0.5420 pash_f3 PASH fullrank nnlm 0.3506 0.8885 0.8005 0.7255 0.5504 pash_f1 PASH fullrank nnlm 0.3598 0.8699 0.7956 0.7209 0.5455 pash_f2 PASH fullrank nnlm 0.3603 0.8931 0.7941 0.7132 0.5389 p_d2q_bm25_duo h2oloo fullrank nnlm 0.3838 0.8798 0.7837 0.8035 0.5609 p_d2q_rm3_duo h2oloo fullrank nnlm 0.3795 0.8798 0.7821 0.8446 0.5643 p_bm25rm3_duo h2oloo fullrank nnlm 0.3814 0.8759 0.7583 0.7939 0.5355 CoRT-electra HSRM-LAVIS fullrank nnlm 0.4039 0.8703 0.7566 0.8072 0.5399 RMIT-Bart RMIT fullrank nnlm 0.3990 0.8447 0.7536 0.7682 0.5121 pash_r1 PASH rerank nnlm 0.3622 0.8675 0.7463 0.7056 0.4969 NLE_pr3 NLE fullrank nnlm 0.3691 0.8440 0.7458 0.8211 0.5245 pinganNLP2 pinganNLP rerank nnlm 0.3579 0.8602 0.7368 0.7056 0.4881 pinganNLP3 pinganNLP rerank nnlm 0.3653 0.8586 0.7352 0.7056 0.4918 pinganNLP1 pinganNLP rerank nnlm 0.3553 0.8593 0.7343 0.7056 0.4896 NLE_pr2 NLE fullrank nnlm 0.3658 0.8454 0.7341 0.6938 0.5117 NLE_pr1 NLE fullrank nnlm 0.3634 0.8551 0.7325 0.6938 0.5050 1 nvidia_ai_apps rerank nnlm 0.3709 0.8691 0.7271 0.7056 0.4899 bigIR-BERT-R QU rerank nnlm 0.4040 0.8562 0.7201 0.7056 0.4845 fr_pass_roberta BITEM fullrank nnlm 0.3580 0.8769 0.7192 0.7982 0.4990 bigIR-DCT-T5-F QU fullrank nnlm 0.3540 0.8638 0.7173 0.8093 0.5004 rr-pass-roberta BITEM rerank nnlm 0.3701 0.8635 0.7169 0.7056 0.4823 bcai_bertl_pass bcai fullrank nnlm 0.3715 0.8453 0.7151 0.7990 0.4641 bigIR-T5-R QU rerank nnlm 0.3574 0.8668 0.7138 0.7056 0.4784 2 nvidia_ai_apps fullrank nnlm 0.3560 0.8507 0.7113 0.7447 0.4866 bigIR-T5-BERT-F QU fullrank nnlm 0.3916 0.8478 0.7073 0.8393 0.5101 bigIR-T5xp-T5-F QU fullrank nnlm 0.3420 0.8579 0.7034 0.8393 0.5001 nlm-ens-bst-2 NLM 
fullrank nnlm 0.3542 0.8203 0.6934 0.7190 0.4598 nlm-ens-bst-3 NLM fullrank nnlm 0.3195 0.8491 0.6803 0.7594 0.4526 nlm-bert-rr NLM rerank nnlm 0.3699 0.7785 0.6721 0.7056 0.4341 relemb_mlm_0_2 UAmsterdam rerank nnlm 0.2856 0.7677 0.6662 0.7056 0.4350 nlm-prfun-bert NLM fullrank nnlm 0.3445 0.8603 0.6648 0.6927 0.4265 TUW-TK-Sparse TU_Vienna rerank nn 0.3188 0.7970 0.6610 0.7056 0.4164 TUW-TK-2Layer TU_Vienna rerank nn 0.3075 0.7654 0.6539 0.7056 0.4179 p_d2q_bm25 anserini fullrank nnlm 0.2757 0.7326 0.6187 0.8035 0.4074 p_d2q_bm25rm3 anserini fullrank nnlm 0.2848 0.7424 0.6172 0.8391 0.4295 bert_6 UAmsterdam rerank nnlm 0.3240 0.7386 0.6149 0.7056 0.3760 CoRT-bm25 HSRM-LAVIS fullrank nnlm 0.2201 0.8372 0.5992 0.8072 0.3611 CoRT-standalone HSRM-LAVIS fullrank nnlm 0.2412 0.8112 0.5926 0.6002 0.3308 bl_bcai_mdl1_vt bl_bcai fullrank trad 0.1854 0.7037 0.5667 0.7430 0.3380 bcai_class_pass bcai fullrank trad 0.1999 0.7115 0.5600 0.7430 0.3374 bl_bcai_mdl1_vs bl_bcai fullrank trad 0.1563 0.6277 0.5092 0.7430 0.3094 indri-fdm bl_rmit fullrank trad 0.1798 0.6498 0.5003 0.7778 0.2989 terrier-InL2 bl_rmit fullrank trad 0.1864 0.6436 0.4985 0.7649 0.3135 terrier-BM25 bl_rmit fullrank trad 0.1631 0.6186 0.4980 0.7572 0.3021 DLH_d_5_t_25 RMIT fullrank trad 0.1454 0.5094 0.4935 0.8175 0.3199 indri-lmds bl_rmit fullrank trad 0.1250 0.5866 0.4912 0.7741 0.2961 indri-sdm bl_rmit fullrank trad 0.1600 0.6239 0.4822 0.7726 0.2870 p_bm25rm3 anserini fullrank trad 0.1495 0.6360 0.4821 0.7939 0.3019 p_bm25 anserini fullrank trad 0.1786 0.6585 0.4796 0.7428 0.2856 bm25_bert_token UAmsterdam fullrank trad 0.1576 0.6409 0.4686 0.7169 0.2606 terrier-DPH bl_rmit fullrank trad 0.1420 0.5667 0.4671 0.7353 0.2758 TF_IDF_d_2_t_50 RMIT fullrank trad 0.1391 0.5317 0.4580 0.7722 0.2923 small_1k reSearch2vec rerank nnlm 0.0232 0.2785 0.2767 0.7056 0.2112 med_1k reSearch2vec rerank nnlm 0.0222 0.2720 0.2708 0.7056 0.2081 DoRA_Large_1k reSearch2vec rerank nnlm 0.0208 0.2740 0.2661 0.7056 0.2072 DoRA_Small reSearch2vec fullrank nnlm 0.0000 0.1287 0.0484 0.0147 0.0088 DoRA_Med reSearch2vec fullrank nnlm 0.0000 0.1075 0.0431 0.0147 0.0087 DoRA_Large reSearch2vec fullrank nnlm 0.0000 0.1111 0.0414 0.0146 0.0079 7 \f0.0 0.2 0.4 0.6 0.8 1.0 NDCG@10 what is chaff and flare what amino produces carnitine difference between a company's strategy and business model is what is mamey how many sons robert kraft has how much would it cost to install my own wind turbine what is a alm why did the ancient egyptians call their land kemet, or black land? meaning of shebang what is reba mcentire's net worth dog day afternoon meaning who is rep scalise? who was the highest career passer rating in the nfl how often to button quail lay eggs who is thomas m cooley definition of laudable how old is vanessa redgrave can fever cause miscarriage early pregnancy difference between a hotel and motel what is a nonconformity? earth science define: geon do google docs auto save what type of conflict does della face in o, henry the gift of the magi who killed nicholas ii of russia what does a psychological screening consist of for egg donors why does lacquered brass tarnish who said no one can make you feel inferior when did rock n roll begin? who is aziz hashim why is pete rose banned from hall of fame what metal are hip replacements made of who sings monk theme song what temperature and humidity to dry sausage does mississippi have an income tax what is a statutory deed why do hunters pattern their shotguns? 
Figure 2: Comparison of the best "nnlm" and "trad" runs on individual test queries for the document retrieval task (x-axis: NDCG@10; the y-axis lists the individual test queries, omitted here). Queries are sorted by difference in mean performance between "nnlm" and "trad" runs. Queries on which "nnlm" wins with large margin are at the top. Figure 3: Comparison of the best "nnlm" and "trad" runs on individual test queries for the passage retrieval task (x-axis: NDCG@10; the y-axis lists the individual test queries, omitted here). Queries are sorted by difference in mean performance between "nnlm" and "trad" runs. Queries on which "nnlm" wins with large margin are at the top. Figure 4: Analyzing the impact of "fullrank" vs. "rerank" settings on retrieval performance (panels: (a) NDCG@10 for runs on the document retrieval task, (b) NDCG@10 for runs on the passage retrieval task, (c) NCG@100 for runs on the document retrieval task, (d) NCG@1000 for runs on the passage retrieval task).
Figure (a) and (b) show the performance of different runs on the document and passage retrieval tasks, respectively. Figure (c) and (d) plot the NCG@100 and NCG@1000 metrics for the same runs for the two tasks, respectively. The runs are ordered by their NDCG@10 performance along the x-axis in all four plots. We observe, that the best run under the \u201cfullrank\u201d setting outperforms the same under the \u201crerank\u201d setting for both document and passage retrieval tasks\u2014although the gaps are relatively smaller compared to those in Figure 1. If we compare Figure (a) with (c) and Figure (b) with (d), we do not observe any evidence that the NCG metric is a good predictor of NDCG@10 performance. Most runs used the ORCAS data for the document retrieval task, with relemb_mlm_0_2 being the only run using the ORCAS data for the passage retrieval task. This year it was not necessary to use ORCAS data to achieve the highest NDCG@10. However, when we compare the performance of the runs that use the ORCAS dataset with those that do not use the dataset within the same group, we observe that usage of the ORCAS dataset always led to an improved performance in terms of NDCG@10, with maximum increase being around 0.0513 in terms of NDCG@10. This suggests that the ORCAS dataset is providing additional information that is not available in the training data. This could also imply that even though the training dataset provided as part of the track is very large, deep models are still in need of more training data. NIST labels vs. Sparse MS MARCO labels. Our baseline human labels from MS MARCO often have one known positive result per query. We use these labels for training, but they are also available for test queries. Although our of\ufb01cial evaluation uses NDCG@10 with NIST labels, we now compare this with reciprocal rank (RR) using MS MARCO labels. Our goal is to understand how changing the labeling scheme and metric affects the overall results of the track, but if there is any disagreement we believe the NDCG results are more valid, since they evaluate the ranking more comprehensively and a ranker that can only perform well on labels with exactly the same distribution as the training set is not robust enough for use in real-world applications, where real users will have opinions that are not necessarily identical to the preferences encoded in sparse training labels. Figure 5 shows the agreement between the results using MS MARCO and NIST labels for the document retrieval and passage retrieval tasks. While the agreement between the evaluation setup based on MS MARCO and TREC seems 10 \fTable 6: Leaderboard metrics breakdown. The Kendall agreement (\u03c4) of NDCG@10 and RR (MS) varies across task and run type. Agreement on the best neural network runs is high, but agreement on the best document trad runs is very low. We do not list the agreement for passage nn runs since there are only two runs. run type docs passages nnlm 0.83 0.76 nn 0.96 \u2014 trad 0.03 0.67 all 0.46 0.69 0.20 0.25 0.30 0.35 0.40 0.45 0.50 RR (MS) 0.45 0.50 0.55 0.60 0.65 0.70 NDCG@10 docs nnlm nn trad 0.10 0.15 0.20 0.25 0.30 0.35 0.40 0.45 RR (MS) 0.45 0.50 0.55 0.60 0.65 0.70 0.75 0.80 0.85 NDCG@10 passages nnlm nn trad Figure 5: Leaderboard metrics agreement analysis. For document runs, the agreement between the leaderboard metric RR (MS) and the main TREC metric NDCG@10 is lower this year. The Kendall correlation is \u03c4 = 0.46, compared to \u03c4 = 0.69 in 2019. 
For the passage task, we see \u03c4 = 0.69 in 2020, compared to \u03c4 = 0.68 in 2019. reasonable for both tasks, agreements for the document ranking task seems to be lower (Kendall correlation of 0.46) than agreements for the passage task (Kendall correlation of 0.69). This value is also lower than the correlation we observed for the document retrieval task for last year. In Table 6 we show how the agreement between the two evaluation setups varies across task and run type. Agreement on which are the best neural network runs is high, but correlation for document trad runs is close to zero. One explanation for this low correlation could be use of the ORCAS dataset. ORCAS was mainly used in the document retrieval task, and could bring search results more in line with Bing\u2019s results, since Bing\u2019s results are what may be clicked. Since MS MARCO sparse labels were also generated based on top results from Bing, we would expect to see some correlation between ORCAS runs and MS MARCO labels (and Bing results). By contrast, NIST judges had no information about what results were retrieved or clicked in Bing, so may have somewhat less correlation with Bing\u2019s results and users. In Figure 6 we compare the results from the two evaluation setups when the runs are split based on the usage of the ORCAS dataset. Our results suggest that runs that use the ORCAS dataset did perform somewhat better based on the MS MARCO evaluation setup. While the similarities between the ORCAS dataset and the MS MARCO labels seem to be one reason for the mismatch between the two evaluation results, it is not enough to fully explain the 0.03 correlation in Table6. Removing the ORCAS \u201ctrad\u201d runs only increases the correlation to 0.13. In the future we plan to further analyze the possible reasons for this poor correlation, which could also be related to 1) the different metrics used in the two evaluation setups (RR vs. NDCG@10), 2) the different sensitivity of the datasets due to the different number of queries and number of documents labelled per query), or 3) difference in relevance labels provided by NIST assessors vs. labels derived from clicks. 11 \f0.20 0.25 0.30 0.35 0.40 0.45 0.50 RR (MS) 0.45 0.50 0.55 0.60 0.65 0.70 NDCG@10 orcas no yes Figure 6: This year it was not necessary to use ORCAS data to achieve the highest NDCG@10. ORCAS runs did somewhat better on the leaderboard metric RR (MS), which uses different labels from the other metrics. This may indicate an alignment between the Bing user clicks in ORCAS with the labeled MS MARCO results, which were also generated by Bing. 5" + }, + { + "url": "http://arxiv.org/abs/2006.05324v2", + "title": "ORCAS: 18 Million Clicked Query-Document Pairs for Analyzing Search", + "abstract": "Users of Web search engines reveal their information needs through queries\nand clicks, making click logs a useful asset for information retrieval.\nHowever, click logs have not been publicly released for academic use, because\nthey can be too revealing of personally or commercially sensitive information.\nThis paper describes a click data release related to the TREC Deep Learning\nTrack document corpus. After aggregation and filtering, including a k-anonymity\nrequirement, we find 1.4 million of the TREC DL URLs have 18 million\nconnections to 10 million distinct queries. Our dataset of these queries and\nconnections to TREC documents is of similar size to proprietary datasets used\nin previous papers on query mining and ranking. 
We perform some preliminary\nexperiments using the click data to augment the TREC DL training data, offering\nby comparison: 28x more queries, with 49x more connections to 4.4x more URLs in\nthe corpus. We present a description of the dataset's generation process,\ncharacteristics, use in ranking and suggest other potential uses.", + "authors": "Nick Craswell, Daniel Campos, Bhaskar Mitra, Emine Yilmaz, Bodo Billerbeck", + "published": "2020-06-09", + "updated": "2020-08-18", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.LG" + ], + "main_content": "INTRODUCTION When people search the Web, they reveal their information needs through their queries and clicks. Such click logs become an important asset. For example, given the ambiguous query panda, click logs can tell us the relative popularity of different interpretations, such as animals, movies, songs, restaurants and technology products. We can ask: Which documents are clicked for the query panda? Which topics are most popular in panda-related queries? What topics are most popular in panda-related documents? Popularity can also vary by context, for example Australia\u2019s PANDA Helpline is more commonly clicked by Australian users, but less so by US users. Not many click datasets are publicly available, for two reasons. One is privacy. Users enter sensitive and private information into a search engine, so a query can reveal information that should not be shared. By analyzing a stream of queries from the same user, we may see them search for their own name, address, or other details. Using these, and other hints, we can discover a lot about a person [3]. Serious problems are raised by the release of queries linked to user IDs, or even session IDs. Even without such links, it is dangerous to release a tail query that was only typed by one user. The other barrier to sharing click data is the commercial value of the data. In a particular country or language, the search engine with the most search traffic has an advantage over smaller competitors, because it has more information about user needs. Sharing that data at scale would help competitors, including new entrants in the market, and potentially help search engine optimizers. It could also reveal information about the workings of the engine, for example if results lists were provided including rank positions of the clicked and unclicked results. Given these barriers, one option is to provide anonymized click data, such as the Yandex search personalization challenge [22]. The 35 million search sessions in the dataset (a Kaggle competition with 194 teams) have anonymized user IDs, queries, query terms, URLs, URL domains and clicks. Providing these as numeric IDs rather than strings greatly reduces concerns about privacy and commercial value. It can\u2019t be used to build a competing search engine because it has URL IDs rather than URLs. It can\u2019t be used to identify a user who entered their name as a query, because we just have a query ID and some term IDs. arXiv:2006.05324v2 [cs.IR] 18 Aug 2020 \fWoodstock \u201918, June 03\u201305, 2018, Woodstock, NY Nick Craswell, Daniel Campos, Bhaskar Mitra, Emine Yilmaz, and Bodo Billerbeck The main disadvantage of anonymization via IDs is that we can not add new relevance judgments, or apply deep models that bring to bear text embeddings. It is also not possible to for instance discuss the meaning of the term panda because we do not know which termID maps to that term. 
This paper describes a new dataset, the Open Resource for Click Analysis in Search (ORCAS). Rather than providing user-level or session-level information, which can be used for personal identification, we focus on query-document connections. We use queries that have been repeated across many users. ORCAS data has clicks for TREC documents only, and only for English-speaking users in the United States. Combined with the filtering and aggregation, the dataset is intended to provide a useful resource for ranking TREC documents and also Web search mining more generally, without releasing personally identifying or commercially valuable information. 2 RELATED WORK There have been several previous attempts at releasing click datasets. The AOL data [21] and MSN data [28] were two of the initial attempts towards releasing click data, which came with question marks over their use. The AOL dataset contained \u223c20 million queries together with the domains of associated clicked URLs from \u223c659, 000 users, collected over a 3-month period from March to May 2006. It was found to allow personal identification of some individual people [3], which was a major setback for the release of any future similar datasets. As a result of this, the dataset was retracted and is no longer available. The MSN data [28] contained \u223c15 million queries that were collected from the users of the MSN search engine from the United States. The dataset was released in 2006 and again in 2009, but the agreement required a limited-time use, usage has now timed out, and it has not been released again since 2009. Some years later, as part of the Yandex search personalization challenge, Yandex released a dataset with \u223c35 million search sessions [22]. The dataset was created using search activity of users over 27 days as the training data, with a separate test data containing user activity over 3 days. As mentioned in the previous section, this dataset contains URL IDs as opposed to the actual URL itself, which limits the type of research that can be conducted with the dataset. The Sogou dataset 1, released by the major Chinese search engine company Sogou, contains \u223c25.1 million queries and \u223c43.5 million user interactions in the form of submitted queries and clicked Web page search results. Almost all queries in this dataset contain solely characters from the simplified Chinese alphabet \u2013 which has around 3,500 commonly used characters [26]. Since the aforementioned datasets have problems related to availability and use cases, a significant amount of research has been conducted on proprietary datasets instead. Proprietary datasets containing queries with associated clicked URLs have widely been used for ranking purposes [9, 13, 27], as well as for identifying related queries [1, 5, 15, 16, 25]. However, such proprietary datasets 1http://www.sogou.com/labs/ are not made available to the wider research community, which has been a major limitation for research. The TREC Deep Learning (DL) Track [8] attempted to address the need for large amounts of training data by releasing large scale training datasets that are based on human relevance assessments, derived from MS MARCO [2]. The track focuses on the document retrieval and passage retrieval tasks. In the document retrieval task, 367,013 queries together with associated relevance labels were made available, corresponding to a corpus of 3.2 million documents. 
The training datasets released as part of the track are sparse, with no negative labels and often only one positive label per query, analogous to some real-world training data such as click logs. While the Deep Learning Track is a step forward towards making a large scale dataset that can be used for training information retrieval systems publicly available, the datasets released by the track are still based on human relevance assessments as opposed to real usage. The ORCAS dataset released as part of this paper can be seen as complementary to the data released as part of the Deep Learning Track since it is based on click logs. Table 1 shows a comparison of different proprietary datasets, datasets released as part of the TREC DL Track, and the ORCAS dataset in terms of size (number of queries, URLs and query-URL pairs) and availability of the datasets, as well as the application areas for which the datasets have primarily been created for. 3 DATASET COLLECTION The ORCAS dataset was created through log mining, aggregation and filtering. Search engine logs can include information about the user\u2019s query, the ranking of URLs, the URLs that were clicked, as well as post-click behavior such as dwell time [10]. We focus on click events, since in aggregate these are a good positive signal for relevance. Before aggregation we apply some minimal filtering, to eliminate clicks that had very negative post-click signals, such as a short dwell, since this can indicate that the user was disappointed with the result. We aggregate click events at the query-URL level, without considering context such as other URLs or their rank order. For the purpose of collating this dataset, we filter for users that are located in the United States and speak English. But we do not collect any other data about the users such as their search session or any other short term or long term personalizing information [6]. Such things could potentially be studied using the anonymized Yandex personalization challenge data [22], but not using our aggregation to the query-URL level. Furthermore, rather than counting how popular query-URL pairs are during aggregation, we simply note which query-URL pairs are present, to avoid revealing too much information about event popularity in our logs. We will describe in Section 4 how it is possible to recover some information about popularity, even though we have removed the per-pair popularity information. The full set of query-URL data, aggregated based on a subsample of Bing\u2019s 26-month logs to January 2020, would still be too commercially valuable, potentially covering billions of queries. It could also potentially reveal information known only to one person. We apply several strict filters. First, keep only query-URL pairs where \fORCAS: 18 Million Clicked Query-Document Pairs for Analyzing Search Woodstock \u201918, June 03\u201305, 2018, Woodstock, NY Table 1: Size comparison of query-URL pair datasets used in Web mining and ranking studies. We leave a blank if a size was not provided, and use the filtered size in cases where filtering was applied before use. Our new dataset is larger than several of the previous datasets, suggesting those past results could be replicated or extended using ORCAS. Paper Queries URLs Q-U pairs Availability Primary focus of paper Beeferman and Berger [4] 244K 362K 1.9M proprietary Related Q Wen et al. [25] 1M/week Encarta 500K/week proprietary Related Q Xue et al. 
[27] 862K 507K proprietary Ranking Craswell and Szummer [9] 202K 505K 1.1M proprietary Ranking Baeza-Yates and Tiberi [1] 7.5M 973K proprietary Related Q Mei et al. [15] 637M 585M proprietary Related Q Huang et al. [13] 100M proprietary Ranking ORCAS 10.4M 1.4M 18.8M open Ranking TREC DL [8] 367K 320K 384K open Ranking Table 2: Summary of ORCAS data. Each record in the main file (orcas.tsv) indicates a click between a query (Q) and a URL (U), also listing a query ID (QID) and the corresponding TREC document ID (DID). For use in ranker training, we also provide files in TREC format for qrels, queries and runs. The run file is the top-100 using Indri query likelihood, for use as negative samples during training. Filename Size Records Data in each record orcas.tsv 1.76GB 18.8M QID Q DID U orcas-doctrain-qrels.tsv 410MB 18.8M QID DID orcas-doctrain-queries.tsv 322MB 10.4M QID Q orcas-doctrain-top100 52.8GB 983M QID DID score the URL is present in the 3.2 million document TREC DL corpus. We needed to choose some small subset of all clicked documents, so aligning with a TREC seems like a suitable choice. Second, we apply a k-anonymity filter, keeping only queries that were typed by k different users, for a high value of k. This makes it impossible for our dataset to contain a query with information that is only known to fewer than k users. Finally, we applied filters to remove potentially offensive queries, for example queries related to pornography or hate speech. Two of our design decisions relating to ranking and position bias can now be described in more detail in relation to our k-anonymity requirement. Firstly, although ORCAS includes clicked URLs, it does not provide the rankings of clicked and unclicked URLs that users saw. If released without an anonymity requirement, such ranking information could be too commercially valuable, revealing information about query popularity, ranking variation and user preferences. With a high enough k threshold, as used here, the data size would be reduced by orders of magnitude. The requirement is that the same ranking was seen by k users who all selected the same URL. Because rankings vary from user to user and also vary over time, very few cases reach the k threshold. Secondly, although it would be possible to correct for position bias before aggregation, ORCAS does not do this. Correcting for position bias would not remove any URLs from the dataset, because results with k or more clicks tend to be relevant, independently of position. Instead, position bias correction could allow us to include lower-ranked documents in ORCAS, which had fewer clicks due to position but were equally relevant. However, for privacy reasons we can not include results clicked by fewer than k users. Therefore we focus on clicks in aggregate, uncorrected by position bias. This approach, of using uncorrected clicks as a positive signal, has proven useful in a variety of studies such as those in Table 1. Our main dataset becomes 18.8 million records with four columns per record: QID: 10103699 Q: why is the sky blue DID: D1968574 U: http://www.sciencemadesimple.com/sky_blue.html The document ID (DID) is the same as the one used in the TREC corpus. To avoid revealing the unreleased query IDs in our held out test set, we assigned a disjoint set of query IDs (QIDs) to the ORCAS queries. This means the same query string can occur in TREC DL and ORCAS data, with different QIDs. 
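As a concrete illustration of this record format, the sketch below reads a local copy of orcas.tsv into a query-to-documents mapping. It is not part of the ORCAS release; the file path and the column order (QID, query, DID, URL) follow the description above and should be treated as assumptions to verify against the actual download.

```python
# Minimal sketch: load ORCAS query-URL records (QID, query, DID, URL) from a local TSV.
# "orcas.tsv" is assumed to be a local copy of the released file; columns follow Table 2.
import csv
from collections import defaultdict

clicked_docs = defaultdict(set)   # query string -> set of TREC document IDs
queries = {}                      # QID -> query string

with open("orcas.tsv", encoding="utf-8", newline="") as f:
    for qid, query, did, url in csv.reader(f, delimiter="\t"):
        queries[qid] = query
        clicked_docs[query].add(did)

print(len(queries), "distinct queries loaded")
print(sorted(clicked_docs.get("why is the sky blue", set()))[:5])
```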
We also provide the same data in TREC format, as qrels, queries and Indri rankings for use in negative sampling (see Section 5). These comprise the full ORCAS data release (https://microsoft.github.io/TREC-2020-Deep-Learning/ORCAS), as described in Table 2. Dataset examples Figure 1 shows a sample of the ORCAS data, for some documents related to various meanings of the term panda. Figure 1: Example click data relating to the string \u2018panda\u2019. Query nodes are on the left and URL nodes are on the right. A query-URL connection means an edge exists in ORCAS data. The topics of the URLs are: food Panda Express, py Pandas Python library, med Pandas syndrome, giant Giant pandas, and red Red pandas. Some popular URLs are about Panda Express restaurants, the Python library Pandas, Pandas Syndrome, Giant Pandas or Red Pandas. Since URLs are too large to fit in the figure, we label each URL node with the type of panda it covers: food, py, med, giant and red. For example, one of the nodes labeled \u2018giant\u2019 is the Wikipedia page https://en.wikipedia.org/wiki/Giant_panda. The queries were selected for being high-degree nodes, both in the global corpus and for this set of documents. A query such as pandas syndrome is only connected to documents of one topic, whereas a query like panda is ambiguous, connected to documents of multiple topics. Absence of an edge can indicate that the document is less relevant; for example, the Python Pandas node is not adjacent to panda, because it is less likely that a user who typed that query wants Python Pandas. Edge absence can also indicate that the document or query is less popular, or reflect specific patterns of retrieval in the underlying search engine. This may explain why the Red Panda URL is not connected to panda or pandas, despite being on-topic. The figure covers several topics, but in the ORCAS dataset it is possible to identify several more panda topics, such as Kung Fu Panda movies, the debut single of rapper Desiigner and Panda Antivirus. Outside the TREC documents and US click data used here there are even more meanings of panda, such as the Australian PANDA Helpline or the PandA gateway at Kyoto University. Figure 2: Degree distribution of query nodes and URL nodes in the bipartite query-URL graph (log-log axes: query degree and URL degree vs. count). To illustrate the mining of related queries, in the following paragraph we give an example based on the seed query orcas (QID=2126294). Related queries can be found by taking a two-step walk. The first step reaches eight URLs. The second step reaches 198 queries. We can rank the queries according to the volume of paths, assuming the queries that are reachable by more distinct paths are more related to the seed query. The most similar queries (with 4\u20135 paths) are about whales: orca whale, orca facts, orca, orca whales and killer whale facts. Somewhat less related (with 2 paths) are queries about Orcas Island: orcas islands, orcas island washington, orcas island wa and orcas island.
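The two-step walk described above is easy to prototype once the click graph is in memory. The sketch below is an illustrative implementation under the assumption that the graph is available as query-to-URL and URL-to-query mappings (hypothetical names); it is not the exact procedure used to produce the orcas example, just the same idea of ranking candidate queries by the number of distinct query-URL-query paths.

```python
# Minimal sketch: related-query mining via a two-step walk on the bipartite click graph.
# query_to_urls and url_to_queries are assumed dicts built from the ORCAS records.
from collections import Counter

def related_queries(seed, query_to_urls, url_to_queries, top_n=10):
    path_counts = Counter()
    for url in query_to_urls.get(seed, set()):        # step 1: seed query -> clicked URLs
        for other in url_to_queries.get(url, set()):  # step 2: those URLs -> other queries
            if other != seed:
                path_counts[other] += 1               # one distinct path per (URL, query) pair
    return path_counts.most_common(top_n)             # more paths = assumed more related

# Toy example graph:
q2u = {"orcas": {"u1", "u2"}, "orca whale": {"u1"}, "orcas island": {"u2"}}
u2q = {"u1": {"orcas", "orca whale"}, "u2": {"orcas", "orcas island"}}
print(related_queries("orcas", q2u, u2q))
```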
4 DATASET ANALYSIS This section describes characteristics of the dataset and compares it to other datasets in the literature. Query-URL datasets have been used for improving document ranking, finding related queries, and other log mining applications, as was indicated in Table 1. For reasons of commercial sensitivity, ORCAS does not contain \u201cedge weights\u201d, which could indicate the relative popularity of URLs for a query, or vice versa. However, it is still possible to recover some popularity information from ORCAS. Specifically, queries that are more popular tend to accumulate connections to a greater number of URLs, and URLs that are commonly clicked tend to accumulate connections to a greater number of queries. Popularity is indicated by higher degree in the bipartite query-URL graph. \fORCAS: 18 Million Clicked Query-Document Pairs for Analyzing Search Woodstock \u201918, June 03\u201305, 2018, Woodstock, NY 1 2 3 4 5 6 7 8 9 101112 Query length in words 0 10 20 30 % Queries Dataset ORCAS TREC DL 10 1 10 3 10 5 Query frequency of term 10 1 10 3 10 5 Count Figure 3: ORCAS queries are shorter than TREC DL queries. ORCAS queries contain a variety of query terms, from the most common term how is in 387K queries, and 613K different terms that each occur in one query (for example egyptlive). Table 3: The most common first word in TREC DL queries is what, in 39.4% of the queries (145K queries). The most common first word in ORCAS queries is how, in 3.7% of queries (387K queries). TREC DL queries were selected in a way that favored natural language questions, so have less variety of first words, and more focus on question words. ORCAS queries were selected based on the TREC DL documents, so still have question words, but also have words in the top-10 that are more rare in TREC DL such as www, the, best and free. TREC DL ORCAS Word Count % [Rank] Count % [Rank] what 144575 39.4% [1] 362169 3.5% [2] how 50113 13.7% [2] 386611 3.7% [1] where 16091 4.4% [3] 46331 0.4% [6] who 12957 3.5% [4] 38522 0.4% [9] is 10742 2.9% [5] 41512 0.4% [8] when 10002 2.7% [6] 37885 0.4% [10] which 6138 1.7% [7] 7347 0.1% [89] can 5736 1.6% [8] 20354 0.2% [21] average 5389 1.5% [9] 9906 0.1% [52] define 4620 1.3% [10] 27469 0.3% [14] www 20 0.0% [379] 138528 1.3% [3] the 2289 0.6% [15] 95685 0.9% [4] best 203 0.1% [58] 86119 0.8% [5] free 16 0.0% [487] 41514 0.4% [7] Figure 2 shows the degree distributions of nodes in the ORCAS bipartite graph. Skewed degree distributions, which are better viewed on a log-log scale such that a few nodes are very highly connected and many nodes have only one connection, are common for datasets of this sort. Very popular queries (such as weather) have hundreds of URLs, while very popular URLs (such as www.outlook.com) have thousands of queries. It may be useful to leverage this popularity information in log mining and ranking studies, despite the absence of edge weights. For example, studies of query autocomplete [23] have sometimes used the AOL and MSN logs mentioned in Section 2. Such studies need not only queries but also per-query popularity information. It would be possible to use the popularity information from Figure 2 to make the ORCAS queries usable in the same way. ORCAS queries were selected based on connection to the TREC corpus, click aggregation, anonymity filtering and other filtering criteria. They were not otherwise selected to be natural language question answering queries. 
By contrast, the TREC DL queries were selected in the creation of the MS MARCO question answering task [2]. This would lead us to expect that ORCAS queries and TREC DL queries have somewhat different characteristics. Two such characteristics are query length and vocabulary. Figure 3 shows that the ORCAS queries tend to have fewer words than TREC DL queries. Table 3 compares the first word of queries. TREC DL queries are much more likely to have the word what at the start of a query (39.4% vs 3.5%), although the ORCAS data has more what queries in total (362K vs 145K). Both query sets have a healthy distribution of common and rare terms overall; we illustrate this for the ORCAS data (also in Figure 3), but TREC DL has a similarly healthy vocabulary distribution. The differences in query length and the likelihood of having question words at the start mainly reflect the selection criteria of TREC DL queries, which are a subset of query traffic suitable for the MS MARCO question answering dataset, favoring natural language questions. 5 RETRIEVAL EXPERIMENTS To study the effectiveness of the ORCAS data for training deep neural models, we conduct preliminary retrieval experiments on the document ranking task in the TREC 2019 Deep Learning Track [8]. We describe these experiments and corresponding results here. Data. The TREC deep learning benchmark for document reranking provides a large training dataset containing 367,013 queries and 384,597 positively labeled query-document pairs from the MS MARCO dataset [2]. For every query, a set of 100 documents is retrieved using Indri [24] and provided as part of the dataset. Feature-based [14] and representation learning-based [17] learning to rank models are typically trained on this data by employing optimization objectives that contrast relevant and non-relevant documents for a given query. Non-relevant documents can be sampled either from the collection distribution or from a distribution that is more biased towards documents with at least partial matches with the query\u2014e.g., the Indri top 100 retrieved results. Previous work [18] has indicated that negative documents related to the query are more helpful for learning than negative documents sampled at random from the collection. Following a similar design to the MS MARCO training dataset, we generate a complementary dataset containing 10,405,342 queries and 18,823,602 positively labeled query-document pairs. This is approximately 28 times bigger than the MS MARCO training dataset in terms of the number of queries and approximately 49 times bigger in terms of the number of positive labels. To be consistent with the MS MARCO training data, we also provide the Indri top 100 retrieved results for each query. Model. We adopt a public implementation (https://github.com/bmitra-msft/TREC-Deep-Learning-Quick-Start) of a Transformer-based ranking model with query term independence [20] as our base architecture. Both query and document terms are first encoded using a shared term embedding model. The document term embeddings are then contextualized using stacked transformer layers. We compute the cosine similarity between every pair of query and document term embeddings and then employ windowed kernel pooling [11, 12] and multiple feedforward layers to estimate the match between the document and each query term. Finally, the scores are linearly combined across query terms.
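As a rough illustration of this style of scoring (not the referenced public implementation), the sketch below computes a query-document score from term embeddings using a cosine-similarity matrix, Gaussian kernel pooling over each query term's similarities, and a sum across query terms. The embedding sizes, kernel means, and weights are made-up placeholders, and the pooling is a simplified, non-windowed variant.

```python
# Minimal sketch: kernel-pooling relevance scoring with per-query-term scores summed at the end.
# All embeddings, kernel means/widths, and weights below are illustrative placeholders.
import numpy as np

def kernel_pool_score(q_emb, d_emb, mus, sigma=0.1, kernel_weights=None):
    # q_emb: (num_q_terms, dim), d_emb: (num_d_terms, dim), both L2-normalized
    sim = q_emb @ d_emb.T                                   # cosine similarity matrix
    # Gaussian kernels turn each query term's row of similarities into soft match counts
    kernels = np.exp(-((sim[:, :, None] - mus) ** 2) / (2 * sigma ** 2))
    pooled = np.log1p(kernels.sum(axis=1))                  # (num_q_terms, num_kernels)
    if kernel_weights is None:
        kernel_weights = np.ones(len(mus)) / len(mus)       # placeholder learned weights
    per_term = pooled @ kernel_weights                      # one score per query term
    return per_term.sum()                                   # linear combination across query terms

rng = np.random.default_rng(0)
q = rng.normal(size=(3, 8)); q /= np.linalg.norm(q, axis=1, keepdims=True)
d = rng.normal(size=(20, 8)); d /= np.linalg.norm(d, axis=1, keepdims=True)
mus = np.linspace(-0.9, 0.9, 10)                            # kernel means spread over [-1, 1]
print(kernel_pool_score(q, d, mus))
```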
Experiments. We conduct two sets of experiments to study the usefulness of the ORCAS dataset: (i) as a supplement to the MS MARCO training data, and (ii) as a document description field in addition to URL, title, and body. For our first study, we compare the retrieval effectiveness of the Transformer-Kernel model [12] when trained on a combination of MS MARCO and ORCAS training data to training on MS MARCO data alone. When the two datasets are combined, a two-step sampling is employed for training sample selection\u2014we first randomly select one of the two training datasets with equal probability and then randomly sample a query from the selected dataset with uniform probability. This means that our training model sees an equal proportion of samples from both datasets during training in spite of their significant difference in size. This is done intentionally to control against diverging too far from the MS MARCO query distribution, since our test queries also come from MS MARCO. Within this first study, we run two training settings, sampling the negative document for training from (i) the collection distribution and (ii) the Indri top-100 document distribution, respectively. For document representation we consider a maximum of the first 800 terms. In our second study, we use the ORCAS dataset to generate an additional document field and compare the performance of the base model trained on MS MARCO labels with and without the ORCAS field. We restrict the document representation, across all fields, to 4000 terms and the ORCAS field, specifically, to a maximum of 2000 terms. To handle the longer document text input we employ the Conformer-Kernel [19] model as our base architecture for this study. Across both studies, we evaluate our models on the 43 test queries from the 2019 edition of the track using the corresponding NIST labels provided as a reusable benchmark. We report MRR and NDCG for each run. The models are trained using the Adam optimizer and the RankNet objective [7] with a learning rate of 0.0001. All hyperparameters are consistent across different training runs. Results. Table 4 summarizes the findings from the first study. Under both negative sampling settings, we find that training on the combination of the two datasets gives roughly equivalent results to training on MS MARCO data only. Although the runs trained with ORCAS data score slightly higher, the difference is not statistically significant on our 43 test queries. The results from our second study are presented in Table 5. Similar to our first study, we find that the inclusion of the ORCAS dataset results in higher retrieval metrics, but the difference is not statistically significant on our small test set. Table 4: Retrieval experiment results on the document reranking task from the TREC Deep Learning benchmark to study the usefulness of the ORCAS dataset for training. Negatives sampled from the full collection: MS MARCO only, MRR 0.798, NDCG 0.505; MS MARCO + ORCAS, MRR 0.807, NDCG 0.509. Negatives sampled from the top 100 candidates: MS MARCO only, MRR 0.909, NDCG 0.574; MS MARCO + ORCAS, MRR 0.924, NDCG 0.582. Table 5: Retrieval experiment results on the document full ranking task from the TREC Deep Learning benchmark to study the usefulness of the ORCAS dataset as an additional document field.
Document fields MRR NDCG URL + Title + Body 0.902 0.616 URL + Title + ORCAS + Body 0.931 0.629 Although these initial studies did not yield significantly better results, we posit that models with larger number of layers or learnable parameters could perform better, taking advantage of the larger size of the ORCAS dataset. 6 EVALUATING USING ORCAS LABELS To analyze the viability of ORCAS query-URL pairs as positive relevance labels, we also used them in evaluation, to test the MRR of the 38 runs in the 2019 TREC DL document ranking task. For the 200 queries in the 2019 runs, we were able to identify 83 positive labels for 28 queries in the ORCAS dataset, whereas the official NIST labels were for 43 of the queries. We then compared the evaluation results obtained using the subset from the ORCAS dataset with the evaluation results from the official test collection from TREC, using MRR as the evaluation metric. The comparison of the two evaluation results are shown in Figure 4. The figure reports also reports the Kendall\u2019s tau correlation between the rankings obtained using the two sets of metrics. Considering the small size of the ORCAS subset that is common with the TREC DL test collection, the correlations with the official MRR results look reasonable. These correlations would probably improve if all the queries in the ORCAS test data were to be used. 7" + }, + { + "url": "http://arxiv.org/abs/2003.07820v2", + "title": "Overview of the TREC 2019 deep learning track", + "abstract": "The Deep Learning Track is a new track for TREC 2019, with the goal of\nstudying ad hoc ranking in a large data regime. It is the first track with\nlarge human-labeled training sets, introducing two sets corresponding to two\ntasks, each with rigorous TREC-style blind evaluation and reusable test sets.\nThe document retrieval task has a corpus of 3.2 million documents with 367\nthousand training queries, for which we generate a reusable test set of 43\nqueries. The passage retrieval task has a corpus of 8.8 million passages with\n503 thousand training queries, for which we generate a reusable test set of 43\nqueries. This year 15 groups submitted a total of 75 runs, using various\ncombinations of deep learning, transfer learning and traditional IR ranking\nmethods. Deep learning runs significantly outperformed traditional IR runs.\nPossible explanations for this result are that we introduced large training\ndata and we included deep models trained on such data in our judging pools,\nwhereas some past studies did not have such training data or pooling.", + "authors": "Nick Craswell, Bhaskar Mitra, Emine Yilmaz, Daniel Campos, Ellen M. Voorhees", + "published": "2020-03-17", + "updated": "2020-03-18", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.CL", + "cs.LG" + ], + "main_content": "Introduction Deep learning methods, where a computational model learns an intricate representation of a large-scale dataset, have yielded dramatic improvements on the state of the art in speech recognition and computer vision. This has been fueled by the availability of large-scale datasets [LeCun et al., 2015] such as the ImageNet dataset [Deng et al., 2009] for computer vision and the Atari Arcade Learning Environment [Bellemare et al., 2013] for game playing. There has been signi\ufb01cant interest in deep learning for ad-hoc ranking [Mitra and Craswell, 2018]. Work so far has largely been done with small data, proprietary data or synthetic data. 
With small data, there has been some discussion about whether deep learning methods really outperform strong traditional IR baselines [Yang et al., 2019a]. Using a proprietary set of document ranking data with 200,000 training queries [Mitra et al., 2017], a traditional IR baseline was beaten, but it was impossible for others to follow up on the work without a data release. Dietz et al. [2017] have a TREC task with enough training data to investigate such \ufb01ndings, but on synthetic rather than human-labeled data. Since signi\ufb01cant questions remain about baselines and the required volume of human-labeled data, we argue that TREC is a good forum for studying such issues. When a large human-labeled dataset is made available, participants can investigate the role of data size by subsampling. Strong baselines are more than welcome at TREC and there is a blind one-shot evaluation to avoid over\ufb01tting. The TREC 2019 Deep Learning Track has two tasks: Document retrieval and passage retrieval. Each task has a dataset that is new to TREC, although the passage task is similar to the MS MARCO passage ranking leaderboard [Bajaj et al., 2016], but with a new test set in the TREC version with more comprehensive labeling. Both tasks are ad-hoc retrieval, meaning that there is a \ufb01xed document set, and the goal of the information retrieval system is to respond to each new query with results that would satisfy the querying user\u2019s information need. Ad-hoc retrieval is a very common scenario in real-world search applications and in TREC. The main goals of the track are: 1) To provide large reusable datasets for training and evaluation of deep learning and traditional ranking methods in a large training data regime, 2) To perform a rigorous blind single-shot evaluation, where test labels don\u2019t even exist until after all runs are submitted, to compare different ranking methods, and 3) To arXiv:2003.07820v2 [cs.IR] 18 Mar 2020 \fstudy this in both a traditional TREC setup with end-to-end retrieval and in a re-ranking setup that matches how some models may be deployed in practice. Comparing ad hoc retrieval methods in a large-data regime. The track should help us build our understanding of how retrieval methods can take advantage of large-scale data. It should also allow participants to compare various ranking methods such as: \u2022 ML models vs. traditional IR\u2014including pseudo-relevance feedback. \u2022 Deep learning vs. feature-based learning-to-rank (LTR) methods [Liu, 2009]. \u2022 Comparison of different deep learning architectures. \u2022 Comparison of different supervision approaches, such as fully supervised vs. semi-supervised vs. weakly supervised deep learning [Dehghani et al., 2017]. \u2022 Comparison of such models with all the training labels vs. using a subset of labels, to see how performance improves with more data. Comparing different methods for ad hoc search has always been a focus area at TREC, so our goal in this track is to continue that work. End-to-end retrieval vs. reranking. In real-world implementations of LTR methods, a common technique is to \ufb01rst retrieve the top-k documents for a query using relatively cheap \u201cphase 1\u201d ranker such as BM25, and then apply the full ML model to rerank the top-k documents in \u201cphase 2\u201d. This motivates us to offer two participation styles in the Deep Learning Track, which we also refer to as subtasks. One is to implement full end-to-end retrieval, perhaps by implementing both phase 1 and phase 2. 
This is interesting because a good implementation of phase 1 can enhance the end-to-end performance of the system, by enriching the candidate set for phase 2. It also encourages participants to consider alternatives to the two-phase approach, if it can improve ef\ufb01ciency and effectiveness. The other participation style is to only implement a top-k reranker. This approach is realistic in practice, in fact it is simply phase 2 of the end-to-end approach, for a \ufb01xed phase 1. This style of participation lowers the barrier to entry for participating groups who are interested in the LTR aspects of dealing with a large number of training queries, but are not interested in indexing a corpus or studying phase 1 issues. In this style of evaluation\u2014sometimes referred to as telescoping [Matveeva et al., 2006]\u2014participants are given the top-k results in both the training and test set. The interaction between deep learning models and traditional IR indexing data structures is also particularly interesting. Most applications of deep learning models in IR\u2014with few exceptions e.g., [Boytsov et al., 2016, Zamani et al., 2018, Mitra et al., 2019, Nogueira et al., 2019]\u2014have been constrained to the reranking setting. Encouraging future exploration of deep learning based ranking models under the full retrieval settings is an explicit goal of the Deep Learning Track. 2 Task description The track has two tasks: Document retrieval and passage retrieval. Participants were allowed to submit up to three runs per task, although this was not strictly enforced. Participants were provided with an initial set of 200 test queries, then NIST later selected 43 queries during the pooling and judging process, based on budget constraints and with the goal of producing a reusable test collection. The same 200 queries were used for submissions in both tasks, while the selected 43 queries for each task were overlapping but not identical. The full judging process is described in Section 5. When submitting each run, participants also indicated what external data, pretrained models and other resources were used, as well as information on what style of model was used. Below we provide more detailed information about the document retrieval and passage retrieval tasks, as well as the datasets provided as part of these tasks. 2.1 Document retrieval task The \ufb01rst task focuses on document retrieval\u2014with two subtasks: (i) Full retrieval and (ii) top-100 reranking. In the full retrieval subtask, the runs are expected to rank documents based on their relevance to the query, where documents can be retrieved from the full document collection provided. This subtask models the end-to-end retrieval scenario. Note, although most full retrieval runs had 1000 results per query, the reranking runs had 100, so to make the AP and RR results more comparable across subtasks we truncated full retrieval runs by taking the top-100 results per 2 \fquery by score. These truncated runs were used in the main results table for the task (only), not in the TREC Appendix or in Section 5. In the reranking subtask, participants were provided with an initial ranking of 100 documents, giving all participants the same starting point. The 100 were retrieved using Indri [Strohman et al., 2005] on the full corpus with Krovetz stemming and stopwords eliminated. Participants were expected to rerank the candidates w.r.t. their estimated relevance to the query. 
This is a common scenario in many real-world retrieval systems that employ a telescoping architecture [Matveeva et al., 2006, Wang et al., 2011]. The reranking subtask allows participants to focus on learning an effective relevance estimator, without the need for implementing an end-to-end retrieval system. It also makes the reranking runs more comparable, because they all rerank the same set of 100 candidates. For judging, NIST\u2019s pooling was across both subtasks, and they also identi\ufb01ed additional documents for judging via classi\ufb01er. Further, for queries with many relevant documents, additional documents were judged. These steps were carried out to identify a suf\ufb01ciently comprehensive set of relevant results, to allow reliable future dataset reuse. Judgments were on a four-point scale: [3] Perfectly relevant: Document is dedicated to the query, it is worthy of being a top result in a search engine. [2] Highly relevant: The content of this document provides substantial information on the query. [1] Relevant: Document provides some information relevant to the query, which may be minimal. [0] Irrelevant: Document does not provide any useful information about the query. 2.2 Passage retrieval task Similar to the document retrieval task, the passage retrieval task includes (i) a full retrieval and (ii) a top-1000 reranking tasks. In the full retrieval subtask, given a query, the participants were expected to retrieve a ranked list of passages from the full collection based on their estimated likelihood of containing an answer to the question. Participants could submit up to 1000 passages per query for this end-to-end retrieval task. In the top-1000 reranking subtask, 1000 passages per query query were provided to participants, giving all participants the same starting point. The sets of 1000 were generated based on BM25 retrieval with no stemming as applied to the full collection. Participants were expected to rerank the 1000 passages based on their estimated likelihood of containing an answer to the query. In this subtask, we can compare different reranking methods based on the same initial set of 1000 candidates, with the same rationale as described for the document reranking subtask. For judging, NIST\u2019s pooling was across both subtasks, and they also identi\ufb01ed additional passages for judging via classi\ufb01er. Further, for queries with many relevant passages, additional passages were judged. These steps were carried out to identify a suf\ufb01ciently comprehensive set of relevant results, to allow reliable future dataset reuse. Judgments were on a four-point scale: [3] Perfectly relevant: The passage is dedicated to the query and contains the exact answer. [2] Highly relevant: The passage has some answer for the query, but the answer may be a bit unclear, or hidden amongst extraneous information. [1] Related: The passage seems related to the query but does not answer it. [0] Irrelevant: The passage has nothing to do with the query. 3 Datasets Both tasks have large training sets based on human relevance assessments, derived from MS MARCO. These are sparse, with no negative labels and often only one positive label per query, analogous to some real-world training data such as click logs. In the case of passage retrieval, the positive label indicates that the passage contains an answer to a query. In the case of document retrieval, we transferred the passage-level label to the corresponding source document that contained the passage. 
We do this under the assumption that a document with a relevant passage is a relevant document, although we note that our document snapshot was generated at a different time from the passage dataset, so there can be some mismatch. Despite this, in this year\u2019s document retrieval task machine learning models seem to bene\ufb01t from using the labels, when evaluated using NIST\u2019s non-sparse, non-transferred labels. This suggests the transferred document labels are meaningful for our TREC task. 3 \fTable 1: Summary of statistics on TREC 2019 Deep Learning Track datasets. Document retrieval dataset Passage retrieval dataset File description Number of records File size Number of records File size Collection 3, 213, 835 22 GB 8, 841, 823 2.9 GB Train queries 367, 013 15 MB 502, 940 19.7 MB Train qrels 384, 597 7.6 MB 532, 761 10.1 MB Validation queries 5, 193 216 KB 12, 665 545 KB Validation qrels 519, 300 27 MB 59, 273 1.1 MB Test queries 200 12 KB 200 12 KB Table 2: Summary of statistics of runs for the two retrieval tasks at the TREC 2019 Deep Learning Track. Document retrieval Passage retrieval Number of groups 10 11 Number of total runs 38 37 Number of runs w/ category: nnlm 15 18 Number of runs w/ category: nn 12 8 Number of runs w/ category: trad 11 11 Number of runs w/ category: rerank 10 11 Number of runs w/ category: fullrank 28 26 The passage corpus is the same as in MS MARCO passage retrieval leaderboard. The document corpus is newly released for use in TREC. Each document has three \ufb01elds: (i) URL, (ii) title, and (iii) body text. Table 1 provides descriptive statistics for the datasets. More details about the datasets\u2014including directions for download\u2014is available on the TREC 2019 Deep Learning Track website1. Interested readers are also encouraged to refer to [Bajaj et al., 2016] for details on the original MS MARCO dataset. 4 Results and analysis Submitted runs A total of 15 groups participated in the TREC 2019 Deep Learning Track, with an aggregate of 75 runs submitted across both tasks. Based run submission surveys, we classify each run into one of three categories: \u2022 nnlm: if the run employs large scale pre-trained neural language models, such as BERT [Devlin et al., 2018] or XLNet [Yang et al., 2019b] \u2022 nn: if the run employs some form of neural network based approach\u2014e.g., Duet [Mitra et al., 2017, Mitra and Craswell, 2019] or using word embeddings [Joulin et al., 2016]\u2014but does not fall into the \u201cnnlm\u201d category \u2022 trad: if the run exclusively uses traditional IR methods like BM25 [Robertson et al., 2009] and RM3 [AbdulJaleel et al., 2004]. We placed 33 (44%) runs in the \u201cnnlm\u201d category (32 using BERT and one using XLNet), 20 (27%) in the \u201cnn\u201d category, and the remaining 22 (29%) in the \u201ctrad\u201d category. We further categorize runs based on subtask: \u2022 rerank: if the run reranks the provided top-k candidates, or \u2022 fullrank: if the run employs their own phase 1 retrieval system. We \ufb01nd that only 21 (28%) submissions fall under the \u201crerank\u201d category\u2014while the remaining 54 (72%) are \u201cfullrank\u201d. Table 2 breaks down the submissions by category and task. We also encouraged some participants to run strong traditional IR baselines, and submit them as additional runs under the \u201cBASELINE\u201d group. 
Baseline runs for document ranking were: bm25base BM25 [Robertson et al., 2009] with default parameters 1https://microsoft.github.io/TREC-2019-Deep-Learning/ 4 \fbm25base_ax BM25+AX [Yang and Lin, 2019] with default parameters bm25base_prf BM25+PRF [Zeng and Sakai, 2019] with default parameters bm25base_rm3 BM25+RM3 [Yang et al., 2019a] with default parameters bm25tuned BM25 [Robertson et al., 2009] with tuned parameters bm25tuned_ax BM25+AX [Yang and Lin, 2019] with tuned parameters bm25tuned_prf BM25+PRF [Zeng and Sakai, 2019] with tuned parameters bm25tuned_rm3 BM25+RM3 [Yang et al., 2019a] with tuned parameters Baseline runs for passage ranking were: bm25base_ax_p BM25+AX [Yang and Lin, 2019] with default parameters bm25base_p BM25 [Robertson et al., 2009] with default parameters bm25base_prf_p BM25+PRF [Zeng and Sakai, 2019] with default parameters bm25base_rm3_p BM25+RM3 [Yang et al., 2019a] with default parameters bm25tuned_ax_p BM25+AX [Yang and Lin, 2019] with tuned parameters bm25tuned_p BM25 [Robertson et al., 2009] with tuned parameters bm25tuned_prf_p BM25+PRF [Zeng and Sakai, 2019] with tuned parameters bm25tuned_rm3_p BM25+RM3 [Yang et al., 2019a] with tuned parameters Overall results Our main metric in both tasks is Normalized Discounted Cumulative Gain (NDCG)\u2014speci\ufb01cally, NDCG@10, since it makes use of our 4-level judgments and focuses on the \ufb01rst results that users will see. To analyse if any of the fullrank runs recall more relevant candidates in phase 1 compared to those provided for the reranking subtask, we also report Normalized Cumulative Gain (NCG) [Rosset et al., 2018] at rank 100 and 1000 for the document and passage ranking tasks, respectively. We choose to report NCG because it discriminates between recalling documents with different positive relevance grades and is a natural complement to NDCG, our main metric. Although NCG is not of\ufb01cially supported by trec_eval, we con\ufb01rm that it correlates strongly with the recall metric for these analysed runs. The overall results are presented in Table 3 for document retrieval and Table 4 for passage retrieval. These tables include multiple metrics and run categories, which we now use in our analysis. Evaluation of deep learning and traditional ranking methods in a large training data regime An important goal of this track is to compare the performance of different types of model, using large human-labeled training sets, for the core IR task of ad-hoc search. Indeed this is the \ufb01rst time a TREC-style blind evaluation has been carried out to compare state-of-the-art neural and traditional IR methods. Figure 1a plots the NDCG@10 performance of the different runs for the document retrieval task, broken down by model type. In general, runs in the category \u201cnnlm\u201d outperform the \u201cnn\u201d runs, which outperform the \u201ctrad\u201d runs. The best performing run of each category is indicated, with the best \u201cnnlm\u201d and \u201cnn\u201d models outperforming the best \u201ctrad\u201d model by 29.4% and 14.8% respectively. The passage retrieval task reveals similar pattern. In Figure 1b, the gap between the best \u201cnnlm\u201d and \u201cnn\u201d runs and the best \u201ctrad\u201d run is larger, at 37.4% and 23.7% respectively. One explanation for this could be that vocabulary mismatch between queries and relevant results is more likely in short text, so neural methods that can overcome such mismatch have a relatively greater advantage in passage retrieval. 
Another explanation could be that there is already a public leaderboard, albeit without test labels from NIST, for the passage task. Some TREC participants may have submitted neural models multiple times to the public leaderboard, and are well practiced for the passage ranking task. In query-level win-loss analysis for the document retrieval task (Figure 2) the best \u201cnnlm\u201d model outperforms the best \u201ctrad\u201d run on 36 out of 43 test queries (i.e., 83.7%). Passage retrieval shows a similar pattern in Figure 3. Neither task has a large class of queries where the \u201cnnlm\u201d model performs worse, at least on this year\u2019s data. However, more iterations of rigorous blind evaluation with strong \u201ctrad\u201d baselines, plus more scrutiny of the benchmarking methods, would be required to convince us that this is true in general. Next, we analyze this year\u2019s runs by representing each run as a vector of 43 NDCG@10 scores. In this vector space, two runs are similar if their NDCG vectors are similar, meaning they performed well and badly on the same queries. Using t-SNE [Maaten and Hinton, 2008] we then plot the runs in two dimensions, which gives us a visualization where similar runs will be closer together and dissimilar results further apart. This method of visualizing inter-model similarity was \ufb01rst proposed by Mitra et al. [2017] and we employ it to generate the plots in Figure 4. 5 \fTable 3: Document retrieval runs. RR (MS) is based on MS MARCO labels. All other metrics are based on NIST labels. run group subtask neural RR (MS) RR NDCG@10 NCG@100 AP idst_bert_v3 IDST fullrank nnlm 0.4866 0.9612 0.7257 0.5800 0.3137 idst_bert_r1 IDST rerank nnlm 0.4889 0.9729 0.7189 0.5179 0.2915 idst_bert_v2 IDST fullrank nnlm 0.4865 0.9612 0.7181 0.5947 0.3157 idst_bert_v1 IDST fullrank nnlm 0.4874 0.9729 0.7175 0.5820 0.3119 idst_bert_r2 IDST rerank nnlm 0.4734 0.9729 0.7135 0.5179 0.2910 bm25exp_marcomb h2oloo fullrank nnlm 0.3518 0.8992 0.6456 0.6367 0.3190 TUW19-d3-re TU-Vienna rerank nn 0.4014 0.9457 0.6443 0.5179 0.2709 ucas_runid1 UCAS rerank nnlm 0.4422 0.9109 0.6437 0.5179 0.2642 ucas_runid3 UCAS rerank nnlm 0.4353 0.8992 0.6418 0.5179 0.2677 bm25_marcomb h2oloo fullrank nnlm 0.3591 0.9128 0.6403 0.6356 0.3229 bm25exp_marco h2oloo fullrank nnlm 0.3610 0.9031 0.6399 0.6191 0.3030 ucas_runid2 UCAS rerank nnlm 0.4315 0.9496 0.6350 0.5179 0.2526 TUW19-d2-re TU-Vienna rerank nn 0.3154 0.9147 0.6053 0.5179 0.2391 uogTrDNN6LM uogTr fullrank nnlm 0.3187 0.8729 0.6046 0.5093 0.2488 TUW19-d1-re TU-Vienna rerank nn 0.3616 0.8915 0.5930 0.5179 0.2524 ms_ensemble Microsoft fullrank nn 0.3725 0.8760 0.5784 0.4841 0.2369 srchvrs_run1 srchvrs fullrank trad 0.3065 0.8715 0.5609 0.5599 0.2645 TUW19-d2-f TU-Vienna fullrank nn 0.2886 0.8711 0.5596 0.4103 0.2050 TUW19-d3-f TU-Vienna fullrank nn 0.3735 0.8929 0.5576 0.3045 0.1843 dct_tp_bm25e2 CMU fullrank nn 0.3402 0.8718 0.5544 0.4979 0.2244 srchvrs_run2 srchvrs fullrank trad 0.3038 0.8715 0.5529 0.5572 0.2615 bm25tuned_rm3 BASELINE fullrank trad 0.3396 0.8074 0.5485 0.5590 0.2700 dct_qp_bm25e CMU fullrank nn 0.3585 0.8915 0.5435 0.4924 0.2228 dct_tp_bm25e CMU fullrank nn 0.3530 0.8638 0.5424 0.4786 0.2098 uogTrDSSQE5LM uogTr fullrank nnlm 0.3264 0.8895 0.5386 0.1839 0.1085 TUW19-d1-f TU-Vienna fullrank nn 0.3190 0.8465 0.5383 0.2951 0.1647 ms_duet Microsoft rerank nn 0.2758 0.8101 0.5330 0.5179 0.2291 uogTrDSS6pLM uogTr fullrank nnlm 0.2803 0.8895 0.5323 0.1868 0.1129 bm25tuned_prf BASELINE fullrank trad 0.3176 0.8005 0.5281 
0.5576 0.2759 bm25tuned_ax BASELINE fullrank trad 0.2889 0.7492 0.5245 0.5835 0.2816 bm25base BASELINE fullrank trad 0.2949 0.8046 0.5190 0.5170 0.2443 bm25base_rm3 BASELINE fullrank trad 0.2405 0.7714 0.5169 0.5546 0.2772 runid1 CCNU_IRGroup rerank nnlm 0.3058 0.7811 0.5164 0.5179 0.2366 bm25tuned BASELINE fullrank trad 0.2930 0.8872 0.5140 0.5262 0.2318 bm25base_prf BASELINE fullrank trad 0.2717 0.7774 0.5106 0.5303 0.2542 baseline BITEM_DL fullrank trad 0.2795 0.8037 0.4823 0.5114 0.2168 bm25base_ax BASELINE fullrank trad 0.2677 0.7424 0.4730 0.5148 0.2452 0.4 0.5 0.6 0.7 0.8 0.9 NDCG@10 best nnlm run best nn run best trad run nnlm nn trad (a) Document retrieval task 0.4 0.5 0.6 0.7 0.8 0.9 NDCG@10 best nnlm run best nn run best trad run nnlm nn trad (b) Passage retrieval task Figure 1: NDCG@10 results, broken down by run type. Runs of type \u201cnnlm\u201d, meaning they use language models such as BERT, performed best on both tasks. Other neural network models \u201cnn\u201d and non-neural models \u201ctrad\u201d had relatively lower performance this year. More iterations of evaluation and analysis would be needed to determine if this is a general result, but it is a strong start for the argument that deep learning methods may take over from traditional methods in IR applications. 6 \f0.0 0.2 0.4 0.6 0.8 1.0 NDCG@10 how is the weather in jamaica who is robert gray what is famvir prescribed for difference between rn and bsn what is a active margin difference between a mcdouble and a double cheeseburger types of dysarthria from cerebral palsy how to find the midsegment of a trapezoid example of monotonic function medicare's definition of mechanical ventilation lps laws definition how long is life cycle of flea is cdg airport in main paris do goldfish grow definition of a sigmet causes of left ventricular hypertrophy right pelvic pain causes what is theraderm used for anthropological definition of environment hydrogen is a liquid below what temperature when was the salvation army founded tracheids are part of _____. axon terminals or synaptic knob definition what is physical description of spruce cost of interior concrete flooring define visceral? what is wifi vs bluetooth causes of military suicide definition declaratory judgment what is durable medical equipment consist of how are some sharks warm blooded what is an aml surveillance analyst what is the most popular food in switzerland why did the us volunterilay enter ww1 what can contour plowing reduce what types of food can you cook sous vide rsa definition key how many liberty ships were built in brunswick what are the social determinants of health what is the daily life of thai people who formed the commonwealth of independent states exons definition biology how long to hold bow in yoga nnlm trad Figure 2: Comparison of the best \u201cnnlm\u201d and \u201ctrad\u201d runs on individual test queries for the document retrieval task. Queries are sorted by difference in mean performance between \u201cnnlm\u201d and \u201ctrad\u201druns. Queries on which \u201cnnlm\u201d wins with large margin are at the top. 
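The per-query win-loss comparison shown in Figures 2 and 3 is straightforward to reproduce once per-query NDCG@10 values for two runs are available. The sketch below is an illustration only; the query identifiers and scores in the two dictionaries are hypothetical placeholders, not the track's judged values.

```python
# Minimal sketch: per-query win-loss comparison between two runs (as in Figures 2 and 3).
# ndcg_best_nnlm and ndcg_best_trad map query id -> NDCG@10; values are illustrative.
ndcg_best_nnlm = {"q1": 0.91, "q2": 0.55, "q3": 0.78}
ndcg_best_trad = {"q1": 0.60, "q2": 0.62, "q3": 0.40}

diffs = {q: ndcg_best_nnlm[q] - ndcg_best_trad[q] for q in ndcg_best_nnlm}
wins = sum(d > 0 for d in diffs.values())
print(f"nnlm wins on {wins} of {len(diffs)} queries")

# Queries sorted so the largest nnlm advantage comes first, as in the figures.
for q, d in sorted(diffs.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{q}: nnlm - trad = {d:+.2f}")
```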
7 \f0.0 0.2 0.4 0.6 0.8 1.0 NDCG@10 how is the weather in jamaica causes of left ventricular hypertrophy when was the salvation army founded how long is life cycle of flea what are the social determinants of health rsa definition key right pelvic pain causes what is theraderm used for what is an aml surveillance analyst difference between a mcdouble and a double cheeseburger anthropological definition of environment causes of military suicide hydrogen is a liquid below what temperature does legionella pneumophila cause pneumonia what is famvir prescribed for axon terminals or synaptic knob definition definition declaratory judgment definition of a sigmet what is the daily life of thai people why did the us volunterilay enter ww1 lps laws definition cost of interior concrete flooring what is wifi vs bluetooth is cdg airport in main paris what is physical description of spruce tracheids are part of _____. what types of food can you cook sous vide do goldfish grow what is a active margin how are some sharks warm blooded what can contour plowing reduce what is durable medical equipment consist of medicare's definition of mechanical ventilation who formed the commonwealth of independent states types of dysarthria from cerebral palsy how to find the midsegment of a trapezoid what are the three percenters? example of monotonic function exons definition biology what is the most popular food in switzerland difference between rn and bsn define visceral? who is robert gray nnlm trad Figure 3: Comparison of the best \u201cnnlm\u201d and \u201ctrad\u201d runs on individual test queries for the passage retrieval task. Queries are sorted by difference in mean performance between \u201cnnlm\u201d and \u201ctrad\u201druns. Queries on which \u201cnnlm\u201d wins with large margin are at the top. 8 \fTable 4: Passage retrieval runs. RR (MS) is based on MS MARCO labels. All other metrics are based on NIST labels. 
run group subtask neural RR (MS) RR NDCG@10 NCG@1000 AP idst_bert_p1 IDST fullrank nnlm 0.4635 0.9283 0.7645 0.8196 0.5030 idst_bert_p2 IDST fullrank nnlm 0.4631 0.9283 0.7632 0.8203 0.5039 idst_bert_p3 IDST fullrank nnlm 0.4374 0.9167 0.7594 0.8287 0.5046 p_exp_rm3_bert h2oloo fullrank nnlm 0.3582 0.8884 0.7422 0.7939 0.5049 p_bert h2oloo fullrank nnlm 0.3624 0.8663 0.7380 0.7472 0.4677 idst_bert_pr2 IDST rerank nnlm 0.4209 0.8818 0.7379 0.6864 0.4565 idst_bert_pr1 IDST rerank nnlm 0.4430 0.9070 0.7378 0.6864 0.4571 p_exp_bert h2oloo fullrank nnlm 0.3564 0.8671 0.7336 0.7465 0.4749 test1 Brown rerank nnlm 0.3598 0.8702 0.7314 0.6864 0.4567 TUA1-1 TUA1 rerank nnlm 0.3622 0.8702 0.7314 0.6864 0.4571 runid4 udel_fang rerank nnlm 0.3762 0.8702 0.7028 0.6864 0.4383 runid3 udel_fang rerank nnlm 0.3725 0.8663 0.6975 0.6864 0.4381 TUW19-p3-f TU-Vienna fullrank nn 0.3134 0.8407 0.6884 0.7436 0.4196 TUW19-p1-f TU-Vienna fullrank nn 0.3187 0.8360 0.6756 0.7436 0.4125 TUW19-p3-re TU-Vienna rerank nn 0.3100 0.8568 0.6746 0.6864 0.4113 TUW19-p1-re TU-Vienna rerank nn 0.3180 0.8516 0.6746 0.6864 0.4073 TUW19-p2-f TU-Vienna fullrank nn 0.3469 0.8487 0.6709 0.7432 0.4157 ICT-BERT2 ICTNET fullrank nnlm 0.3846 0.8743 0.6650 0.2491 0.2421 srchvrs_ps_run2 srchvrs fullrank nnlm 0.3262 0.8302 0.6645 0.6643 0.4090 TUW19-p2-re TU-Vienna rerank nn 0.3424 0.8611 0.6615 0.6864 0.3963 ICT-CKNRM_B ICTNET fullrank nnlm 0.2984 0.8016 0.6481 0.2491 0.2289 ms_duet_passage Microsoft rerank nn 0.2473 0.8065 0.6137 0.6864 0.3477 ICT-CKNRM_B50 ICTNET fullrank nnlm 0.2055 0.7597 0.6014 0.3786 0.2429 srchvrs_ps_run3 srchvrs fullrank trad 0.1883 0.6942 0.5558 0.7240 0.3184 bm25tuned_prf_p BASELINE fullrank trad 0.1928 0.6996 0.5536 0.7947 0.3684 bm25base_ax_p BASELINE fullrank trad 0.1888 0.6516 0.5511 0.8194 0.3745 bm25tuned_ax_p BASELINE fullrank trad 0.1840 0.6481 0.5461 0.8145 0.3632 bm25base_prf_p BASELINE fullrank trad 0.2007 0.6211 0.5372 0.7901 0.3561 runid2 CCNU_IRGroup rerank nnlm 0.2143 0.8088 0.5322 0.6830 0.2671 runid5 CCNU_IRGroup fullrank nnlm 0.2068 0.7999 0.5252 0.5440 0.2506 bm25tuned_rm3_p BASELINE fullrank trad 0.2162 0.6992 0.5231 0.7841 0.3377 bm25base_rm3_p BASELINE fullrank trad 0.1590 0.6683 0.5180 0.7976 0.3390 bm25base_p BASELINE fullrank trad 0.2402 0.7036 0.5058 0.7490 0.3013 srchvrs_ps_run1 srchvrs fullrank trad 0.1902 0.5597 0.4990 0.7240 0.2972 bm25tuned_p BASELINE fullrank trad 0.2363 0.6850 0.4973 0.7472 0.2903 UNH_bm25 TREMA-UNH fullrank trad 0.1803 0.6036 0.4495 0.6957 0.2566 On both document and passage retrieval tasks, the runs appear to be \ufb01rst clustered by group\u2014see Figures 4b and 4d. This is expected, as different runs from the same group are likely to employ variations of the same approach. In Figures 4a and 4c, runs also cluster together based on their categorization as \u201cnnlm\u201d, \u201cnn\u201d, and \u201ctrad\u201d. End-to-end retrieval vs. reranking. Our datasets include top-k candidate result lists, with 100 candidates per query for document retrieval and 1000 candidates per query for passage retrieval. Runs that simply rerank the provided candidates are \u201crerank\u201d runs, whereas runs that perform end-to-end retrieval against the corpus, with millions of potential results, are \u201cfullrank\u201d runs. We would expect that a \u201cfullrank\u201d run should be able to \ufb01nd a greater number of relevant candidates than we provided, achieving higher NCG@k. 
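Since this comparison hinges on NCG@k, here is a minimal sketch of how it could be computed, assuming the usual definition of cumulative gain at depth k normalized by the ideal cumulative gain (the helper and the toy judgments are illustrative, not the official evaluation code).

```python
def ncg_at_k(ranked_doc_ids, relevance, k):
    """Normalized Cumulative Gain: sum of gains in the top k divided by the
    best achievable sum of gains at depth k (no rank discount, unlike NDCG)."""
    gains = [relevance.get(d, 0) for d in ranked_doc_ids[:k]]
    ideal = sorted(relevance.values(), reverse=True)[:k]
    return sum(gains) / sum(ideal) if sum(ideal) > 0 else 0.0

# Toy example: graded judgments on a 0-3 scale.
qrels = {"d1": 3, "d2": 2, "d3": 1}
print(ncg_at_k(["d2", "d9", "d1"], qrels, k=2))  # (2 + 0) / (3 + 2)
```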
A multi-stage \u201cfullrank\u201d run should also be able to optimize the stages jointly, such that early stages produce candidates that later stages are good at handling. According to Figure 5, \u201cfullrank\u201d did not achieve much better NDCG@10 performance than \u201crerank\u201d runs. While it was possible for \u201cfullrank\u201d to achieve better NCG@k, it was also possible to make NCG@k worse, and achieving signi\ufb01cantly higher NCG@k does not seem necessary to achieve good NDCG@10. Speci\ufb01cally, for the document retrieval task, the best \u201cfullrank\u201d run achieves only 0.9% higher NDCG@10 over the best \u201crerank\u2019 run. For the passage retrieval task, the difference is 3.6%. The best NCG@100 for the document retrieval task is achieved by a well-tuned combination of BM25 [Robertson et al., 2009] and RM3 [Abdul-Jaleel et al., 2004] on top of document expansion using doc2query [Nogueira et al., 2019]\u2014which improves by 22.9% on the metric relative to the set of 100 candidates provided for the reranking task. For the passage retrieval task, the best NCG@1000 is 20.7% higher than that of the provided reranking candidate set. 9 \flatent dimension 1 latent dimension 2 nn nnlm trad (a) By model type on document retrieval task latent dimension 1 latent dimension 2 BASELINE BITEM_DL CCNU_IRGroup CMU IDST Microsoft TU-Vienna UCAS h2oloo srchvrs uogTr (b) By group name on document retrieval task latent dimension 1 latent dimension 2 nn nnlm trad (c) By model type on passage retrieval task latent dimension 1 latent dimension 2 BASELINE Brown CCNU_IRGroup ICTNET IDST Microsoft TREMA-UNH TU-Vienna TUA1 h2oloo srchvrs udel_fang (d) By group name on passage retrieval task Figure 4: Visualizing inter-run similarity using t-SNE. Each run is represented by a 43-dimensional vector of NDCG@10 performance on corresponding 43 test queries. The 43-dimensional vector is then reduced to twodimensions and plotted using t-SNE. Runs that are submitted by the same group generally cluster together. Similarly, \u201cnnlm\u201d, \u201cnn\u201d, and \u201ctrad\u201d runs also demonstrate similarities. Given this was the \ufb01rst ever Deep Learning Track at TREC, we are not yet seeing a strong advantage of \u201cfullrank\u201d over \u201crerank\u201d. However, we hope that as the body of literature on neural methods for phase 1 retrieval (e.g., [Boytsov et al., 2016, Zamani et al., 2018, Mitra et al., 2019, Nogueira et al., 2019]) grows, we would see a larger number of runs with deep learning as an ingredient for phase 1 in future editions of this TREC track. NIST labels vs. Sparse MS MARCO labels. Our baseline human labels from MS MARCO often have one known positive result per query. We use these labels for training, but they are also available for test queries. Although our of\ufb01cial evaluation uses NDCG@10 with NIST labels, we now compare this with reciprocal rank (RR) using MS MARCO labels, and RR using NIST labels. Our goal is to understand how changing the labeling scheme and metric affects the overall results of the track, but if there is any disagreement we believe the NDCG results are more valid, since they evaluate the ranking more comprehensively and a ranker that can only perform well on labels with exactly the same distribution as the training set is not robust enough for use in real-world applications, where real users will have opinions that are not necessarily identical to the preferences encoded in sparse training labels. 
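To make the two labeling regimes concrete, the following sketch (with made-up judgments and linear gains for NDCG; not the official trec_eval implementation) computes reciprocal rank against a sparse single-positive label set and NDCG@10 against graded NIST-style labels.

```python
import math

def reciprocal_rank(ranked_doc_ids, positives):
    """RR: 1 / rank of the first retrieved document that appears in `positives`."""
    for rank, doc in enumerate(ranked_doc_ids, start=1):
        if doc in positives:
            return 1.0 / rank
    return 0.0

def ndcg_at_10(ranked_doc_ids, graded_rels):
    """NDCG@10 with gain = graded relevance and a log2 rank discount."""
    def dcg(gains):
        return sum(g / math.log2(i + 2) for i, g in enumerate(gains[:10]))
    gains = [graded_rels.get(d, 0) for d in ranked_doc_ids]
    ideal = sorted(graded_rels.values(), reverse=True)
    return dcg(gains) / dcg(ideal) if dcg(ideal) > 0 else 0.0

ranking = ["d7", "d2", "d5"]
sparse_marco = {"d2"}                      # one known positive per query
nist_graded = {"d2": 3, "d5": 2, "d9": 1}  # graded judgments
print(reciprocal_rank(ranking, sparse_marco))  # 0.5
print(ndcg_at_10(ranking, nist_graded))
```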
In Figures 7 and 8, we observe general agreement between results using MS MARCO and NIST labels, i.e., runs that perform well on MS MARCO-style evaluation also tend to achieve good performance when evaluated under traditional TREC settings, and vice versa. This is good news, validating that the MS MARCO leaderboard results are at least somewhat indicative of results that are found with pooled judging.
[Figure 5: four panels of run performance, ordered by NDCG@10 along the x-axis: (a) NDCG@10 for runs on the document retrieval task, (b) NDCG@10 for runs on the passage retrieval task, (c) NCG@100 for runs on the document retrieval task, (d) NCG@1000 for runs on the passage retrieval task; \u201cfullrank\u201d and \u201crerank\u201d runs are marked separately.]
Figure 5: Analyzing the impact of \u201cfullrank\u201d vs. \u201crerank\u201d settings on retrieval performance. Figures (a) and (b) show the performance of different runs on the document and passage retrieval tasks, respectively. Figures (c) and (d) plot the NCG@100 and NCG@1000 metrics for the same runs for the two tasks, respectively. The runs are ordered by their NDCG@10 performance along the x-axis in all four plots. We observe that the best run under the \u201cfullrank\u201d setting outperforms the same under the \u201crerank\u201d setting for both document and passage retrieval tasks, although the gaps are relatively smaller compared to those in Figure 1. If we compare Figure (a) with (c) and Figure (b) with (d), we do not observe any evidence that the NCG metric is a good predictor of NDCG@10 performance.
5 Reusability of test collections
One goal of the track was to create traditional ad hoc test sets based on the MS MARCO dataset within available budgets. Since the Document Ranking and Passage Ranking tasks used different document sets, two separate test collections, one per task, were constructed. The two test collections started from a common set of topics and each topic was judged by the same NIST assessor for both documents and passages, but assessing for documents and passages was done at different times. Further, the evaluation sets of topics (i.e., the topics over which evaluation scores are computed) are overlapping but not identical in the two collections. Thus the collections created in the track are two separate, independent collections.
The runs submitted to the track consisted of ranked lists of items for each topic in the test set of 200 topics. NIST selected 52 topics from this set to be judged. The topics were selected by observing the behavior of submitted Document Ranking task runs on the entire test set when using the sparse MARCO judgments to evaluate runs. Test questions that had median MRR scores greater than 0.0 but no more than 0.5 were candidates to be judged. The judgment process then proceeded as follows, where the items to be judged will generically be called \u2018documents\u2019 even though those documents were MS MARCO passages for the Passage Ranking task.
[Figure 6: scatter plots of RR (MS) vs. NDCG@10 per run, colored by group; panels: (a) Document retrieval task, (b) Passage retrieval task.]
Figure 6: Metrics agreement scatter plot, broken down by group. RR (MS) is reciprocal rank calculated with the sparse MS MARCO labels, while NDCG@10 is calculated using NIST labels. 12 \f0.25 0.30 0.35 0.40 0.45 0.50 RR (MS) 0.75 0.80 0.85 0.90 0.95 1.00 RR = 0.68 0.2 0.4 0.6 RR (MS) 0.50 0.55 0.60 0.65 0.70 0.75 NDCG@10 = 0.69 0.8 1.0 RR = 0.73 0.4 0.6 0.8 NDCG@10 neural nnlm nn trad Figure 7: Metrics agreement analysis, broken down by model type, for the document retrieval task. Kendall correlation (\u03c4) indicates agreement between metrics on system ordering. RR (MS) is calculated using MS MARCO sparse labels, while RR and NDCG@10 are calculated using NIST labels. 13 \f0.2 0.3 0.4 0.5 RR (MS) 0.6 0.7 0.8 0.9 RR = 0.82 0.2 0.4 RR (MS) 0.5 0.6 0.7 0.8 NDCG@10 = 0.68 0.6 0.8 1.0 RR = 0.77 0.4 0.6 0.8 NDCG@10 neural nnlm nn trad Figure 8: Metrics agreement analysis, broken down by model type, for the passage retrieval task. Kendall correlation (\u03c4) indicates agreement between metrics on system ordering. RR (MS) is calculated using MS MARCO sparse labels, while RR and NDCG@10 are calculated using NIST labels. 14 \f[1] For each question, create a top-10 pool across all runs in the task, and add any document that contains a judgment in the MARCO sparse judgments. Call the size of this set P (which varies from topic to topic). The assessor judges these pool documents \ufb01rst, then another 100 documents selected to be judged using the University of Waterloo\u2019s HiCAL [Abualsaud et al., 2018] system. HiCAL uses the current set of judgments to build a relevance model and then selects the unjudged document most likely to be relevant as the next document to judge. At the end of this stage there are R known relevant documents. If 2R < P, the judging is \ufb01nished for this topic. [2] Call the the difference between the number of documents that have been judged and the desired number of 2R + 100 judgments G. Judge another G documents selected by HiCAL. Now the number of judgments for the topic is J = P + 100 + G and the new number of known relevant is R\u2217. If 2R\u2217+ 100 < J, assessment is \ufb01nished for the topic. If R\u2217\u2248J, then discard the topic because it will be too expensive to get \u201csuf\ufb01ciently complete\u201d judgments for it. [3] If a topic is still live, add a new increment proportional to the number of known relevant documents to the topic budget, and iterate, terminating when (if) the number of known relevant documents is less than half the number of judged documents. [4] Terminate the entire process when assessors are out of time or have nothing left to judge. The resulting evaluation set was the set of topics with at least three relevant documents and a ratio of R\u2217/J < 0.6. This process resulted in 43 topics in the evaluation set for both the Document Ranking and the Passage Ranking tasks, but as noted it is a slightly different 43 topics for the two tasks. Documents in the Document Ranking task were judged on a four-point scale of Irrelevant (0), Relevant (1), Highly Relevant (2), and Perfectly Relevant (3) where all but Irrelevant were treated as relevant in HiCAL and in computing binary-relevance-based measures. For the Passage Ranking task, passages were judged on a four-point scale of Irrelevant (0), Related (the passage is on-topic but does not answer the question) (1), Highly Relevant (2), and Perfectly Relevant (3). 
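The per-topic budgeting in steps [1]-[3] above can be summarized by the following loop. This is a simplified, hypothetical rendering of the stopping conditions (pool-first judging, HiCAL-selected increments, and the 2R+100 rule), not NIST's actual tooling; judge_batch is a stand-in for the assessor plus HiCAL selection.

```python
def judge_topic(judge_batch, pool_size, max_budget=1000):
    """Simplified sketch of the per-topic judging loop (steps [1]-[3]).
    judge_batch(n) judges n more documents and returns how many were relevant."""
    judged = pool_size + 100               # step [1]: depth-10 pool, then 100 HiCAL docs
    relevant = judge_batch(judged)
    if 2 * relevant < pool_size:           # enough judgments relative to the pool
        return judged, relevant
    while judged < max_budget:
        increment = (2 * relevant + 100) - judged   # budget grows with known relevant
        if increment <= 0:                 # 2R + 100 <= J: "sufficiently complete"
            break
        step = min(increment, max_budget - judged)
        relevant += judge_batch(step)      # steps [2]-[3]: HiCAL-selected increment
        judged += step
        if relevant >= judged / 2:         # mostly relevant: topic likely too large
            break
    return judged, relevant

# Example with a fake assessor that marks roughly 10% of judged documents relevant.
print(judge_topic(lambda n: n // 10, pool_size=150))
```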
In this task, only Highly and Perfectly Relevant were considered to be relevant for binary measures and by HiCAL, though nDCG scores did use a gain value of 1 for the Related passages. Table 5 gives counts of the number of documents judged and the number of relevant documents (using the de\ufb01nitions for binary relevance) found for each of the 52 topics that entered the process. HiCAL is a dynamic collection construction method, meaning that the document to be judged next is selected only after judgments for previous documents have been received. The Common Core track in TRECs 2017 and 2018 used a method based on multi-armed bandit optimization techniques, another dynamic method, with the similar goal of building high-quality, reusable, ad hoc test collections affordably [Voorhees, 2018]. That work showed two main issues to be overcome when building new collections with dynamic techniques: providing the assessors the opportunity to learn a topic before immutable judgments are rendered, and setting individual topic budgets when assessors judge at different rates and at different times but are subject to a single overall judgment budget. The \ufb01rst issue is less severe with (NIST\u2019s modi\ufb01cation of) HiCAL since assessors can change the value of any previously made judgment at any time; whenever a new relevance model is calculated, HiCAL uses the judgments current at the time of calculation. Nonetheless, top-10 pools provide both an opportunity for assessors to learn a topic and ensure that all measures based on document-level cutoffs less than or equal to ten are precise for all judged runs, and this motivated the use of pools in the \ufb01rst stage of the process. Setting per-topic judgment budgets continues to be a challenging problem. The stopping criterion of ending a topic once 2R+100 documents were judged was motivated by the heuristic observed by Waterloo in prior use of HiCAL [Cormack and Grossman, 2018] further supported by the Common Core track\u2019s observation that a topic for which more than half of its judged documents are relevant is unlikely to be suf\ufb01ciently judged2. Note that the process described above was the target process, but the practicalities of keeping assessors occupied meant that some topics received more judgments than they \u201cdeserved\u201d. All judgments for non-excluded topics are included in the qrels \ufb01le. 5.1 Collection Robustness Our goal is to build general-purpose, reusable test collections at acceptable cost. In this context, general-purpose means a collection reliably ranks runs for a wide spectrum of evaluation measures, including recall-focused measures. Reusable means that runs that did not participate in the collection building process can be reliably ranked by the collection. Since costs in building a collection are generally dominated by the cost of human assessments, the number of relevance judgments required is used as the construction cost. 2We nonetheless included topics with a ratio of relevant to judged between 0.5 and 0.6 in the evaluation set because test collection stability tests suggest the collection is more stable with those topics than without them (likely because the total number of topics is greater with them) and to provide a greater diversity of topic sizes in the evaluation set. 15 \fTable 5: Judging statistics for the Document Ranking and Passage Ranking tasks. 
Given are the number of documents judged (any variant of) relevant, the total number of documents judged, and the fraction of judged documents that are relevant (Relevant Ratio). Topics were excluded from the evaluation set if they had fewer than 3 relevant or if the fraction of judged documents that are relevant was greater than 0.6. Data for excluded topics are given in gray. The \ufb01nal rows gives the total number of documents judged and the number of documents judged when not counting excluded topics. Document Ranking Passage Ranking Topic # Relevant # Judged Relevant Ratio # Relevant # Judged Relevant Ratio 19335 53 239 0.222 7 194 0.036 47923 767 1476 0.520 41 143 0.287 87181 168 404 0.416 31 158 0.196 87452 165 346 0.477 31 139 0.223 100983 341 420 0.812 370 432 0.856 104861 61 218 0.280 111 306 0.363 130510 42 174 0.241 14 133 0.105 131843 25 168 0.149 19 132 0.144 146187 25 157 0.159 8 138 0.058 148538 240 578 0.415 32 159 0.201 156493 151 378 0.399 117 300 0.390 168216 578 885 0.653 200 582 0.344 182539 23 144 0.160 9 132 0.068 183378 324 723 0.448 175 451 0.388 207786 76 228 0.333 11 137 0.080 264014 177 415 0.427 152 382 0.398 287683 3 190 0.016 1 140 0.007 359349 183 446 0.410 25 139 0.180 405717 34 171 0.199 7 144 0.049 423273 1 183 0.005 2 199 0.010 443396 195 376 0.519 63 188 0.335 451602 202 415 0.487 100 220 0.455 489204 392 700 0.560 24 175 0.137 490595 51 161 0.317 24 148 0.162 527433 52 204 0.255 34 160 0.212 573724 42 176 0.239 13 141 0.092 833860 178 412 0.432 42 157 0.268 855410 5 337 0.015 3 183 0.016 915593 115 314 0.366 79 192 0.411 962179 24 173 0.139 21 161 0.130 966413 283 372 0.761 120 180 0.667 1037798 44 188 0.234 7 154 0.045 1063750 381 708 0.538 183 392 0.467 1103812 40 234 0.171 11 141 0.078 1104031 432 466 0.927 113 152 0.743 1104492 335 395 0.848 192 300 0.640 1106007 242 416 0.582 41 178 0.230 1110199 41 183 0.224 28 175 0.160 1112341 385 664 0.580 119 223 0.534 1113437 93 280 0.332 25 180 0.139 1114646 55 163 0.337 12 151 0.079 1114819 562 1026 0.548 213 470 0.453 1115776 7 158 0.044 4 152 0.026 1117099 386 845 0.457 83 257 0.323 1121402 55 200 0.275 23 146 0.158 1121709 2 250 0.008 3 178 0.017 1121986 440 474 0.928 263 378 0.696 1124210 276 629 0.439 120 330 0.364 1129237 38 175 0.217 17 147 0.116 1132213 20 204 0.098 0 163 0.000 1133167 199 464 0.429 219 492 0.445 1134787 426 454 0.938 467 700 0.667 Total judged: 20,157 11,904 Final qrels size: 16,258 9260 16 \fLeave-Out-Uniques (LOU) tests [Buckley et al., 2007, Zobel, 1998] are a way of analyzing the reusability of a collection. In these tests, the relevant documents retrieved by only one participating team are removed from the qrels \ufb01les and all runs are then evaluated using the reduced qrels. The reduced qrels are the qrels that would have resulted had the team not participated in the collection building process, and thus their submitted runs represent new runs with respect to the reduced qrels. If the ranking of runs using the reduced qrels is essentially the same as the ranking of runs using the original qrels over all participating teams, then the original collection is likely reusable. The similarity between rankings of runs is usually de\ufb01ned by the Kendall\u2019s \u03c4 correlation between the rankings. Kendall\u2019s \u03c4 is a measure of association that is proportional to the number of interchanges between adjacent items in one ranking that are required to turn that ranking into the other. 
\u03c4 scores are normalized such that a score of 1 designates perfect agreement, -1 designates rankings that are inverses of one another, and 0 designates rankings that are independent of one another. \u03c4 scores can be misleading in the case of system rankings of TREC submissions, however, because usually there are a set of very good runs and a set of very poor runs and each of those run sets always rank in the same order. Thus, in addition to the \u03c4 score between the rankings, we also report drops, the largest (negative) difference in ranks experienced by some run [Voorhees, 2018]. A standard LOU test does not work for examining the collections built in the Deep Learning track because the HiCAL process does not depend on runs to provide documents and thus \u201cunique relevant documents\u201d is no longer a wellde\ufb01ned concept. A given team\u2019s unique relevant documents can be removed from the depth-10 pools in the \ufb01rst stage, but then the HiCAL process must activated as it may select the removed documents to be judged in later stages. Since the HiCAL process is not deterministic (ties are broken randomly) and depends on the particular set of documents seen so far, the HiCAL process must be simulated multiple times using the original qrels\u2019 judgments. The simulations proceeded as follows, where the entire process was performed separately for the Document Ranking and Passage Ranking collections. The original depth-10 pools (i.e., top-10 documents from all runs plus MARCO judgments) were fed to the HiCAL process for each of ten trials, where each trial used a separate initial seed for the random number generator. Within each trial, we tracked the documents encountered by HiCAL, creating a trace of the \ufb01rst 2500 documents encountered per topic. Any unjudged documents encountered by HiCAL were treated as not relevant. We created a qrels \ufb01le from each trace by taking a pre\ufb01x of the trace of length equal to the number of documents judged in the original qrels per topic. This resulted in 10 qrels \ufb01les that could have resulted as the of\ufb01cial qrels of the track (modulo the unjudged documents would have been judged). While these qrels are not identical to one another nor to the of\ufb01cial qrels, they do rank systems very similarly. The leftmost segment of Table 6 shows the \u03c4 values and the drops for MAP scores over the set of ten trials3. The top part of the table gives statistics for the Document Ranking task collection and the bottom part for the Passage Ranking task collection. The rightmost segment of Table 6 gives the \u03c4 and maximum drop values for the experiments when one participating team is removed from the process. In these experiments, for each team in turn, we created initial pools consisting of the MARCO judged documents plus the top-10 documents from all runs except those runs submitted by the current team. This pool was fed to the HiCAL process for each of ten trials where the random number seed for a given trial was the same as in the all-teams simulation. As before, we created a trace of the documents that were encountered by HiCAL, and created a qrels \ufb01le by taking a pre\ufb01x of the trace of length equal to the number of documents judged in the of\ufb01cial qrels. All runs were evaluated using this trial qrels, and the ranking induced by it was compared to the ranking induced by the of\ufb01cial qrels. The table reports the smallest \u03c4 and largest maximum drop observed over all teams for that trial. 
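For reference, the two agreement statistics used throughout this section can be computed in a few lines. The sketch below uses scipy's Kendall's tau and a simple rank-difference scan for the maximum drop; the run names and scores are made up for illustration.

```python
from scipy.stats import kendalltau

def ranks(scores):
    """Map run name -> rank position (1 = best) when sorted by descending score."""
    ordered = sorted(scores, key=scores.get, reverse=True)
    return {run: i + 1 for i, run in enumerate(ordered)}

def agreement(official_scores, reduced_scores):
    """Kendall's tau between two system orderings, plus the largest rank drop
    (how many positions some run falls when moving to the reduced qrels)."""
    runs = sorted(official_scores)
    tau, _ = kendalltau([official_scores[r] for r in runs],
                        [reduced_scores[r] for r in runs])
    r_off, r_red = ranks(official_scores), ranks(reduced_scores)
    max_drop = max(r_red[r] - r_off[r] for r in runs)
    return tau, max_drop

# Illustrative MAP scores for three hypothetical runs under two qrels variants.
official = {"run_a": 0.31, "run_b": 0.28, "run_c": 0.22}
reduced = {"run_a": 0.30, "run_b": 0.31, "run_c": 0.21}
print(agreement(official, reduced))
```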
In general, the ranking of systems is stable, providing support for the contention that the collections are reusable. A more detailed look at the variability in system rankings is given in Figure 9. The \ufb01gure shows a heat map of the number of times a run was ranked at a given position over all simulation trials (120 trials for the Document Ranking collection and 130 trials for the Passage Ranking task). The ranks are plotted on the x-axis and the runs on the y-axis where they are sorted by their position in the ranking by the of\ufb01cial qrels. The darker a plotted point the more times the run was ranked at that position. The \ufb01gure makes it clear that a large majority of runs have a single dominant rank. When a run does have change ranks, it moves by a modest amount. 5.2 Per-topic Budgets The qrels created from the simulations for the stability investigation were constructed to contain exactly the same number of judgments per topic as the of\ufb01cial qrels contains for fair comparisons. But, of course, no such stopping criterion is available when \ufb01rst building a collection. The trace of documents encountered by HiCAL in the simulations provides a mechanism for exploring the effect of different stopping conditions on the \ufb01nal collection. We construct a qrels by applying a given stopping criterion to a document trace. For these experiments, all 52 topics start the process 3Prec(10) scores are identical over all trials because each trial starts with a depth-10 pool. 17 \fTable 6: Kendall\u2019s \u03c4 and Maximum Drop in ranks observed in simulation trials. Each trial creates a qrels \ufb01le of the same size as the of\ufb01cial qrels, and the ranking of systems induced by that qrels is compared to the ranking induced by the of\ufb01cial qrels. Using all team\u2019s runs compared to the original (left columns) shows the effect of the nondeterminism of HiCAL. The remainder of the columns show the effect of omitting one team\u2019s runs from the pools in the \ufb01rst stage. All vs. Of\ufb01cial Omit Team vs. Of\ufb01cial MAP MAP Prec(10) Trial \u03c4 Drop \u03c4 Drop \u03c4 Drop 1 0.9915 1 0.9573 3 0.9856 5 2 0.9829 2 0.9659 3 0.9856 5 3 0.9801 2 0.9687 3 0.9856 5 4 0.9801 2 0.9687 3 0.9856 5 5 0.9829 2 0.9687 3 0.9827 5 6 0.9858 2 0.9687 3 0.9798 5 7 0.9886 2 0.9687 3 0.9856 5 8 0.9829 2 0.9687 3 0.9827 5 9 0.9801 2 0.9602 3 0.9856 4 10 0.9829 2 0.9659 3 0.9827 5 a) Document Ranking task collection All vs. Of\ufb01cial Omit Team vs. Of\ufb01cial MAP MAP Prec(10) Trial \u03c4 Drop \u03c4 Drop \u03c4 Drop 1 0.9970 1 0.9820 2 0.9939 2 2 0.9910 2 0.9819 2 0.9939 2 3 0.9880 2 0.9820 2 0.9939 2 4 0.9880 2 0.9820 2 0.9939 2 5 0.9880 2 0.9820 2 0.9939 2 6 0.9970 1 0.9820 2 0.9939 2 7 0.9940 1 0.9849 2 0.9939 2 8 0.9880 2 0.9820 2 0.9939 2 9 0.9880 2 0.9850 2 0.9939 2 10 0.9880 2 0.9820 2 0.9939 2 b) Passage Ranking task collection and each may be included in the \ufb01nal qrels if the stopping criterion allows. Unjudged documents encountered in a simulation are treated as not relevant. The simplest stopping criterion is to simply judge an equal number of documents per topic. Each of the X topics that starts the process gets totalBudget/X judgments, and a topic is included in the \ufb01nal qrels if at least some minimum number of relevant documents (we use 3) is found. 
The simplicity of this method arises from the fact that topics are independent of one another once the budget is determined, but equal allotment is known to be sub-optimal for \ufb01nding the maximum possible viable topics since \u201csmall\u201d topics will receive as many judgments as \u201clarge\u201d topics. An alternative is a strict implementation of the process loosely followed in the track; we call this the Heuristic stopping criterion. In the Heuristic simulation experiments here, we capped the number of judgments any topic can receive at 1000, though that cap was never reached. Table 7 shows the number of judgments required and relative quality of the qrels created from these different stopping criteria for the two collections built in the track. Note that the only judgments available for these collection is from the Of\ufb01cial qrels, so a method could never \ufb01nd more relevant than in the Of\ufb01cial qrels. The statistics for the Document Ranking task collection are given in the top half of the table and for the Passage Ranking task collection in the bottom half. The statistics for the Of\ufb01cial qrels is included in the table for reference. The qrels designated as \u201cOriginal Size\u201d are the same qrels as in the previous experiments above: pools are built from all runs but ten different trials of the HiCAL process, corresponding to ten different random number seeds, are tested. \u201cBudget 400\u201d and \u201cBudget 500\u201d correspond to a constant per-topic budget of 400 and 500 judgments respectively. The Total Number of Judgments column in the table gives the number of judgments used over all topics that start the process. These judgments must be made to determine whether a topic will be included in the \ufb01nal evaluation set, and 18 \f0 10 20 30 Rank bm25_marcomb bm25exp_marcomb bm25exp_marco bm25base_rm3 bm25tuned_ax bm25tuned_prf idst_bert_v2 idst_bert_v1 bm25tuned_rm3 idst_bert_v3 bm25base_prf bm25base_ax srchvrs_run1 srchvrs_run2 bm25base uogTrDNN6LM bm25tuned idst_bert_r1 idst_bert_r2 baseline dct_qp_bm25e dct_tp_bm25e2 TUW19-d3-re ucas_runid3 ucas_runid1 dct_tp_bm25e ucas_runid2 TUW19-d1-re TUW19-d2-re ms_ensemble runid1 ms_duet TUW19-d2-f TUW19-d3-f TUW19-d1-f uogTrDSS6pLM uogTrDSSQE5LM query2doc_RNN Run count <= 25 count <= 50 count <= 60 count <= 70 count <= 80 count <= 90 count <= 100 count <= 125 count <= 150 0 10 20 30 Rank idst_bert_v3 idst_bert_v1 idst_bert_v2 idst_bert_r1 idst_bert_r2 bm25exp_marcomb bm25_marcomb TUW19-d3-re bm25exp_marco ucas_runid1 ucas_runid3 TUW19-d1-re ucas_runid2 TUW19-d2-re uogTrDNN6LM ms_ensemble dct_tp_bm25e dct_tp_bm25e2 srchvrs_run1 TUW19-d2-f srchvrs_run2 bm25tuned_rm3 ms_duet TUW19-d1-f TUW19-d3-f bm25tuned_ax bm25tuned_prf dct_qp_bm25e bm25base_prf bm25base_rm3 runid1 bm25base_ax bm25base bm25tuned baseline uogTrDSSQE5LM uogTrDSS6pLM query2doc_RNN Run count <= 25 count <= 50 count <= 60 count <= 70 count <= 80 count <= 90 count <= 100 count <= 125 count <= 150 Document Ranking collection, MAP Document Ranking collection, Prec(10) 0 10 20 30 Rank p_exp_rm3_bert idst_bert_p3 idst_bert_p2 idst_bert_p1 p_exp_bert p_bert idst_bert_pr1 TUA1-1 test1 idst_bert_pr2 runid4 runid3 TUW19-p3-f TUW19-p2-f TUW19-p1-f TUW19-p3-re srchvrs_ps_run2 TUW19-p1-re TUW19-p2-re bm25base_ax_p bm25tuned_prf_p bm25tuned_ax_p bm25base_prf_p ms_duet_passage bm25base_rm3_p bm25tuned_rm3_p srchvrs_ps_run3 bm25base_p srchvrs_ps_run1 bm25tuned_p runid2 UNH_bm25 runid5 ICT-CKNRM_B50 ICT-BERT2 ICT-CKNRM_B UNH_exDL_bm25 Run count <= 25 count <= 
50 count <= 60 count <= 70 count <= 80 count <= 90 count <= 100 count <= 125 count <= 150 0 10 20 30 Rank idst_bert_p2 idst_bert_p1 idst_bert_p3 p_exp_rm3_bert p_bert p_exp_bert test1 idst_bert_pr2 TUA1-1 idst_bert_pr1 runid4 runid3 TUW19-p3-f TUW19-p2-f TUW19-p3-re TUW19-p1-f ICT-CKNRM_B TUW19-p1-re srchvrs_ps_run2 TUW19-p2-re ICT-BERT2 ICT-CKNRM_B50 ms_duet_passage bm25tuned_prf_p bm25base_ax_p bm25base_prf_p srchvrs_ps_run3 bm25tuned_ax_p bm25base_rm3_p bm25tuned_rm3_p srchvrs_ps_run1 runid2 runid5 bm25base_p bm25tuned_p UNH_bm25 UNH_exDL_bm25 Run count <= 25 count <= 50 count <= 60 count <= 70 count <= 80 count <= 90 count <= 100 count <= 125 count <= 150 Passage Ranking collection, MAP Passage Ranking collection, Prec(10) Figure 9: Position at which runs ranked over all simulation trials. so must be accounted for in the budgeting process. The Number of Evaluation Topics is the number of topics that are included in the \ufb01nal qrels \ufb01le based on the criterion\u2019s speci\ufb01cation. Original Size qrels always have the same number of judgments as the of\ufb01cial qrels by construction, so the qrels built using that method in each trial has the same number of topics as the qrels from all other trials, namely the number of topics in the Of\ufb01cial qrels. Constant budget qrels omit a topic only if the minimum number of relevant documents for a topic is not found. While it is possible for qrels created by a constant budget to differ in the number of topics, for the current collections each trial produced a qrels with the same number of topics as the other trials. The Heuristic method omits not only topics with too few relevant documents but topics with too many relevant as well. Again, different trials could lead to different numbers of topics in the qrels, but that did not happen in practice. The Heuristic method is the only method among those tested that can differ in the number of documents judged across trials. For that method, the table reports the mean number of judgments across the ten trials as well as the minimum and maximum number of judgments observed in a trial. The remaining columns in the table give the Kendall\u2019s \u03c4 score and maximum drops for the ranking of systems produced by the test qrels as compared to the ranking produced by the Of\ufb01cial qrels. As in the experiments above, the value reported is the smallest \u03c4 and largest drop observed across the ten trials. The main take-away from the results in Table 7 is that the HiCAL process is very stable across trials and is even robust to differences in stopping conditions within the ranges tested. The primary effect of the different stopping conditions is the inclusion or exclusion of topics affecting mean scores, not differences in individual topic scores. Averaging 19 \fTable 7: Effect of stopping criteria on qrels quality and number judgments required. 
Total # Eval MAP Prec(10) Criterion Judgments Topics \u03c4 Drop \u03c4 Drop Of\ufb01cial 20,157 43 \u2014 \u2014 \u2014 \u2014 Original Size 20,157 43 0.9801 2 1.0000 0 Budget 400 20,852 50 0.9316 5 0.9017 8 Budget 500 26,052 50 0.9431 3 0.9017 8 Heuristic 17,231.2 [17,190\u201317,262] 38 0.9260 5 0.9565 2 a) Document Ranking task collection Total # Eval MAP Prec(10) Criterion Judgments Topics \u03c4 Drop \u03c4 Drop Of\ufb01cial 11,904 43 \u2014 \u2014 \u2014 \u2014 Original Size 11,904 43 0.9880 2 1.0000 0 Budget 400 20,852 49 0.9880 1 0.9727 3 Budget 500 26,052 49 0.9880 1 0.9727 3 Heuristic 12,721.6 [12,712-12,730] 46 0.9880 1 0.9786 2 b) Passage Ranking task collection effects are the sole explanation for the differences in Prec(10) rankings: since the top-10 pool was always judged in all conditions, the only difference that can arise for a Prec(10) ranking is the change in the mean score when a topic is omitted from the evaluation set. A large majority of the topics omitted by the Heuristic method were eliminated by matching the condition |Relevant| > 0.6|Judged| once suf\ufb01ciently many documents were judged (i.e., in step 2 above). LOU tests and other simulations are dependent on the results submitted to the track, so it is not possible to say with certainty that a given partially judged collection is reusable. Nonetheless, the current evidence suggests that the collections built in the Deep Learning track are high quality ad hoc collections. 6" + } + ], + "Michael Bendersky": [ + { + "url": "http://arxiv.org/abs/2010.00200v1", + "title": "RRF102: Meeting the TREC-COVID Challenge with a 100+ Runs Ensemble", + "abstract": "In this paper, we report the results of our participation in the TREC-COVID\nchallenge. To meet the challenge of building a search engine for rapidly\nevolving biomedical collection, we propose a simple yet effective weighted\nhierarchical rank fusion approach, that ensembles together 102 runs from (a)\nlexical and semantic retrieval systems, (b) pre-trained and fine-tuned BERT\nrankers, and (c) relevance feedback runs. Our ablation studies demonstrate the\ncontributions of each of these systems to the overall ensemble. The submitted\nensemble runs achieved state-of-the-art performance in rounds 4 and 5 of the\nTREC-COVID challenge.", + "authors": "Michael Bendersky, Honglei Zhuang, Ji Ma, Shuguang Han, Keith Hall, Ryan McDonald", + "published": "2020-10-01", + "updated": "2020-10-01", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.CL" + ], + "main_content": "Introduction In this paper, we analyze the participation of our team \u2013 unique_ptr \u2013 in the TREC-COVID challenge organized by the Allen Institute for Arti\ufb01cial Intelligence (AI2), the National Institute of Standards and Technology (NIST), the National Library of Medicine (NLM), Oregon Health and Science University (OHSU), and the University of Texas Health Science Center at Houston (UTHealth)2. The TREC-COVID challenge followed the TREC model for building test collections through community participation; the submissions from the different teams were pooled to create a reusable test collection to encourage future research in systems for information retrieval from scienti\ufb01c literature. The CORD-19 Research Dataset3 was used as a target retrieval corpus. The challenge was organized as a series of \ufb01ve rounds, where participants could choose to skip any round. 
Each round was associated with a set of structured search topics for which relevant documents needed to be retrieved (see Figure 1 for an example of a topic). While the majority of the topics repeated across rounds with only \ufb01ve new topics added per-round, the CORD-19 dataset itself grew signi\ufb01cantly during the time that the challenge took place (see Figure 2(a)), and only the new relevance assessments (by human annotators with biomedical expertise) from the round were used to score the submissions. Therefore, strong performance in one round did not guarantee success in future rounds. unique_ptr had participated in all \ufb01ve rounds, however the techniques described in this paper only solidi\ufb01ed in rounds 4 and 5, where our team achieved the best performance among 72 and 126 runs, respectively, on the majority of the evaluation metrics. Early on in the competition, we realized that due to the rapid evolution of the corpus, it is unlikely that the \u201cwinner takes all\u201d approach will dominate, with a single method leading the challenge in all rounds. Therefore, instead, we turned our attention to an ensemble approach, that would be able to adapt to the rapidly evolving CORD-19 content, and would be able to leverage the growing pool of relevance judgements for each query (Figure 2(b)). After several less successful (but highly educational) attempts in Rounds 1 \u2013 3, we have zeroed in on the ensemble approach described in this paper. \u2217Work done while at Google Research. 2https://ir.nist.gov/covidSubmit/ 3https://www.semanticscholar.org/cord19 Preprint. Under review. arXiv:2010.00200v1 [cs.IR] 1 Oct 2020 \f post-infection COVID-19 immunity do individuals who recover from COVID-19 show sufficient immune response, including antibody levels and T-cell mediated immunity, to prevent re-infection? There is concern about re-infection for COVID-19, so this topic is looking for studies suggesting post-infection immunity, including post-infection antibody levels (over time) and evidence for individuals who have been infected more than once. Figure 1: Example of a topic used in TREC-COVID challenge. Round ID Metadata rows 0 50000 100000 150000 200000 I II II IV V Round ID Judgments / Query 0 250 500 750 1000 I II II IV V (a) (b) Figure 2: Plots that demonstrate the evolution of the CORD-19 corpus as the TREC-COVID rounds progressed. (a) Number of documents for which any metadata was available in each round; (b) Available relevance judgments per query prior to each round. Our approach combines runs from lexical and semantic retrieval systems, as well as pre-trained BERT rankers to achieve a dual effect of high recall of relevant retrieved documents, and high precision at the top ranks. It can also make use of existing relevance judgements, both in the retrieval stage via relevance feedback, as well as in the ranking stage, via BERT model \ufb01ne-tuning. To join all of these disparate retrieval and ranking components together we propose a simple but effective weighted hierarchical rank fusion technique. Our \ufb01nal submission \u2013 codenamed RRF102 \u2013 ensembles together 102 different retrieval and ranking runs using this technique. RRF102 signi\ufb01cantly and consistently outperforms other alternatives both in our ablation studies, as well as on the of\ufb01cial Round 5 leaderboard. 2 Related Work Readers who are interested in the further details on the TREC-COVID challenge are encouraged to refer to an excellent overview by Voorhees et al. [2020]. 
There were also other publications by the participating teams, most of which can be found on the TREC-COVID Bibliography page. We do not claim novelty for most of the ideas presented in this paper. Many of them were discussed and utilized by other challenge participants, like relevance feedback [Zhang et al., 2020] or using the MS-MARCO dataset for training BERT ranking model [MacAvaney et al., 2020]. Our main contribution in this work is careful evaluation of the various retrieval and ranking systems, as well as a robust ensembling mechanism that combines lexical and semantic retrieval with multiple rankers. 3 Ensemble Construction Ensemble models have repeatedly been placed at the top positions of recommendation and ranking competitions. For instance, Bell and Koren [2007], who were among the winners of the Net\ufb02ix prize noted that: \u201cit was important to utilize a variety of models that complement the shortcomings of each 2 \fother\u201d. Burges et al. [2011] \u2013 the winners of the Yahoo! Learning to Rank challenge \u2013 used a linear combination of 12 ranking models, combining LambdaMART boosted decision trees, neural nets and logistic regression with various loss functions. Han et al. [2020] used an ensemble of 15 BERT, RoBERTa and ELECTRA models to achieve top best performance in the MS-MARCO passage reranking task. Therefore, following these success stories, we also focused on exploring ensemble models for our TREC-COVID challenge submission. The most common application of the ensembling method (e.g., in a classi\ufb01cation or a regression task) consists of two main stages: \ufb01rst, we develop a pool of base learners, and then combine them together to form an aggregate prediction [Hastie et al., 2009]. Since each of these two stages may require some \ufb01tting to the training data, it is important to ensure that the overall ensemble is not so complex that it is over\ufb01tted, and does not generalize to unseen data. Therefore, in most cases, ensembles are implemented using a pool of simple diverse base learners usually combined via (weighted) averaging of their predictions. The setting we face in the TREC-COVID challenge is somewhat more involved than the ensemble setting presented above. As is common in information retrieval, there are two stages to generating the optimal ranked list. First, we need to retrieve an initial set of candidates that potentially match the topic. The success at this stage is measured by a metric like recall@K (where K \u2248O(1000)). Then, we need to apply ranking to this set, to achieve the most optimal ordering. The success in the ranking stage will be measured by a metric like nDCG@K (where K \u2248O(10)). Mean average precision (MAP) is often used as an effective metric for measuring the joint effect of the two stages (both high recall and precision) . Many of the successful submissions at the \ufb01rst rounds of the TREC-COVID challenge \ufb01xed the \ufb01rst stage (retrieval) to be a simple lexical retrieval algorithm, e.g., BM25, and instead focused on the re-ranking stage [MacAvaney et al., 2020]. While reasonable (and indeed effective at achieving high nDCG@10 performance), in our opinion, this approach may limit the real-world applicability of the resulting algorithms, since high recall is of importance in the medical domain, e.g., for summarizing all available evidence regarding a certain treatment or symptom [Kanoulas et al., 2017]. Therefore, we have deployed a two-pronged ensembling approach in our submission. 
On one hand, we use a combination of different retrieval mechanisms to obtain a comprehensive document set to increase recall. On the other hand, we also apply multiple BERT-based rankers to this set, which was found to be beneficial for high precision at top ranks in prior studies [Han et al., 2020]. As we demonstrate in Section 5, the resulting ensemble achieves the best of both worlds: state-of-the-art performance on a wide range of both precision and recall metrics.
How to optimally construct such two-stage retrieval and ranking ensembles remains an interesting open research question that we will undoubtedly revisit in the future. However, for the purposes of the TREC-COVID challenge, using reciprocal rank fusion (RRF) [Cormack et al., 2009] as a foundational building block results in robust and effective ensembles. One important advantage of RRF is that, as its name suggests, it only requires access to document ranks, and thus can accommodate heterogeneous candidate sets with differing score ranges. However, in its most basic form, RRF can result in sub-optimal performance, which we address by proposing a simple hierarchical variant of this method.
3.1 Notation
We first introduce some notation that will be useful in the exposition of our method, which follows next. We are given a document corpus C, from which the candidate documents are drawn. We are also given a set of runs R, wherein each run r \in R induces a permutation \pi_r over a subset of documents {d} \subset C. With a slight abuse of notation, we denote the rank of document d in run r as \pi_r(d), and its respective score as sc_r(d). Each run r is generated by some system S, and therefore the entire run pool R can be divided into non-overlapping pool subsets R_S. Note that each system S can generate multiple runs, e.g., through variation in parameters or inputs. The concrete implementations of systems and runs are discussed in detail in Section 4. In the remainder of this section, we discuss how these runs are ensembled to form our final submission.
3.2 Reciprocal Rank Fusion
Following the formulation originally proposed by Cormack et al. [2009], reciprocal rank fusion (RRF) sorts the documents according to a simple scoring formula, where the document score is defined as a sum of reciprocal ranks of the document across the runs:
\[
sc_{rrf}(d) = \begin{cases} \sum_{r \in R} \frac{1}{k + \pi_r(d)}, & \text{if } d \in \pi_r \\ 0, & \text{otherwise} \end{cases} \tag{1}
\]
We fix k = 60, following the original paper. k is a constant that mitigates the impact of high rankings by outlier systems. Sorting all the documents where sc_{rrf}(d) > 0 in a descending order of sc_{rrf}(d) produces an RRF run \pi_{rrf}.
3.3 Hierarchical Reciprocal Rank Fusion
As noted above, our runs come from a heterogeneous set of systems, and the number of runs across systems may vary quite dramatically (see more on that in Section 4). Therefore, rankings in the \pi_{rrf} run may be dominated by the system that has the most runs. To mitigate this effect, and to ensure that no system is over-represented in the final fusion run, we propose a simple approach based on a hierarchical application of rank fusion. First, we divide our run pool R into sub-pools R_S, each corresponding to the runs produced by system S. Obviously, it is possible to divide R into sub-pools beyond system pools, but we stick to this simple mechanism in our submissions, as it is quite logical, and empirically effective.
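As a concrete reference point for the fusion steps that follow, a minimal implementation of Equation 1 might look as follows; representing runs as ordered lists of document ids is an assumption of this sketch rather than the submission code.

```python
from collections import defaultdict

def rrf(runs, k=60):
    """Reciprocal rank fusion (Equation 1): each document receives
    1 / (k + rank) from every run that retrieved it (rank counted from 1),
    and documents are returned sorted by the summed score."""
    scores = defaultdict(float)
    for run in runs:                        # run = list of doc ids, best first
        for rank, doc in enumerate(run, start=1):
            scores[doc] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Toy example with two runs over a handful of documents.
print(rrf([["d1", "d2", "d3"], ["d2", "d4"]]))
```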
For each pool R_S, we produce a single run \pi_{rrf_S}, such that
\[
sc_{rrf_S}(d) = \begin{cases} \sum_{r \in R_S} \frac{1}{k + \pi_r(d)}, & \text{if } d \in \pi_r \\ 0, & \text{otherwise.} \end{cases}
\]
We then recursively apply RRF to all the \pi_{rrf_S} runs, resulting in the final hierarchical rank fusion run \pi_{hrrf}:
\[
sc_{hrrf}(d) = \begin{cases} \sum_{R_S \subset R} \frac{1}{k + \pi_{rrf_S}(d)}, & \text{if } d \in \pi_{rrf_S} \\ 0, & \text{otherwise} \end{cases} \tag{2}
\]
3.4 Weighted Hierarchical Reciprocal Rank Fusion
Since it is likely that not all systems are of equal quality, intuitively it makes sense to weight their contributions, resulting in a variant of the hierarchical rank fusion \pi_{hwrrf}:
\[
sc_{hwrrf}(d) = \begin{cases} \sum_{R_S \subset R} \frac{w_S}{k + \pi_{rrf_S}(d)}, & \text{if } d \in \pi_{rrf_S} \\ 0, & \text{otherwise} \end{cases} \tag{3}
\]
Given some training data, it is possible to make the weights w_S learnable. However, given the paucity of training data, we were somewhat apprehensive of overfitting, and used a simple heuristic instead. We set w_S = 2 for any systems that rely on prior relevance judgments, and w_S = 1 for all other systems. This heuristic has the advantage of reflecting the intuition that the systems that had access to human labels are more trustworthy, without explicitly using any human labels to estimate the level of this trust.
4 Detailed Overview of Systems and Runs
Thus far, we described hierarchical reciprocal rank fusion (h-RRF), the general ensembling framework within which we operated in our submissions. In this section, we provide a detailed exposition of the systems that were ensembled using h-RRF, and elaborate on the runs that were produced using these systems. In general, as discussed in the beginning of the previous section, we gave preference to simple, replicable systems that require as little training data as possible.
[Figure 3: flow diagram with the following components: Lexical Retrieval Runs, Semantic Retrieval Runs, Relevance Feedback Runs, h-RRF, Relevance judgments, TFR-BERT Runs, Weighted h-RRF, RRF102 submission run.]
Figure 3: Schematic diagram of the weighted hierarchical rank fusion ensemble.
Table 1: Summary of retrieval and ranking systems used, and the runs produced by each system.
  System          Type                # Runs Produced
1 Terrier         Lexical Retrieval   14
2 Anserini        Lexical Retrieval   12
3 Dual Encoder    Semantic Retrieval  24
4 Terrier         Relevance Feedback  2
5 Anserini        Relevance Feedback  2
6 MS-Marco BERT   TFR-BERT            30
7 Finetuned BERT  TFR-BERT            18
  Total:                              102
Figure 3 provides a schematic overview of our overall ensembling flow. First, we use lexical and semantic retrieval from either inverted indices or a k-nearest neighbor database, respectively, to retrieve a set of candidates, ranked by a simple match score (e.g., BM25 or vector dot product). The candidates from the runs generated by these retrieval systems are fed into a hierarchical rank fusion (as shown in Equation 2), and its output is re-ranked using multiple Tensorflow Ranking BERT models [Han et al., 2020]. In addition, we perform several runs of a standard relevance feedback-based retrieval. Finally, the outputs of these four systems (lexical and semantic retrieval, relevance feedback and TFR-BERT) are all fused using weighted h-RRF (Equation 3). This constitutes our best final submission run; we refer to it as RRF102, with the name indicating the total number of runs being ensembled. A summary of these runs and the systems that produced them is provided in Table 1. We use the remainder of this section to describe them in more detail.
4.1 Lexical Retrieval Systems
We used two popular open source search engines, Terrier [Ounis et al., 2005] and Anserini [Yang et al., 2017], to generate our lexical retrieval runs.
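Before describing each system in detail, the weighted hierarchical fusion of Equations 2 and 3 can be sketched as follows; the per-system weights follow the 2-vs-1 heuristic of Section 3.4, and the plain-Python run representation is again an assumption of this illustration, not the submission code.

```python
from collections import defaultdict

def weighted_hierarchical_rrf(pools, weights, k=60):
    """Weighted hierarchical RRF (Equations 2 and 3). `pools` maps a system
    name to its runs (each run a list of doc ids, best first); each system is
    first fused with plain RRF, and the per-system runs are then fused again,
    with the system weight scaling the reciprocal-rank contribution."""
    def fuse(runs):
        scores = defaultdict(float)
        for run in runs:
            for rank, doc in enumerate(run, start=1):
                scores[doc] += 1.0 / (k + rank)
        return sorted(scores, key=scores.get, reverse=True)

    per_system = {name: fuse(runs) for name, runs in pools.items()}
    final_scores = defaultdict(float)
    for name, fused_run in per_system.items():
        for rank, doc in enumerate(fused_run, start=1):
            final_scores[doc] += weights.get(name, 1.0) / (k + rank)
    return sorted(final_scores, key=final_scores.get, reverse=True)

pools = {
    "lexical": [["d1", "d2"], ["d2", "d3"]],
    "relevance_feedback": [["d3", "d1"]],
}
weights = {"lexical": 1.0, "relevance_feedback": 2.0}  # Section 3.4 heuristic
print(weighted_hierarchical_rrf(pools, weights))
```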
Terrier was elected based on its excellent documentation, expressive query language, and its ability to implement common retrieval algorithms via con\ufb01guration. In the case of Anserini, we used the runs kindly published by the covidex team [Zhang et al., 2020]. These runs were also used by many of the competing teams, thus providing a natural benchmark for evaluating the performance of the other systems in the ensemble. 5 \f4.1.1 Terrier For the Terrier lexical retrieval system we generate multiple retrieval runs, each using a different representation of a subset of topic \ufb01elds: \u2022 Bag-of-words representation of the query \ufb01eld \u2022 DFR-based dependence model [Peng et al., 2007] representation of the query \ufb01eld \u2022 Bag-of-words representation of the question \ufb01eld \u2022 DFR-based dependence model representation of the question \ufb01eld \u2022 Bag-of-words representation of the concatenation of query and question \ufb01elds \u2022 Bag-of-words representation of the concatenation of question and narrative \ufb01elds \u2022 Bag-of-words representation of the concatenation of query and question \ufb01elds, expanded with 10 most informative terms appearing in the top documents. For each of these representations, we apply the resulting queries to both abstract and full-text indices. In all the runs, unless speci\ufb01ed otherwise, we use the default Terrier settings. Overall, this results in 14 Terrier lexical retrieval runs. 4.1.2 Anserini For Anserini, we do not conduct any runs ourselves, but rather use the runs provided by the covidex team. Speci\ufb01cally we use the following combinations of indices and topic \ufb01elds: \u2022 abstract AND query+question \u2022 abstract AND UDel-qgen \u2022 full-text AND query+question \u2022 full-text AND UDel-qgen \u2022 paragraph AND query+question \u2022 paragraph AND UDel-qgen We use these combinations both for regular4 and doc2query expanded5 Anserini indices, resulting in 12 runs. Note that we do not use any of the published Anserini rank fusion runs, as we rely on our own implementation of the hierarchical rank fusion. 4.2 Relevance Feedback Systems As in each of the TREC-COVID challenge rounds the majority of the existing topics were being reused, past participants found relevance feedback to be bene\ufb01cial for obtaining effective submissions. As an example, UIowaS team had achieved consistently high ranks across multiple rounds using a simple Borda fusion of multiple Terrier runs with BM25 weighting and relevance feedback. Inspired by this simple yet effective approach we implement two relevance feedback runs using our Terrier system with abstract index: \u2022 Relevance feedback run with query+question \ufb01eld expanded by 300 terms from 10 highest ranked relevant documents. \u2022 Relevance feedback run with query+question \ufb01eld expanded by 1, 000 terms from 30 highest ranked relevant documents. In addition, we use two published Anserini relevance feedback runs using both regular and doc2query expanded abstract indices. Overall, this results in four relevance feedback runs used in our submission. 
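The relevance feedback runs above all follow the same basic recipe: expand the query with terms drawn from the top-ranked judged-relevant documents and retrieve again. The actual runs rely on Terrier's and Anserini's built-in feedback models; the engine-agnostic sketch below uses plain term frequency for term selection, which is a simplification for illustration only.

```python
from collections import Counter

def expansion_terms(relevant_docs, num_terms, stopwords=frozenset()):
    """Pick the `num_terms` most frequent terms across the judged-relevant
    documents, skipping stopwords; these terms would be appended to the query
    before a second retrieval pass."""
    counts = Counter()
    for text in relevant_docs:
        counts.update(t for t in text.lower().split() if t not in stopwords)
    return [term for term, _ in counts.most_common(num_terms)]

# Toy example mirroring the "10 highest ranked relevant documents, 300 terms" run.
top_relevant = [
    "antibody levels after covid-19 infection decline over time",
    "t-cell mediated immunity persists after infection",
]
expanded_query = "post-infection covid-19 immunity " + " ".join(
    expansion_terms(top_relevant, num_terms=300, stopwords={"after", "over"}))
print(expanded_query)
```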
4https://github.com/castorini/anserini/blob/master/docs/experiments-covid.md 5https://github.com/castorini/anserini/blob/master/docs/experiments-covid-doc2query.md 6 \f4.3 Semantic Retrieval System 4.3.1 Neural retrieval model based on BERT The neural retrieval model belongs to the family of relevance-based dense retrieval or dual encoder models that encodes pairs of items in dense subspaces [Palangi et al., 2016]. In particular, our encoders are based on BERT [Devlin et al., 2019], which takes a query(or document) as input, and then projects the [CLS]token representation down to a 768 dimensional vector as the embedding of that query (or document). We then compute the relevance score as vector dot product between the query and document embedding. We share parameters between query and document encoder, so called Siamese networks \u2013 as we found this greatly increased performance while reducing parameters. We train our dual-encoder models using softmax cross-entropy loss together with in-batch negatives, i.e., given a query in a batch of (query, relevant-passage) pairs, passages from other pairs are considered irrelevant for that query. In-batch negatives has been widely adopted in training neural network based retrieval models as it enables ef\ufb01cient training via computation sharing [Gillick et al., 2018, Karpukhin et al., 2020]. For serving, we \ufb01rst run the encoder over every passage of\ufb02ine to create a distributed lookup-table as a backend. At inference, we only need to run the encoder on the input query. The query encoding is used to perform nearest neighbour search against the passage encodings in the backend. Since the total number of passages is in the order of millions and each passage is projected to a 768 dimensional vector, we use distributed brute-force search for exact inference instead of approximate nearest neighbour search [Liu et al., 2011, Johnson et al., 2017]. 4.3.2 Synthetic question generation One critical ingredient for training deep neural models is the abundant training data. However, such resource is not always available, especially on specialized domains such as biomedical domain. To handle the data scarcity issue, we adopt the data augmentation approach proposed by Ma et al. [2020], which automatically generates synthetic questions on the target domain documents. In particular, a transformer-based [Vaswani et al., 2017] encoder-decoder generation model is trained to generate questions speci\ufb01c to a given passage. On completion, we apply the question generator on abstracts of PubMed/MEDLINE articles. This generates roughly 166 million (synthetic question, abstract) pairs for training our dual encoder model. 4.3.3 Hybrid retrieval system Although dual encoder models are good at capturing semantic similarity, e.g., \u201cTheresa May\u201d and \u201cPrime Minister\u201d [?], we observe lexical matching consistently poses a challenge for the dual encoder model. To mitigate the issue, we build a hybrid retrieval system by combining the dual encoder model with BM25 model, exploiting the strength of BM25 in term matching.6 Note that lexical retrieval systems like BM25 can be viewed as vector dot-product with nearest neighbor search. Formally, let qbm25 \u2208[0, 1]|V | be a |V |-dimensional binary encoding of a query q, i.e., qbm25[i] is 1 if the i-th entry of vocabulary V is in q, 0 otherwise. 
Furthermore, let $d^{bm25} \in \mathbb{R}^{|V|}$ be a sparse real-valued vector where

  $d^{bm25}_i = \mathrm{IDF}(d_i) \cdot \frac{\mathrm{cnt}(d_i, d) \cdot (k + 1)}{\mathrm{cnt}(d_i, d) + k \cdot \left(1 - b + b \cdot \frac{m}{m_{avg}}\right)}$,

with $m$ and $m_{avg}$ denoting the document length and average document length. We can see that

  $\mathrm{BM25}(q, d) = \langle q^{bm25}, d^{bm25} \rangle$,

where $\langle \cdot, \cdot \rangle$ denotes the vector dot product. This view gives rise to a simple hybrid model:

  $\mathrm{sim}(q^{hyb}, d^{hyb}) = \langle q^{hyb}, d^{hyb} \rangle = \langle [\lambda q^{nn}, q^{bm25}], [d^{nn}, d^{bm25}] \rangle = \lambda \langle q^{nn}, d^{nn} \rangle + \langle q^{bm25}, d^{bm25} \rangle$,

where $q^{nn}$ and $d^{nn}$ denote the query and document embeddings from the dual encoder model, respectively, and $\lambda$ is a hyper-parameter that controls the weight of the dual encoder system.

Footnote 6: Note that while the lexical match issue may be somewhat mitigated by the lexical retrieval systems in the ensemble, we did find combining with BM25 at a system level helpful in our investigations, as it provides more diversity in the runs for the final ensemble.

We use this hybrid system to generate multiple runs, based on different topic and index configurations:
• abstract AND query+question+narrative
• full-text AND query+question+narrative
• full-text AND query+question
• full-text AND query
For each of the above configurations, we also try different λ values within {1, 5, 10, 15, 20, 30}. This results in 24 dual encoder runs overall.

4.3.4 Implementation Details
Both the encoder and decoder of the question generation model have the same configuration as a BERT-base model. In addition, we share parameters between the encoder and decoder, and parameter values are initialized with the public uncased BERT-base checkpoint (footnote 7). We truncate the answer passage to 128 tokens and limit decoding to 64 steps. The training objective is the standard cross entropy, and the model is trained with a batch size of 128 and a learning rate of 1e-4 using the Adam [Kingma and Ba, 2014] optimizer. The dual encoder model described in Section 4.3.1 is based on a customized BERT model, which contains 12 transformer layers [Vaswani et al., 2017], each with a hidden dimension of 1024 and 16 attention heads. We pretrain our own BERT model on PubMed abstracts with a customized wordpiece vocabulary that contains 107K entries. We follow the same sentence sampling procedure as reported in the original BERT paper, e.g., the combined sequence has length no longer than 512 tokens, and we uniformly mask 15% of the tokens from each sequence for masked language model prediction. We update the next sentence prediction task by replacing the original binary cross-entropy loss with a softmax cross-entropy loss. We use the same hyper-parameter values for BERT pretraining as Devlin et al. [2019], except that the learning rate is set to 2e-5 and the model is trained for 300,000 steps. For dual encoder training, we use a batch size of 6144. Each training example in the batch is a question-abstract pair, and we truncate queries and abstracts to 48 and 350 tokens, respectively, by BERT wordpiece tokenization. We train the model for 100,000 steps using Adam with a learning rate of 5e-6. Similar to BERT pretraining, we also apply an L2 weight decay of 0.01 and warm up the learning rate for the first 10,000 steps.

4.4 TFR-BERT Rankers
We base our re-ranking strategies on TFR-BERT [Han et al., 2020]. In general, we fine-tune a pretrained contextual representation model like BERT based on ranking losses and score each document d (since BERT-style encoders are usually applied to shorter text sequences, in practice we only apply them for scoring document abstracts).
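As a brief illustration of the hybrid scoring in Section 4.3.3, the following sketch uses hypothetical precomputed vectors; the sparse BM25 weights and the dense embeddings would come from the lexical systems and the dual encoder above, and λ is the weight just described.

import numpy as np

def hybrid_score(q_nn, d_nn, q_bm25, d_bm25, lam):
    # lam * dense dot product + sparse BM25 dot product (Section 4.3.3)
    dense = float(np.dot(q_nn, d_nn))
    # Sparse vectors as {term_id: weight} dictionaries to avoid |V|-sized arrays.
    sparse = sum(w * d_bm25.get(t, 0.0) for t, w in q_bm25.items())
    return lam * dense + sparse

# Toy example; in practice q_nn and d_nn are 768-dimensional BERT embeddings.
q_nn, d_nn = np.random.rand(768), np.random.rand(768)
q_bm25 = {11: 1.0, 42: 1.0}   # binary query indicator
d_bm25 = {11: 2.3, 99: 0.7}   # BM25 term weights
print(hybrid_score(q_nn, d_nn, q_bm25, d_bm25, lam=10.0))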
Then we re-rank all the candidate documents based on these ranking scores. We fine-tune the re-ranker model based on different pre-trained models and with different strategies, and include these runs in the hierarchical reciprocal rank fusion.

First, we briefly introduce the model structure of the re-ranker. For each query q and a candidate document d retrieved by the retrieval system, we construct the input sequence of tokens by concatenating the query tokens and the document tokens, separated by [SEP] tokens. We also add a [CLS] token at the beginning of the sequence. Then, we feed the sequence into an encoder based on a pre-trained BERT-like model and take the output embedding of the [CLS] token. Taking BERT as an example, we denote the output embedding as $e_{\mathrm{BERT}}(d)$. Based on this embedding, we simply use a dense layer to obtain the ranking score of document d:

  $\mathrm{sc}_{\mathrm{BERT}}(d) = \sigma\left(W e_{\mathrm{BERT}}(d) + b\right)$    (4)

where W and b are the trainable parameters of the dense layer.

Footnote 7: https://github.com/google-research/bert

The entire scorer can be trained with ranking losses for optimal ranking performance. In this work, we apply a softmax ranking loss. For each query q, if we denote the retrieved candidate document set as C and the ground-truth relevance of each document d ∈ C as $y_d$, then the softmax ranking loss for this query can be written as:

  $\ell_q = -\sum_{d \in C} \frac{y_d}{\sum_{d' \in C} y_{d'}} \log\left(\frac{\exp(\mathrm{sc}_{\mathrm{BERT}}(d))}{\sum_{d' \in C} \exp(\mathrm{sc}_{\mathrm{BERT}}(d'))}\right)$    (5)

We can fine-tune the entire ranking model using this softmax ranking loss. Notice that this loss function is a simplified version of the ranking loss proposed by Xia et al. [2008]. It is worth pointing out that the encoder does not need to be pre-trained BERT. There are many publicly available encoders with a similar structure that can be plugged in seamlessly. After trying multiple alternatives, we find that ELECTRA [Clark et al., 2019] and RoBERTa [Liu et al., 2019] are the most effective ones. We denote the ranking scorers based on these two encoders as $\mathrm{sc}_{\mathrm{ELECTRA}}(\cdot)$ and $\mathrm{sc}_{\mathrm{RoBERTa}}(\cdot)$, respectively.

4.4.1 TFR-BERT fine-tuned on MS-MARCO
Since the relevance labels provided in the TREC-COVID dataset are extremely limited, we experiment with utilizing external datasets to fine-tune the re-ranking models. The MS-MARCO dataset [Bajaj et al., 2016] is a passage ranking dataset which aims to rank passages based on their relevance to questions. The dataset contains about 1 million queries and more than 8 million passages. For each query, some relevant passages are annotated. We fine-tune the re-ranking scorer based on the labeled data in the MS-MARCO dataset. For each query q, we first retrieve all the candidate passages C from the retriever. Then, for each passage d ∈ C relevant to the query (i.e., $y_d > 0$), we randomly sample another (l − 1) negative passages with $y_{d'} = 0$ and assemble them together as a candidate subset C' ⊂ C of size l. We train the re-ranking scorer on C'. Compared with feeding more than 1,000 passages into the re-ranker for fine-tuning, this step reduces the computational requirements and avoids numerical instability. Notice that although we only sample a very small subset for fine-tuning, this restriction is not necessary for inference. Since our re-ranker only needs a query-document pair as input during inference, we can always score all the candidate documents retrieved in the first stage for each query and re-rank all of them to ensure recall.
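A minimal sketch of the softmax ranking loss in Eq. (5) and the (l − 1) negative sampling used for fine-tuning is shown below, with hypothetical score and label arrays; the actual scores would come from the TFR-BERT scorer described above.

import numpy as np

def softmax_ranking_loss(scores, labels):
    # Eq. (5): cross entropy between normalized labels and the softmax of the scores.
    scores = scores - scores.max()                       # numerical stability
    log_softmax = scores - np.log(np.exp(scores).sum())
    target = labels / labels.sum()
    return -(target * log_softmax).sum()

def sample_candidate_subset(candidates, labels, l, rng=None):
    # For one relevant passage, keep it and sample l-1 negatives (Section 4.4.1).
    rng = rng or np.random.default_rng(0)
    pos = [c for c, y in zip(candidates, labels) if y > 0]
    neg = [c for c, y in zip(candidates, labels) if y == 0]
    chosen_neg = list(rng.choice(neg, size=l - 1, replace=False))
    return [pos[0]] + chosen_neg

scores = np.array([2.1, 0.3, -1.0])   # sc(d) for the l candidates
labels = np.array([1.0, 0.0, 0.0])    # graded relevance y_d
print(softmax_ranking_loss(scores, labels))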
For inference on the TREC-COVID dataset, we take the "query" and "question" segments and directly concatenate them together as the query tokens. We then concatenate [CLS], the query tokens, and the passage tokens separated by [SEP] as described above, and feed the whole sequence into the fine-tuned scorer. We use BERT-Large pre-trained with whole-word masking, ELECTRA, and RoBERTa as the encoders, respectively. They are fine-tuned on the MS-MARCO dataset for 200,000 steps with a learning rate of 1e-5. The batch size is set to 32 and the candidate subset size is set to l = 12. The maximum sequence length is set to 512, and any passages resulting in longer sequences are truncated. We keep other fine-tuning configurations, such as the optimizer and warm-up steps, the same as the default BERT fine-tuning configurations. We fine-tuned 10 individual re-rankers for each encoder type, resulting in 30 re-rankers. All 30 re-rankers are regarded as a single system S and fused together.

4.4.2 TFR-BERT fine-tuned on TREC-COVID
As the number of relevance judgments available per topic grew substantially in the final rounds (see Figure 2(b)), we also attempted to leverage the limited relevance labels from the TREC-COVID dataset. The overall model structure is the same as before. However, since there is only a small number of relevance labels, the re-ranker could quickly overfit if fine-tuned for too many steps. To explore the best number of fine-tuning steps, we randomly sample 20% of the queries from the labeled data as a validation dataset and fine-tune the re-ranker on the other 80% of the dataset. We monitor the performance curve on the validation dataset and manually select a reasonable number of fine-tuning steps. We then fine-tune the re-rankers on all labeled data for the selected number of fine-tuning steps. Depending on the encoder, the selected number of fine-tuning steps varies from 3,000 to 10,000.

                    #Runs fused   NDCG@20   P@20     MAP      Recall@1000
RRF(Terrier)        14            50.18*    55.56*   23.10*   53.45*
RRF(Anserini)       8             51.83*    56.67*   23.68*   54.52*
RRF(Dual Encoder)   24            50.55*    56.11*   20.12*   47.51*
RRF(TA)             26            53.52*    57.78*   25.95*   56.72*
RRF(TAD)            50            55.47     60.67    27.44*   59.00
h-RRF(TAD)          3             56.64     62.22    27.98    59.67

Table 2: Ablation study of the retrieval systems. Individual run performance is not reported, since we found it to be generally well below the performance of the RRF runs. Statistically significant differences (paired t-test, p < 0.05) from the last row are marked by *. Best overall metric is bolded.

                        #Runs fused   NDCG@20   P@20     MAP      Recall@1000
RRF(MS-Marco BERT)      30            52.74*    55.44*   24.11*   56.30*
RRF(Finetuned BERT)     18            68.92     70.78    36.36*   67.75*
RRF(Relevance Feedback) 4             62.67*    66.56*   30.20*   60.60*
RRF(TADM)               80            59.97*    64.22*   29.31*   60.86*
RRF(TADMF)              98            64.51*    67.11*   33.66*   65.66*
RRF(TADMFR)             102           64.88*    67.56*   34.07*   65.77*
h-RRF(TADMFR)           6             62.67*    66.56*   30.20*   60.60*
hw-RRF(TADMFR)          6             71.61     72.56    39.13    69.36

Table 3: Ablation study of all the retrieval and ranking runs that comprise the final weighted hierarchical rank fusion ensemble. Individual run performance is not reported, since we found it to be generally well below the performance of the RRF runs.
Statistically significant differences (paired t-test, p < 0.05) from the last row are marked by *. Best overall metric is bolded.

Similarly to MS-MARCO, we use BERT-Large pre-trained with whole-word masking, ELECTRA, and RoBERTa as the encoders, respectively. The learning rate is still 1e-5 and the batch size is still 32. Due to the limited labeled data, we set the candidate subset size to only 6. The maximum sequence length is still 512. We also try two different ways to construct the query sequence: 1) concatenating the "query" and "question" fields as the query sequence; 2) concatenating the "question" and "narrative" fields as the query sequence. For each query sequence construction method, we fine-tune 3 individual re-rankers for each of the 3 encoder types, resulting in 18 re-rankers to be fused together.

5 Experimental Results
We begin this section by reporting the results of the ablation studies designed to evaluate the various aspects of our overall ensemble approach using relevance judgments from Rounds 1-4. These analyses were done in the lead up to Round 5, and form the basis for our final submission to this round. Then, we report the official metrics for our best automatic and feedback runs for Round 5 of the TREC-COVID challenge (footnote 8).

Footnote 8: Our submissions performed equally well in Round 4 of the competition, but since these submissions do not neatly correspond to the ensembling approach discussed in this paper, we only report Round 5 results here.

5.1 Ablation studies
In these ablation studies, we use the relevance judgments from Rounds 1-4 to better understand the contributions of the systems to be used in our final ensemble. In Table 2, we look at the performance of each of the retrieval systems, both lexical (Terrier and Anserini) and semantic (the Dual Encoder described in Section 4.3). Overall, it is clear from Table 2 that while the three retrieval systems are comparable in their performance (with system D slightly trailing the lexical retrieval systems in MAP and Recall@1000), their results are highly complementary. RRF(TA) achieves large gains as compared to either Terrier or Anserini. Fusion with the Dual Encoder system leads to additional gains, especially in the h-RRF(TAD) variant, which has an 8% increase in MAP compared to RRF(TA). Overall, h-RRF(TAD) is statistically significantly better than the other alternatives on most of the reported metrics.

In Table 3, we switch our attention to the final combination of the retrieval systems with the ranking systems: MS-Marco BERT, Finetuned BERT, and Relevance Feedback. For Relevance Feedback, as described in Section 4.2, we fuse two Terrier relevance feedback retrieval runs and two Anserini relevance feedback runs provided by the covidex team. For the RRF(*BERT) runs, we use a fusion of multiple re-rankers (described in Section 4.4), each applied to the top 2000 results from the h-RRF(TAD) run, the best performing run in Table 2. In Table 3, again, we see a clear indication of the "more is more" principle: ensembles with a larger number of runs achieve better performance. The best unweighted retrieval and ranking ensemble, RRF(TADMFR), achieves an almost 20% gain in MAP as compared to the best retrieval-only ensemble, h-RRF(TAD). Heuristic weighting of the runs that have access to relevance judgments, as described in Section 3.4, results in an additional significant improvement. hw-RRF(TADMFR) achieves roughly 15% and 10% improvement over the best unweighted ensemble in terms of MAP and NDCG@20, respectively. In both cases these improvements are statistically significant. With these ablation studies in mind, we use our 102-run weighted hierarchical rank fusion ensemble (a.k.a. RRF102) as the highest priority submission for Round 5 of the TREC-COVID challenge. As we were allowed to submit additional runs, we submit other alternative ensemble combinations as well.

Run ID              NDCG@20   P@20     MAP      Recall@1000
rk_ir_trf_logit_rr  79.56*    82.60*   37.89*   62.91*
covidex.r5.2s.lr    83.11     84.60    39.22*   61.47*
sab20.5.4.dfo       77.91*    82.10*   40.61*   72.17*
elhuyar_rrf_nof09p  77.89*    83.10*   41.69*   70.68*
UPrrf102-r5         80.92*    85.30    45.69*   76.09*
UPrrf102-wt-r5      84.90     86.90    47.31    75.53

Table 4: Comparison to feedback runs by four other top-performing (as measured by NDCG@20 and MAP) teams in TREC-COVID Round 5. The best run per team is used. Runs are sorted by the MAP metric, and statistically significant differences (paired t-test, p < 0.05) from the last row are marked by *. Best overall metric is bolded.

Run ID              NDCG@20   P@20     MAP      Recall@1000
covidex.r5.d2q.2s   75.39     77.00    32.27*   60.22*
uogTrDPH_QE_SB_CB   74.27     79.10    33.05*   59.05*
UPrrf80-r5          71.16     75.90    35.98    69.43
UPrrf89-r5          72.35     75.90    36.12    69.48

Table 5: Comparison to automatic runs by two other top-performing (as measured by NDCG@20 and MAP) teams in TREC-COVID Round 5. The best run per team is used. Runs are sorted by the MAP metric, and statistically significant differences (paired t-test, p < 0.05) from the last row are marked by *. Best overall metric is bolded.

5.2 TREC-COVID Round 5 Official Results
In this section, we briefly summarize the official performance of our runs in Round 5 of the challenge. Since the TREC-COVID challenge uses residual collection evaluation, all the documents that were evaluated in Rounds 1-4 are filtered out from the submitted runs. Table 4 compares the performance of the weighted hierarchical reciprocal rank fusion run UPrrf102-wt-r5 (equivalent to hw-RRF(TADMFR) in Table 3) to four other runs by top ranked teams, as well as our unweighted variant UPrrf102-r5 (equivalent to h-RRF(TADMFR) in Table 3). UPrrf102-wt-r5 outperforms all other submissions, in most cases to a statistically significant degree. In particular, the increases in MAP and Recall@1000 are especially impressive. UPrrf102-wt-r5 achieves a 13.4% MAP gain as compared to the next best team's run (elhuyar_rrf_nof09p). This demonstrates the utmost importance of retrieval and ranking ensembles for systems that require high relevant document recall. In addition to feedback runs, i.e., runs that are produced using systems that have access to relevance labels from prior rounds, the TREC-COVID challenge allowed submission of automatic runs, that is, runs that were not tuned or modified using prior relevance judgments. We submitted two such runs, UPrrf80-r5 and UPrrf89-r5, which are compared to other top-performing automatic runs in Table 5. UPrrf80-r5 is equivalent to RRF(TADM) in Table 3, which fuses the Terrier, Anserini, Dual Encoder, and MS-Marco BERT system runs. UPrrf89-r5 incorporates 9 additional runs fine-tuned on BioASQ (footnote 9), a document ranking dataset with biomedical questions. We use questions from years 1 to 5 of the BioASQ competition. We follow the same data split as McDonald et al.
[2018], where we use year 1 to 4 as training data, and use batch 1 of year 5 for tuning. Negative passages are abstracts of documents returned by a BM25 system. The BioASQ re-ranker is \ufb01ne-tuned almost in the same manner as described in Section 4.4.1 for MS-MARCO re-ranker, except that we set the candidate subset size as l = 6 and the number of \ufb01ne-tuning steps to 10,000. Neither UPrrf80-r5 nor UPrrf89-r5 use any information from TREC-COVID, and thus can be classi\ufb01ed as automatic runs. Overall, these automatic runs once again demonstrate the importance of retrieval and ranking ensembles for achieving high recall. UPrrf89-r5 achieves 9.2% MAP gain as compared to the next best team\u2019s run (uogTrDPH_QE_SB_CB). While our runs are not ranked the highest in terms of NDCG@20 and P@20 metrics, the difference from the top runs by covidex and uogTr were not found to be statistically signi\ufb01cant in our analysis. In addition, while UPrrf89-r5 slightly outperforms UPrrf80-r5 on all metrics, no statistically signi\ufb01cant differences were found between the two runs. 6" + } + ], + "Fernando Diaz": [ + { + "url": "http://arxiv.org/abs/2307.03201v1", + "title": "Scaling Laws Do Not Scale", + "abstract": "Recent work has proposed a power law relationship, referred to as ``scaling\nlaws,'' between the performance of artificial intelligence (AI) models and\naspects of those models' design (e.g., dataset size). In other words, as the\nsize of a dataset (or model parameters, etc) increases, the performance of a\ngiven model trained on that dataset will correspondingly increase. However,\nwhile compelling in the aggregate, this scaling law relationship overlooks the\nways that metrics used to measure performance may be precarious and contested,\nor may not correspond with how different groups of people may perceive the\nquality of models' output. In this paper, we argue that as the size of datasets\nused to train large AI models grows, the number of distinct communities\n(including demographic groups) whose data is included in a given dataset is\nlikely to grow, each of whom may have different values. As a result, there is\nan increased risk that communities represented in a dataset may have values or\npreferences not captured by (or in the worst case, at odds with) the metrics\nused to evaluate model performance for scaling laws. We end the paper with\nimplications for AI scaling laws -- that models may not, in fact, continue to\nimprove as the datasets get larger -- at least not for all people or\ncommunities impacted by those models.", + "authors": "Fernando Diaz, Michael Madaio", + "published": "2023-07-05", + "updated": "2023-07-05", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cond-mat.dis-nn", + "cs.AI", + "cs.CY" + ], + "main_content": "INTRODUCTION Recent studies have investigated the relationship between machine learning model performance\u2014as measured by some evaluation metric\u2014 and model design variables like dataset size, number of model parameters, and compute, or what is commonly known as scaling laws for AI [91]. In general, these analyses demonstrate a monotonic relationship between design variables and model performance\u2014 e.g., as the dataset size increases, models will perform better on some metric computed over an evaluation dataset. This phenomenon is often presented graphically where the x-axis indicates the design variable and the y-axis indicates model performance, with lines suggesting the superlinear relationship for deep learning models. 
This relationship has been used to justify the collection of ever-larger datasets (e.g., the Colossal Clean Crawled Corpus [53]) used to train large language models [36, 124]. In the context of AI systems that are used by or directly impact people, an evaluation metric, the dependent variable underlying many scaling laws, reflects the performance or quality of the system for those people. A metric, then, should not be considered a decontextualized property of a model, but rather a reflection of the quality of a model used for a particular task, in a specific social context. On the one hand, the mathematical form of a metric ideally reflects what a system designer thinks is important to users or others impacted by the system. The mathematical formula for a metric encodes assumptions about user beliefs, values, and behavior. On the other hand, the evaluation data used to compute the metric represents, through a sampling procedure, the particular population of interest. In combination, the mathematical form of a metric and the associated data estimate the performance of a model on a specific population. The quantity computed by an evaluation metric, then, is shaped both by its underlying mathematical assumptions and sampling procedures. Just as a predictive model and its training data can be scrutinized for biases, we can (and should) scrutinize metrics, especially in light of their role in scaling laws.

We contend that, when designing for a sufficiently diverse population of users, measuring performance with a single, universal metric is precarious. Specifically, we argue that as the size of the evaluation dataset increases, the number of subpopulations present in the evaluation set is likely to increase as well. By subpopulations, we mean members of particular identity groups, including demographic groups (e.g., gender, racial or ethnic groups, religion, caste, national identity, disability status), as well as other cultural or sociopolitical groups, among the many other ways that human societies have variously organized themselves (footnote 1). The samples used to evaluate model performance will thus (intentionally or not) capture some set of communities in the social context in which the data was collected. As sample size grows, the likelihood of a larger number of communities included in the evaluation set also grows. While more inclusive, comprehensive datasets might be desirable, the diverse communities themselves are likely to hold different uses, behaviors, and values with respect to the model being evaluated. Increasing evaluation dataset size increases the number of subpopulations present and multiplies the number of values that should be considered. Beyond the well-known challenges of developing AI systems to be used by diverse subpopulations, such as evaluating algorithmic unfairness in model performance with respect to a fixed evaluation metric (e.g., equalized odds), we argue that diverse subpopulations are likely to have different and potentially conflicting notions of "good" as instantiated in both the mathematical form and data behind an evaluation metric, which machine learning evaluations using single metrics often assume to be universal.
While evaluations of algorithmic unfairness may surface the systematic variation of the true relationship between features and targets across subpopulations, we contend that the targets themselves vary across subpopulations (footnote 2).

The current lack of attention on the subpopulations and communities represented in scaling law evaluation datasets (footnote 3) poses major challenges for proponents of AI scaling laws, and presents potential risks for all those impacted by the deployment of large models. Despite claims that a larger training dataset (e.g., a crawl of the predominantly English-speaking Internet [53]) will lead to improved model performance, when such models are deployed at scale, the larger numbers of people included in the evaluation dataset, and thus a larger number of communities, may lead to breakdowns in model performance for different communities. Different communities of users or impacted stakeholders may necessitate evaluating with different metrics (due to different behaviors or values), each of which may be in conflict with each other, or in conflict with commonly used evaluation metrics. We demonstrate that current AI scaling law analyses overlook the large and diverse set of constructs required to truly assess performance for large and diverse sets of communities. As a result, scaling laws conceal sub-optimal performance for under-sampled communities of users. In other words, despite claims to the contrary, larger training datasets may not in fact lead to improved model performance for all users when deployed at scale.

We draw on scholarship from the social sciences to question the validity of claims made about scaling laws for dataset size and model performance when considering their use in models deployed at scale, on global populations whose values may not be reflected in current performance metrics or which may be in irreconcilable tension with each other. Because evaluation data can vary in size and composition, we propose that, for a given metric and sampling procedure, scaling laws consider, in addition to the two axes of (training) dataset size and performance on a given metric, a third axis indicating the size of the evaluation data set. Doing so allows us to capture the change in composition of evaluation data as the sample size increases, be it an artifact of a small sample size or a sampling strategy that varies with the size. Moreover, studying dynamic evaluation sets is consistent with production systems where the evaluation set depends on the number and composition of system users, a population that technology designers hope will grow as a given system is more widely adopted. These observations suggest that scaling laws, in the pursuit of general principles, can obscure systematic under-performance for subpopulations and communities of people using or impacted by these models.

Footnote 1: In Section 4.1, we return to the ways that the groupings of subpopulations are not inherent, but are always imagined and created, for example, by researchers collecting data on those populations, by the communities themselves, or by state or institutional actors, to name just a few.
Footnote 2: Note that prior work in algorithmic fairness has argued that fairness itself is fundamentally contested by different communities, where different populations may not agree on how to best operationalize a construct such as fairness in terms of a single fairness metric, such as equalized odds [71, 85].
Footnote 3: See prior work on understanding communities represented in other AI evaluation datasets [e.g., 6, 7, 14].

2 SCALING LAWS
In order to scrutinize scaling laws from the perspective of evaluation metrics, we first review the relevant concepts from AI evaluation and scaling law literature. We will introduce some concepts from measurement modeling [74, 85] and introduce some notation that we will refer to throughout our analysis of scaling laws.

2.1 Performance Metrics
Evaluation of AI systems involves computing and comparing quantitative measures of performance of a system on a task. In offline settings, including laboratory or benchmark experiments, researchers use evaluation metrics based on data labeled through dedicated annotators. In online settings, including deployed AI systems in production environments, organizations use evaluation metrics based on logged behavior data. In both settings, evaluation metrics play a critical role in guiding high-level research and model development decisions as well as more granular parameter optimization [38, 72, 87, 98].

In line with modern AI paradigms, we focus on the evaluation of models trained on a set of data. We represent a model trained on a dataset D as π(D). For the purpose of our analysis, we are interested in the relationship between data and model performance and therefore will not specify the input or output space of π(D), nor do we care about the specific functional form or design of π(D). We use Π to refer to the space of all trained models, of which π(D) is one member. For clarity, we will sometimes refer to a model as π, even though it is always the outcome of a training procedure and data.

Generic metrics like accuracy or squared error are often adopted in machine learning contexts to evaluate the quality of the model output. These metrics are simple, well-understood, and usually amenable to direct optimization. For many AI systems, however, generic metrics do not capture how people use the system output. For instance, for a language model used for question-answering, how reflective is the measure of "accuracy of predicting the next word" of the underlying user goal of "understanding the answer to a question"? For a recommender system, how reflective is the measure of "accuracy of predicting a rating" of the underlying user goal of "discovery of new, relevant content"? There is subtlety in how the system output is used by people, which is lost when performance is measured with generic metrics. This is why, in applied contexts, domain-specific metrics are usually developed. Indeed, a variety of areas of research, including natural language processing (e.g., BLEU [121, 127], ROUGE [62]), search and recommendation [30], and more [e.g., 125], have adopted families of metrics informed by technology use. Even within a given domain, the quality of models' output may be tightly coupled with a user's specific task; for example, the quality of a predictive typing application may be related to how useful users find it in effectively completing a writing task, but this quality may differ greatly between different use cases for the same task, such as (for instance) informal messages and creative writing tasks compared to professional communication or scientific writing tasks.
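To ground the notation that follows, the toy sketch below (entirely hypothetical data and metric definitions, not drawn from any system discussed here) shows how the value an evaluation metric reports depends jointly on the metric's functional form and on the sample it is computed over, anticipating the μ(U, π(D)) notation defined in the next section.

def evaluate(metric, sample, model):
    # mu(U, pi(D)): average a per-user quality function over a sample U
    return sum(metric(user, model(user["query"])) for user in sample) / len(sample)

def model(query):               # stands in for pi(D); always suggests "color"
    return "color"

exact_match = lambda user, out: float(out == user["expected"])
acceptable = lambda user, out: float(out in user["acceptable"])

group_a = [{"query": "colour of sky", "expected": "colour", "acceptable": {"colour", "color"}}]
group_b = [{"query": "color of sky", "expected": "color", "acceptable": {"color"}}]

for name, sample in [("mostly A", group_a * 9 + group_b), ("mostly B", group_a + group_b * 9)]:
    print(name, evaluate(exact_match, sample, model), evaluate(acceptable, sample, model))

The same model scores very differently under the two proxies and under the two sample compositions, which is exactly the kind of dependence the following sections formalize.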
For AI systems, we are thus interested in evaluation metrics that measure, explicitly or not, the quality of a system's output (e.g., predictions, decisions, recommendations) for a given population of people for a given task. Drawing on the measurement modeling literature [e.g., 74, 85], we refer to the unobservable domain- and task-specific notion of model output quality as a construct, or, the true performance of a system when used by a specific population for a specific task. While in many cases a construct is completely unobservable, in others it may merely be very expensive to collect or more accurately estimate. We define μ*(𝒰, π(D)) as the latent scalar value associated with the quality of π(D) for a population 𝒰. This population could be compact and well-defined (e.g., "coworkers in a specific academic department") or vague and general (e.g., "any person with access to a computer, regardless of location, race, ethnicity, age, religion, gender identity, sexual orientation, disability, or economic status").

Although it may be difficult or impossible to observe the latent construct, μ*, we can "operationalize" it using an observable evaluation metric as a proxy [cf. 85, 145]. For example, we can evaluate the quality of a text prediction model's output in terms of its usefulness for a given writing task by, for instance, measuring composition time, the number of edits, the number of accepted text prediction suggestions, or using delayed user feedback on the quality of the text predictions [cf. 129]; but all of these are only ever proxies (some better than others) for the latent construct of interest, μ*. To make this clear, let μ(U, π(D)) be the evaluation metric for a sample of user-associated data U ~ 𝒰 (or, a sample U from a population 𝒰). User-associated data, often referred to as "test data," may consist of individual data (i.e., each data point or instance is a person) or derived data (e.g., each data point is a text document written by a person). So, when we refer to an evaluation metric, it approximates both the functional form of μ* as well as the associated target population 𝒰. In Section 4, we discuss the ways that populations might impact the validity of evaluation metrics.

Let M be the set of all evaluation metrics (including better and worse proxies for the latent construct) that can be used for measuring system performance. From measurement modeling, the validity of an evaluation metric μ is based on the extent to which it captures the salient aspects of the latent construct μ* it is designed to operationalize [85]. The better the variable captures the construct, the better able that metric is to assess the performance of a given model. Precisely how a metric is related to a construct can be measured in a variety of ways [85]. One approach measures the extent to which a metric orders models consistently with the construct. Assume we have a subset Π̃ ⊂ Π of models used to measure the metric quality. If we could directly observe the construct (which we can in situations where it is observable but expensive), we could compute μ and μ* for each model in Π̃ in order to construct two rankings of models, one ranking by μ and another by μ*.
We can then measure the consistency in ranking models by μ and μ* [94]. This is related to Skalse et al. [141]'s proposal to define a pair of metrics as incompatible if there is any discordance. Or, in the context of binary-valued constructs, we might consider a metric incompatible with a construct if their values disagree; this is the approach adopted in analysis of fairness metrics [35, 59, 96]. Even in cases where a metric and a construct show some correlation in general, their relationship can still exhibit heteroscedasticity. For instance, a metric may agree with a construct in some systematic way (e.g., distinguishing a high quality system from a low quality system) and disagree in other systematic ways (e.g., distinguishing between two high quality systems). All of this suggests that the relationship between what we may want to measure about AI systems (e.g., μ*(𝒰, π(D))) and the way we measure that (e.g., μ(U, π(D))) may not be straightforward. In Section 3, we unpack the ways that this relationship breaks down in practice and the implications for AI scaling laws.

2.2 Scaling Laws
Scaling law analyses use "learning curves" to represent the relationship between system performance and training set size (i.e., the x-axis is a sequence of training datasets ordered by increasing size |D_i| < |D_{i+1}|). In general, proponents of AI scaling laws argue that μ*(𝒰, π(D_i)) < μ*(𝒰, π(D_{i+1})). However, this is based on observations from proxy metrics (μ) instead of the construct (μ*) and population samples (U) instead of the true target distribution (𝒰), as constructs and the full population may be difficult or impossible to directly observe. Neural scaling laws state that the relationship between |D_i| and μ(U, π(D_i)) follows a power law. In other words, as long as it is not bottlenecked by its capacity or access to compute resources, a model's performance improves superlinearly as a function of data [91, 131]. Although initial results demonstrated scaling laws for natural language processing tasks, similar laws have been developed for multimodal and reinforcement learning models [33, 81]. The evidence of scaling laws has motivated their usage in model design. They have been used as motivation for scraping ever-larger training datasets [53], to develop quantization methods [49], to extrapolate performance from smaller datasets [84], and to determine data minimization policies [137]. Despite their popularity, however, insufficient attention to the relationship between latent constructs of model quality and their operationalization in performance metrics for particular communities included in evaluation datasets poses serious questions to the validity of relying on scaling laws when evaluating deployed models at scale.

3 THE PRECARITY OF METRICS
In order to understand precisely how scaling laws might be compromised, we first need to review the various ways in which metrics are far from the "ground truth" they are often considered to reflect. Through a discussion and review of existing observations in the computer science and social science literature, we demonstrate that evaluation metrics are inherently contestable and precarious and that, at any one point, there may be multiple metrics and constructs in tension.
This echoes recent work in the responsible AI and computational social science communities that similarly recognizes that, although presented as a reliable proxy for the construct, evaluation metrics are often tenuous [59, 85, 149, 153].

3.1 Metric (In)compatibility
As discussed in Section 2.1, an evaluation metric is an approximation of the construct of interest. A generic metric like accuracy is often adopted because it can be used without modification. But, for any given task, a construct is more complicated than suggested by a generic metric. It might be impacted by an individual user's expectations, interface constraints, or normative social values, none of which are captured by a generic metric like accuracy. Because metrics are errorful proxies for constructs, any two metrics, even if they largely agree with the construct, may disagree with each other. Moreover, because constructs and how they are operationalized may be contested, two metrics may disagree because each is modeling a different (though related) underlying construct (e.g.,
Second, as suggested by the possible heteroscedastic relationship between metrics and constructs (Section 2.1), the "appropriate metric" at any point in time may depend on the set of systems being compared. In practice, this means that the most reliable evaluation metric may change depending on the effectiveness of the systems being evaluated. Because models conceivably improve over time, metrics will become stale and require replacement with other, more appropriate metrics [152]. Relatedly, some AI research, inspired by Goodhart's law, focuses on the fallibility of proxy metrics with respect to a target metric or construct. For instance, Gao et al. [64] use simulations to demonstrate that proxy metrics, when optimized for by reinforcement learning systems, will initially be correlated with the construct, but this eventually degrades as the AI system over-optimizes for the proxy. Similarly, Skalse et al. [141] show that even small differences between proxy and target metrics can result in over-optimization and degradation of performance with respect to the target metric.

Third, model development often exists in a broader sociotechnical context, which itself might change over time. As an example, consider a metric such as "consumption time," used in a number of media platforms to model relevance by measuring how long someone spends consuming media. The higher the consumption time of a particular piece of media, the more relevant the item to that user. Improvements to bandwidth and rendering times on devices can result in substantive changes in the relationship between the consumption time and relevance. Similarly, acute shocks to the environment (e.g., unexpected exogenous events, such as disasters, or regulatory policy changes) can change how people, technology, and their measured values behave and, as a result, relate to a construct of interest [cf. 136]. In some cases, the model itself may lead to behavioral or environmental changes that impact the usefulness of the metric. For example, users of a system may, over time, adapt their behavior in ways that compromise assumptions in the metric (e.g., how people express queries on mobile search engines [89, 90]). Or, so-called adversaries may attempt to manipulate the system by gaming the metric (e.g., clickbait). We return to how metrics may be impacted by sociotechnical systems in Section 3.5.

3.3 Metric Staging
Multiple metrics may arise for pragmatic reasons. For example, in the context of model development, different metrics are used at different stages due to the costs involved in metric computation. In production systems, behavioral metrics based on real interactions with end users (e.g., user engagement metrics) may be more accurate reflections of a latent construct (e.g., due to face validity [85]) but more expensive because, in addition to potential costs of deployment or data collection, they incur a risk of harming the users, losing their trust, and having them abandon the technology altogether. On the other hand, in offline evaluation, metrics based on data labeled by annotators (e.g., precision, recall) are less accurate reflections of the construct than behavioral metrics.
That said, they are extremely fast to compute, often reusable, and can be used in an isolated environment where any unexpected or harmful outputs may be insulated from impacting users (footnote 4). Unfortunately, offline evaluation requires an annotation budget and the development of guidelines for annotators, both of which can be prohibitive. In practice, model development involves a coordination of these metrics: using offline evaluation and associated metrics for a large set of models before selecting a subset for production evaluation and a different set of metrics.

Footnote 4: Although such harmful outputs are not, crucially, prevented from impacting annotators [cf. 70].

3.4 Metric Variation Across Subtasks
Although an AI system intended for a specific task or domain is often evaluated using a single construct or metric, in reality, there is often a diversity of specialized subtasks or subdomains. For example, a search engine supports a wide variety of possible information needs, including navigating to a specific web page, learning about a topic, or making a purchase [22]. Question-answering similarly can be decomposed into multiple classes of questions [109]. In this case, each subtask or subdomain has a unique construct. This may manifest from some property of the labels (e.g., contextualizing labeled data on the subtask or subdomain) or the formula for the metric itself (e.g., in the search context, reciprocal rank for navigational intents versus recall for literature review). In some situations, subtask metrics may be compatible or "compatible enough" (see Section 3.1), allowing for shared information in learning and evaluation [160]. In other situations, two subtasks may have quite different constructs in tension [5]. While a single, universal metric may be one way to resolve inconsistency amongst subtask or subdomain constructs or metrics, in a reinforcement learning context, Skalse and Abate [140] demonstrate that jointly optimizing a single policy by reducing multiple rewards (i.e., metrics) to a single number is only possible for linearly interpolated rewards. However, despite this variation across tasks, the dominant paradigm used in scaling law analyses (and in training large-scale pre-trained models) is to optimize and evaluate for a single performance metric.

3.5 Metric Power
Beyond the technical and pragmatic reasons for metric precarity, when metrics are used to guide research in academic communities, make decisions in industry, and evaluate performance of machine learning systems, they become social objects amongst researchers, funding agencies, engineers, product managers, and other individuals involved in the processes of machine learning research, development, deployment, and use. As a result, metrics used to evaluate AI systems are always subject to (and in turn, shape) the social dynamics of the sociocultural systems in which they are embedded. When metrics are used to measure phenomena in social contexts, it is well-established that they not only enable one to understand a particular phenomenon, but they also have the power [cf. 13] to change social actors' behaviors. In other words, metrics do not simply reflect the world, but they shape it, and are shaped by it [cf. 119] as well, as part of the previously mentioned Goodhart's Law [26, 66, 144]. This phenomenon has been identified across numerous fields, including education [144], economics [107], and organizational studies [69].
For instance, when schools use students' test scores as a metric to evaluate teacher quality, some teachers and administrators have responded by teaching to the test or, in the worst case, altering students' test scores [61]. In the workplace more generally, companies have long adopted performance metrics of various sorts to measure their employees' productivity [126] and attempt to incentivize them to be more productive [159], metrics that are increasingly based on fine-grained data from those employees, and which may shape employees' behaviors in other ways (e.g., employees using applications to move their cursor to simulate productivity) [17]. However, the specific ways that metrics impact people's behavior are themselves shaped by the norms, culture, and organizational dynamics of the context in which they are used. For instance, Christin [37] found that two newsrooms in France and the United States that had access to similar web traffic data about their news stories differed greatly in how the metrics they derived from those data shaped their approach to journalistic decision-making, in ways impacted by their respective organizational, professional, and cultural contexts.

In addition to the culturally specific ways that metrics may impact particular social worlds, metrics may take on a social life of their own. Within research communities, metrics can have a stickiness when entire research programs develop around them. The canonical example here is the evaluation and optimization of recommender systems. For more than a decade, the majority of research evaluated performance using various rating prediction accuracy measures, root mean squared error (RMSE) being common [80]. This was mathematically convenient and amenable to direct optimization. However, Cremonesi et al. [40] conducted a study demonstrating that accuracy metrics were poorly correlated with user satisfaction and argued for their abandonment (cf. Section 3.1). Despite these issues, RMSE continues to be used for some recommender system evaluation [128]. All of this suggests that the performance metrics used to evaluate AI systems may be inherently unstable or precarious in ways that raise serious questions for the validity and robustness of scaling laws for AI, which rely on metrics that are believed to be divorced from any particular social context.

4 SCALING LAWS DO NOT SCALE
We now turn to an analysis of scaling laws in light of the precarity of evaluation metrics. While there are existing initiatives to identify "inverse scaling laws" for particular tasks [155], our claim is that, when we consider training and evaluating with human data, scaling laws, as currently posed, are, at best, incomplete, and may be fundamentally flawed. Our claim is divided into four parts: first, that evaluation metrics reflect the composition of the evaluation dataset, which is shaped by the sampling approach used to collect that data; second, that the number of sub-groups within a given dataset grows with data size; third, that those sub-groups can have incompatible values and preferences for appropriate evaluation metrics; and fourth, that the risk of that metric incompatibility grows with dataset size.

4.1 Sampling Approaches Shape Who is Included in Evaluation Datasets
As we mentioned in Section 2.1, evaluations provide estimates of the performance of a model when used by an intended population of users.
Ideally, both training and evaluation data are drawn from the same population using the same sampling distribution (although in practice this may not be the case). If 𝒰 is the population, then θ defines the sampling distribution from which we draw both the training set D and the evaluation set U. And so, when describing an evaluation metric, we can describe it (or, parameterize it) in terms of the sample size |U|. For instance, benchmarking or offline experiments have a fixed sample (and therefore sample size) determined by the collected, static evaluation data. In deployed systems, the sample size varies with the number of users (and the training data size |D|).

While we have talked about models evaluated when used by or impacting people for specific tasks, we have avoided the question of which people an AI system is designed for, used by, or impacts. Given a current set of users (e.g., in a deployed system), we can answer this question narrowly by evaluating a model with respect to the existing set of users of a system or application that a given model is embedded in; in this case, we say that the sampling distribution has "support" constrained to the subpopulations reflected by the current set of users. Or, we can answer this aspirationally by evaluating with respect to some future population of users; in this case, we assume that the sampling distribution has support that bounds all subpopulations present in the complete population. For the purpose of our analysis, because of the aspirational nature of claims made about AI scaling laws [e.g., 91], we assume that the support of the sampling distribution is complete. This implies that every possible user has a nonzero probability of occurring in the training or evaluation data.

However, regardless of the sampling distribution, in any specific sample, we rarely (if ever) have reliable data for every possible user in every context. Nevertheless, the desire for such a dataset has led researchers to seek out larger sources of data on human populations [3, 76, 95, 158]. Debates about whether and how different groups of people may be represented by or within large datasets have persisted for decades, across multiple fields [cf. 32]. Social scientists conducting public opinion surveys have long wrestled with what it means for their samples to be representative (and of whom they might be representative), including, for instance, examining representativeness of survey respondents from various demographics across geographic scales (e.g., from cities to states to national populations) [45]. Many theories and methods in the social sciences have been developed to grapple with the fundamental heterogeneity of human populations, as groups of people may vary based on gender, age, race/ethnicity, disability status, as well as behavior, physiology, language, religion, culture, and numerous other dimensions [56]. These dimensions of difference may themselves be more or less stable, as people may claim membership in numerous communities at different points in their lives, as such identities become more or less salient (e.g., a person leaving or joining a religion, moving to a new city or country, joining or leaving the military, and numerous other examples). In other cases, the community itself may spring into existence in response to a particular political issue (e.g., DREAMers, NIMBYs, climate change or election deniers, and more) [cf. 52].
Depending on one's question of interest, different approaches may be needed to understand how a sample captures variation across a larger population [56]. That is, the means we use to capture these different dimensions or communities are often designed based on the goal: political surveys may be designed to capture population-level variation across politically salient demographic categories, such as gender, race/ethnicity, education level, and income, while marketing research surveys may be designed to specifically target consumer-relevant demographics and behaviors, such as family size, technology use, spending behaviors, and more [45].

While many datasets about people, such as opinion polls, are intentionally designed to answer specific research questions, the advent of large-scale data collection enabled by data traces on digital platforms has led computational social scientists and others to explore how such datasets may enable them to better understand people [95, 158]. Despite claims for massive datasets (in the form of "big data") to usher in "the end of theory" [3] via datasets where, allegedly, "n = all" [95], subsequent research has demonstrated that large-scale platform datasets are always reflections of behaviors of particular groups of people (rather than being somehow inclusive of everyone) [21]. As large datasets from social media platforms (e.g., Twitter, Facebook, Reddit, etc.) are used to investigate research questions not only about platform use, but about social dynamics more generally, these platforms may suffer from a "model organism" problem [150], where claims are made about the world in general based on data from one specific organism, such as mice or fruit flies (or here: one specific social media platform). In reality, there is a non-random selection of users into social media platforms [76]; for example, Twitter is used by less than 20% of the US population [106]. An empirical analysis of who is left out of so-called "big data" from social media found that social media users tend to be more educated, higher-income, and more technologically savvy than non-users, with substantial differences in gender and race/ethnicity across different platforms [77].

Similarly, massive datasets used to train large models, such as the Colossal Clean Crawled Corpus (C4), trained on a crawl of the web, or others such as LAION [e.g., 19, 100], are not representative of everyone on the planet, but contain particular snapshots of particular populations (and not others). For instance, some communities may not be included at all in the C4 corpus or other large web-based corpora: low-literacy communities or those who rely on radio for information and communication [e.g., 93]; communities with low or no technology use, or those whose technology use is primarily on mobile devices and does not involve producing web-based content legible to web crawlers [e.g., 1, 112]; or communities who speak low-resource languages [102], among many others. As such, it is an empirical question as to who, precisely, is represented in massive datasets used to train and evaluate models for scaling laws [cf. 14].
The composition of a dataset depends on the sampling approach used to create it\u2014whether that is a random population sample, a sample strati\ufb01ed by some dimension(s) (e.g., randomized within di\ufb00erent state populations), or a convenience or platform sample (i.e., a sample shaped by the nature of the platform used to collect the data, such as Twitter, Reddit, or the web). For all of these approaches, we cannot say with certainty precisely how many communities or sub-populations are re\ufb02ected in a given dataset without access to information about its sampling or data collection methodology. Even then, because the de\ufb01nition of a given community and an individual\u2019s membership in it may be \ufb02uid over time or potentially overlapping or intersectional [cf. 39, 41, 114], the number of communities represented in a dataset may depend on the research questions used to investigate that question. 4.2 The Number of Sub-Groups in Evaluation Datasets Grows with Dataset Size Since evaluation metrics are in\ufb02uenced by the sampled population \ud448, which itself may be non-representative for a variety of reasons, often related to the sampling approach, we now ask what happens as we increase the sample size. For this, we can introduce a third \u2018z-axis\u2019 to scaling law analysis re\ufb02ecting the size of the evaluation data, which, we will show, re\ufb02ects the number of sub-populations present. \fScaling Laws Do Not Scale Consider a theoretical sampling distribution that aims to be representative of the population U. In this case, the sequence of samples \ud4480,\ud4481, . . . will, although initially missing out on many subgroups, converge toward covering a broad, global set of people, ostensibly representing all sub-groups in U, which is consistent with the aspirational, though unrealistic, claims for large models to bene\ufb01t \u201call of humanity\u201d or \u201cthe human race\u201d [2]. As discussed in the previous section though, the sampling frames typically used to evaluate AI systems are not intentionally collected to be representative of any particular sub-group or community\u2014 nor is it clear what it would even mean for a sampling frame to be representative of all sub-groups of people. This means that although smaller samples are likely to contain fewer sub-groups and a large enough sample may theoretically converge toward a broad, global set of sub-groups, the rate at which the number of sub-groups are encountered is dependent on properties of the sampling distributions, and the broader sampling approach taken. Given the tendency for data collection practices to be biased toward communities that are easier for the researchers collecting the data to access [cf. 101], or datasets that are easier to scrape [e.g., 53], the rate of observing new sub-groups will be slower than a representative sample. More realistically, if sampling is done by \u2018organic user growth,\u2019 as is typical in production settings, the sampling distribution itself is changing as sample size increases. Consider deployed systems, where early adopters of the technology will not be representative of later users. For example, user growth on social media platforms tends to occur non-uniformly within and across national boundaries [118, 122, 157]. Assuming consistent adoption and growth, we can use a model of nested subpopulation support, supp(\ud7030) \u2286supp(\ud7031) \u2286. . . \u2286U. 
As suggested by social media adoption, this growth in support is structured: we tend to see some populations adopt before others due to homophily. These observations suggest that, regardless of the sampling strategy or the way we might represent a sequence of samples (e.g., the model of nested support), the number of unique sub-groups present will grow with sample size. However, crucially for scaling laws, the nature of that growth\u2014and thus the particular composition of sub-groups and communities contained in the evaluation dataset\u2014is heavily impacted by the sampling strategy used to collect the evaluation dataset. 4.3 Sub-Groups Can Have Incompatible Metrics Along with the substantial heterogeneity of populationscomes heterogeneity of preferences and values. Although di\ufb00erent sub-populations may have di\ufb00erent relationships with a single evaluation metric (as in disaggregated evaluations of algorithmic fairness [12, 24, 130]), we are particularly interested in di\ufb00erent, incompatible metrics and constructs themselves [cf. 71, 85], a subtle but important di\ufb00erence. For instance, it is well-established via decades of the World Values Survey that there are tensions in values across international populations [73]. In addition, there is large sub-national variation in public opinion; for example, in the US, public opinion di\ufb00ers greatly on topics such as support for gay marriage [28], the New Deal [29], and the death penalty [139], among others [16], in ways that are shaped by various cultural and political factors [28]. In AI ethics, prior work has identi\ufb01ed substantial di\ufb00erences in how various populations\u2019 values manifest in terms of preferences for AI systems. For instance, Jobin et al. [88] analyzed AI ethics principles statements from nearly a hundred di\ufb00erent institutions across various countries, \ufb01nding that while there was convergence in high-level values such as fairness and transparency, there was high divergence between countries in the speci\ufb01c ways those values are operationalized in AI principles statements, the practices that enact those principles, and the mechanisms used to enforce them. In addition, Awad et al. [9] collected data on millions of people\u2019s preferences for which of two personas an autonomous vehicle should kill in a car accident, via an online tool used in 233 countries and territories. They found substantial cross-cultural variation in preferences [9], and they attempted to explain that variation by drawing on various economic and cultural indicators, such as the World Values Survey [83], Gini coe\ufb03cient scores [54], and other cultural frameworks [82, 104].5 Relatedly, Jakesch et al. [86] conducted a survey of how di\ufb00erent groups prioritize ethical values in AI development, \ufb01nding statistically signi\ufb01cant di\ufb00erences in how members of di\ufb00erent occupations and demographic groups prioritize values such as fairness, privacy, and transparency in particular AI deployment scenarios. 
Recent work has explored the relationship between di\ufb00erent groups\u2019 responses to public opinion polls(e.g., Pew American Trends and the World Values Survey) and the output of large language models, \ufb01nding that large models\u2019 output is more similar to the average responses from survey respondents in the USA, Canada, and Australia, compared to respondents from other countries [55] and within the US, language models\u2019 output re\ufb02ects certain groups\u2019 opinion poll responses more often than others [134]. As one example of how cultural di\ufb00erences in values may impact AI development and evaluation, Sambasivan et al. [133] has identi\ufb01ed how algorithmic (un)fairness in the Indian context operates along di\ufb00erent axes than those identi\ufb01ed in Western contexts. For instance, they found that algorithmic fairness in India entails di\ufb00erent sets of sub-groups, frameworks, and methods, including how algorithmic harms are shaped by the forces of caste and religion, among others. They discuss how popular fairness measurements are informed by speci\ufb01c cultural and historical circumstances, such as approaches for measuring disparate impact or disparate treatment arising from US anti-discrimination law [cf. 154], and by Western philosophical frameworks and approaches to justice more generally [133]. This suggests that large-scale datasets (which may contain numerous communities or sub-groups) may thus inadvertantly collapse meaningful di\ufb00erences in those sub-groups\u2019 preferences for how values in AI are operationalized\u2014leading to what some have referred to as \u201caggregation bias\u201d [146]. There is substantial empirical evidence for how such aggregate approaches to evaluating models may hide disparities between (or within) subpopulations for a \ufb01xed evaluation metric. To uncover these disparities, 5Although it is important to note that the data collection method\u2014a platform sample based on users accessing an interactive MIT website\u2014is not likely to lead to nationally representative samples. \fFernando Diaz and Michael Madaio \u201cdis-aggreggated evaluations\u201d [12] of model performance are conducted by disaggregating a single performance metric across multiple groups. Such approaches have been the foundation of highpro\ufb01le evaluations of machine learning failures, such as evaluations of how gender recognition systems are less accurate for women of color than others [23], or how speech recognition systems have higher word error rates for speakers of African-American Language [97], among other examples [e.g., 46, 110, 111]. However, prior empirical work found that when AI product teams are deploying models \u201cat scale\u201d (i.e., across numerous geographic and cultural contexts), choices about precisely how to dis-aggregate evaluations of model performance\u2014in terms of which evaluation metric to use, or along which demographic dimensions to dis-aggregate\u2014 posed major obstacles to AI teams\u2019 ability to e\ufb00ectively conduct fairness evaluations at scale [101]. Indeed, concerns with aggregation bias are only ampli\ufb01ed when we move from \ufb01xed evaluation metrics to subpopulation-speci\ufb01c metrics, which may be in tension. 
Further contributing to potential aggregation harms, during the data annotation process, prior work has found substantial disagreement between annotators from di\ufb00erent demographic groups when determining what constitutes hate speech or toxic language [44, 135, 148], with other work suggesting that it is crucial to understand the subjective identities of crowdworkers [48, 51], developing methods for handling disagreement between groups of annotators in situations where there may not be a single \u201cground truth\u201d in annotation labels [43, 67, 68]. Scaling laws, which are, in essence, aggregate evaluations of models\u2019 performance across the entire evaluation dataset \ud448, may similarly hide failures or inverse relationships amongst constructs and values when evaluated with di\ufb00erent sub-populationscontained within the evaluation data. 4.4 Risk of Metric Incompatibility Grows with Data Size Given that the number of sub-groups or communities within the evaluation dataset grows with the size of that data, and these groups may have incompatible values (i.e., constructs for model output quality) and relationships to performance metrics that operationalize those constructs, we turn to our \ufb01nal claim: that the risk of failures or harms of AI systems grows as evaluation data size grows. We assume that an AI system is trained and evaluated on a single, \u2018dominant\u2019 metric, the proxy for a construct of interest, such as model output quality. Notwithstanding the fact that this proxy may be insu\ufb03cient to model the construct, when used in social contexts, we know that the variety of values of di\ufb00erent communities or sub-groups means that many such values (e.g., constructs) are not considered when AI system are evaluated with a single evaluation metric, capturing a single construct. Goodhart\u2019s Law suggests that using a proxy metric can lead to over-optimization and degradation of the actual performance on the construct, something con\ufb01rmed in the AI alignment literature [64, 141]. Similarly, using a single dominant metric can lead to overoptimization and degradation of the actual performance for other constructs and values of di\ufb00erent communities, especially because they are more likely to be incompatible with the dominant construct. Indeed, work in multi-task learning has demonstrated that, when optimizing for one task, more data can degrade performance for other tasks [5]. In addition, Solaiman and Dennison [143] \ufb01nd evidence for a scaling law between model size and toxicity\u2014that is, as model size increases, the models were more likely to generate toxic language. Similarly, Lin et al. [99] found evidence for an inverse scaling law for model size and truthfulness in a questionanswering (QA) task (i.e., models were less truthful the larger they were), and Parrish et al. [116] found that larger models performed worse on the task of detecting biased language, using a bias benchmark dataset they developed for QA. This phenomena has also been shown as the training dataset size is increased, in addition to the model size. When analyzing the LAION datasets for the presence of hateful content in images and alt-text, Birhane et al. [20] found that as the dataset size increased, the likelihood for models trained on those datasets to label images of Black people\u2019s faces as criminals also increased. 
Since the number of distinct sub-populations (and thus their respective latent constructs for model quality) represented in an evaluation set are likely to grow as a function of dataset size, there is an increasing chance of a dramatically misaligned evaluation of model quality\u2014leading to potential impacts or harms for communities whose values are not represented by the dominant performance metric used for model evaluation [e.g., 50, 57, 113]. Given that we might observe a disparity in \u2018true performance\u2019 across populations in the evaluation data or, more generally, in the target population, we need to quantify the severity of this disparity. The systematic under-performance and exclusion of values from some sub-populations in scaling law analyses raises issues of (un)fairness and justice. While we emphasize that our claims are di\ufb00erent from those in the existing algorithmic bias literature that evaluate a \ufb01xed metric for di\ufb00erent populations [e.g., 12, 23]\u2014as we are interested here in values tensions that might result in different communities valuing di\ufb00erent constructs entirely or di\ufb00erent proxy measures to capture those constructs\u2014we can still adopt methods from that community to measure disparity [11]. From the perspective of robustness or Rawlsian fairness, we can look at the worst case true performance of a system on a sub-population in terms of its own values [65, 79, 92]. Poor performance is likely to be ampli\ufb01ed by the fact that the worst o\ufb00sub-populations are likely to be from groups that historically have not been considered, represented, or participated in AI development processes [e.g., 123] (Section 4.2). Given the catastrophic deterioration of performance according to Goodhart\u2019s law, other notions of unfairness (in terms of incompatibility of values and the metrics used to operationalize those values) are also more likely to occur as more sub-populations manifest in evaluation data and model performance according to a target metric grows. 5 DISCUSSION 5.1 Implications for existing approaches to scaling laws Proponents of scaling laws for AI systems argue for the existence of a power law relationship between the size of a model (i.e., number of parameters, dataset size, compute) and its performance (along \fScaling Laws Do Not Scale some metric). While this narrative has led to increased investment in collection of large datasets [19, 100] and in ever-larger models and compute power\u2014along with supporting a narrative of progress akin to Moore\u2019s Law\u2014recent work has demonstrated that scaling laws may not hold for particular tasks [25, 99, 103, 116, 155]. However, while the current work on exploring the limitations of scaling laws (e.g., inverse scaling laws [103], etc) has largely kept the same parameters\u2014the relationship between some aspect of model size and model performance\u2014and just uses a di\ufb00erent task (e.g., generating truthful text, negation, bias detection, etc), we argue that there may be other relevant dimensions along which scaling laws may not hold. For instance, we argue that scaling laws should incorporate a third dimension\u2014a z-axis\u2014drawing on the evaluation dataset (in addition to the typical x-axis of the training dataset). 
In addition, what would it look like to evaluate scaling laws for AI systems where our proposed \u2018z-axis\u2019, instead of evaluation data size, might re\ufb02ect the di\ufb00erent groups of people whose values (and thus latent metrics: \ud707\u2217) may fundamentally di\ufb00er? This might include some combination of modeling scaling laws on the basis of: 1) the number of countries in which an AI system is deployed or used; 2) the number of languages in which the output is generated; 3) the number of users; 4) the number of unique communities or sub-populations represented in the dataset. In addition, given our argument that di\ufb00erent communities represented within a dataset (or impacted by a particular system) may have fundamentally di\ufb00erent values and metrics, what might it look like to evaluate scaling laws where the y-axis, instead of decontextualized model performance metrics like accuracy (or F1 score, RMSE, ROUGE, etc), were instead chosen for particular use cases or system deployment contexts, or were chosen by particular impacted communities in participatory ways [e.g., 47, 147, 153] to better re\ufb02ect their values. As previously discussed, substantial work on model evaluations has shown that aggregate metrics of model performance may hide worse performance for particular sub-groups that can be observed when model performance is dis-aggregated by some demographic categories [e.g., 23, 97, 110, 111]. Analogously, the current paradigm of evaluating scaling laws on aggregations of performance metrics evaluated on a single training dataset is likely to hide similar divergences in values and preferences for metrics for sub-groups within an evaluation dataset. For instance, a recent paper proposed a benchmark for evaluating bias in QA, and evaluated it on several large language models; however, they caveat at the end of the paper that \u201cthe data in BBQ is only designed to test biases associated with US English-speaking cultural contexts, and it should not be used as evidence that a model would still look unbiased for contexts from a di\ufb00erent culture\u201d [116]. What would it look like for evaluations of scaling laws to be dis-aggregated for datasets collectedfrom (or ideally collected, curated, or annotated by) di\ufb00erent communities [cf. 153], be that linguistic communities, cultural communities, countries, or other set of sub-populations? In other words, would an observed scaling law relationship for performance metric \ud707evaluated with dataset \ud448still hold if that dataset were collected from a di\ufb00erent context or collected or annotated by a di\ufb00erent community? 5.2 Broader questions for scaling laws While the previous section suggests some shorter-term means to investigate the limits of scaling laws, this work raises more fundamental questions for scaling laws for AI. For instance, while analyses like the ones we proposed in section 5.1 may reveal broken, inverse, or other non-monotonically increasing functional forms for scaling laws across di\ufb00erent communities within a given dataset (or who might use or be impacted by a particular system), what to do about that is a much thornier question. In many ways, tensions in values (sometimes referred to as values pluralism [15, 42, 151]) are a fundamental challenge of political systems. Methods have been developed from participatory democracy [cf. 120], deliberation theory [58], and value-sensitive design [105] (among other areas) to identify and resolve tensions in values. 
However, these methods have largely been designed for smaller-scales: e.g., town halls, focus group discussions, and participatory design workshops [108]. As such, it is not clear how such approaches may be able to address value tensions at the scale of modern AI systems [cf. 75, 142]. In the \ufb01eld of AI, this has manifested in tensions in how highlevel principles for ethics (e.g., fairness) are operationalizedin terms of speci\ufb01c practices or metrics, despite widespread agreement on the high-level principle itself [e.g., 88]. Foundational work on bias in recidivism prediction [4] highlighted the consequences of this\u2014 the original COMPAS algorithm was evaluated using predictive parity across demographic groups, but subsequent research identi\ufb01ed disparities in false positives and false negatives for di\ufb00erent demographic groups despite equivalent predictive parity (due in part to systemic injustice in the US criminal justice system) [34]. How might such tensions in values be resolved when investigating scaling laws for large models? Recent work on AI \u201calignment\u201d has attempted to develop approaches to aligning AI with human values [8, 10, 63, 143]. Gabriel [60] discusses how values pluralism may impact the goal of aligning AI systems with \u201chuman values\u201d (in all of their multiplicity and tensions), and he discusses tradeo\ufb00s in several potential approaches to resolving those tensions. Some currently adopted approaches include red-teaming [63], reinforcement learning from human feedback (RLHF) [10] or creating \u201cvalues-targeted datasets\u201d [143]. However, in recent work attempting to incorporate values into training large models, people involved in red-teaming and RLHF appear to largely be US-based and may not be representative even of the US population [10, 63]\u2014despite prior work \ufb01nding e\ufb00ects of annotators\u2019 identity on their AI safety annotations [6]. Meanwhile, in this example, the \u201cvalues-targeted datasets\u201d were created by the researchers, who acknowledge this limitation, writing that \u201ccreating many values-targeted datasets to re\ufb02ect the cultures of the many peoples impacted by language models is a di\ufb03cult feat\u201d [143]. However, this is not simply a limitation of their work, but a more fundamental challenge for values tensions in scaling laws\u2014what might it mean to not only create values-targeted datasets from di\ufb00erent communities or cultural contexts, but to resolve potentially irreconcilable di\ufb00erences in values between such communities, at scale? While we proposed a simple worst-case approach to quantify the robustness of a model across subpopulations in Section 4.4, the \fFernando Diaz and Michael Madaio reality of dealing with multiple, con\ufb02icting values is more complex. The evidence from Birhane et al. [20] demonstrates that values of emergent subpopulationscan be toxic, suggesting that simply looking at the worst-o\ufb00subpopulation risks buttressing toxic behavior. Although we do not o\ufb00er answers to the questions we have proposed in this paper, we suggest that, in part, what is needed are interdisciplinary, mixed-methods approaches to theoretically and empirically investigate the questions we raise. 
There are existing theories and methods from numerous \ufb01elds\u2014across the social sciences (including computational social science) and computer science\u2014that have been developed to explore questions related to whether and in what ways various communities or sub-groups are represented by data [e.g., 158] as well as identifying and resolving values tensions among communities [e.g., 105]. For instance, various methods have been developed for identifying sub-populations within large datasets about people, in political science (e.g., for public opinion polling) [56], demography [115], healthcare [156], genetics [117], and more. However, work from the social sciences suggests that some communities may be hidden in ways not legible to data available for computational community detection (e.g., injection drug users; sex workers) [132], requiring approaches such as qualitative or ethnographic research to identify such communities\u2014 which may be di\ufb03cult and prohibitively expensive in current practice at the scales implicated by AI scaling laws, not to mention that many such communities may not want to have data collected at all, as it may put them at risk. In addition, recent work demonstrating the lack of relevance of Western AI fairness frameworks for other cultural contexts, such as India, has drawn on approaches from both qualitative [133] and quantitative research paradigms, including natural language processing [18]. We argue that such interdisciplinary, mixed-methods approaches such as these, involving deep partnerships with community members or community organizations, such as participatory or community-based research methods [e.g., 47, 78, 108, 133, 153] may be one way to empirically investigate our claims and grapple with the inherent tensions and limitations of scaling laws for AI. 5.3" + }, + { + "url": "http://arxiv.org/abs/2306.07908v1", + "title": "Best-Case Retrieval Evaluation: Improving the Sensitivity of Reciprocal Rank with Lexicographic Precision", + "abstract": "Across a variety of ranking tasks, researchers use reciprocal rank to measure\nthe effectiveness for users interested in exactly one relevant item. Despite\nits widespread use, evidence suggests that reciprocal rank is brittle when\ndiscriminating between systems. This brittleness, in turn, is compounded in\nmodern evaluation settings where current, high-precision systems may be\ndifficult to distinguish. We address the lack of sensitivity of reciprocal rank\nby introducing and connecting it to the concept of best-case retrieval, an\nevaluation method focusing on assessing the quality of a ranking for the most\nsatisfied possible user across possible recall requirements. This perspective\nallows us to generalize reciprocal rank and define a new preference-based\nevaluation we call lexicographic precision or lexiprecision. By mathematical\nconstruction, we ensure that lexiprecision preserves differences detected by\nreciprocal rank, while empirically improving sensitivity and robustness across\na broad set of retrieval and recommendation tasks.", + "authors": "Fernando Diaz", + "published": "2023-06-13", + "updated": "2023-06-13", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "main_content": "INTRODUCTION Evaluating ranking systems for users seeking exactly one relevant item has a long history in information retrieval. As early as 1968, Cooper [7] proposed Type 1 expected search length or ESL1, defined as the rank position of the highest ranked relevant item. 
In the context of TREC-5, Kantor and Voorhees [12] proposed using the reciprocal of ESL1 in order to emphasize rank changes at the top of the ranked list and modeling the impatience of a searcher as they need to scan for a single item. Over the years, reciprocal rank (and less so ESL1) has established itself as a core metric for retrieval [5] and recommendation [4], adopted in situations where there is actually only one relevant item as well as in situations where there are multiple relevant items. Given two rankings, reciprocal rank and ESL1 always agree in terms of which ranking is better. Because of this, we refer to them collectively as the recall level 1 or RL1 metrics. Despite the widespread use of reciprocal rank, recent evidence suggests that it may brittle when it comes to discriminating between ranking systems [10, 19, 20]. In particular, the low number of unique values of reciprocal rank means that, especially when evaluating multiple highly-performing systems, we are likely to observe tied performance. Voorhees et al. [21] demonstrate that these conditions exist in many modern deep learning benchmarks. We address these issues by theoretically interpreting RL1 as a population-level metric we refer to as best-case retrieval evaluation. This allows us to propose a generalization of the RL1 ordering based on social choice theory [17] and preference-based evaluation [8]. This evaluation method, lexicographic precision or lexiprecision, (1,2,3,4,5) (1,2,3,4,6) (n-4,n-3,n-2,n-1,n) (1,2,3,4,7) (n-5,n-3,n-2,n-1,n) (2,3,4,5,6) (1,n-3,n-2,n-1,n) \u2026 \u2026 (1,*,*,*,*) (2,*,*,*,*) (3,*,*,*,*) (n-4,n-3,n-2,n-1,n) (n-5,*,*,*,*) \u2026 RL1 lexicographic precision Figure 1: Hasse diagram of possible positions of relevant items. Each tuple represents the possible positions of five relevant items in a corpus of \ud835\udc5bitems. RL1 metrics such as reciprocal rank (left) have \ud835\udc5b\u22124 unique values and, therefore, result in a partial order over all possible positions of relevant items. Lexicographic precision (right) is a total order over all possible positions of relevant items that preserves all strict orders in RL1 evaluation. preserves any strict ordering between rankings based on RL1 while also providing a theoretically-justified ordering when RL1 is tied. We compare lexiprecision and RL1 orderings using Hasse diagrams in Figure 1. On the left, we show the partial order of all possible positions of five relevant items in a corpus of size \ud835\udc5b. Since reciprocal rank and ESL1 only consider the position of the first relevant item, we only have \ud835\udc5bdifferent relevance levels. While this may not be an issue in general (since \ud835\udc5bis usually large), the number of rankings within each level can be very large and multiple highly effective systems can result in numerous ties. In contrast, lexiprecision has one relevance level for each unique arrangement of relevant items. That is, the number of relevance levels scales with the number relevant items and, by design, two rankings are tied only if they place relevant items in exactly the same positions. In this paper, we contribute to the theoretical understanding of evaluation through a detailed study of RL1 metrics, best-case retrieval evaluation, and lexiprecision. In Section 2, we motivate our work by showing that RL1 has fundamental theoretical limits, especially in situations where there are multiple relevant items. 
In Section 3, we demonstrate that RL1 can be interpreted as best-case retrieval evaluation, allowing us to to address its limitations by using methods from social choice theory and generalizing it as lexiprecision. In Section 5, we then conduct extensive empirical arXiv:2306.07908v1 [cs.IR] 13 Jun 2023 \fFernando Diaz analysis to show that lexiprecision is strongly correlated with RL1 metrics while substantially improving its discriminative power.1 2 MOTIVATION Our work is based on the observation that ceiling effects are inherent in RL1 evaluation. Assume a standard ranking problem where, given a query with \ud835\udc5aassociated relevant items, a system orders all \ud835\udc5bdocuments in the collection in decreasing order of predicted relevance. The set of all possible rankings of \ud835\udc5bis referred to as the symmetric group over \ud835\udc5belements and is represented as \ud835\udc46\ud835\udc5b. For a given ranking \ud835\udf0b\u2208\ud835\udc46\ud835\udc5b, let \ud835\udc5d\ud835\udc56be the position of the\ud835\udc56th highest-ranked relevant item. We can then define reciprocal rank as RR1(\ud835\udf0b) = 1 \ud835\udc5d1 . When no relevant document is retrieved (e.g. if no relevant items are in the system\u2019s top \ud835\udc58retrieval), we set RR1 = 0. For two rankings, we define \ud835\udeffRR1(\ud835\udf0b, \ud835\udf0b\u2032) = RR1(\ud835\udf0b)\u2212RR1(\ud835\udf0b\u2032). For the remainder of this section, we will use reciprocal rank for clarity although the analysis applies to ESL1 as well. Although we can easily see that there are \ud835\udc5bdifferent values for RR1(\ud835\udf0b), we are interested in the distribution of ties amongst system rankings for these \ud835\udc5bvalues as predicted by theoretical properties of reciprocal rank. Specifically, we want to compute, for a given position of the first relevant item \ud835\udc5d1 and a random second ranking, the probability that we will observe a tie. For any \ud835\udc5d1, there are \u0000\ud835\udc5b\u2212\ud835\udc5d1 \ud835\udc5a\u22121 \u0001 tied arrangements of positions of relevant items amongst all of the possible arrangements \ud835\udc5d\u2032 from a second system. If we sample an arrangement of relevant items uniformly at random, then the probability of a tie with \ud835\udf0bis \ud835\udc43\ud835\udc5f(\ud835\udc5d1 = \ud835\udc5d\u2032 1|\ud835\udc5d1) = (\ud835\udc5b\u2212\ud835\udc5d1 \ud835\udc5a\u22121) ( \ud835\udc5b \ud835\udc5a) . We plot this probability in Figure 2. We can observe that, when we have few relevant items (i.e. small \ud835\udc5a), we have a relatively small and uniform probability of ties across all values of \ud835\udc5d1. However, as we increase the number of relevant items, the distribution begins to skew toward a higher probability of a tie as \ud835\udc5d1 is smaller. This means that, if we have a ranking where the first relevant item is close to the top, even if the second ranking is drawn uniformly at random, we will be more likely to find a tie than if the first relevant item were lower in the ranking. While our analysis indicates a lack of sensitivity of reciprocal rank for \ud835\udc5d\u2032 drawn uniformly at random as \ud835\udc5aincreases, we are also interested in the probability of ties when \ud835\udc5d\u2032 is drawn from rankings produced by real systems. We collected runs associated with multiple public benchmarks (see Section 4.1 for details) and computed the the empirical distribution of ties conditioned \ud835\udc5d1 (Figure 3). 
Because of the highly skewed distribution, we plot the logarithmic transform of the probability of a rank position. As we can see, across both older and newer benchmarks, the probability of a tie for rankings when the top-ranked relevant item is at position 1 is substantially larger than if we assume \ud835\udc5d\u2032 is drawn uniformly at random. The 2021 TREC Deep Learning track data in particular demonstrates higher skew than others, confirming observations previously made about saturation at top rank positions [21]. Taken together, these results demonstrate a fundamental limitation of RL1 metrics (i.e., reciprocal rank and ESL1) for evaluation. 1In lieu of an isolated \u2018Related Work\u2019 section, we have included discussion of relevant literature when necessary. This helps make connections explicit to our work. 0 100 200 300 400 500 0.000 0.001 0.002 0.003 0.004 0.005 p1 Pr(p1=p1 ' ) m 10 50 100 250 Figure 2: Given a ranking where the highest-ranked relevant item is at position \ud835\udc5d1, the probability of a tie with a second ranking sampled uniformly from all arrangements of relevant items for a corpus size of \ud835\udc5b= 50000. This figure and others are best rendered or printed in color. 1 2 5 10 0.005 0.010 0.020 0.050 0.100 0.200 0.500 p1 Pr(p1=p1 ' ) robust web 2013 DL 2021 (docs) ml-1m Figure 3: Empirical probability of a tie with a second ranking for several benchmarks (see Section 4.1 for details). Horizontal and vertical axes are on a logarithmic scale for clarity. As retrieval and other scenarios where reciprocal rank is used begin to attract highly performant systems, we need to extend our evaluation approaches to address these issues. 3 LEXICOGRAPHIC PRECISION RL1 evaluation emphasizes precision by considering the position of the top-ranked relevant item and ignoring the positions of other relevant items. However, only ever looking at the position of the topranked relevant item results in the ceiling effects described in the previous section. Our goal is to develop an evaluation method that preserves the ordering of a pair of rankings by RL1 (i.e., agree with RR1 when RR1(\ud835\udf0b) \u2260RR1(\ud835\udf0b\u2032)) and provides a justified ordering \fBest-Case Retrieval Evaluation: Improving the Sensitivity of Reciprocal Rank with Lexicographic Precision of a pair of rankings when RL1 is tied (i.e., generate a sensible order when RR1(\ud835\udf0b) = RR1(\ud835\udf0b\u2032)). Although metrics like expected reciprocal rank [6] and average precision include reciprocal rank as a component in their computation, they are not guaranteed to preserve the ordering of reciprocal rank when there is one. In this section, we will interpret RL1 metrics as best-case retrieval evaluation, allowing us to derive a preference-based evaluation method based on social choice theory. 3.1 Best-Case Retrieval Evaluation When a user approaches a retrieval system, there is a great deal of uncertainty about their information need. While a request such as a text query provides information about which items in the corpus might be relevant, it says much less about the user\u2019s appetite for relevant information. As a result, there is an implicit population of possible users issuing any particular request, each of whom may have a different utility for any particular ranking. In this section, we explore two types of uncertainty and demonstrate that, from both perspectives, RL1 evaluation represents the best-case utility over that population. 
We first consider uncertainty over recall requirements. Robertson [15] presented a model for evaluating rankings based on the diverse set of recall requirements that a user might have. Given a request and its associated relevant items, users may be interested in one relevant item, a few relevant items, or the complete set of all relevant items. We can assess the quality of a ranking for any particular user recall requirement with what Cooper [7] refers to as the Type 2 expected search length: the number of items a user with requirement \ud835\udc56has to scan before finding \ud835\udc56relevant items. So, each information need has \ud835\udc5arecall levels and RL1 is the evaluation measure associated with users requiring exactly one relevant item. From this perspective, we can, for a specific ranking, look at how utility is distributed amongst possible users, as represented by their recall levels. For example, we can ask how utility for users with high and low recall requirements compares; or what the average utility across these populations is. While previous work has looked at the average-case utility [8] and worst-case utility [9], in this work we suggest that RL1 represents the best-case performance over these possible users. The proof is relatively simple. Because RR\ud835\udc56monotonically degrades in rank, the best-case utility over this representation of users is RR1 (equivalently ESL1). The next-best-case is RR2 and so forth until we reach RR\ud835\udc5a, which we refer to as the worst-case. So, given two rankings \ud835\udf0band \ud835\udf0b\u2032, observing RR1(\ud835\udf0b) > RR1(\ud835\udf0b\u2032) implies that the best-case performance over possible user recall requirements is higher in \ud835\udf0bcompared to \ud835\udf0b\u2032. Next, we consider uncertainty over psychologically relevant items. When evaluating a retrieval system, we often use relevance labels derived from human assessors or statistical models. But what if a specific user does not find the top-ranked item labeled relevant actually relevant to them? For example, a user may have already seen a specific item or they may desire an item with a specific (missing) attribute. A judged relevant item might be inappropriate for any number of reasons not expressed in the request. The concept of psychological relevance [11] suggests that judging any item relevant in general (as is the case in many retrieval benchmarks, including those used in TREC) is a necessary but not sufficient criteria to determine an item\u2019s psychological relevance to any particular user. From this perspective, there are 2\ud835\udc5a\u22121 possible non-empty sets of relevant items for a specific request, each representing psychological relevance to a possible user. Nevertheless, amongst these possible users, if they are interested in precisely one relevant item, there are \ud835\udc5aunique utilities. Again, since RL1 monotonically decreases in rank, the best-case utility is RR1, followed by RR2 until we reach RR\ud835\udc5a. Both uncertainty over recall levels and over psychological relevance focus on possible populations of users. Because the utility to the user implies utility to the system designer (e.g., for objectives like retention), understanding the best-case performance is valuable in decision-making. From the perspective of social choice theory, best-case retrieval evaluation is inherently optimistic and represents risk-seeking decision-making. 
3.2 Lexicographic Precision The problem with evaluating for best-case retrieval (as shown in Section 2) is the tendency for multiple rankings to be tied, especially as (i) we increase the number of relevant items and (ii) systems optimize for retrieval metrics. We can address these ceiling effects by developing a best-case preference-based evaluation that focuses on measuring differences in performance instead of absolute performance [8]. While metric-based evaluation models the preference between rankings by first computing some evaluation metric for each ranking, preference-based evaluation explicitly models the preference between two rankings. Prior research has demonstrated that preference-based evaluation can be much more sensitive than metric-based evaluation [8], making it well-suited for addressing the ceiling effects described in Section 2. Under best-case preference-based retrieval, we are interested in answering the question, \u2018under the best possible scenario, which ranking would the user prefer?\u2019 In this respect, it is a user-based evaluation method, but one based on preferences and measurement over a population of users. More formally, given an information need and two rankings \ud835\udf0band \ud835\udf0b\u2032 associated with two systems, metric-based evaluation uses an evaluation metric \ud835\udf07: \ud835\udc46\ud835\udc5b\u2192\u211c(e.g. reciprocal rank or average precision) to compute a preference, \ud835\udf07(\ud835\udf0b) > \ud835\udf07(\ud835\udf0b\u2032) =\u21d2\ud835\udf0b\u227b\ud835\udf0b\u2032 where \ud835\udf0b\u227b\ud835\udf0b\u2032 indicates that we prefer \ud835\udf0bto \ud835\udf0b\u2032. Notice that, if \ud835\udf07(\ud835\udf0b) = \ud835\udf07(\ud835\udf0b\u2032), then we cannot infer a preference between \ud835\udf0band \ud835\udf0b\u2032. We contrast this with preference-based evaluation, which directly models this relationship \u0394 : \ud835\udc46\ud835\udc5b\u00d7 \ud835\udc46\ud835\udc5b\u2192\u211c, \u0394(\ud835\udf0b, \ud835\udf0b\u2032) > 0 =\u21d2\ud835\udf0b\u227b\ud835\udf0b\u2032 Our goal is to design a preference-based evaluation that preserves the best-case properties of RL1 metrics with much higher sensitivity. Consider the two position vectors \ud835\udc5dand \ud835\udc5d\u2032 in Figure 4 associated with the two rankings \ud835\udf0band \ud835\udf0b\u2032. These two vectors are tied in the best case (i.e., \ud835\udc5d1 = \ud835\udc5d\u2032 1). However, we can break this tie by looking at the next-best case (i.e. \ud835\udc5d2) where, because \ud835\udc5d2 < \ud835\udc5d\u2032 2, we say that \ud835\udf0b\u227b\ud835\udf0b\u2032. If we had observed a tie between the next-best case, we could compare \ud835\udc5d3, and so forth. This is known as lexicographic sorting in the social choice literature [17] and reflects a generalization of best-case sorting. 
Given two sorted vectors of utilities, here reflected by the rank position, the \fFernando Diaz 2 4 10 n-1 n 2 8 9 500 n AB7HicbZA9SwNBEIbn4 leMX1FLm8UgWIU7EbUzaGMZwUsCyRH2NnvJkr29Y3dOCG/w cZCEVt/kJ0/xc7NJYUmvrDw8L4z7MyEqRQGXfLKaysrq1vF DdLW9s7u3vl/YOGSTLNuM8SmehWSA2XQnEfBUreSjWncSh5M xzeTvPmI9dGJOoBRykPYtpXIhKMorX8jskY65YrbtXNRZbBm0 Pl+jvKVe+WPzu9hGUxV8gkNabtuSkGY6pRMknpU5meErZkP Z526KiMTfBOB92Qk6s0yNRou1TSHL3d8eYxsaM4tBWxhQHZj Gbmv9l7Qyjq2AsVJohV2z2UZRJgmZbk56QnOGcmSBMi3srI QNqKYM7X1K9gje4srL0DirehfV83u3UruBmYpwBMdwCh5cQg 3uoA4+MBDwBC/w6ijn2Xlz3melBWfecwh/5Hz8AL2uklc=\u227b = AB6HicbVDJSgNBE K2JW4xb1KOXxiDkFGYEl2PQi8cEzAKZIfR0apI2PQvdPU IY8gUe9KCIVz/FT/Dmh3i3sxw08UHB470qur5ieBK2/a XlVtZXVvfyG8WtrZ3dveK+wdNFaeSYPFIpZtnyoUPMKG5 lpgO5FIQ19gyx9eT/zWPUrF4+hWjxL0QtqPeMAZ1UaqJ9 1iya7YU5Bl4sxJqVp2y98fj26tW/x0ezFLQ4w0E1SpjmM n2suo1JwJHBfcVGFC2ZD2sWNoRENUXjY9dExOjNIjQSxNR ZpM1d8TGQ2VGoW+6QypHqhFbyL+53VSHVx6GY+SVGPEZo uCVBAdk8nXpMclMi1GhlAmubmVsAGVlGmTcGE4Cy+vEy apxXnvHJWN2lcwQx5OIJjKIMDF1CFG6hBAxgPMAzvFh31 pP1ar3NWnPWfOYQ/sB6/wGimJCI p AB6XicbVDJSgNBEK2JW4xb1KOXxiDmFGYEl2PQi8coZoHMEHo6PUmTnu6hu0cIQ/5AEA+KePVP/ARvfoh3O8tBEx8U PN6roqpemHCmjet+Obml5ZXVtfx6YWNza3unuLvX0DJVhNaJ5FK1QqwpZ4LWDTOcthJFcRxy2gwHV2O/eU+VZlLcmWFCgxj 3BIsYwcZKt8lxp1hyK+4EaJF4M1Kqlv3y98ejX+sUP/2uJGlMhSEca9323MQEGVaGEU5HBT/VNMFkgHu0banAMdVBNrl0hI 6s0kWRVLaEQRP190SGY62HcWg7Y2z6et4bi/957dREF0HGRJIaKsh0UZRyZCQav426TFi+NASTBSztyLSxwoTY8Mp2BC8+Z cXSeOk4p1VTm9sGpcwR4O4BDK4ME5VOEalAHAhE8wDO8OAPnyXl13qatOWc2sw9/4Lz/AMXkLk= p0 (a) Lexicographic Precision \u0394(\ud835\udf0b, \ud835\udf0b\u2032) \ud835\udeffRR1(\ud835\udf0b, \ud835\udf0b\u2032) 0 \ud835\udeffESL1(\ud835\udf0b, \ud835\udf0b\u2032) 0 sgnLP(\ud835\udf0b, \ud835\udf0b\u2032) \u22121 rrLP(\ud835\udf0b, \ud835\udf0b\u2032) \u22121 8 (b) Preference Magnitude Figure 4: Lexicographic precision between two rankings \ud835\udf0b and \ud835\udf0b\u2032 with \ud835\udc5a= 5 relevant items in corpus of size \ud835\udc5b. (a) Using the sorted positions of relevant items, lexiprecision returns a preference based on the highest-ranked difference in positions. (b) The magnitude of preference between \ud835\udf0band \ud835\udf0b\u2032 under different schemes. lexicographic maximum begins by looking at utilities in the bestoff positions (i.e. \ud835\udc5d1 and \ud835\udc5d\u2032 1) and iteratively inspects lower utility positions until we find an inequality. If we exhaust all \ud835\udc5arelevance levels, we indicate that there is not preference between the rankings. Note that a tie can only happen if two rankings have all relevant items in exactly the same positions. Lexicographic sorting generates a total ordering over all positions of relevant items, in contrast with just inspecting \ud835\udc5d1, which compresses all arrangements onto \ud835\udc5bpossible values. Because of its basis in lexicographic ordering, we refer to this lexicographic precision or lexiprecision. 3.3 Number of Ties Under Lexicographic Precision We can contrast the number of ties as \ud835\udc5aincreases in RL1 metrics with the number of ties as \ud835\udc5aincreases in lexiprecision. In the latter, we only observe ties when the positions of the relevant items for two rankings are the same and, therefore, we have \u0000\ud835\udc5b \ud835\udc5a \u0001 possible \u2018values\u2019 and the number of ties given a fixed ranking is constant. If we add \ud835\udc58relevant items, the number of \u2018values\u2019 increases, resulting in an increase in discriminative power. 
Specifically, if we add \ud835\udc58 relevant items to \ud835\udc5a, then the number of possible values scales exponentially in \ud835\udc58. \u0000 \ud835\udc5b \ud835\udc5a+\ud835\udc58 \u0001 \u0000\ud835\udc5b \ud835\udc5a \u0001 = \ud835\udc5a+\ud835\udc58 \u00d6 \ud835\udc56=\ud835\udc5a+1 \ud835\udc5b+ 1 \u2212\ud835\udc56 \ud835\udc56 (1) By contrast, for RL1 metrics, this increase in the number of unique position vectors needs to be allocated to a fixed \ud835\udc5bvalues, resulting in collisions, as suggested by the pigeonhole principle. Moreover, these collisions will tend to increasingly occur at values associated with position vectors where \ud835\udc5d1 is small (Section 2). 3.4 Best-Case Retrieval Evaluation Revisited In Section 3.1, we described two dimensions of uncertainty in retrieval evaluation: recall level and psychological relevance. In both cases, we saw that the best-case utility was represented by RL1. In terms of preference-based evaluation, we would like to show that, for both recall level uncertainty and psychological relevance uncertainty, the highest ranked difference in utility will be \ud835\udeffRR\ud835\udc56\u2217, where \ud835\udc56\u2217= argmin\ud835\udc57\u2208[1,\ud835\udc5a]\ud835\udeffRR\ud835\udc56(\ud835\udf0b, \ud835\udf0b\u2032) \u22600. This is clear for recall level uncertainty because the population of possible users exactly matches the recall levels defining \ud835\udc56\u2217. However, for psychological relevance uncertainty, we have 2\ud835\udc5a\u22121 possible users. That said, there are only \ud835\udc5apossible RL1 metric values. Moreover, the number of possible users tied at the first recall level is 2\ud835\udc5a\u22121; at the second recall level is 2\ud835\udc5a\u22122; down to the final recall level where there is a single possible user. This arrangement of ties is the same regardless of the exact positions of the relevant items. Therefore, if we observe \ud835\udeffRR1 = 0, we will observe 2\ud835\udc5a\u22121 ties amongst the possible psychological relevance states where where the first relevant item is at position \ud835\udc5d1. The next highest utility is, by the monotonicity of RL1 metrics, associated with the second recall level. We can continue this procedure until we observe an inequality, which will occur exactly at the first \ud835\udc56such that \ud835\udeffRR\ud835\udc56(\ud835\udf0b, \ud835\udf0b\u2032) \u22600. In other words, \ud835\udc56\u2217. These observations are important since they demonstrate that lexiprecision generalizes RL1 evaluation and best-case performance across two types of uncertainty. 3.5 Quantifying Preferences Although lexiprecision provides a ordering over a pair of rankings, it does not quantify the magnitude of the preference (i.e. the value of \u0394(\ud835\udf0b, \ud835\udf0b\u2032)). Defining a magnitude allows us to measure the degree of preference, which can then be averaged over multiple requests. We can define the magnitude directly as the value of \ud835\udeffRR\ud835\udc56and, therefore, defining \u0394(\ud835\udf0b, \ud835\udf0b\u2032) as, rrLP(\ud835\udf0b, \ud835\udf0b\u2032) = \ud835\udeffRR\ud835\udc56\u2217(\ud835\udf0b, \ud835\udf0b\u2032) (2) where \ud835\udc56\u2217is defined in Section 3.4. This has the advantage of, when \ud835\udc56\u2217= 1, reproducing the difference in reciprocal rank. Under this definition, the magnitude of preferences for higher recall levels will tend to be smaller due to the aggressive discounting in reciprocal rank. 
Alternatively, we can be more conservative in our quantification and just return a constant value based on the preference, defining \u0394(\ud835\udf0b, \ud835\udf0b\u2032) as, sgnLP(\ud835\udf0b, \ud835\udf0b\u2032) = sgn(\ud835\udeffRR\ud835\udc56\u2217(\ud835\udf0b, \ud835\udf0b\u2032)) (3) where \ud835\udc56\u2217is defined as above. Although the direction of the preference agrees with rrLP, we discard its magnitude and, as a result, differences at lower ranks are equal to those at higher ranks. Prior work found that looking at unweighted preference information alone can help with preference sensitivity [8]. 3.6 Lexicographic Precision as Modeling \ud835\udeffRR1 A different way to interpret lexiprecision is as a method to estimate a high-precision preference between rankings. Assume that we have some latent preference between two rankings, \u02c6 \u0394(\ud835\udf0b, \ud835\udf0b\u2032), that we know to be \u2018high-precision\u2019. That is, users prefer finding some relevant items quickly than all relevant items quickly. One way to model this preference is to inspect the positions of relevant items in \ud835\udf0band \ud835\udf0b\u2032. From the perspective of \u2018very high precision\u2019, observing \ud835\udeffRR1(\ud835\udf0b, \ud835\udf0b\u2032) > 0 provides significant evidence that \u02c6 \u0394(\ud835\udf0b, \ud835\udf0b\u2032) > 0. What if we do not observe a preference at the \fBest-Case Retrieval Evaluation: Improving the Sensitivity of Reciprocal Rank with Lexicographic Precision 5 10 15 20 0.0 0.2 0.4 0.6 0.8 1.0 i Pr(pi=pi ') robust web 2013 DL 2021 (docs) ml-1m Figure 5: Empirical probability of a tie in position by recall level. Note that, while Figure 2 measures the probability of a tie for different positions of the highest ranked relevant item (i.e. \ud835\udc5d1), this figure measures the probability of a tie for different recall levels. first recall level? Inspired by Katz\u2019s back-off model [13], we inspect the second recall level for evidence of the value of \u02c6 \u0394(\ud835\udf0b, \ud835\udf0b\u2032). If we do not observe a preference, we can progressively back off to higher and higher recall levels. While Section 2 demonstrated that \ud835\udeffRR1(\ud835\udf0b, \ud835\udf0b\u2032) = 0 with high probability, backing off our estimates works best if, for \ud835\udc56> 1, we expect \ud835\udeffRR\ud835\udc56(\ud835\udf0b, \ud835\udf0b\u2032) = 0 with lower probability. Using the runs associated with several public benchmarks, we computed \ud835\udeffRR\ud835\udc56for all pairs of rankings generated by multiple systems for the same query. We show the probability of a tie for the first twenty recall levels in Figure 5. We can see that the number of ties at \ud835\udeffRR1 are high, ranging from roughly 20 Inspecting the number of relevant items retrieved confirms this. The DL 2021 submissions had 38.74 \u00b1 21.75 relevant items in their retrievals, compared to web with 53.14 \u00b1 47.06. Meanwhile, robust submissions had 40.51 \u00b1 41.49 relevant items retrieval, suggesting much higher variance and ml-1m with 7.46 \u00b1 8.57 relevant items retrieved and much higher variance, leading to more more ties at higher recall levels. Given that different benchmarks observed different behaviors for ties amongst recall levels, we need to understand how many recall levels we need to visit before finding evidence for \u02c6 \u0394. 
If a benchmark needs many recall levels but observes many ties at high recall levels, then our model of \u02c6 \u0394 may be less reliable. We computed the number of recall levels needed, \ud835\udc56\u2217, for each benchmark and plotted the empirical cumulative distribution function in Figure 6. We find that we need fewer than ten recall levels to capture 90 Although our preceding analysis demonstrates that a backoff model of \u02c6 \u0394 based on lexiprecision will terminate at a reasonable depth, we still need to show that there is locality amongst \ud835\udeffRR\ud835\udc56. This means that we ask, if we observe \ud835\udeffRR1(\ud835\udf0b, \ud835\udf0b\u2032) > 0, how likely is it that \ud835\udeffRR2(\ud835\udf0b, \ud835\udf0b\u2032) > 0? \ud835\udeffRR3(\ud835\udf0b, \ud835\udf0b\u2032) > 0? If there is high locality amongst \ud835\udeffRR\ud835\udc56, then information from \ud835\udeffRR\ud835\udc56+1 can help in predicting 0 5 10 15 20 0.0 0.2 0.4 0.6 0.8 1.0 recall level robust web 2013 DL 2021 (docs) ml-1m Figure 6: Empirical cumulative distribution function of recall level needed to distinguish systems. the true value of \ud835\udeffRR\ud835\udc56when it is missing or tied. Note that, if we observe \ud835\udeffRR\ud835\udc56> 0 and \ud835\udc5bis large, there is absolutely no guarantee that \ud835\udeffRR\ud835\udc56+1 > 0 since the next ranked relevant items could, in theory, occur anywhere in the range [\ud835\udc5d\ud835\udc56+ 1,\ud835\udc5b] and [\ud835\udc5d\u2032 \ud835\udc56+ 1,\ud835\udc5b]. That said, given the number of ties at recall level 1, we are interested in understanding whether information at other rank positions can provide a way to distinguish tied rankings. In Figure 7a, we computed the Pearson correlation amongst all pairs of \ud835\udeffRR\ud835\udc56for \ud835\udc56\u2208[1, 8] for the Robust 2004 benchmark. The fact that correlation between \ud835\udeffRR\ud835\udc56 and \ud835\udeffRR\ud835\udc56+\ud835\udc57degrades as \ud835\udc57increases from 1 demonstrates that there is indeed high locality. The implication justifies the use of backoff modeling of \u02c6 \u0394. To test this hypothesis explicitly, we fit a linear model of \ud835\udeffRR1 using \ud835\udeffRR2, . . . ,\ud835\udeffRR4 as independent variables. We plot the coefficients of the linear regression in the solid line in Figure 7b. The substantially larger coefficient on \ud835\udeffRR2 indicates that the majority of the predictive power can be found at recall level 2 (\ud835\udc57= 1). Higher recall levels (\ud835\udc57> 1) are associated with much smaller coefficients. The actual contributions of higher recall levels are much smaller than this suggests since, because we are operating with reciprocals, the magnitude of \ud835\udeffRR\ud835\udc56shrinks as \ud835\udc56grows. While the colinearity in Figure 7a might explain some of this disparity in weights, the locality of individual Pearson correlations and high predictive accuracy means, from a modeling perspective, that a backoff model is justified. We repeated this analysis for predicting \ud835\udeffRR2 from \ud835\udeffRR3, . . . ,\ud835\udeffRR6 and similarly for \ud835\udeffRR3 and \ud835\udeffRR4. Similar to our observation when modeling \ud835\udeffRR1, these results suggest that the next higher recall level is the most valuable predictor when modeling \ud835\udeffRR\ud835\udc56for any specific recall level. We repeated this regression analysis for explicitly cascaded data (i.e. 
only modeling cases when there is a tie at positions i′ < i) as well as for regressing against the sign of the preference and observed identical findings. Although we omit those plots due to space constraints, they further support a backoff model interpretation of lexiprecision.

Figure 7: Locality of δRR_i. Relationship between the difference in reciprocal rank across recall levels using Robust 2004 runs. (a) Pearson correlation and linear fit between all pairs of δRR_i:
δRR2 vs δRR1: 0.73
δRR3 vs (δRR1, δRR2): 0.62, 0.84
δRR4 vs (δRR1–δRR3): 0.56, 0.74, 0.87
δRR5 vs (δRR1–δRR4): 0.51, 0.67, 0.80, 0.91
δRR6 vs (δRR1–δRR5): 0.47, 0.63, 0.73, 0.84, 0.92
δRR7 vs (δRR1–δRR6): 0.44, 0.59, 0.69, 0.78, 0.86, 0.93
δRR8 vs (δRR1–δRR7): 0.42, 0.55, 0.64, 0.74, 0.80, 0.87, 0.94
(b) Linear regression of δRR_i using δRR_{i+1}, . . . , δRR_{i+4} as independent variables; regression shown for i = {1, 2, 3, 4} with R² = 0.53, 0.70, 0.76, 0.83 respectively.

4 METHODS

In previous sections, we theoretically and conceptually connected RL1 to the notion of best-case retrieval evaluation, with a few illustrative empirical results. In order to rigorously test the viability of lexiprecision, we conducted a series of empirical analyses based on publicly available benchmarking data. (Footnote: Code for computing lexiprecision can be found at https://github.com/diazf/pref_eval.)

4.1 Data

We analyzed the performance of lexiprecision across a variety of retrieval and recommendation tasks. Specifically, we collected runs submitted to TREC news (Robust 2004, Core 2017 and 2018), web (Web 2009–2014), and deep learning (Deep Learning 2019–2021) tracks as well as several public recommendation tasks [20]. We present details of these datasets in Table 1.

Table 1: Datasets used in empirical analysis. Columns: requests, runs, rel/request, docs/request.
news:   robust (2004) 249, 110, 69.93, 913.82; core (2017) 50, 75, 180.04, 8853.11; core (2018) 50, 72, 78.96, 7102.61
web:    web (2009) 50, 48, 129.98, 925.31; web (2010) 48, 32, 187.63, 7013.21; web (2011) 50, 61, 167.56, 8325.07; web (2012) 50, 48, 187.36, 6719.53; web (2013) 50, 61, 182.42, 7174.38; web (2014) 50, 30, 212.58, 6313.98
deep:   deep-docs (2019) 43, 38, 153.42, 623.77; deep-docs (2020) 45, 64, 39.27, 99.55; deep-docs (2021) 57, 66, 189.63, 98.83; deep-pass (2019) 43, 37, 95.40, 892.51; deep-pass (2020) 54, 59, 66.78, 978.01; deep-pass (2021) 53, 63, 191.96, 99.95
recsys: movielens 6005, 21, 18.87, 100.00; libraryThing 7227, 21, 13.15, 100.00; beerAdvocate 17564, 21, 13.66, 99.39

4.2 Analyses

Our empirical analyses were founded on two core questions: (i) how empirically correlated are lexiprecision and RL1 metrics, and (ii) how much more robust is lexiprecision than RL1 metrics. Because of its widespread adoption in the research community, we will use reciprocal rank for these analyses. In order to answer the first question, we conducted experiments designed to predict the agreement between lexiprecision and RL1 metrics under different conditions. We considered two types of agreement. Agreement in ranking preference tests whether π ≻LP π′ agrees with π ≻RR π′. Because lexiprecision is substantially more sensitive than RL1 metrics, we only consider situations where δRR1(π, π′) ≠ 0.
Because sgnLP and rrLP always agree in sign, we will only show results for one of the metrics when computing ranking agreement. Agreement in system preference tests whether E_{q∼Q}[ΔLP(π_q, π′_q)] agrees in sign with E_{q∼Q}[ΔRR(π_q, π′_q)]. This measures whether our choice of rrLP or sgnLP affects its correlation with reciprocal rank. Agreement is measured as the percentage of preferences agreed upon. In order to assess the robustness of lexiprecision, we measure the number of ties observed amongst pairs of rankings and discriminative power. We claim that a robust approach has fewer ties and higher discriminative power. For discriminative power, we adopt Sakai's approach of measuring the number of statistically significant differences between runs [16], using both Tukey's honestly significant difference (HSD) test [3] and a classic paired test to compute p-values. The paired test uses the Student's t-test for reciprocal rank and rrLP [18], and the binomial test for sgnLP.

Table 2: Ranking agreement between δRR1 and preferences based on the positions of the last m−1 relevant items. The computation of sgnLP in the table is based on the m−1 positions of relevant items after the top-ranked relevant item. Columns: sgnLP, δRR2.
news:   robust (2004) 85.78, 83.44; core (2017) 89.23, 87.30; core (2018) 88.01, 86.58
web:    web (2009) 85.87, 84.79; web (2010) 87.29, 85.41; web (2011) 88.91, 87.54; web (2012) 87.22, 85.45; web (2013) 86.51, 84.45; web (2014) 88.02, 85.82
deep:   deep-docs (2019) 86.56, 83.10; deep-docs (2020) 83.73, 79.34; deep-docs (2021) 92.41, 89.78; deep-pass (2019) 90.45, 88.87; deep-pass (2020) 92.86, 91.08; deep-pass (2021) 91.97, 90.14
recsys: ml-1M (2018) 78.90, 77.56; libraryThing (2018) 66.50, 66.08; beerAdvocate (2018) 58.84, 58.25

5 RESULTS

5.1 Correlation with Reciprocal Rank

By construction, we know that δRR1(π, π′) > 0 implies LP(π, π′) > 0 and, so, the correlation between the two will be high. We can further test this by comparing how well lexiprecision predicts a ground-truth preference between rankings based on δRR1. In our first analysis, given an observed δRR1(π, π′) ≠ 0, we measure the ability of lexiprecision and reciprocal rank based only on the m−1 subsequent recall levels to predict the sign of δRR1(π, π′). That is, we use δRR1(π, π′) as a target value and compute δRR1 and LP using the suffixes p_{2:m} and p′_{2:m}. Although artificial, this analysis provides an indication of the predictive value gained through cascaded modeling (as opposed to just looking at the top-ranked relevant item). We present the results in Table 2. As we can see, lexiprecision consistently agrees more with the target (masked) δRR1 than the δRR1 of the suffix across all datasets, indicating that the additional information in higher recall levels can be used to predict the target (masked) δRR1. This agrees with our preliminary analysis in Section 3.6.
We can also test the relationship between reciprocal rank and lexiprecision by measuring the agreement under incomplete information. Specifically, we consider removing either labels (treating unlabeled items as non-relevant) or requests (i.e. queries or users). We then measure the agreement between preferences with incomplete data and δRR1 on complete data (i.e. all requests and labels). Methods that agree more with reciprocal rank on complete data are considered more correlated. We present results for ranking and system agreement when removing labels (Figure 8a) and queries (Figure 8b).

Figure 8: Preference agreement with δRR1 with full data. (a) Removing labels. (b) Removing queries. Labels and requests removed randomly. Results averaged across ten samples. Solid green lines: δRR1 with incomplete information. Dashed red lines: rrLP with incomplete information. Dotted blue lines: sgnLP with incomplete information. Shaded areas: one standard deviation across samples. Ranking agreement with incomplete labels for sgnLP is identical to rrLP and omitted for clarity.

Across all conditions, we observe that rrLP has agreement with δRR1 on complete information that is as high as or slightly higher than that of δRR1 with incomplete information. This means that rrLP can accurately predict δRR1 with complete information as well as or better than using reciprocal rank. Moreover, we observed that sgnLP shows weaker system agreement, which occurs because its magnitude does not decay with rank position and, therefore, the resulting averages are inconsistent with averages of position-discounted reciprocal rank values.

5.2 Sensitivity

In Section 2, we motivated our work by showing that RL1 metrics theoretically and empirically suffer from ceiling effects. The primary instrument we used to determine this was the probability of ties between rankings. In Table 3, we present the percentage of tied rankings from different systems for the same request. As predicted by our analysis in Section 3.3, lexiprecision has substantially fewer ties because a tie only happens when two rankings place relevant items in exactly the same positions.

Table 3: Percentage of ties between pairs of rankings from two systems for the same request. We collapse rrLP and sgnLP for clarity. Columns: rrLP/sgnLP, δRR1.
news:   robust (2004) 0.39, 44.22; core (2017) 0.23, 48.50; core (2018) 1.72, 31.43
web:    web (2009) 4.93, 15.13; web (2010) 0.61, 25.85; web (2011) 1.02, 41.99; web (2012) 0.34, 34.01; web (2013) 0.83, 31.09; web (2014) 0.64, 41.93
deep:   deep-docs (2019) 1.06, 68.45; deep-docs (2020) 2.43, 73.99; deep-docs (2021) 0.23, 80.84; deep-pass (2019) 2.63, 56.89; deep-pass (2020) 2.58, 50.30; deep-pass (2021) 1.32, 47.41
recsys: ml-1M (2018) 3.38, 21.39; libraryThing (2018) 16.48, 25.85; beerAdvocate (2018) 41.73, 45.72

In Section 3.3, we showed that lexiprecision implicitly and exponentially increased its fidelity as the number of relevant items m increased, while RL1 would quickly suffer from ties. In Figure 9, we show the number of tied rankings as a function of incomplete labels.
This allows us to see trends with respect to m. Across our three retrieval benchmark sets, we see the growth in the number of ties for RL1 as m increases; meanwhile, they shrink for lexiprecision. The drop in ties for recommender systems benchmarks suggests that, as described in Section 3.6, rankings contain very few relevant items and, as a result, removing labels will result in no relevant items being present and increasingly tied rankings.

Figure 9: Number of ties as labels are removed randomly. Results are averaged across ten samples. Solid green lines: δRR1 with incomplete information. Dashed red lines: rrLP with incomplete information. Shaded areas: one standard deviation across samples. Number of ties with incomplete labels for sgnLP is identical to rrLP and omitted for clarity.

While the number of ties indicates that RL1 might not be able to distinguish systems, for a large enough sample of requests, a metric might still be good enough to distinguish systems. A different approach to measuring the discriminative power of an evaluation method is to count the number of differences that are statistically significant [16]. When we compare the percentage of pairs registering a statistically significant difference (Table 4), both rrLP and sgnLP outperform reciprocal rank, often by a very large margin. This indicates that the number of ties indeed hurts the ability of reciprocal rank to detect significant differences, while both variants of lexiprecision are much more sensitive.

Table 4: Percentage of run differences detected at p < 0.05. Red: better than reciprocal rank. Bold: best for an evaluation setting.
(a) Tukey's HSD test. Columns: rrLP, sgnLP, RR.
news:   robust (2004) 27.42, 27.34, 23.55; core (2017) 17.41, 14.67, 15.03; core (2018) 28.60, 31.42, 27.39
web:    web (2009) 23.85, 28.28, 24.11; web (2010) 18.95, 13.51, 18.35; web (2011) 14.70, 10.22, 13.83; web (2012) 13.39, 11.61, 13.21; web (2013) 5.85, 5.79, 6.07; web (2014) 20.00, 11.72, 18.85
deep:   deep-docs (2019) 8.25, 19.20, 6.97; deep-docs (2020) 5.26, 3.47, 2.88; deep-docs (2021) 6.39, 11.19, 4.48; deep-pass (2019) 16.52, 18.47, 13.21; deep-pass (2020) 37.46, 40.91, 28.35; deep-pass (2021) 24.07, 24.78, 20.38
recsys: ml-1M (2018) 81.43, 90.95, 80.00; libraryThing (2018) 93.81, 96.67, 93.81; beerAdvocate (2018) 92.38, 96.19, 90.95

(b) Paired test with Bonferroni correction. Columns: rrLP, sgnLP, RR.
news:   robust (2004) 26.22, 27.36, 21.45; core (2017) 16.22, 11.35, 11.53; core (2018) 29.30, 31.73, 27.03
web:    web (2009) 23.76, 25.18, 23.49; web (2010) 18.55, 9.27, 17.74; web (2011) 12.30, 6.94, 9.73; web (2012) 11.97, 10.11, 11.35; web (2013) 4.75, 4.64, 4.32; web (2014) 15.86, 7.13, 14.02
deep:   deep-docs (2019) 11.66, 16.36, 5.69; deep-docs (2020) 2.33, 1.79, 0.60; deep-docs (2021) 3.73, 9.14, 3.03; deep-pass (2019) 15.02, 17.42, 10.36; deep-pass (2020) 39.04, 39.45, 28.00; deep-pass (2021) 23.55, 20.99, 16.79
recsys: ml-1M (2018) 90.00, 92.38, 90.48; libraryThing (2018) 97.14, 97.62, 96.67; beerAdvocate (2018) 94.76, 96.67, 94.76

6 DISCUSSION

Our results demonstrate that our lexiprecision variants capture the properties of RL1 while substantially increasing the ability to distinguish systems under the same best-case evaluation assumptions. Practitioners and evaluators need to assess whether the assumptions behind RL1 metrics, including reciprocal rank, or lexiprecision, or any other evaluation scheme are aligned with the use case. If a retrieval environment supports the assumptions behind RL1 metrics, including ties, then, by all means, they should be used to assess performance. However, in Section 3.1, we raised several reasons why uncertainty over recall requirements and psychological relevance suggest that RL1 metrics make quite strong assumptions not realized in most retrieval settings. We designed lexiprecision to operate as conservatively as possible, preserving any preference from RL1 metrics and only acting to break ties. Although RL1 metrics and lexiprecision agree perfectly when there is only one relevant item, this does not mean that all situations where we have a single judged relevant item should adopt a metric like reciprocal rank. For example, the MSMARCO dataset [14] includes requests with very sparse labels; the majority of requests have one judged relevant item. One might be tempted to use reciprocal rank, but Arabzadeh et al. [1] demonstrate that this would obscure the multitude of unjudged relevant items (of which there are many). This hurts the efficacy of best-case retrieval evaluation, including reciprocal rank, as shown in Figures 8a and 9. Recommendation tasks have similar issues with sparsity, due in part to it being more difficult for a third party to assess the relevance of personalized content and to the difficulty of gathering explicit feedback. Labels derived from behavioral feedback in general suffer from similar sparsity [2]. In this respect, we echo the call from Arabzadeh et al. [1] to make labeling practices across all of these domains much more robust. Given the observation of Voorhees et al. [21] that better labeling can result in less informative evaluation, we need to also develop more sensitive evaluation schemes such as lexiprecision. Finally, this study has introduced a new preference-based evaluation method for RL1 metrics. As such, our focus has been on developing an understanding for comparing pairs of rankings and systems. We do not claim that lexiprecision itself is a metric and emphasize that we use it for comparing two rankings or systems. As such, although we address some concerns with reciprocal rank raised by Ferrante et al. [10], we do not make claims about lexiprecision being an interval measure. That said, the total ordering shown in Figure 1 suggests that there may be a version of lexiprecision that can indeed be represented as an interval measure.
7" + }, + { + "url": "http://arxiv.org/abs/2204.11400v1", + "title": "Offline Retrieval Evaluation Without Evaluation Metrics", + "abstract": "Offline evaluation of information retrieval and recommendation has\ntraditionally focused on distilling the quality of a ranking into a scalar\nmetric such as average precision or normalized discounted cumulative gain. We\ncan use this metric to compare the performance of multiple systems for the same\nrequest. Although evaluation metrics provide a convenient summary of system\nperformance, they also collapse subtle differences across users into a single\nnumber and can carry assumptions about user behavior and utility not supported\nacross retrieval scenarios. We propose recall-paired preference (RPP), a\nmetric-free evaluation method based on directly computing a preference between\nranked lists. RPP simulates multiple user subpopulations per query and compares\nsystems across these pseudo-populations. Our results across multiple search and\nrecommendation tasks demonstrate that RPP substantially improves discriminative\npower while correlating well with existing metrics and being equally robust to\nincomplete data.", + "authors": "Fernando Diaz, Andres Ferraro", + "published": "2022-04-25", + "updated": "2022-04-25", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "main_content": "INTRODUCTION A fundamental step in the offline evaluation of search and recommendation systems is to determine whether a ranking from one system tends to be better than the ranking of a second system. This often involves, given item-level relevance judgments, distilling each ranking into a scalar evaluation metric \ud835\udf07, such as average precision (AP) or normalized discounted cumulative gain (NDCG). We can then say that one system is preferred to another if its metric values tend to be higher. We present a stylized version of this approach in Figure 1a. Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s). SIGIR \u201922, July 11\u201315, 2022, Madrid, Spain \u00a9 2022 Copyright held by the owner/author(s). ACM ISBN 978-1-4503-8732-3/22/07. 
https://doi.org/10.1145/3477495.3532033
(a) Metric Difference
(b) Direct Preference

Figure 1: Metric Difference versus Direct Preference. System rankings π and π′ are represented as boxes with shaded boxes indicating relevant item positions. A traditional evaluation metric μ such as average precision projects two system rankings to scalar values; the scalar metric difference indicates preference. Direct preference compares ranked lists explicitly, bypassing metric computation. Shaded nodes contrast the focus of research work between metrics and preferences.

Deriving a system preference from a metric difference can be problematic for two reasons. First, evaluation metrics, because they project a ranking onto a scalar value, can lose information about how two rankings differ. Take, as an example, the popular reciprocal rank metric (RR). Because RR only considers the rank position of the first relevant document, its value can be equal for two rankings that share the position of the first relevant document but differ dramatically at lower ranks. Although most salient for RR, metrics with smooth discount functions such as AP and NDCG can still collapse different rankings into the same or very similar scalar value by virtue of their sharp discounts. We refer to this as the problem of low label efficiency. Second, although most evaluation metrics are meant to model the quality of a ranking for system users, they can suggest similarity between systems that actually behave quite differently for different user populations. For example, RR might be an appropriate model for known-item search, but it does not capture higher-recall behaviors like electronic discovery and systematic review [29]. While metrics with smooth discounts can be interpreted as averaging performance across different possible user behaviors, they make very strong assumptions about the distribution of behaviors. We refer to this as the problem of low robustness to user behavior.

We propose recall-paired preference (RPP), an evaluation method that addresses concerns both about label efficiency and robustness to user behavior. For a fixed request, RPP directly computes a preference between systems by modeling how different user subpopulations might prefer one algorithm over another. When aggregating these subpopulation preferences, each is weighted equally, rather than weighting users with lower recall requirements more heavily. We contrast RPP with metric-based evaluation in Figure 1b. By considering the contribution of lower-ranked relevant items, RPP more efficiently exploits available labels, resulting in higher sensitivity and discriminative power between systems compared to metric-based approaches. We analyze RPP across a variety of search and recommendation tasks. Specifically, we show that (i) RPP is correlated with existing ranking metrics, (ii) RPP is equally robust to incomplete evaluation data compared to existing ranking metrics, and (iii) RPP has much higher discriminative power than existing ranking metrics. In particular, RPP's higher discriminative power suggests that preference-based evaluation should be further explored for offline evaluation.

2 MOTIVATION

In order to motivate our work, consider a retrieval scenario with binary relevance. Most ranked list evaluation metrics can be decomposed into a linear function of rank positions of the relevant items.
Given a system ranking π, let f_i be the position of the i-th relevant item. We can define many metrics as

μ(π) = Σ_{i=1}^{m} δ(f_i)

where m is the number of relevant items and δ is a rank discount function (e.g., δ_DCG(i) = 1/log(i+1), δ_RBP(i) = γ^{i−1}). In offline evaluation, we are interested in comparing this value to that of a second ranking π′, using the difference in metric values to define preference. We can expand this difference into a sum of differences over the m positions of the relevant items,

Δμ(π, π′) = Σ_{i=1}^{m} δ(f_i) − Σ_{i=1}^{m} δ(f′_i) = Σ_{i=1}^{m} [δ(f_i) − δ(f′_i)] = Σ_{i=1}^{m} Δμ_i(π, π′)

This disaggregation by recall level lets us observe how the i-th relevant item contributes to a change in Δμ. In Figure 2, we examine the behavior of Δμ_i under different evaluation metrics.

Figure 2: Surface of Δμ_i for comparing differences in the top 25 rank positions (left). Empirical cumulative distribution function for differences observed in all runs submitted to the TREC 2019 Deep Learning document ranking task (right). Rows: RBP, NDCG, sign.

In the left column, we show the relationship between Δμ_i and the position of the i-th relevant item in the pair of ranked lists being compared (i.e. f_i and f′_i); in other words, this is the analytic relationship between f_i, f′_i, and Δμ_i for different evaluation metrics. The first row considers rank-biased precision (RBP) with γ = 0.5 [20]; the second row, NDCG [14]; the last row, Δμ_i = sgn(δ(f_i) − δ(f′_i)), reflecting the preference between rankings by a user seeking to find exactly i relevant items with as little effort as possible. Looking at the surface of Δμ_i for the RBP and NDCG metrics, we observe that, unless min(f_i, f′_i) is small, the value of Δμ_i will be very small; as a result, the summation in Δμ will be dominated by changes in rank position amongst documents at the highest rank positions. This means that the relative preference of systems by users interested in higher recall will be overshadowed by the preferences of users interested in fewer relevant items.
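To make the decomposition concrete, the following minimal Python sketch (ours, not from the paper) computes the per-recall-level contributions Δμ_i from the rank positions of the relevant items in two rankings; the discount functions mirror the DCG- and RBP-style discounts mentioned above, and the example positions are hypothetical.

import math

def disaggregated_differences(positions_a, positions_b, discount):
    # Delta mu_i = delta(f_i) - delta(f'_i) for a metric that decomposes over
    # the positions of the m relevant items shared by the two rankings.
    return [discount(f) - discount(f_prime)
            for f, f_prime in zip(positions_a, positions_b)]

def dcg_discount(rank):
    return 1.0 / math.log2(rank + 1)

def rbp_discount(rank, gamma=0.5):
    return gamma ** (rank - 1)

# Hypothetical example: relevant items at ranks (2, 3, 7) versus (1, 5, 20).
# The first recall level dominates the DCG-style sum of differences.
print(disaggregated_differences([2, 3, 7], [1, 5, 20], dcg_discount))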
In the right column of Figure 2, we show the associated empirical cumulative distribution function of Δμ_i for all runs submitted to the TREC 2019 Deep Learning document ranking task [11]. We can see that the distribution of Δμ_i for RBP and NDCG is dominated by values close to zero. Looking at the sign of these differences in the last row, we observe that, for roughly 93% of the samples, f_i ≠ f′_i. This analysis provides evidence that the discounting of lower-ranked relevant items massively diminishes their contribution to metric differences, resulting in lower label efficiency and robustness to user behavior. In the remainder of this paper, we address this by presenting an alternative evaluation method that more equally incorporates preferences of users with different recall requirements.

3 PRELIMINARIES

We are interested in comparing pairs of rankings for the same request from two different systems. Our methods apply to both search and recommendation contexts. As such, we will refer to queries and user profiles as requests and to documents as items. Given a request q and a corpus of n items, let y ∈ R^n be a vector of item relevance grades, where y_i is the relevance grade of item i. We refer to y as the relevance judgments for q and to any grade greater than 0 as relevant. Given a request q, let π ∈ S_n be a system's ranking of items in the corpus. In cases where a system returns a ranking of only the top k items, we assume a worst-case ordering of the remaining items. For example, if a top-k ranking X omits five items with grades {1, 1, 2, 4, 5} from the retrieval, then we treat these items as ranked in increasing order of utility at the end of the total corpus ranking,

X = 04300010300 . . . 000 (top k) | 0000 . . . 0011245 (bottom n − k)

Similarly, if a top-k ranking Y omits three items with grades {1, 2, 4} from the retrieval,

Y = 30453001100 . . . 000 (top k) | 0000 . . . 0000124 (bottom n − k)

This worst-case assumption provides a conservative lower bound on system performance. When provided with two rankings π and π′ for the same request, we are interested in determining whether we prefer π to π′, indicated as π ≻ π′. Metric-based evaluation leverages a function μ : S_n → R that computes the quality of a ranking and defines a preference as

μ(π) > μ(π′) → π ≻ π′
μ(π) < μ(π′) → π ≺ π′

Our work considers preference-based evaluation, an approach that directly defines a function Δ : S_n × S_n → R such that

Δ(π, π′) > 0 → π ≻ π′
Δ(π, π′) < 0 → π ≺ π′

In this section, we will review and extend three concepts from the existing literature: position-based metrics, pseudo-populations, and preference-based evaluation.
In Section 4, we will synthesize these concepts into our new evaluation method, recall-paired preference.

3.1 Position-Based Evaluation Metrics

Let the search length f_i(π) be the position of the i-th ranked relevant item in π. (Footnote: This is slightly different from Cooper's definition of search length [10], which counts the number of non-relevant items above the i-th relevant item, i.e. f_i(π) − i.) In our example, f_1(X) = 2, f_2(X) = 3, f_3(X) = 7, and so forth. For clarity, we use f_i to refer to f_i(π) and f′_i to refer to f_i(π′). Position-based evaluation metrics adopt the principle of minimal effort: for a user interested in i relevant items,

f_i < f′_i → π ≻ π′

Cooper [10] describes different types of users who may be interested in different levels of recall i. We refer to situations where a user is looking for one relevant item as precision-oriented. Historically, the reciprocal of the rank of the first relevant item (i.e. 1/f_1) has been used for such tasks. We call f_1 the initial search length (ISL); it is related to Cooper's 'Type 1 search length.' We refer to situations where a user is looking for all of the relevant items as recall-oriented. Assuming m relevant items, we refer to f_m, the position of the last relevant item, as the total search length (TSL). This is related to Cooper's 'Type 3 inclusive search length.' In situations where we are uncertain whether the user is precision- or recall-oriented, we can compute the expectation over all possible recall orientations. We can express the average search length (ASL) as

ASL(π) = E_i[f_i]    (1)

Rocchio [23] refers to this as the average rank metric. More recently, ASL has been used in the context of unbiased learning to rank [16].

3.2 Pseudo-Populations

In the previous section, we described metrics that operate under the assumption that a user is interested in precisely i relevant items. In order to build on this work, we turn to how we can model different possible subpopulations of users who desire different numbers of relevant items. Similar to Cooper, Robertson [21] suggested that a user may have a specific recall requirement when interacting with an information access system. In other words, for a request with m relevant items, a user may be in one of m recall conditions. We refer to U_i as the pseudo-population interested in i relevant items, and p(i) = p(u ∈ U_i) is the probability of a user seeking exactly i relevant items. Robertson observed that, if p(i) = 1/m, then AP can be interpreted as the expected precision for users from these pseudo-populations,

AP(π) = E_i[i / f_i]

where i/f_i is the precision for users with a recall requirement of i.
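As a small illustration (our own sketch, not code from the paper), the position-based quantities above reduce to simple operations on the list of search lengths f_1, ..., f_m, and AP with uniform p(i) is exactly the expectation E_i[i / f_i]:

def initial_search_length(positions):
    # ISL: position of the first relevant item, f_1
    return positions[0]

def total_search_length(positions):
    # TSL: position of the last relevant item, f_m
    return positions[-1]

def average_search_length(positions):
    # ASL = E_i[f_i] with a uniform distribution over recall levels (Eq. 1)
    return sum(positions) / len(positions)

def average_precision(positions):
    # AP read as the expected precision i / f_i over uniform recall requirements
    return sum(i / f for i, f in enumerate(positions, start=1)) / len(positions)

# e.g. with search lengths [2, 3, 7] this returns (1/2 + 2/3 + 3/7) / 3
print(average_precision([2, 3, 7]))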
Sakai and Robertson [25] extended this to arbitrary metrics, defining Normalized Cumulative Precision (NCP) as

E_i[μ_i(π)] = Σ_{i=1}^{m} p(i) μ_i(π)

where μ_i(π) is a partial recall metric based on the recall level i. Example partial recall metrics include μ_i(π) = i / f_i (i.e. precision at f_i, as used in AP) and μ_i(π) = f_i (as used in ASL). The distribution p(i) gives us a place where we can explicitly encode any information we have about user recall requirements. For example, when we do not know the distribution of recall requirements of users, or if we want robust performance across a variety of possible recall requirements, a uniform distribution over all recall levels (i.e. p(i) = 1/m) might be appropriate. (Footnote: Two other metrics, bpref [4] and atomized search length [2], compute an expectation over relevant items but, like Cooper, use the number of preceding nonrelevant items. By linearity of expectation, we observe that E_i[f_i − i] = E_i[f_i] − E_i[i], which Rocchio [23] refers to as the recall error. For a fixed query, this value is rank-equivalent to E_i[f_i] = ASL(π).) In situations where we have information about the true distribution of recall requirements or want to emphasize performance for a specific user behavior, we can adopt a non-uniform distribution for p(i). For example, 'position bias' can be reflected in a definition of p(i) that monotonically decreases with i. In contrast with most existing models, this distribution is over recall levels, as opposed to rank positions. We can also consider pseudo-populations generated from other available information. For example, when we have multiple possible relevance grades, we can consider a pseudo-population of users satisfied by an item if its grade is above some threshold [22]. The utility for the pseudo-population of users interested in items with at least grade λ is

E_i[μ_i(π) | λ] = Σ_{i=1}^{m_λ} p(i | λ) μ_{i,λ}(π)

where m_λ is the number of items with a grade of at least λ, p(i | λ) is the probability that a user in this population seeks exactly i relevant items, and μ_{i,λ}(π) is the binary partial recall metric assuming only items with grade greater than λ are relevant. We will use λ = 1 to refer to users who consider relevance as binary. In situations where items are associated with attributes such as genres or per-request subtopics, we can define pseudo-populations based on these categories [1].
As a result, the utility for the pseudo-population of users interested in category t is

E_i[μ_i(π) | t] = Σ_{i=1}^{m_t} p(i | t) μ_{i,t}(π)

where p(i | t) defines the probability that a user interested in subtopic t seeks i relevant items and μ_{i,t}(π) is the binary partial recall metric assuming only items from subtopic t are relevant.

3.3 Preference-Based Evaluation

Most existing approaches, including those in Sections 3.1 and 3.2, collapse the performance of a system into a single scalar number. An alternative to comparing metrics is to compare rankings directly. Preference-based evaluation assigns, for a pair of rankings, a preference between them. (Footnote: We note that preference-based evaluation differs from evaluating with item preferences, which often still collapses rankings into a single scalar number [6, 9].) Traditionally, we elicit this preference from human judges in an interface that presents two rankings alongside each other [17, 27]. Sanderson et al. [26] demonstrated that this approach correlated well with metric-based evaluation across a variety of retrieval scenarios. In the context of online experimentation, interleaving combines pairs of rankings and computes a preference between them based on user clicks [15]. Given a request from a user, two rankings π and π′ are randomly interleaved so as to simulate a choice experiment for the user. The user then inspects the ranking, clicking on relevant items. We say that the ranking π is preferred to π′ if it retrieves more clicked items at a rank cutoff k, a value based on the position of the last-clicked item. Because of randomness in both user behavior (e.g. their recall requirement) and the interleaving process itself, we can model k as a random variable. As a result, for a fixed query, we can define the interleaving preference as

I(π, π′) = E_k[sgn(ψ_k(π) − ψ_k(π′))]    (2)

where ψ_k(π) is a partial precision metric based on the rank position k; we contrast this with partial recall metrics, which are based on recall level. Example partial precision metrics include ψ_k(π) = |{f_i | f_i ≤ k}| / k (i.e. 'precision at k', as used in interleaving). In online evaluation, we compute the interleaving metric by empirically estimating Equation 2 from sampled user requests and clicks on interleaved rankings. Chapelle et al. [8] demonstrated the sensitivity of interleaving experiments across a variety of online search scenarios.

Figure 3: Recall-paired preference. A user interested in i relevant items will prefer the ranking that lets them satisfy their need with minimal effort.

4 RECALL-PAIRED PREFERENCE

We are now ready to combine the concepts from Section 3 into a new preference-based evaluation method. We begin by describing conceptually how we compare two rankings. Consider our example rankings X and Y introduced in Section 3.
First, we sample a user u based on our distribution over pseudo-populations. By appealing to the principle of minimal effort (Section 3.1), we can infer which ranking u prefers. For example, if we sample a user from U_1^{λ=1}, then f_1(X) = 2 > 1 = f_1(Y) and, since we prefer higher ranks, Y ≻ X. If we sample a user from U_1^{λ=4}, then, using f_{1,λ=4}(π) to represent the position of the first ranked item with relevance grade at least 4, f_{1,λ=4}(X) = 2 < 3 = f_{1,λ=4}(Y) and X ≻ Y. We can repeatedly sample users, incrementing an accumulator by 1 if X ≻ Y and decrementing it by 1 if X ≺ Y. If the value of the accumulator is positive, we say that X ≻ Y; if it is negative, then X ≺ Y. This is equivalent to computing the expected preference across the m paired positions of relevant items (Figure 3). Because we pair items according to equivalent recall levels, we refer to this metric as recall-paired preference (RPP). More formally, for binary relevance and no subtopics, we define RPP between two rankings as the expected value of the preference,

RPP(π, π′) = E_i[sgn(f′_i − f_i)]    (3)
           = Σ_{i=1}^{m} p(i) × sgn(f′_i − f_i)    (4)

where p(i) is the probability of a user seeking exactly i relevant items. RPP takes a value in [−1, 1], where positive values indicate a stronger preference for π, negative values a preference for π′, and zero indicates indifference. Moreover, RPP is a preference, so RPP(π, π′) = −RPP(π′, π). In practice, when we refer to RPP, we will use the graded version,

RPP(π, π′) = Σ_{λ∈Λ} Σ_{i=1}^{m} p(i, λ) × sgn(f′_{i,λ} − f_{i,λ})    (5)

where Λ is the set of all possible grades for this request and f_{i,λ} is the rank position of the i-th relevant item with grade of at least λ. In the binary relevance case, this reduces to Equation 4. The subtopic-aware version of RPP can be similarly defined,

ST-RPP(π, π′) = Σ_{t∈T} Σ_{i=1}^{m} p(i, t) × sgn(f′_{i,t} − f_{i,t})    (6)

where T is the set of all possible subtopics for this request and f_{i,t} is the rank position of the i-th relevant item with subtopic t.

4.1 Comparison to Existing Metrics

In this section, we compare RPP to the methods presented in Section 3. To begin, we can compare RPP with metric-based evaluation by analyzing how the disaggregated metric Δμ_i (Section 2) changes as a function of i.
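For concreteness, here is a minimal Python sketch (ours, not the authors' released code) of the binary-relevance RPP in Equation 4, assuming the rank positions of the m relevant items are known for both rankings:

def rpp(positions_a, positions_b, weights=None):
    # positions_a / positions_b: ranks of the m relevant items in each ranking,
    # sorted ascending so that index i - 1 corresponds to recall level i.
    # weights: optional p(i) over recall levels; defaults to uniform p(i) = 1/m.
    # Positive values favour the first ranking, negative the second, zero is a tie.
    m = len(positions_a)
    if weights is None:
        weights = [1.0 / m] * m
    sign = lambda x: (x > 0) - (x < 0)
    return sum(p_i * sign(f_prime - f)
               for p_i, f, f_prime in zip(weights, positions_a, positions_b))

The graded and subtopic variants in Equations 5 and 6 follow the same pattern, summing this quantity over positions restricted to each grade λ or subtopic t with weights p(i, λ) or p(i, t).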
Figure 4a contains the disaggregated values for several standard evaluation metrics. These expressions encode the relative weight allocated to different pseudo-populations U_i. Standard evaluation metrics such as RBP, AP, DCG, and RR all observe the largest contribution to metric differences at the highest rank positions. Even AP, which modulates the difference in inverse positions with a multiplicative recall-level factor i, is dominated by diminishing differences at low ranks. This confirms our claim of poor label efficiency (since relevant items at lower rank positions can contribute less) and poor robustness to user behavior (since the performance difference for users interested in higher recall levels is negligible). RR, ISL, and TSL also exhibit poor label efficiency and robustness to user behavior since, by design, they exclude all but a single difference. Meanwhile, ASL will tend to have the opposite effect of emphasizing differences at lower ranks since, at these higher recall levels, f_i and f′_i are likely to be separated by many more rank positions than at earlier recall levels. In contrast, for RPP, the disaggregated magnitude is fixed across recall levels, resulting in both label efficiency (since all relevant items contribute equally regardless of rank position) and robustness to user behavior (since the performance differences for users at all recall levels contribute equally).

Figure 4: Disaggregated metric behavior.
(a) Disaggregated recall-paired metrics (metric: Δμ_i):
RBP [20]: γ^{f_i} − γ^{f′_i}
AP: i (1/f_i − 1/f′_i)
NDCG [14]: 1/log2(f_i + 1) − 1/log2(f′_i + 1)
RR: 1/f_i − 1/f′_i, (i = 1)
ISL [10]: f′_i − f_i, (i = 1)
TSL [10]: f′_i − f_i, (i = m)
ASL, bpref [2, 4, 23]: f′_i − f_i
RPP: sgn(f′_i − f_i)
(b) Metric differences for two systems' rankings of one request with six relevant items: uninterpolated precision-recall curves for a and b (left) and the disaggregated metric difference Δμ_i(a, b) by recall level for RPP, AP, RR, RBP, DCG, and ASL (right).

To illustrate the implication of these position biases, we can see how Δμ_i changes for two hypothetical rankings, a and b. In the left-hand plot of Figure 4b, we depict the uninterpolated precision-recall curves for a and b. Whereas AP approximates the area under the uninterpolated precision-recall curve, RPP can be interpreted as a sign test at sampled recall levels, similar to approaches taken for ROC curves [3]. In the right-hand plot of Figure 4b, we present Δμ_i(a, b) as a function of i. Notice that almost every traditional metric, including 'recall-oriented' metrics like AP, allocates most of the mass to the top position. Conversely, ASL allocates more weight to later values of i. RPP, on the other hand, treats all recall levels equally.
Although position-based metrics like ASL more evenly allocate weight across recall levels, Magdy and Jones [19] note that the range of this metric can be quite large, sensitive to outliers, and difficult to reason with. Unretrieved items, often assumed to be ranked at the bottom of a ranking of the full corpus (Section 3), can exacerbate the variance in the metric, even for moderately sized corpora. In contrast, RPP, because it only considers relative positions, can be computed without an exact corpus size. In comparison to interleaving, while Equations 2 and 4 appear similar, there are a few important differences. First, the event space for interleaving is the set of all rank positions rather than the set of all recall levels. This subtle difference means that interleaving emphasizes the number of relevant items collected at a rank position rather than the effort taken to collect the same number of relevant items. Moreover, in online interleaving, because p(k) is derived from behavioral data, it will be skewed toward higher rank positions (i.e. users introduce position bias) and preferences at lower positions will be overshadowed by those in higher positions.

5 EXPERIMENTS

Our main thesis is that RPP, by more uniformly measuring performance across recall levels, more efficiently uses relevance labels in evaluation compared to existing retrieval metrics. As such, our experiments are centered around three questions: (i) how does RPP correlate with existing metrics? (ii) how robust is RPP to incomplete data? and (iii) how effective is RPP at discriminating between runs? In order to answer these questions, we use a variety of information access scenarios covering both search and recommendation tasks.

5.1 Data

We present details of the data used in our experiments in Table 1. We include runs submitted to multiple TREC tracks, including the Deep Learning Document Ranking (2019, 2020), Deep Learning Passage Ranking (2019, 2020), Common Core (2017, 2018), Web (2009–2014), and Robust (2004) tracks. We downloaded all data from NIST, including runs and relevance judgments. Web track data includes subtopic judgments. Additionally, we used a variety of recommendation system runs prepared by Valcarce et al. [28] for the MovieLens 1M, LibraryThing, and Beer Advocate datasets (https://github.com/dvalcarce/evalMetrics). Consistent with their work, we converted graded judgments to binary labels by considering any rating below 4 as nonrelevant and otherwise relevant.

Table 1: Datasets used in experiments. Columns: requests, runs, rel/request, subtopics/request.
core (2017) 50, 75, 180.04, 0; core (2018) 50, 72, 78.96, 0
deep-docs (2019) 43, 38, 153.42, 0; deep-docs (2020) 45, 64, 39.27, 0; deep-pass (2019) 43, 37, 95.40, 0; deep-pass (2020) 54, 59, 66.78, 0
web (2009) 50, 48, 129.98, 4.98; web (2010) 48, 32, 187.63, 4.17; web (2011) 50, 62, 167.56, 3.36; web (2012) 50, 48, 187.36, 3.90; web (2013) 50, 61, 182.42, 3.18; web (2014) 50, 30, 212.58, 3.12
robust 249, 110, 69.93, 0
ml-1M 6005, 21, 18.87, 0; libraryThing 7227, 21, 13.15, 0; beerAdvocate 17564, 21, 13.66, 0

5.2 Methods

In order to measure the similarity between RPP and metric-based approaches, we measured the Kendall's τ correlation between the system ordering by RPP and the system ordering by baseline metrics (Section 5.5). We describe how to compute an ordering of systems from RPP in Section 5.4. We evaluated the robustness to incomplete data under two conditions. Our first experiment tests how well a metric with fewer judged requests can order systems compared to the same metric with the complete set of judged requests.
This simulates the scenario where we have a paucity of requests but, for those requests, we have ample labeled items. Our second experiment tests how well a metric with fewer judgments per request can order systems compared to the same metric with the complete set of judgments. This simulates the scenario where we have ample requests but sparse judgments for each request. In order to evaluate the sensitivity of a metric, we adopt Sakai's method of computing discriminative power [24]. For a single dataset (row in Table 1), we compute the RPP or metric differences for all pairs of runs over all requests. We then measure what fraction of system pairs achieve a p-value lower than 0.05 for each metric. In order to compute p-values, we use two methods: a Student's t-test with Bonferroni correction and Tukey's honestly significant difference (HSD) test. We adopt the randomized HSD as proposed by Carterette [7].

5.3 RPP Variants

For binary relevance with no subtopics, we consider several definitions of p(i). In addition to p(i) = 1/m, we include versions that consider non-uniform, top-heavy distributions of pseudo-populations,

p_DCG(i) ∝ 1 / log2(i + 1)
p_inverse(i) ∝ 1 / i

which reflect the rank importance for NDCG and RR. Note that these discounts are a function of recall level, rather than rank position. When we adopt graded RPP for evaluation, we assume independence between recall requirements and grade, p(i, λ) = p(i) p(λ), and define p(λ) for λ ∈ Λ as

p(λ) ∝ |{i | y_i ≥ λ}|

When conducting subtopic evaluation, we again assume independence between recall requirements and subtopic, p(i, t) = p(i) p(t), with p(t) defined as

p(t) ∝ |{i | y_i > 0 ∧ t ∈ s_i}|

where s_i ⊆ T indicates the subtopics associated with item i. In addition to these |T| pseudo-populations, we consider a background interest t* pseudo-population satisfied by any subtopic (i.e. standard relevance, p(t*) ∝ |{i | y_i > 0}|).

5.4 Aggregating RPP

RPP gives us a preference between a pair of rankings for the same request, but we are often interested in generating an ordering of more than two systems. Given a set of runs S̃_n for the same request, we can compute the win rate for a ranking π ∈ S̃_n as

RPP_{S̃_n}(π) = Σ_{π′ ∈ S̃_n} RPP(π, π′)    (7)

We can then use a preference aggregation scheme to order systems for a set of queries Q. In experiments where we need an ordering of systems for a set of requests, we adopt Markov chain aggregation, due to its effectiveness across a variety of domains [12]. Note that RPP_{S̃_n}(π) reflects the relative position of π within S̃_n and may vary across different sets of runs.
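The recall-level distributions and the win-rate aggregation above can be sketched as follows; this is a hedged illustration with hypothetical helper names, not the paper's implementation.

import math

def p_uniform(m):
    return [1.0 / m] * m

def p_dcg(m):
    # p_DCG(i) proportional to 1 / log2(i + 1), normalized over recall levels
    w = [1.0 / math.log2(i + 1) for i in range(1, m + 1)]
    total = sum(w)
    return [x / total for x in w]

def p_inverse(m):
    # p_inverse(i) proportional to 1 / i, normalized over recall levels
    w = [1.0 / i for i in range(1, m + 1)]
    total = sum(w)
    return [x / total for x in w]

def rpp_win_rate(ranking, other_rankings, preference):
    # Eq. 7: sum of pairwise preferences of `ranking` against every other run
    # for the same request; `preference` can be the rpp() sketch given earlier.
    return sum(preference(ranking, other) for other in other_rankings)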
5.5 Baseline Metrics As baseline metrics, we used AP, NDCG, and RR with no rank cutoff as implemented in NIST trec_eval.5 We implemented ASL by moving unranked items to the end of the corpus, using a corpus size equal to the number of unique items in union of all rankings and the relevant items. For subtopic metrics, we use intent-aware mean average precision (MAP-IA) with no rank cutoff and intent-aware expected reciprocal rank (ERR-IA) and subtopic recall (strec), both 5https://github.com/usnistgov/trec_eval \fTable 2: Correlation with Existing Metrics. Kendall\u2019s \ud835\udf0fbetween rankings of runs for pairs of metrics averaged across all datasets. (a) Single Topic Metrics invRPP RPP dcgRPP AP NDCG RR 0.61 0.45 0.49 0.49 0.50 invRPP 0.79 0.85 0.82 0.80 RPP 0.93 0.86 0.87 dcgRPP 0.88 0.88 AP 0.89 (b) Subtopic Metrics st-dcgRPP MAP-IA st-invRPP ERR-IA strec st-RPP 0.88 0.73 0.70 0.26 0.30 st-dcgRPP 0.77 0.79 0.34 0.36 MAP-IA 0.74 0.39 0.36 st-invRPP 0.52 0.49 ERR-IA 0.66 with a rank cutoff of 20, as adopted for Web tracks and implemented in ndeval.6 In order to compare with interleaving, we developed offline interleaving (OI) based on a simulated user. Carterette [5] observed that many existing metric definitions implicitly include a model of user behavior. As such, given relevance information and a pair of system rankings to compare, we can simulate user interaction and compute an offline interleaving preference. To do so, given two rankings \ud835\udf0band \ud835\udf0b\u2032, we can generate the two possible interleaved rankings \u02dc \ud835\udf0band \u02dc \ud835\udf0b\u2032. Then, we can use a browsing model and relevance information to simulate an online interleaving experiment and estimate Equation 2. 6 RESULTS 6.1 Correlation with Existing Metrics In order to get a sense of the relationship between baseline metric differences and RPP, we sampled pairs of runs for random queries in the Robust dataset. We computed baseline metrics for each ranking and then plotted the metric differences against the RPP for the same pair of runs for the same query (Figure 5). Although the sign agreement of RPP with AP, NDCG, and ASL is close to 0.90, it drops to 0.50 for RR. This result is consistent with the sign agreement between RR and AP (0.56), NDCG (0.58), and ASL (0.47). These results can be explained by two properties of RR. First, because RR ignores recall levels higher than 1, metrics that measure higher recall levels will incorporate information that can reverse the order of systems. Second, the small number of unique RR values results in a number of ties between systems which are often resolved by metrics that consider more recall levels. The sign disagreement between RPP and AP, NDCG, and ASL tends to occur for small differences in performance, with metrics largely agreeing for dramatic differences in performance. This indicates that, even when the top ranked relevant items largely agree in classic metrics, there is enough disagreement at higher recall levels to differ from RPP. 
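As a concrete illustration of the offline interleaving (OI) procedure from Section 5.5, whose preferences are compared against RPP below, here is a minimal simulation sketch. The alternating interleaving policy, the geometric stopping model, and the function names are assumptions made for illustration; they are not necessarily the exact user model used in the experiments.

```python
import random

def interleave(pi_a, pi_b, a_first=True):
    """Alternate between the two rankings, skipping duplicates."""
    out, seen = [], set()
    sources = (pi_a, pi_b) if a_first else (pi_b, pi_a)
    labels = ("A", "B") if a_first else ("B", "A")
    idx, turn = [0, 0], 0
    while idx[0] < len(sources[0]) or idx[1] < len(sources[1]):
        src, lab, i = sources[turn], labels[turn], idx[turn]
        while i < len(src) and src[i] in seen:
            i += 1
        if i < len(src):
            out.append((src[i], lab))
            seen.add(src[i])
            idx[turn] = i + 1
        else:
            idx[turn] = len(src)
        turn = 1 - turn
    return out

def simulate_preference(pi_a, pi_b, relevant, patience=0.8, trials=1000, seed=0):
    """Simulate browsing over both possible interleavings; return the normalized
    credit difference between ranking A and ranking B."""
    rng = random.Random(seed)
    credit = {"A": 0, "B": 0}
    for t in range(trials):
        ranking = interleave(pi_a, pi_b, a_first=(t % 2 == 0))
        for doc, lab in ranking:
            if doc in relevant:
                credit[lab] += 1      # simulated click on a relevant item
            if rng.random() > patience:
                break                  # user stops scanning
    total = credit["A"] + credit["B"]
    return 0.0 if total == 0 else (credit["A"] - credit["B"]) / total

# toy usage
relevant = {"d1", "d2"}
print(simulate_preference(["d1", "d3", "d2"], ["d3", "d1", "d2"], relevant))
```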
[Footnote 6: https://github.com/trec-web/trec-web-2014] [Figure 5: Query-level metric differences for sampled runs from Robust; panels plot RPP against ΔAP (r=0.78; sa=0.90), ΔDCG (r=0.68; sa=0.88), ΔRR (r=0.50; sa=0.51), ΔASL (r=0.81; sa=0.89), OI (r=0.98; sa=0.95), and dcgOI (r=0.94; sa=0.92). Points in red indicate a difference in run ordering. Titles include the Pearson correlation (r) and the fraction of points where RPP and the metric difference agree in sign (sa).] We implemented offline interleaving with two different user models, uniform and DCG-based. We present the relationship between OI and RPP preferences in the bottom row of Figure 5. We found that the sign agreement was higher between RPP and OI than between RPP and the baseline metrics. Moreover, the relationship between the preference magnitudes shows a strong linear correlation (r = 0.98, p < 0.001). For OI with a DCG-style user model, the agreement and correlation degrade (r = 0.94, p < 0.001) but remain higher than for the baseline metrics. In addition to pairwise preference agreement, we were interested in the similarity between an ordering of runs induced from RPP preferences (Section 5.4) and an ordering induced from baseline metrics. To this end, we computed the Kendall's τ between the rankings of runs for each dataset. We present the average correlation across these datasets in Table 2a. We first notice that RPP with uniform position weighting (labeled RPP) correlates with AP and NDCG at a level close to how those metrics correlate with each other. If we replace the uniform position weighting with a DCG-style non-uniform position weighting (dcgRPP), this correlation improves, approaching the level at which AP and NDCG correlate with each other. As suggested by Figure 5, the correlation between RR and RPP, AP, and NDCG is low. Although the correlation of dcgRPP with RR (0.49) is higher than that of RPP with RR (0.45), it remains comparable to that of RR with AP and NDCG. Using the reciprocal rank for the position discount (invRPP) improves this correlation further (0.61), suggesting that our pseudo-population modeling works as expected. That said, we do not expect this metric to correlate perfectly with RR, since invRPP considers items below the first relevant item. We also include correlations for subtopic metrics in Table 2b. We observe similar patterns to the single topic metrics. MAP-IA, a subtopic version of AP, correlates well with the subtopic versions of RPP and dcgRPP. Top-heavy metrics ERR-IA and strec, on the other hand, correlate well with the subtopic version of invRPP, consistent with the earlier results. Taken together, these results indicate that RPP metrics effectively capture a variety of aspects of baseline metrics but do not correlate perfectly, suggesting that they add information to evaluation. Moreover, they demonstrate the ability to adapt RPP to different scenarios (e.g. position bias, novelty). 6.2 Robustness to Incomplete Data Because missing data is common in offline evaluation, we explored the behavior of RPP under two degradation schemes.
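The two schemes follow the protocol of Section 5.2 and can be sketched as follows. Here `metric` is a placeholder for any per-request evaluation function (an RPP win rate, AP, NDCG, and so on), the run and qrel data structures are assumed dictionaries, and SciPy's `kendalltau` provides the rank correlation.

```python
import random
from scipy.stats import kendalltau

def system_ordering(runs, qrels, metric, requests):
    """Order system names by their mean metric over the given requests."""
    scores = {
        name: sum(metric(run[q], qrels[q]) for q in requests) / len(requests)
        for name, run in runs.items()
    }
    return sorted(scores, key=scores.get, reverse=True)

def tau_with_fewer_requests(runs, qrels, metric, k, seed=0):
    """Scheme 1: keep only k judged requests and compare system orderings."""
    rng = random.Random(seed)
    all_q = sorted(qrels)
    sample = rng.sample(all_q, k)
    full = system_ordering(runs, qrels, metric, all_q)
    degraded = system_ordering(runs, qrels, metric, sample)
    ranks_full = {s: i for i, s in enumerate(full)}
    ranks_deg = {s: i for i, s in enumerate(degraded)}
    systems = sorted(runs)
    tau, _ = kendalltau([ranks_full[s] for s in systems],
                        [ranks_deg[s] for s in systems])
    return tau

def drop_judgments(qrels, frac_missing, seed=0):
    """Scheme 2: remove a fraction of judged items from every request; the degraded
    qrels can then be fed back through system_ordering and kendalltau as above."""
    rng = random.Random(seed)
    degraded = {}
    for q, judged in qrels.items():
        items = sorted(judged)
        keep = max(1, int(round((1.0 - frac_missing) * len(items))))
        degraded[q] = {d: judged[d] for d in rng.sample(items, keep)}
    return degraded
```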
Due to space constraints, we provide representative results for news search (Robust), web search (2020 Deep Learning Passage Ranking), and recommendation (MovieLens 1M). We show the sensitivity of results when evaluating with fewer requests in Figure 6. For each metric, we calculate the correlation between system rankings with missing requests and system rankings with all requests; an insensitive metric will have higher correlation with fewer requests. Consistent with other work, across all datasets, RR correlation degrades the fastest, suggesting that removing a few requests can alter system ordering. In general, the RPP family of metrics degrades as gracefully as or better than existing metrics, AP and NDCG. We show the sensitivity of results when evaluating with fewer labeled items in Figure 7. Here, for each metric, we calculate the correlation between system rankings with missing labels and system rankings with all labels. As with missing requests, RR degrades poorly across datasets, which is expected since the removal of the top ranked item is likely to substantially perturb performance (Section 6.1). AP also degrades poorly, especially when more than 50% of the judgments are missing. RPP variants degrade more gracefully and are comparable to NDCG, a metric considered less sensitive to missing label [28]. 6.3 Discriminative Power We present measurements of discriminative power in Tables 3 and 4. Although results are largely consistent for both the HSD test and \ud835\udc61-test, we include both to further support our analysis. One fundamental impact we should expect with poorer label efficiency is a reduced ability to distinguish pairs of systems. Across almost all datasets, we observe that RPP-style preferences have substantially more discriminative power compared to baseline metrics. Both AP and RR tend to have lower discriminative power than NDCG, consistent with previous results [28]. The low discriminative power of RR certainly arises from both the poor label efficiency and the large number of ties (Section 6.1). And, although nonuniform position-weighting (dcgRPP, invRPP) sometimes improves discriminative power slightly, uniform position-weighting (RPP) consistently has high discriminative power compared to baseline metrics. The discriminative power of RPP is present in subtopic evaluations as well, at times dramatically so compared to existing subtopic metrics. We note that this may be, in part, due to the addition of a background pseudo-population reflecting binary relevance. We observe that the number of detectable differences improves for all methods when more requests are present (e.g. ml-1M, libraryThing, beerAdvocate). This should be expected since, regardless of metric, more evaluation data will result in better performance estimates. That said, even in these regimes, RPP-style evaluation is more sensitive. Moreover, if conducting segment analysis (e.g. for fairness evaluation), under-represented groups, by definition, will have substantially less data. Although the discriminative power of RPP is not alone sufficient to demonstrate effectiveness, it does provide an important property when considering it for model development or evaluation. Moreover, given that RPP and its position-weighted variants correlate well with existing metrics, these results suggest that the RPP family may be a more sensitive set of instruments for the same phenomenon. 
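For reference, the discriminative power computation from Section 5.2 reduces to a short script. This sketch uses a paired t-test with Bonferroni correction over all pairs of runs; the randomized Tukey HSD variant used for Tables 3 and 4 is omitted, and `per_topic_scores` is an assumed input mapping each run to its per-request metric or preference values.

```python
from itertools import combinations
from scipy.stats import ttest_rel

def discriminative_power(per_topic_scores, alpha=0.05):
    """Fraction of run pairs whose difference is significant after Bonferroni correction."""
    runs = sorted(per_topic_scores)
    pairs = list(combinations(runs, 2))
    threshold = alpha / len(pairs)  # Bonferroni-corrected per-pair significance level
    significant = 0
    for a, b in pairs:
        _, p = ttest_rel(per_topic_scores[a], per_topic_scores[b])
        if p < threshold:
            significant += 1
    return significant / len(pairs)

# toy usage: three runs evaluated on five requests
scores = {
    "run_a": [0.2, 0.4, 0.3, 0.5, 0.4],
    "run_b": [0.1, 0.2, 0.2, 0.3, 0.2],
    "run_c": [0.2, 0.4, 0.3, 0.5, 0.5],
}
print(discriminative_power(scores))
```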
7 DISCUSSION Our experiments were designed to understand whether RPP captures properties of existing metrics with the benefit of added sensitivity due to better label efficiency. Our correlation results (Section 6.1) support the claim that RPP and its variants measure aspects similar to existing metrics, while our experiments with incomplete data (Section 6.2) demonstrate that RPP is as robust to missing data as NDCG, an existing metric known for this robustness. Our strongest result suggests that, while capturing these properties of existing metrics, RPP is substantially more sensitive (Section 6.3). As a result, RPP can complement the existing suite of evaluation metrics, including less sensitive but more realistic metrics based on a domain's user behavior. Our philosophy in designing RPP was to minimize the number of assumptions about user behavior, while remaining flexible enough to model them as needed. We demonstrated that incorporating user models through, for example, p(i) could increase correlation with existing metrics (Section 6.1) while maintaining RPP's strong discriminative power (Section 6.3). We believe that careful incorporation of models of user behavior can further improve the grounding of RPP while preserving its discriminative power. For example, referring to Figure 1b, given a labeled preference, we can imagine learning the weights on different pseudo-populations. Labeled preferences could come from editorial data or from behavioral data such as an interleaving experiment. The former would be similar to the approach taken by Hassan Awadallah and Zitouni [13] for top k rankings. [Figure 6: Kendall's τ of a system ranking given k requests with a system ranking given all requests; panels: (a) Robust, (b) Deep Learning Passage Ranking (2020), (c) MovieLens 1M; curves: RPP, dcgRPP, invRPP, NDCG, AP, RR; horizontal axis: number of requests; vertical axis: τ. Only the first 100 users are shown for ml-1M since metrics converge quickly after.] [Figure 7: Kendall's τ of a system ranking given missing judgments with a system ranking given all judgments; panels: (a) Robust, (b) Deep Learning Passage Ranking (2020), (c) MovieLens 1M; curves: RPP, dcgRPP, invRPP, NDCG, AP, RR; horizontal axis: missing (%); vertical axis: τ.] Table 3: Percentage of run differences detected at p < 0.05 using Tukey's HSD test.
(a) Single Topic Metrics RPP dcgRPP invRPP AP NDCG RR core (2017) 49.91 49.44 40.76 36.54 39.21 14.67 core (2018) 52.35 49.22 42.33 35.29 34.66 27.97 deep-docs (2019) 42.53 40.83 30.01 33.57 41.11 6.97 deep-docs (2020) 18.85 19.35 17.31 17.71 18.06 2.88 deep-pass (2019) 42.34 42.19 36.34 30.03 26.73 6.76 deep-pass (2020) 45.70 47.34 45.82 30.10 20.87 22.21 web (2009) 32.00 33.69 35.02 18.35 31.56 23.23 web (2010) 43.75 39.31 28.43 27.82 37.50 18.15 web (2011) 45.16 43.36 34.06 31.62 40.61 14.17 web (2012) 41.05 36.44 23.49 26.06 32.09 12.77 web (2013) 48.42 44.10 22.30 29.02 41.53 5.96 web (2014) 48.74 48.28 36.55 44.14 49.89 18.85 robust 63.47 60.52 51.08 52.89 54.53 22.84 ml-1M 94.29 94.29 92.38 88.57 88.57 83.81 libraryThing 96.67 97.14 97.62 95.24 95.71 93.33 beerAdvocate 94.76 94.76 95.71 93.33 94.29 91.90 (b) Subtopic Metrics ST-RPP ST-R ERR-IA MAP-IA web (2009) 29.43 28.99 24.20 17.11 web (2010) 40.12 28.02 22.18 20.77 web (2011) 46.11 17.93 16.45 26.12 web (2012) 44.15 12.15 16.58 25.09 web (2013) 49.89 4.48 5.52 24.04 web (2014) 50.11 12.87 17.93 40.46 Chapelle et al. [8] demonstrated the sensitivity of interleaving experiments across a variety of online search scenarios. At the same time, the distribution \ud835\udc5d(\ud835\udc58) is likely to be skewed toward top rank positions, resulting in an under-weighting of higher values of \ud835\udc58in Equation 2. Because this can result in lower label efficiency, using a more uniform \ud835\udc5d(\ud835\udc58) could improve the sensitivity of online interleaving. Although we have presented RPP as a way to evaluate systems, how to optimize RPP is an area for future research. On the one hand, uniformly weighing the importance of different recall levels is similar to methods that train models with a sequence of tasks based on a sampled relevant item combined with sampled negative items [18]. Under these approaches, the model learns to rank all relevant items, weighting them equally. In the context of evaluation, this is similar to uniformly weighting all recall levels (i.e. \ud835\udc5d(\ud835\udc56) = 1 \ud835\udc5a). Our results demonstrate that this may be a more robust way to optimize rankers. 8" + }, + { + "url": "http://arxiv.org/abs/2004.13157v2", + "title": "Evaluating Stochastic Rankings with Expected Exposure", + "abstract": "We introduce the concept of \\emph{expected exposure} as the average attention\nranked items receive from users over repeated samples of the same query.\nFurthermore, we advocate for the adoption of the principle of equal expected\nexposure: given a fixed information need, no item should receive more or less\nexpected exposure than any other item of the same relevance grade. We argue\nthat this principle is desirable for many retrieval objectives and scenarios,\nincluding topical diversity and fair ranking. Leveraging user models from\nexisting retrieval metrics, we propose a general evaluation methodology based\non expected exposure and draw connections to related metrics in information\nretrieval evaluation. Importantly, this methodology relaxes classic information\nretrieval assumptions, allowing a system, in response to a query, to produce a\n\\emph{distribution over rankings} instead of a single fixed ranking. We study\nthe behavior of the expected exposure metric and stochastic rankers across a\nvariety of information access conditions, including \\emph{ad hoc} retrieval and\nrecommendation. 
We believe that measuring and optimizing expected exposure\nmetrics using randomization opens a new area for retrieval algorithm\ndevelopment and progress.", + "authors": "Fernando Diaz, Bhaskar Mitra, Michael D. Ekstrand, Asia J. Biega, Ben Carterette", + "published": "2020-04-27", + "updated": "2020-10-20", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "main_content": "INTRODUCTION Information access systems such as retrieval and recommendation systems often respond to an information need with a ranking of items. Even with more sophisticated information display modalities, the ranked list is a central feature of most interfaces. Since users often inspect a ranked list in a nonrandom\u2013usually linear\u2013order, some items are exposed to the user before others. Even if a system can perfectly model relevance and rank items accordingly, it still must put items in a particular order, breaking relevance ties in some way and reifying small differences in relative relevance into distinct rank positions. Nonuniform exposure of relevant items resulting from ranking has multiple effects. It strongly affects the allocation of user attention (and therefore content exposure, visibility, and consumptionrelated revenue) to results and their producers, giving rise to fairness concerns for content producers [3, 6, 39]; if there are qualitative differences in comparably-relevant results, systematically favoring results preferred by one group of users affects other groups\u2019 quality of service [24] and may affect user retention [17]; similarly, in recalloriented search or in scenarios where a searcher is interested in broad subtopical exposure, systematically promoting some relevant documents over others may risk overlooking important content; arXiv:2004.13157v2 [cs.IR] 20 Oct 2020 \fit can homogenize users\u2019 information experiences and promote rich-get-richer effects [8]; and, although not often analysed in the design of algorithms, nonuniform exposure to relevant content may affect users\u2019 perception of the makeup of relevant information and its production community. There may also be difference of degree: if there is a small difference in the relative relevance of two documents, but a large difference in the attention users tend to pay to the positions in which they are ranked, the ranking may amplify small difference in content value into a large difference in the producers\u2019 return for providing that value. Unfortunately, providing a static ranking for a query (in retrieval) or context (in recommendation) limits the ability for an algorithm to distribute exposure amongst relevant items. We propose to evaluate information access systems using distributions over rankings in response to a query or context. Figure 1 depicts this approach. More precisely, for a fixed query, we assume that a ranker, \ud835\udf0b, samples a permutation \ud835\udf0efrom a distribution over the set of all permutations \ud835\udc46\ud835\udc5bof \ud835\udc5bdocuments. This allows it to provide equal exposure to relevant items in expectation. And while current evaluation metrics and methods measure the relevance or utility of a single ranking per query, with a distribution of rankings, we can compute the expected value of the metric. This paper provides the foundation for an exposure-based approach to evaluating rankings and advocates for exploring the family of stochastic ranking policies within that framework. 
To that end, we (i) define the concept of expected exposure and ways to operationalize it; (ii) discuss its relationship to existing retrieval metrics, including diversity, novelty, and fairness metrics; (iii) apply it to measure item exposure under stochastic versions of existing retrieval and recommendation algorithms. We argue that exposure provides a means of looking at several different concerns in the evaluation and impact of information access systems, and believe generalizing evaluation from deterministic rankers to stochastic rankers provides a broad area of study with implications for classic and contemporary problems in information access. We begin by discussing the connection of previous work with our exposure-based evaluation and stochastic ranking (\u00a72). We will then present the framework for evaluating with expected exposure and stochastic ranking together (\u00a73). These definitions of expected exposure have deep connections to existing metrics, which we describe in \u00a74. We then describe our experimental apparatus for analyzing these metrics in \u00a75. We also propose a procedure to optimize towards these metrics in \u00a76. We conclude with a discussion of our findings (\u00a77). 2 RELATED WORK Our work is inspired by and draws together two areas of work: (i) metrics recently developed in the context of algorithmic fairness, and (ii) randomized ranking algorithms developed in the context of online learning and optimization. 2.1 Fairness Exposure optimization has been proposed as a means of achieving fairness in ranking: fairness for individuals means that exposure should be proportional to relevance for every subject in a system [3], while fairness for groups means that exposure should be equally distributed between members of groups defined by sensitive attributes such as gender or race [39]. From an optimization point of view, Singh and Joachims [40] and Yadav et al. [45] consider a similar notion of exposure fairness over multiple rankings as our work. Our work situates exposure-based measures in the context of information retrieval evaluation, allowing us to (i) extend them with user models from existing retrieval metrics, (ii) relate them with the objectives and formalisms of other retrieval metrics, and (iii) introduce a new experimentation protocols based on stochastic ranking. Gao and Shah [15] recently proposed a randomized policy for diversifying search results very similar to our work, albeit in the context of group fairness. While studying connection between fairness and diversity empirically, we attempt to more formally elucidate the relationship and study broader connections beyond group fairness. Beyond the definitions explicitly focusing on exposure, other fairness definitions in practice lead to enhanced equality of exposure, for instance, by requiring equal proportions of individuals from different groups in ranking prefixes [7, 47]. Similarly, Yang and Stoyanovich [46] measure fairness by computing sum of positiondiscounted set-wise parity at different rank thresholds. Beutel et al. [2] approach fair ranking by conducting pairwise analysis of user engagement with the protected groups in a ranking. Zehlike and Castillo [48] propose a supervised learning to rank method to optimize for fair exposure but focus only on the top position in ranking. It is not obvious how their proposed approach can be extended beyond the first rank position. In constrast to this literature, we study metrics that have clear user behavior model amenable to extension. 
The notion of meritocratic fairness, originally introduced as a fairness objective for online bandit learning [20] and then applied to the problem of selecting a group of individuals from incomparable populations [21], intuitively requires that less qualified candidates do not have a higher chance of getting selected than more qualified candidates. In our setting, this translates to ensuring that less-relevant documents are not likely to be ranked above more-relevant documents. Our construct of target exposure connects this work to meritocratic fairness, in that a system satisfying equity of expected exposure will satisfy the goals of meritocratic fairness by allocating more exposure to relevant documents than to non-relevant documents, it also imposes a stronger constraint by requiring documents with comparable relevance to receive comparable exposure, preventing runaway popularity feedback loops that meritocratic fairness allows. 2.2 Stochastic Ranking Randomization (either explicit or implicit) is ubiquitous in many information access systems and has been shown to be useful for eliciting user feedback and lead to desirable system properties. Pandey et al. [26] first proposed randomized ranking motivated by click exploration. Further strategies [18, 31, 32, 43] have been developed following this approach for collecting unbiased feedback for learning to rank. Instead of using randomization to collect unbiased training data, Joachims et al. [19] use it to estimate the parameters of a click propensity model that allows ranking models to be trained \fon biased feedback. Using randomness in ranking may also be a means of improving diversity [33]. Recently, Bruch et al. [4] demonstrate that learning to rank models can be optimized towards expected values of relevance metrics computed over multiple rankings sampled based on estimated relevance. While not developed in the context of deploying a stochastic ranker, we adopt some of the methodologies therein in our experiments. 3 EXPECTED EXPOSURE Given a query, we are interested in measuring the expected exposure of an item to a searcher with respect to items of similar relevance. Specifically, we would like to define a metric that quantifies a system\u2019s deviation from an ideal expected exposure of items of the same relevance. To this end, we adopt the following principle of equal expected exposure,1 Given a fixed information need, no item should be exposed (in expectation) more or less than any other item of the same relevance. This principle complements the existing core principle of ranked retrieval that more relevant documents should appear before less relevant documents. In this section, we will introduce an evaluation methodology based on the principle of equal expected exposure. We note that existing relevance metrics do not measure the extent to which systems satisfy this principle, as they typically ignore differences in exposure amongst items of the same relevance. As a result, existing relevance metrics will not be able to distinguish a system that satisfies this principle from one that does not. We will start by remaining agnostic about how items are exposed to searchers, only that there is some way in which searchers interact with a ranking of items that is related to the exposure. More formally, let a ranking be defined as a permutation of the \ud835\udc5b documents in the corpus. The set of all permutations of size \ud835\udc5bis referred to as the symmetric group or \ud835\udc46\ud835\udc5bin abstract algebra. 
Given a query \ud835\udc5e\u2208Q with \ud835\udc5arelevant documents, an optimal permutation would place some ordering of the \ud835\udc5arelevant items at the top positions, followed by some ordering of the (\ud835\udc5b\u2212\ud835\udc5a) nonrelevant documents. Per existing models, exposure monotonically\u2014oftenexponentially\u2014decreases with position in a ranking. Therefore, for a static ranking, we can see that (i) some relevant documents receive more exposure than other relevant documents, and (ii) some nonrelevant documents receive more exposure than other nonrelevant documents. A static ranking will therefore always violate equal expected exposure. Unfortunately, classic retrieval systems only provide and are evaluated according to static rankings. However, we know that there are \ud835\udc5a!(\ud835\udc5b\u2212\ud835\udc5a)! optimal rankings. If an oracle provided us with an optimal ranking at random, any relevant item would be ranked in position 0 \u2264\ud835\udc56< \ud835\udc5awith the same probability.2 As a result, all relevant items would receive 1This principle is related to equity of attention [3], which also ties exposure to relevance. However, equity of attention was originally amortized across information needs. While this paradigm accounts for changing relevance, the system might increase exposure of items for inappropriate information needs. Thus, in this paper we propose to measure exposure per information need. In this sense, the distinction between equal expected exposure and equity of attention is similar to the difference between macroaveraging and microaveraging of relevance metrics. 2Note that we use base-0 ranks throughout this manuscript. the same exposure in expectation; similarly all nonrelevant items would receive the same exposure in expectation. Such a oracle would satisfy equal expected exposure. We will refer to the expected exposure of all items under the oracle policy as the target exposure, represented as a \ud835\udc5b\u00d7 1 vector \ud835\udf16\u2217. Just as we can satisfy ideal expected exposure by using a stochastic oracle, a retrieval system can improve the distribution of exposure by using a stochastic policy, a protocol where, in response to a query, a distribution over rankings is provided. Formally, given a query \ud835\udc5e, a ranking policy \ud835\udf0bprovides a distribution over all permutations, \u00cd \ud835\udf0e\u2208\ud835\udc46\ud835\udc5b\ud835\udf0b(\ud835\udf0e|\ud835\udc5e) = 1. Classic ranking algorithms are a special case which only assign probability to a single, static permutation. We will refer to such an algorithm as a deterministic policy. We note that most classic evaluation metrics (e.g. mean average precision) only evaluate a single, static permutation from a deterministic policy. Given a policy \ud835\udf0band a model of how the searcher might interact with a ranking, we can compute the expected exposure of all of the items in the corpus. We will represent the expected exposure of all items under \ud835\udf0bas a \ud835\udc5b\u00d7 1 vector \ud835\udf16. In order to measure the deviation from equal expected exposure, we compare the target exposure \ud835\udf16\u2217and sytem exposure \ud835\udf16. 
One simple way to do this is to compute the squared error between $\epsilon^*$ and $\epsilon$, $$\ell(\epsilon, \epsilon^*) = \|\epsilon - \epsilon^*\|_2^2 \qquad (1)$$ $$= \underbrace{\|\epsilon\|_2^2}_{\text{EE-D}} \; \underbrace{-\,2\,\epsilon^{\mathsf{T}}\epsilon^*}_{\text{EE-R}} \; + \|\epsilon^*\|_2^2 \qquad (2)$$ where EE-D or expected exposure disparity measures inequity in the distribution of exposure; EE-R or expected exposure relevance measures how much of the exposure is on relevant documents; and the remaining term is constant for a fixed information need. This derivation allows us to clearly decompose expected exposure into relevance and disparity components. A system that achieves optimal EE-R may maximize disparity (e.g. a static ranking with all relevant items at the top). Similarly, a system that minimizes EE-D will have very bad expected exposure relevance (e.g. a random shuffling of the corpus every time a query is submitted). We empirically observed (in §5.3) a tradeoff between disparity (EE-D) and relevance (EE-R). This tradeoff is often controllable by a parameter in a stochastic policy that affects the degree of randomization. At one extreme, the parameter results in a deterministic policy that can achieve high relevance but also incurs high disparity. At the other extreme, the parameter results in a policy that randomly samples from amongst all permutations, achieving the lowest disparity but also the lowest relevance. Given that such a parameter can often be swept between a minimum and maximum disparity, we can plot a disparity-relevance curve reflecting the nature of this tradeoff. We use the area under this curve, EE-AUC, as a summary statistic. While we expect EE-R to behave similarly to traditional relevance-based metrics, especially those sharing similar assumptions about how searchers interact with a ranking, reasoning about relevance and disparity within a single formalism allows us to compose aggregate metrics like EE-AUC, which traditional metrics do not capture (§5.3). 3.1 Computing Exposure with User Browsing Models So far, we have remained agnostic about how items are exposed to users. In this section, we will describe how we can compute the exposure vector $\epsilon$ for an arbitrary ranker, including the oracle ranker. Unlike previous fair ranking metrics, we approach exposure by adopting user models from existing information retrieval metrics. We focus on models from two metrics, rank-biased precision and expected reciprocal rank, although this analysis can be extended to more elaborate browsing models [12]. Rank-biased precision (RBP) is a metric that assumes that a user's probability of visiting a position decreases exponentially with rank [25], $$\text{RBP}(\sigma) = (1 - \gamma) \sum_{i \in [0,k)} \mathbf{y}^*_{\sigma_i}\, \gamma^i \qquad (3)$$ where $\mathbf{y}^*$ is the $n \times 1$ binary relevance vector; $\gamma$ is referred to as the patience parameter and controls how deep in the ranking the user is likely to browse; and $k$ is the maximum browsing depth. The multiplicative factor $1 - \gamma$ ensures that the measure lies in the unit range.
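A small numerical sketch ties these pieces together: it averages RBP-style exposure over rankings sampled from a stochastic policy, builds the oracle target exposure for binary relevance, and reports EE-D, EE-R, and the loss of Equation 1. The function names, the toy data, and the use of a pure random shuffle as the stand-in policy are illustrative assumptions only.

```python
import random
import numpy as np

def rbp_exposure(ranking, n_docs, gamma=0.5, depth=20):
    """Exposure each document receives from one ranking under the RBP browsing model."""
    eps = np.zeros(n_docs)
    for pos, doc in enumerate(ranking[:depth]):
        eps[doc] = gamma ** pos
    return eps

def expected_exposure(rankings, n_docs, gamma=0.5):
    """Average exposure over rankings sampled from a stochastic policy."""
    return np.mean([rbp_exposure(r, n_docs, gamma) for r in rankings], axis=0)

def target_exposure(relevant, n_docs, gamma=0.5):
    """Oracle target: the m relevant documents share the top-m exposure equally."""
    m = len(relevant)
    eps_star = np.zeros(n_docs)
    eps_star[sorted(relevant)] = (1 - gamma ** m) / (m * (1 - gamma))
    return eps_star

def ee_metrics(eps, eps_star):
    ee_d = float(np.dot(eps, eps))            # disparity term ||eps||^2
    ee_r = float(2 * np.dot(eps, eps_star))   # relevance term 2 eps^T eps*
    loss = float(np.sum((eps - eps_star) ** 2))
    return ee_d, ee_r, loss

# toy usage: ten documents with integer ids; documents 0 and 1 are relevant
random.seed(0)
n_docs, relevant = 10, {0, 1}
policy_samples = []
for _ in range(50):
    docs = list(range(n_docs))
    random.shuffle(docs)  # stand-in for sampling from a stochastic ranking policy
    policy_samples.append(docs)
eps = expected_exposure(policy_samples, n_docs)
eps_star = target_exposure(relevant, n_docs)
print(ee_metrics(eps, eps_star))
```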
We consider that the expected exposure of a document \ud835\udc51is computed, in expectation, as, \ud835\udf16\ud835\udc51= \u2211\ufe01 \ud835\udf0e\u2208\ud835\udc46\ud835\udc5b \ud835\udf0b(\ud835\udf0e|\ud835\udc5e)\ud835\udefe\ud835\udf0e\ud835\udc51 (4) where \ud835\udf0eis a map from document indexes to ranks. This allows us to compute \ud835\udf16for an arbitrary policy \ud835\udf0b. Recall that the oracle policy selects randomly amongst all rankings with all of the relevant documents at the top. Since each document occurs at each of the top \ud835\udc5apositions equally, the target expected exposure for a relevant document is, \ud835\udf16\u2217 \ud835\udc51= 1 \ud835\udc5a \u2211\ufe01 \ud835\udc56\u2208[0,\ud835\udc5a) \ud835\udefe\ud835\udc56 = 1 \u2212\ud835\udefe\ud835\udc5a \ud835\udc5a(1 \u2212\ud835\udefe) Since the set of nonrelevant documents is usually very large, all nonrelevant documents will have equal expected exposure close to zero. Expected reciprocal rank (ERR) is a metric that assumes that a user\u2019s probability of visiting a position is dependent on how many relevant documents appear a earlier positions [9]. The intuition is that earlier relevant documents may satisfy the user and prompt them to stop scanning the ranking. We adopt generalized expected reciprocal rank, a model which incorporates a patience parameter similar to that used in RBP [9, \u00a77.2]. ERR(\ud835\udf0e) = \u2211\ufe01 \ud835\udc56\u2208[0,\ud835\udc58) \ud835\udf19(y\u2217 \ud835\udf0e\ud835\udc56) \u00d6 \ud835\udc57\u2208[0,\ud835\udc56) \ud835\udefe(1 \u2212\ud835\udf19(y\u2217 \ud835\udf0e\ud835\udc57)) (5) where \ud835\udf19converts relevance to a probability of stopping the browsing. Normally this is zero for nonrelevant documents and some value between 0 and 1 for relevant documents. As with RBP, the expected exposure of document \ud835\udc51can be computed as, \ud835\udf16\ud835\udc51= \u2211\ufe01 \ud835\udf0e\u2208\ud835\udc46\ud835\udc5b \ud835\udf0b(\ud835\udf0e|\ud835\udc5e)\ud835\udefe\ud835\udf0e\ud835\udc51 \u00d6 \ud835\udc57\u2208[0,\ud835\udf0e\ud835\udc51) (1 \u2212\ud835\udf19(y\u2217 \ud835\udf0e\ud835\udc57)) Similarly, the target expected exposure of a relevant document is, \ud835\udf16\u2217 \ud835\udc51= 1 \ud835\udc5a \u2211\ufe01 \ud835\udc56\u2208[0,\ud835\udc5a) \ud835\udefe\ud835\udc56(1 \u2212\ud835\udf19(y\u2217 \ud835\udc51\u2217))\ud835\udc56 = 1 \u2212\ud835\udefe\ud835\udc5a(1 \u2212\ud835\udf19(y\u2217 \ud835\udc51\u2217))\ud835\udc5a \ud835\udc5a(1 \u2212\ud835\udefe(1 \u2212\ud835\udf19(y\u2217 \ud835\udc51\u2217))) and close to zero for nonrelevant documents. 3.2 Extension to Graded Judgments So far, we have focused on binary relevance. For graded judgments, the ideal ranker always orders documents correctly by grade. We take all permutations satisfying this requirement and assume the ideal ranker has nonzero support only for these values. We then compute the expected exposure for documents by grade. Let \ud835\udc5a\ud835\udc54 be the number of documents with relevance grade \ud835\udc54and \ud835\udc5a>\ud835\udc54the number of documents with relevant grade strictly larger than \ud835\udc54. Without loss of generality, assume that grades take integer values. 
Given an RBP browsing model, the optimal exposure for a document \ud835\udc51with grade \ud835\udc54is, \ud835\udf16\u2217 \ud835\udc51= 1 \ud835\udc5a\ud835\udc54 \u2211\ufe01 \ud835\udc56\u2208[\ud835\udc5a>\ud835\udc54,\ud835\udc5a>\ud835\udc54\u22121) \ud835\udefe\ud835\udc56 = \ud835\udefe\ud835\udc5a>\ud835\udc54\u2212\ud835\udefe\ud835\udc5a>\ud835\udc54\u22121 \ud835\udc5a\ud835\udc54(1 \u2212\ud835\udefe) The derivation for the ERR user model is similar. We note that this extension assumes that the a searcher will always prefer to see items with higher grade. In situations where, for example, the grade of an item is inversely correlated with some important property of a document (e.g. a subtopic, authors from underrepresented groups), then these groups will be under-exposed. In such cases, an alternative definition of \ud835\udf16\u2217may be more appropriate (see \u00a74.2). 4 RELATIONSHIP TO OTHER METRICS Expected exposure, both in motivation and in definition, has connections to existing retrieval metrics. In this section, we will discuss those relationships, highlighting the unique properties that expected exposure measures. 4.1 Retrieval Metrics Measures such as RBP and ERR could be considered precision metrics, as they reward rankers for retrieving relevant material higher in the ranking. While based on the same user model, it is not the case that optimizing RBP will also minimize Equation 1, even if exposure is based on an RBP browsing model. To see why, consider a deterministic policy that outputs a static optimal ranking. Although EE-R will be optimal, EE-D will be very large since exposure is concentrated at the top ranks. Indeed, the value of EE-D for a static optimal ranking will be as bad as a static ranking that places all of the relevant document at the bottom since disparity is based only on the exposure and not on relevance. The converse, that minimizing Equation 1 also optimizes RBP, is true. If expected exposure is based on the RBP user model, a system that optimizes expected exposure will essentially be shuffling relevant documents at the top of the \franking and nonrelevant items in the bottom, just as with the oracle in \u00a73. Optimizing recall means focusing on ensuring that all of the relevant items in the corpus occur at high ranks. Several of our motivating examples might be considered addressable by a retrieval system optimized for high recall (e.g. e-discovery, academic search, systematic review). However, if we assume, as many user models do, that a user may terminate their scan of a ranking early, then there is a chance that even a high-recall system, especially in situation where there are numerous relevant documents, a user will not be exposed to all relevant items. As a result, we would argue that expected exposure reduces the risk of overlooking a relevant document. 4.2 Fairness Algorithmic fairness, in the context of information retrieval and recommendation, deals with the treatment of individuals associated with retrievable items [6]. These might be document authors in text retrieval, job candidates in recruiting, or musicians in song recommendation. Individual Fairness. Expected exposure is closely related to various notions of individual fairness that quantify the extent to which models are fair to all individuals. Dwork et al. defined individual fairness in the context of classification models seen as mappings from individuals to probability distributions over outcomes [13]. 
In this setting, individual fairness is defined using the Lipschitz condition: the distributions of classification outcomes \ud835\udc43\ud835\udc36of two individuals \ud835\udc621,\ud835\udc622 who are sufficiently similar according to a chosen similarity metric \ud835\udc51should be close according to a distribution similarity metric \ud835\udc37. Formally, if \ud835\udc51(\ud835\udc621,\ud835\udc622) < \ud835\udeff, then \ud835\udc37(\ud835\udc43\ud835\udc36(\ud835\udc621), \ud835\udc43\ud835\udc36(\ud835\udc622)) < \u0394. When will a retrieval policy be individually fair according to this definition? Assume we define \ud835\udeffand \ud835\udc51such that two documents of equal relevance grade satisfy the above inequality, and two documents of different relevance grades do not. Assume furthermore that outcomes are measured as the expected exposure of individual documents. A stochastic ranker that distributes exposure (almost) equally among the documents of equal relevance grades (in particular if it achieves optimal expected exposure according to Eq. 1) is individually fair according to the above definition. However, the reverse does not hold: It is possible that an individually fair and an unfair stochastic rankers lead to similar values of the expected exposure measure (the total loss value in Eq. 1 can be aggregated equitably from documents of the same relevance level or from only few documents within a relevance grade). Group Fairness. We can use exposure to define a group notion of provider fairness by measuring whether deviation from expected exposure differs between different groups of documents (or their authors). Let A be the set of \ud835\udc58attributes that a document might be associated with. Attributes may be related to, for example, demographic or other group information about the provider. Let A be a \ud835\udc5b\u00d7 \ud835\udc58binary matrix of the group identity associated with each document in the corpus. We can then compute the total exposure for all documents with an attribute by \ud835\udf09A = AT\ud835\udf16. If we are interested in equal exposure across groups, we can define the target group exposure as \ud835\udf09\u2217 e = \ud835\udc50ATe where e is a \ud835\udc58\u00d7 1 vector of ones and \ud835\udc50is a normalizing constant based on the total exposure given a browsing model. We can then use Equation 1 as a measure of demographic parity [39, \u00a74.1]. If desired, we can replace e with some other distributions, such as population level proportions [37]. Target exposures like \ud835\udf09\u2217 e only balance group representation, but some groups may produce more relevant content than others. If we are interested in exposure proportional to relevance, we can define the target exposure as \ud835\udf09\u2217= ATy\u2217, referred to as disparate treatment [39, \u00a74.2]. Finally, if we are interested in ensuring the exposed items are relevant, we can define a new matrix \u02dc A = diag(y\u2217)A and exposure vector \ud835\udf09\u02dc A = \u02dc AT\ud835\udf16. If we let \ud835\udf09\u2217 \u02dc A = \ud835\udc50\u02dc ATy\u2217, then we recover disparate impact [39, \u00a74.3]. 4.3 Topical Diversity Exposure metrics are closely related to topical diversity metrics [36]. 
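Before the discussion of topical diversity continues, note that the group-level quantities just defined in Section 4.2 reduce to a few matrix products. The sketch below computes the group exposure, the disparate treatment target, and the disparate-impact variant restricted to relevant documents; variable names and the toy data are illustrative.

```python
import numpy as np

def group_exposure(A, eps):
    """Total exposure received by each group: xi_A = A^T eps."""
    return A.T @ eps

def disparate_treatment_target(A, y):
    """Target exposure proportional to group relevance: A^T y* (up to the constant c)."""
    return A.T @ y

def disparate_impact_exposure(A, y, eps):
    """Restrict attention to relevant documents: A~ = diag(y*) A, then xi = A~^T eps."""
    A_tilde = np.diag(y) @ A
    return A_tilde.T @ eps

# toy usage: four documents, two groups, binary relevance, an RBP-style exposure vector
A = np.array([[1, 0], [1, 0], [0, 1], [0, 1]], dtype=float)
y = np.array([1.0, 0.0, 1.0, 0.0])
eps = np.array([0.5, 0.25, 0.125, 0.0625])
print(group_exposure(A, eps), disparate_treatment_target(A, y), disparate_impact_exposure(A, y, eps))
```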
One common way to measure topical diversity is to consider so-called \u2018intent-aware\u2019 metrics defined as, IA-\ud835\udf07(\ud835\udf0e) = \u2211\ufe01 \ud835\udc4e\u2208A \ud835\udc5d(\ud835\udc4e|\ud835\udc5e)\ud835\udf07(\ud835\udf0e|\ud835\udc4e) where \ud835\udf07(\ud835\udf0e|\ud835\udc4e) computes a standard metric considering only those documents with aspect \ud835\udc4eas relevant. The intent-aware RBP metric is defined as IA-RBP(\ud835\udf0e) = \u2211\ufe01 \ud835\udc4e\u2208A \ud835\udc5d(\ud835\udc4e|\ud835\udc5e)(1 \u2212\ud835\udefe) \u2211\ufe01 \ud835\udc56\u2208[0,\ud835\udc58) y\ud835\udc4e,\u2217 \ud835\udf0e\ud835\udc56\ud835\udefe\ud835\udc56 If we assume that \ud835\udc5d(A|\ud835\udc5e) is proportional to the frequency of \ud835\udc4ein the set of relevant documents, then IA-RBP(\ud835\udf0e) \u221d\ud835\udf09T \u02dc A\ud835\udf09\u2217 \u02dc A. In other words, topic diversity reduces to a scaled relevance term in the disparate impact metric (\u00a74.2). In the event that we are interested in uniform \ud835\udc5d(\ud835\udc4e|\ud835\udc5e), then we can redefine the target exposure accordingly and recover the relevance term in the demographic parity metric. Both of these formulations ignore EE-D and it is worth observing that intent-aware metrics often include a \u2018subtopic recall\u2019 factor to [35] to ensure that all subtopics are retrieved. We believe that the disparity term captures precisely this behavior. 5 METRIC ANALYSIS We are interested in empirically studying the EE-D and EE-R. Specifically, we will answering the following questions in our experiments: (i) can the metric distinguish between different randomization strategies? (ii) does an exposure-based relevance metric measure something different from a static ranking metric based on the same user model? 5.1 Randomizing a Deterministic Policy The focus of this paper is on evaluation. However, we were interested in studying our metrics for stochastic rankers, which are not readily available outside of specialized online learning environments. As such, we developed several stochastic rankers for our experiments based on post-processing a precomputed set of retrieval scores. Plackett-Luce (PL). Our first randomization strategy uses PlackettLuce sampling to sample a permutation [22, 28]. To do this, we create a multinomial distribution \ud835\udc5d(\ud835\udc51|\ud835\udc5e) over the corpus using the \f\u21131 normalization of the retrieval scores. The Plackett-Luce model samples a permutation by first sampling the document at position 0 using \ud835\udc5d(\ud835\udc51|\ud835\udc5e). We then set the probability of the selected document to 0, renormalize, and sample the document at position 1 from this modified distribution. We continue this process until we exhaust the scored documents. In order to control the randomness of the process, we use a modified sampling distribution, \ud835\udc5d(\ud835\udc51|\ud835\udc5e) = y\ud835\udefc \ud835\udc51 \u00cd \ud835\udc51\u2032 y\ud835\udefc \ud835\udc51\u2032 where \ud835\udefc\u22650. When \ud835\udefc= 0, all permutations are equally likely and EE-D is minimized; as \ud835\udefcincreases \ud835\udf0bconcentrates around original static ranking and disparity degrades. We refer to this as the Plackett-Luce (PL) policy. Rank Transpositions (RT). Our second randomization strategy ignores the retrieval scores and samples permutations by shuffling the original ranked list. We shuffle by repeatedly sampling pairs of positions and swapping the documents. 
Such a process takes 1 2\ud835\udc5blog\ud835\udc5b+\ud835\udc50\ud835\udc5biterations to converge to sampling a random permutation [11]. This is precisely a random walk on \ud835\udc46\ud835\udc5bwhere permutations are connected by pairwise transpositions. As such, we can introduce a \u2018restart probability\u2019 to teleport the random walker back to the original ranked list. If this probability is \ud835\udf03, then the number of steps of the random walk follows a geometric distribution with support [0, \u221e). Our randomization strategy then first samples the number of steps \ud835\udc58from the geometric distribution and then conducts \ud835\udc58 random transpositions. We refer to this as the rank transposition (RT) policy. These two methods are intentionally constructed to perform differently. The PL policy takes a deterministic policy\u2019s scores into consideration and will, therefore, be more conservative in removing high-scoring items from the top of the ranked list. The RT policy, on the other hand, randomly swaps pairs, regardless of score or position. As a result, we suspect that the PL policy should outperform the RT policy, given a fixed base deterministic policy. 5.2 Method We analyze the behavior of expected exposure metrics using the postprocessing of deterministic policies in two domains. The first is based on archival TREC submissions focus in information retrieval conditions. The Robust2004 dataset consists of 440 runs submitted to the TREC 2004 Robust track which evaluated systems on a set of 249 queries and binary relevance labels. We adopt this dataset because it has been well-studied in the context of evaluation metrics. Our second dataset, MovieLens25M, is a movie recommendation dataset consisting of 25 million ratings of 59 thousand movies by 163 thousand users [16]. We used LensKit 0.8.4 [14] to generate runs representing binary implicit-feedback matrix factorization (IMF) [27] and Bayesian personalized ranking (BPR) [34].3 We adopt implicit feedback instead of ratings in order to study the behavior of expected exposure under binary relevance. We use a \ud835\udefe= 0.50 for all of our experiments, as consistent with standard TREC evaluation protocol. RBP and ERR are evaluated at depth 20. For stochastic rankers, we sample 50 rankings during evaluation to estimate expected exposure metrics. We found that 3BPR is implemented by the implicit package (https://github.com/benfred/implicit). this was sufficient to converge to appropriate expected metric values. Experiments randomizing deterministic policies rerank the top 100 documents from the original static ranking. 5.3 Results Before analyzing our metrics in aggregate, we present our metrics on an example run from Robust2004. In Figure 2, we show the behavior of our randomization model for EE-R and EE-D, under both the ERR and RBP user models. We compare these metrics to RBP and ERR, two classic static ranking metrics. We also measure the generalized entropy of exposure on the relevant set of documents [41]; this allows us to assess the disparity amongst relevant items. Comparing classic metrics and EE-R in the first and second rows, we observe correlated behavior as randomization changes. Across a sample of runs, we found that the expected RBP and EE-R were strongly correlated (\ud835\udc5f= 0.99, \ud835\udc5d< 0.01); a perfect correlation was observed between expected ERR and EE-R with an ERR model. 
This is unsurprising given that the relevance factor in the expected exposure metric is precisely the expectation of the static ranking metric. The imperfect correlation for RBP is due to the normalization term in the classic RBP model. Comparing generalized entropy and EE-D, we also observe correlated behavior across both RBP ($r = 0.47$, $p < 0.01$) and ERR user models ($r = 0.65$, $p < 0.01$). Because the generalized entropy is computed over only relevant documents, this suggests that EE-D is sensitive to changes in expected exposure to relevant documents, not the dominant, nonrelevant set. Comparing the behavior of EE-D and EE-R in Figure 2, we notice the disparity-relevance tradeoff mentioned in §3. In order to visualize this tradeoff more clearly, we present example disparity-relevance curves for randomization of an arbitrary Robust2004 run and our two recommender systems on MovieLens25M in Figure 3. Disparity-relevance curves, when randomizing the same base policy, will have the same value of EE-R for EE-D = 1 because this recovers the original static ranking. Similarly, all disparity-relevance curves begin with EE-R = 0 at EE-D = 0 because a completely random ranker will achieve minimal relevance by dint of the number of nonrelevant documents in the corpus (i.e. a random shuffle will mean that, in expectation, every document receives a tiny amount of attention). Turning to the randomization policies being studied, across both domains and multiple runs, PL randomization policies dominate RT policies across all disparity points, confirming our intuition that incorporating score information improves postprocessing performance. This gives us a way to test whether exposure metrics can distinguish between stochastic policies. Given two stochastic rankers, we are interested in understanding whether our exposure metrics can more accurately identify the superior algorithm compared to a metric based on a static ranking. To that end, we randomly assigned the runs for the Robust2004 dataset to either PL or RT randomization. This provided us with EE-AUC for each run as well as an RBP value for its base deterministic policy. We ordered runs by RBP and then inspected the EE-AUC for the associated run. In Figure 4, we can see that, while RBP, a metric based on a static ranking, can approximately order runs for a fixed randomization policy ($\tau = 0.89$, $p < 0.05$), it cannot distinguish between the PL and RT policies. [Figure 2: Behavior of expected exposure metrics for a deterministic run from the Robust2004 dataset randomized using the Plackett-Luce model. Each horizontal axis indicates the value of α, where lower values indicate more randomization. Each vertical axis reflects the performance of policies on static ranking relevance metrics (top row: RBP@20, ERR@20), expected exposure relevance metrics (second row: EE-R), expected exposure disparity metrics (third row: EE-D), and generalized entropy on the relevant set of documents (fourth row) using two browsing models (left: RBP; right: ERR).]
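To make the two randomization policies from Section 5.1 and the disparity-relevance sweep concrete, here is a small self-contained sketch. The exposure computation uses the RBP user model and the binary-relevance target exposure from Section 3; the scores, document ids, and parameter values are made up for illustration, the RT sampler is included only for comparison, and the real experiments operate on full TREC and MovieLens runs rather than toy lists.

```python
import random
import numpy as np

GAMMA = 0.5  # RBP patience parameter, matching the experimental setup

def plackett_luce_sample(scores, alpha, rng):
    """PL policy: sample documents sequentially with p(d) proportional to score^alpha."""
    items = list(range(len(scores)))
    weights = np.maximum(np.asarray(scores, dtype=float), 1e-12) ** alpha
    perm = []
    while items:
        w = weights[items].tolist()
        pick = rng.choices(range(len(items)), weights=w, k=1)[0]
        perm.append(items.pop(pick))
    return perm

def rank_transposition_sample(base, theta, rng):
    """RT policy: a Geometric(theta) number of random pairwise swaps of the base ranking."""
    perm = list(base)
    while rng.random() > theta:  # assumes theta > 0
        a, b = rng.randrange(len(perm)), rng.randrange(len(perm))
        perm[a], perm[b] = perm[b], perm[a]
    return perm

def expected_exposure(sample_fn, n_docs, n_samples, rng, depth=20):
    """Average RBP-style exposure over rankings sampled from a stochastic policy."""
    eps = np.zeros(n_docs)
    for _ in range(n_samples):
        for pos, doc in enumerate(sample_fn(rng)[:depth]):
            eps[doc] += GAMMA ** pos
    return eps / n_samples

def ee_point(eps, relevant):
    """One (EE-D, EE-R) point; the target follows the oracle derivation for binary relevance."""
    m = len(relevant)
    target = (1 - GAMMA ** m) / (m * (1 - GAMMA))
    return float(eps @ eps), float(2 * sum(eps[d] * target for d in relevant))

# sweep the PL parameter alpha to trace a disparity-relevance curve and its EE-AUC
rng = random.Random(0)
scores = [0.9, 0.8, 0.7, 0.3, 0.2, 0.1]  # hypothetical retrieval scores for six documents
relevant = {0, 1, 2}
curve = []
for alpha in [0.0, 1.0, 2.0, 4.0, 8.0]:
    eps = expected_exposure(lambda r: plackett_luce_sample(scores, alpha, r), len(scores), 200, rng)
    curve.append(ee_point(eps, relevant))
pts = sorted(curve)  # sort points by disparity before integrating
ee_auc = sum(0.5 * (r1 + r2) * (d2 - d1) for (d1, r1), (d2, r2) in zip(pts, pts[1:]))
print(curve, ee_auc)
```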
6 OPTIMIZING FOR EXPECTED EXPOSURE In the previous section, we introduced post-processing techniques to build stochastic rankers. Given a model that is perfectly able to predict relevance, Plackett-Luce randomization should perform optimally, especially for binary relevance. As such, a classic pointwise learning to rank model [10] with Plackett-Luce randomization may be an effective approach for expected exposure. Moreover, calibration of relevance does not happen with pairwise learning to rank models [5], and so we would expect these models, even if perfect, to perform worse than pointwise models, even with Plackett-Luce randomization. However, learning to rank models are not perfect estimators of relevance. Therefore, we believe there should be some advantage to optimizing directly for expected exposure. In this section, we compare the performance of these approaches in the context of graded relevance as well as demographic parity (§4.2). We focused on a shared model architecture with varying loss functions in order to measure differences due to the objective alone, instead of artifacts resulting from the functional form of the models. We begin by describing how we optimize for expected exposure before proceeding to our empirical results. [Figure 3: Disparity-relevance tradeoff curves for a random Robust2004 run (curves: PL, RT) and our two recommendation runs on MovieLens25M (curves: IMF-PL, IMF-RT, BPR-PL, BPR-RT) with Plackett-Luce randomization and rank transposition randomization; horizontal axis: disparity; vertical axis: relevance.] [Figure 4: Sorting PL and RT runs by RBP. Half of the runs submitted to Robust2004 were subjected to PL randomization and half to RT randomization. Runs were ranked by the RBP of the original, static ranking (horizontal axis); EE-AUC for the randomized runs, according to their treatment, is plotted on the vertical axis.] 6.1 Algorithm Although optimizing for pointwise or pairwise loss has been well-studied in the information retrieval community, directly optimizing for a metric based on a distribution over rankings has received less attention. We begin by defining an appropriate loss function for our model. Turning to Equation 1, we can drop the constant term and add a hyperparameter to balance between disparity and relevance, $$\ell_\lambda(\epsilon, \epsilon^*) = \lambda\,\|\epsilon\|_2^2 - (1 - \lambda)\,\epsilon^{\mathsf{T}}\epsilon^* \qquad (6)$$ where $\epsilon^*$ is based on graded relevance (§3.2). Let $f_\theta : \mathcal{D} \rightarrow \mathbb{R}$ be an item scoring function parameterized by $\theta$. Given a query, $\mathbf{y}$ is an $n \times 1$ vector of item scores for the entire collection such that $\mathbf{y}_d = f_\theta(d)$. Using a Plackett-Luce model, we can translate the raw scores into sampling probabilities, $$p(d) = \frac{\exp(\mathbf{y}_d)}{\sum_{d' \in \mathcal{D}} \exp(\mathbf{y}_{d'})}$$ This allows us to construct a ranking $\sigma$ by sampling items sequentially. Unfortunately, this sampling process is non-differentiable and, therefore, prohibitive to a large class of models, including those that learn by gradient descent. We address this by adopting the method proposed by Bruch et al. [4].
To construct a sampled ranking σ, we reparameterize the probability distribution by adding independently drawn noise samples G from the Gumbel distribution [23] to y and sorting items by the "noisy" probability distribution p̃, p̃(d_i) = exp(y_{d_i} + G_i) / Σ_{d_j ∈ D} exp(y_{d_j} + G_j) (7) Given the perturbed probability distribution p̃, we compute each document's smooth rank [30, 44] as, σ_d = Σ_{d' ∈ D\{d}} (1 + exp((p̃(d) − p̃(d'))/τ))⁻¹ (8) The smooth rank is sensitive to the temperature τ. At high temperatures the smooth rank is a poor approximation of the true rank, and at low temperatures it may result in vanishing gradients. To rectify this issue, we employ the straight-through estimator [1], computing the true ranks in the forward pass but differentiating with respect to the smooth ranks during backpropagation. Using the estimated ranks and a specified user model, we compute the exposure for each document. For example, assuming RBP as the user model, the exposure of document d from a single ranking σ is given by ε_d = γ^{σ_d}. We compute expected exposure by averaging over n_train different rankings, each generated by independently sampling different Gumbel noise in Equation 7. We use this expected exposure vector ε in Equation 6 to compute the loss that we minimize through gradient descent. The relevance grades are not used for training beyond computing target exposure. We set τ in Equation 8 to 0.1. We can adapt this model to optimize group-level exposure metrics like demographic parity (§4.2). To do so, we replace ‖ε‖²₂ with ‖ξ‖²₂ in Equation 6 to define an optimization objective that trades off relevance and demographic parity, ℓ_{group,λ} = λ‖ξ‖²₂ − (1 − λ) εᵀε* (9) This loss function assumes that the ideal policy distributes exposure equally across all demographics. 6.2 Experiment Models. We restrict our choice of baselines to neural networks so that the exposure-based optimization can be compared to baseline ranking loss functions with respect to the same model. Our base model consists of a fully-connected neural network with two hidden layers of 256 nodes per layer and rectified linear unit activations. We choose a learning rate of 0.001 and a dropout rate of 0.1 and perform early stopping for all models based on validation sets. Stochastic rankings are then derived by employing Plackett-Luce sampling over these deterministic policies (i.e. pointwise and pairwise models), with varying softmax temperatures to obtain different trade-off points between disparity and relevance. We set n_train to 20 for our model and n_test to 50 for all models. Objectives. We consider three models in our experiments. The pointwise model minimizes the squared error between the model prediction and true relevance. The pairwise model minimizes misclassified preferences using a cross-entropy loss.
The expected exposure model minimizes the loss in Equation 6 and, in our demographic parity experiments, Equation 9. Data. Our experiments use the MSLR-WEB10k dataset [29], a learning-to-rank dataset containing ten thousand queries. We perform five-fold cross validation (60/20/20 split between training, validation, and testing sets). Each query-document pair is represented by a 136-dimensional feature vector and graded according to relevance on a five point scale. For the demographic parity experiments, we discretize the PageRank feature in the ranges <1000, 1000\u201310000, and \u226510000 and treat it as a demographic attribute. We confirm that this discretization scheme is reasonable as roughly 70% of the queries have at least one document corresponding to each demography with a relevance grade greater than one. 6.3 Results We present the results of our experiments in Table 1. In terms of expected exposure, we did not observe a difference in performance between pointwise and pairwise models. However, directly optimizing for expected exposure resulted in a 3.9% improvement in EE-AUC over the pointwise and pairwise models. We confirm that the difference in EE-AUC follows a normal distribution and accordingly perform a paired student\u2019s t-test to check their statistical significance. The EE-AUC differences between our proposed method and the baselines are statistically significant (\ud835\udc5d< 0.01). In terms of demographic parity, we observe a difference in performance between pointwise and pairwise models. Moreover, directly optimizing for expected exposure results in improved performance while directly optimizing for demographic parity further boosts performance. The gap in EE-AUC between all pairs of models are statistically significant (\ud835\udc5d< 0.01) in this case. \fTable 1: Results for optimizing towards expected exposure and demographic parity using different ranking objectives. We report average EE-AUC for both tasks and highlight the best performance for each in bold. Optimizing directly for expected exposure and demographic parity using our proposed method achieves best performance in both cases. Loss function AUC Expected exposure Demographic parity Pointwise loss 0.229 0.112 Pairwise loss 0.229 0.108 Our methods Expected exposure 0.238 0.141 Demographic parity 0.178 7 DISCUSSION Our theoretical results draw clear connections to several areas of information retrieval research. We believe, moreover, that our empirical results suggest that expected exposure metrics capture important aspects of a retrieval system that are not currently measured in information retrieval evaluation. Our experiments furthermore demonstrated that these metrics are not only effective for distinguishing systems with varying degrees of expected exposure but also that they can be optimized toward. Although previously studied in the context of algorithmic fairness, we have demonstrated that there are deep connections to existing core areas of information retrieval research. These results warrant revisiting algorithms and results in classic tasks such as ad hoc retrieval, legal search, and diversity-sensitive retrieval. Beyond relevance, fairness, and diversity, we believe this approach to evaluation opens avenues for studying probabilistic search systems in probabilistic way. 
Many search systems are defined as probabilistic models, capable of handling uncertainty about document relevance [49], sometimes using online learning to refine scoring and ranking models and adapt to changing information needs. These models produce rankings in accordance with a probabilistic policy, so they naturally result in a distribution over rankings associated with each query. Expected exposure, along with computing expected values of other information retrieval metrics, provides a way to evaluate these models and study the effects of uncertainty. Moreover, modern search engines also randomize their rankings to reduce bias in feedback data [18]. Although these systems are often evaluated log data and off-policy evaluation techniques, in the case of pre-launch batch evaluation, we can explicitly model the impact of randomization by evaluating the distribution over rankings. Randomization and improving equal expected exposure may also help with user retention. In search systems, we often want to make sure that we do not overemphasize dominant intents, which can often homogenize populations [17, 24]. As such, randomization can allow us to balance exposure across heterogeneous intents. Exposure balancing may also prevent churn caused by starvation of producers in two-sided economy systems such as ride-sharing platforms [42]. Our exposure model is flexible enough to incorporate more elaborate browsing models. Several exist others beyond RBP and ERR exist in the literature for rankings which deserve exploration. Furthermore, as searchers begin to interact with interfaces that are not based on rankings (e.g. two-dimensional grids, three-dimensional environments), alternative user models will need to be developed and incorporated. We would also like to note possible limitations of this approach. First, the impact of randomization on user satisfaction is still an active area of research and we believe cumulative effects of randomization may be a novel extension to explore in the future work [38]. Second, from an evaluation perspective, stochastic policies introduce logistical constraints on distribution representation and permutation sampling. Centralized evaluations like TREC would need to support a method for either interrogating a stochastic policy or requiring a large pool of samples, incurring data storage costs. Third, although we have focused on randomization in order to increase exposure, we believe that drawing a connection to sequential decision-making scenarios like amortized evaluation are exciting areas of future work. Notwithstanding these limitations, evaluation through expected exposure, when coupled with stochastic policies, opens a new perspective for the study, understanding, and design of information retrieval systems. 8 ACKNOWLEDGEMENTS Michael Ekstrand\u2019s contribution to this work was supported by the National Science Foundation under Grant No. IIS 17-51278." + }, + { + "url": "http://arxiv.org/abs/1605.07891v2", + "title": "Query Expansion with Locally-Trained Word Embeddings", + "abstract": "Continuous space word embeddings have received a great deal of attention in\nthe natural language processing and machine learning communities for their\nability to model term similarity and other relationships. We study the use of\nterm relatedness in the context of query expansion for ad hoc information\nretrieval. We demonstrate that word embeddings such as word2vec and GloVe, when\ntrained globally, underperform corpus and query specific embeddings for\nretrieval tasks. 
These results suggest that other tasks benefiting from global\nembeddings may also benefit from local embeddings.", + "authors": "Fernando Diaz, Bhaskar Mitra, Nick Craswell", + "published": "2016-05-25", + "updated": "2016-06-23", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.CL" + ], + "main_content": "INTRODUCTION Continuous space embeddings such as word2vec [29] or GloVe [33] project terms in a vocabulary to a dense, lower dimensional space. Recent results in the natural language processing community demonstrate the e\ufb00ectiveness of these methods for analogy and word similarity tasks. In general, these approaches provide global representations of words; each word has a \ufb01xed representation, regardless of any discourse context. While a global representation provides some advantages, language use can vary dramatically by topic. For example, ambiguous terms can easily be disambiguated given local information in immediately surrounding words [17, 49]. The window-based training of word2vec style algorithms exploits this distributional property. A global word embedding, even when trained using local windows, risks capturing only coarse representations of those topics dominant in the corpus. While a particular embedding may be appropriate for a speci\ufb01c word within a sentence-length context globally, it may be entirely inappropriate within a speci\ufb01c topic. Gale et al. refer to this as the \u2018one sense per discourse\u2019 property [15]. Previous work by Yarowsky demonstrates that this property can be successfully combined with information from nearby terms for word sense disambiguation [50]. Our work extends this approach to word2vec-style training in the context word similarity. For many tasks that require topic-speci\ufb01c linguistic analysis, we argue that topic-speci\ufb01c representations should outperform global representations. Indeed, it is di\ufb03cult to imagine a natural language processing task that would not bene\ufb01t from an understanding of the local topical structure. Our work focuses on a query expansion, an information retrieval task where we can study di\ufb00erent lexical similarity methods with an extrinsic evaluation metric (i.e. retrieval metrics). Recent work has demonstrated that similarity based on global word embeddings can be used to outperform classic pseudo-relevance feedback techniques [40, 2]. We propose that embeddings be learned on topically-constrained corpora, instead of large topically-unconstrained corpora. In a retrieval scenario, this amounts to retraining an embedding on documents related to the topic of the query. We present local embeddings which capture the nuances of topic-speci\ufb01c language better than global embeddings. There is substantial evidence that global methods underperform local methods for information retrieval tasks such as query expansion [48], latent semantic analysis [20, 37, 39], cluster-based retrieval [41, 42, 47], and term clustering [5]. We demonstrate that the same holds true when using word embeddings for text retrieval. 2. MOTIVATION For the purpose of motivating our approach, we will restrict ourselves to word2vec although other methods behave similarly [26]. These algorithms involve discriminatively training a neural network to predict a word given small set of context words. 
More formally, given a target word w and observed context c, the instance loss is defined as, ℓ(w, c) = log σ(φ(w)·ψ(c)) + η · E_{w̄∼θ_C}[log σ(−φ(w)·ψ(w̄))] where φ : V → R^k projects a term into a k-dimensional embedding space, ψ : V^m → R^k projects a set of m terms into a k-dimensional embedding space, and w̄ is a randomly sampled 'negative' context. The parameter η controls the sampling of random negative terms. These matrices are estimated over a set of contexts sampled from a large corpus and minimize the expected loss, L_c = E_{w,c∼p_c}[ℓ(w, c)] (1) where p_c is the distribution of word-context pairs in the training corpus and can be estimated from corpus statistics. While using corpus statistics may make sense absent any other information, oftentimes we know that our analysis will be topically constrained. For example, we might be analyzing the 'sports' documents in a collection. The language in this domain is more specialized and the distribution over word-context pairs is unlikely to be similar to p_c(w, c). In fact, prior work in information retrieval suggests that documents on subtopics in a collection have very different unigram distributions compared to the whole corpus [9]. Let p_t(w, c) be the probability of observing a word-context pair conditioned on the topic t. The expected loss under this distribution is [38], L_t = E_{w,c∼p_c}[(p_t(w, c) / p_c(w, c)) ℓ(w, c)] (2) In general, if our corpus consists of sufficiently diverse data (e.g. Wikipedia), the support of p_t(w, c) is much smaller than and contained in that of p_c(w, c). The loss, ℓ, of a context that occurs more frequently in the topic will be amplified by the importance weight ω = p_t(w, c)/p_c(w, c). Because topics require specialized language, this is likely to occur; at the same time, these contexts are likely to be underemphasized in training a model according to Equation 1. In order to quantify this, we took a topic from a TREC ad hoc retrieval collection (see Section 5 for details) and computed the importance weight for each term occurring in the set of on-topic documents. The histogram of weights ω is presented in Figure 1. Figure 1: Importance weights (horizontal axis: log(weight)) for terms occurring in documents related to 'argentina pegging dollar' relative to frequency in gigaword. While larger probabilities are expected since the size of a topic-constrained vocabulary is smaller, there are a non-trivial number of terms with much larger importance weights. Of course, these highly weighted terms may have a low value for p_t(w) but a very high value relative to the corpus. We can adjust the weights by considering the pointwise Kullback-Leibler divergence for each word w, D_w(p_t‖p_c) = p_t(w) log(p_t(w)/p_c(w)) (3) Words which have a much higher value of p_t(w) than p_c(w) and have a high absolute value of p_t(w) will have high pointwise KL divergence. Figure 2 shows the divergences for the top 100 most frequent terms in p_t(w). The higher ranked terms (i.e. good query expansion candidates) tend to have much higher probabilities than found in p_c(w).
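The following short sketch illustrates these quantities at the unigram level (the weights in Equation 2 are defined over word-context pairs; a unigram approximation is used here purely for illustration). The decision to simply skip terms unseen in the corpus sample, rather than smooth, is an assumption.

```python
from collections import Counter
import math

def topic_vs_corpus_weights(topic_tokens, corpus_tokens):
    """Maximum-likelihood unigram models for a topical document set and the full corpus,
    plus the importance weight p_t(w)/p_c(w) and the pointwise KL contribution
    p_t(w) * log(p_t(w)/p_c(w)) (Eq. 3) for every topical term."""
    tf_t, tf_c = Counter(topic_tokens), Counter(corpus_tokens)
    n_t, n_c = sum(tf_t.values()), sum(tf_c.values())
    stats = {}
    for w, tf in tf_t.items():
        pt = tf / n_t
        pc = tf_c.get(w, 0) / n_c
        if pc == 0:                    # unseen in the corpus sample; skip rather than smooth
            continue
        stats[w] = {"importance": pt / pc, "pointwise_kl": pt * math.log(pt / pc)}
    return stats

# Good expansion candidates are terms with large pointwise KL divergence:
#   top = sorted(stats, key=lambda w: stats[w]["pointwise_kl"], reverse=True)[:100]
```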
If the loss on those words is large, this may result in poor embeddings for the most important words for the topic. A dramatic change in distribution between the corpus and the topic has implications for performance precisely because of the objective used by word2vec (i.e. Equation 1). The KL 0.00 0.05 0.10 0.15 rank Figure 2: Pointwise Kullback-Leibler divergence for terms occurring in documents related to \u2018argentina pegging dollar\u2019 relative to frequency in gigaword. global local cutting tax squeeze de\ufb01cit reduce vote slash budget reduction reduction spend house lower bill halve plan soften spend freeze billion Figure 3: Terms similar to \u2018cut\u2019 for a word2vec model trained on a general news corpus and another trained only on documents related to \u2018gasoline tax\u2019. training emphasizes word-context pairs occurring with high frequency in the corpus. We will demonstrate that, even with heuristic downsampling of frequent terms in word2vec, these techniques result in inferior performance for speci\ufb01c topics. Thus far, we have sketched out why using the corpus distribution for a speci\ufb01c topic may result in undesirable outcomes. However, it is even unclear that pt(w|c) = pc(w|c). In fact, we suspect that pt(w|c) \u0338= pc(w|c) because of the \u2018one sense per discourse\u2019 claim [15]. We can qualitatively observe the di\ufb00erence in pc(w|c) and pt(w|c) by training two word2vec models: the \ufb01rst on the large, generic Gigaword corpus and the second on a topically-constrained subset of the gigaword. We present the most similar terms to \u2018cut\u2019 using both a global embedding and a topic-speci\ufb01c embedding in Figure 3. In this case, the topic is \u2018gasoline tax\u2019. As we can see, the \u2018tax cut\u2019 sense of \u2018cut\u2019 is emphasized in the topic-speci\ufb01c embedding. 3. LOCAL WORD EMBEDDINGS The previous section described several reasons why a global embedding may result in overgeneral word embeddings. In order to perform topic-speci\ufb01c training, we need a set of topic-speci\ufb01c documents. In information retrieval scenarios users rarely provide the system with examples of topicspeci\ufb01c documents, instead providing a small set of keywords. \fFortunately, we can use information retrieval techniques to generate a query-speci\ufb01c set of topical documents. Specifically, we adopt a language modeling approach to do so [8]. In this retrieval model, each document is represented as a maximum likelihood language model estimated from document term frequencies. Query language models are estimated similarly, using term frequency in the query. A document score then, is the Kullback-Leibler divergence between the query and document language models, D(pq\u2225pd) = X w\u2208V pq(w) log pq(w) pd(w) (4) Documents whose language models are more similar to the query language model will have a lower KL divergence score. For consistency with prior work, we will refer to this as the query likelihood score of a document. The scores in Equation 4 can be passed through a softmax function to derive a multinomial over the entire corpus [25], p(d) = exp(\u2212D(pq\u2225pd)) P d\u2032 exp(\u2212D(pq\u2225pd\u2032)) (5) Recall in Section 2 that training a word2vec model weights word-context pairs according to the corpus frequency. Our query-based multinomial, p(d), provides a weighting function capturing the documents relevant to this topic. Although an estimation of the topic-speci\ufb01c documents from a query will be imprecise (i.e. 
some nonrelevant documents will be scored highly), the language use tends to be consistent with that found in the known relevant documents. We can train a local word embedding using an arbitrary optimization method by sampling documents from p(d) instead of uniformly from the corpus. In this work, we use word2vec, although any method that operates on a sample of documents can be used. 4. QUERY EXPANSION WITH WORD EMBEDDINGS When using language models for retrieval, query expansion involves estimating an alternative to pq. Speci\ufb01cally, when each expansion term is associated with a weight, we normalize these weights to derive the expansion language model, pq+. This language model is then interpolated with the original query model, p1 q(w) = \u03bbpq(w) + (1 \u2212\u03bb)pq+(w) (6) This interpolated language model can then be used with Equation 4 to rank documents [1]. We will refer to this as the expanded query score of a document. Now we turn to using word embeddings for query expansion. Let U be an |V| \u00d7 k term embedding matrix. If q is a |V| \u00d7 1 column term vector for a query, then the expansion term weights are UUTq. We then take the top k terms, normalize their weights, and compute pq+(w). We consider the following alternatives for U. The \ufb01rst approach is to use a global model trained by sampling documents uniformly. The second approach, which we propose in this paper, is to use a local model trained by sampling documents from p(d). 5. METHODS Table 1: Corpora used for retrieval and local embedding training. docs words queries trec12 469,949 438,338 150 robust 528,155 665,128 250 web 50,220,423 90,411,624 200 news 9,875,524 2,645,367 wiki 3,225,743 4,726,862 5.1 Data To evaluate the di\ufb00erent retrieval strategies described in Section 3, we use the following datasets. Two newswire datasets, trec12 and robust, consist of the newswire documents and associated queries from TREC ad hoc retrieval evaluations. The trec12 corpus consists of Tipster disks 1 and 2; and the robust corpus consists of Tipster disks 4 and 5. Our third dataset, web, consists of the ClueWeb 2009 Category B Web corpus. For the Web corpus, we only retain documents with a Waterloo spam rank above 70.1 We present corpus statistics in Table 1. We consider several publicly available global embeddings. We use four GloVe embeddings of di\ufb00erent dimensionality trained on the union of Wikipedia and Gigaword documents.2 We use one publicly available word2vec embedding trained on Google News documents.3 We also trained a global embedding for trec12 and robust using the entire corpus. Instead of training a global embedding on the large web collection, we use a GloVe embedding trained on Common Crawl data.4 We train local embeddings with word2vec using one of three retrieval sources. First, we consider documents retrieved from the target corpus of the query (i.e. trec12, robust, or web). We also consider training a local embedding by performing a retrieval on large auxiliary corpora. We use the Gigaword corpus as a large auxiliary news corpus. We hypothesize that retrieving from a larger news corpus will provide substantially more local training data than a target retrieval. We also use a Wikipedia snapshot from December 2014. We hypothesize that retrieving from a large, high \ufb01delity corpus will provide cleaner language than that found in lower \ufb01delity target domains such as the web. Table 1 shows the relative magnitude of these auxiliary corpora compared to the target corpora. 
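Before turning to the experimental setup, a small sketch ties the pieces above together: the retrieval multinomial of Equation 5, sampling topical documents from it to train a local embedding, and the embedding-based expansion and interpolation of Section 4. This is an illustrative Python sketch, not the released implementation; the clipping of negative expansion weights and the gensim call in the final comment (with parameters echoing Section 5.3) are assumptions.

```python
import numpy as np

def retrieval_multinomial(kl_scores):
    """Eq. 5: map query-likelihood scores D(p_q || p_d) to a multinomial p(d)."""
    logits = -np.asarray(kl_scores, dtype=float)
    logits -= logits.max()                      # numerical stability
    p = np.exp(logits)
    return p / p.sum()

def expansion_language_model(U, q_vec, k=20):
    """Section 4: expansion term weights U U^T q, keeping the top-k terms."""
    weights = U @ (U.T @ q_vec)                 # |V|-dimensional expansion scores
    top = np.argsort(-weights)[:k]
    kept = np.clip(weights[top], 0.0, None)     # assumption: drop negative similarities
    p_plus = np.zeros_like(weights)
    p_plus[top] = kept / kept.sum()
    return p_plus

def interpolated_query_model(p_q, p_plus, lam=0.5):
    """Eq. 6: interpolate the original query model with the expansion model."""
    return lam * p_q + (1.0 - lam) * p_plus

# Local embedding: sample documents in proportion to p(d) and train on them, e.g.
#   p = retrieval_multinomial(scores)
#   idx = np.random.default_rng(0).choice(len(corpus), size=1000, replace=True, p=p)
#   model = gensim.models.Word2Vec([corpus[i] for i in idx], vector_size=400, sg=0, epochs=80)
```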
All corpora in Table 1 were stopped using the SMART stopword list5 and stemmed using the Krovetz algorithm [23]. We used the Indri implementation for indexing and retrieval.6 5.2 Evaluation We consider several standard retrieval evaluation metrics, including NDCG@10 and interpolated precision at standard recall points [22, 45]. NDCG@10 provides insight into performance speci\ufb01cally at higher ranks. An interpolated precision recall graph describes system performance throughout 1https://plg.uwaterloo.ca/\u02dcgvcormac/clueweb09spam/ 2http://nlp.stanford.edu/data/glove.6B.zip 3https://code.google.com/archive/p/word2vec/ 4http://nlp.stanford.edu/data/glove.840B.300d.zip 5http://jmlr.csail.mit.edu/papers/volume5/lewis04a/ a11-smart-stop-list/english.stop 6http://www.lemurproject.org/indri/ \fthe entire ranked list. 5.3 Training All retrieval experiments were conducted by performing 10-fold cross-validation across queries. Speci\ufb01cally, we crossvalidate the number of expansion terms, k \u2208[5 \u2212500], and interpolation weight, \u03bb \u2208[0, 1]. For local word2vec training, we cross-validate the learning rate \u03b1 \u2208{0.1, 0.01, 0.001}. All word2vec training used the publicly available word2vec cbow implementation.7 When training the local models, we sampled 1000 documents from p(d) with replacement. To compensate for the much smaller corpus size, we ran word2vec training for 80 iterations. Local word2vec models use a \ufb01xed embedding dimension of 400 although other choices did not signi\ufb01cantly a\ufb00ect our results. Unless otherwise noted, default parameter settings were used. In our experiments, expanded queries rescore the top 1000 documents from an initial query likelihood retrieval. Previous results have demonstrated that this approach results in performance nearly identical with an expanded retrieval at a much lower cost [11]. Because publicly available embeddings may have tokenization inconsistent with our target corpora, we restricted the vocabulary of candidate expansion terms to those occurring in the initial retrieval. If a candidate term was not found in the vocabulary of the embedding matrix, we searched for the candidate in a stemmed version of the embedding vocabulary. In the event that the candidate term was still not found after this process, we removed it from consideration. 6. RESULTS We present results for retrieval experiments in Table 2. We \ufb01nd that embedding-based query expansion outperforms our query likelihood baseline across all conditions. When using the global embedding, the news corpora bene\ufb01t from the various embeddings in di\ufb00erent situations. Interestingly, for trec12, using an embedding trained on the target corpus signi\ufb01cantly outperforms all other global embeddings, despite using substantially less data to estimate the model. While this performance may be due to the embedding having a tokenization consistent with the target corpus, it may also come from the fact that the corpus is more representative of the target documents than other embeddings which rely on online news or are mixed with non-news content. To some extent this supports our desire to move training closer to the target distribution. Across all conditions, local embeddings signi\ufb01cantly outperform global embeddings for query expansion. For our two news collections, estimating the local model using a retrieval from the larger Gigaword corpus led to substantial improvements. 
This e\ufb00ect is almost certainly due to the Gigaword corpus being similar in writing style to the target corpus but, at the same time, providing signi\ufb01cantly more relevant content [12]. As a result, the local embedding is trained using a larger variety of topical material than if it were to use a retrieval from the smaller target corpus. An embedding trained with a retrieval from Wikipedia tended to perform worse most likely because the language is dissimilar from news content. Our web collection, on the other hand, bene\ufb01tted more from embeddings trained using retrievals from the general Wikipedia corpus. The Gigaword corpus was less 7https://code.google.com/p/word2vec/ Table 3: Kendall\u2019s \u03c4 and Spearman\u2019s \u03c1 between improvement in NDCG@10 and local KL divergence with the corpus language model. The improvement is measured for the best local embedding over the best global embedding. \u03c4 \u03c1 trec12 0.0585 0.0798 robust 0.0545 0.0792 web 0.0204 0.0283 useful here because news-style language is almost certainly not representative of general web documents. Figure 4 presents interpolated precision-recall curves comparing the baseline, the best global query expansion method, and the best local query expansion method. Interestingly, although global methods achieve strong performance for NDCG@10, these improvements over the baseline are not re\ufb02ected in our precision-recall curves. Local methods, on the other hand, almost always strictly dominate both the baseline and global expansion across all recall levels. The results support the hypothesis that local embeddings provide better similarity measures than global embeddings for query expansion. In order to understand why, we \ufb01rst compare the performance di\ufb00erences between local and global embeddings. Figure 2 suggests that we should adopt a local embedding when the local unigram language model deviates from the corpus language model. To test this, we computed the KL divergence between the local unigram distribution, P d p(w|d)p(d), and the corpus unigram language model [9]. We hypothesize that, when this value is high, the topic language is di\ufb00erent from the corpus language and the global embedding will be inferior to the local embedding. We tested the rank correlation between this KL divergence and the relative performance of the local embedding with respect to the global embedding. These correlations are presented in Table 3. Unfortunately, we \ufb01nd that the correlation is low, although it is positive across collections. We can also qualitatively analyze the di\ufb00erences in the behavior of the embeddings. If we have access to the set of documents labeled relevant to a query, then we can compute the frequency of terms in this set and consider those terms with high frequency (after stopping and stemming) to be good query expansion candidates. We can then visualize where these terms lie in the global and local embeddings. In Figure 5, we present a two-dimensional projection [44] of terms for the query \u2018ocean remote sensing\u2019, with those good candidates highlighted. Our projection includes the top 50 candidates by frequency and a sample of terms occurring in the query likelihood retrieval. We notice that, in the global embedding, the good candidates are spread out amongst poorer candidates. By contrast, the local embedding clusters the candidates in general but also situates them closely around the query. 
As a result, we suspect that the similar terms extracted from the local embedding are more likely to include these good candidates. 7. DISCUSSION The success of local embeddings on this task should alarm natural language processing researchers using global embeddings as a representational tool. For one, the approach of learning from vast amounts of data is only e\ufb00ective if the \fTable 2: Retrieval results comparing query expansion based on various global and local embeddings. Bolded numbers indicate the best expansion in that class of embeddings. Wilcoxon signed rank test between bolded numbers indicates statistically signi\ufb01cant improvements (p < 0.05) for all collections. global local wiki+giga gnews target target giga wiki QL 50 100 200 300 300 400 400 400 400 trec12 0.514 0.518 0.518 0.530 0.531 0.530 0.545 0.535 0.563* 0.523 robust 0.467 0.470 0.463 0.469 0.468 0.472 0.465 0.475 0.517* 0.476 web 0.216 0.227 0.229 0.230 0.232 0.218 0.216 0.234 0.236 0.258* 0.0 0.2 0.4 0.6 0.8 1.0 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 trec12 recall precision QL global local 0.0 0.2 0.4 0.6 0.8 1.0 0.0 0.2 0.4 0.6 0.8 robust recall precision QL global local 0.0 0.2 0.4 0.6 0.8 1.0 0.0 0.1 0.2 0.3 0.4 0.5 0.6 web recall precision QL global local Figure 4: Interpolated precision-recall curves for query likelihood, the best global embedding, and the best local embedding from Table 2. data is appropriate for the task at hand. And, when provided, much smaller high-quality data can provide much better performance. Beyond this, our results suggest that the approach of estimating global representations, while computationally convenient, may overlook insights possible at query time, or evaluation time in general. A similar local embedding approach can be adopted for any natural language processing task where topical locality is expected and can be estimated. Although we used a query to re-weight the corpus in our experiments, we could just as easily use alternative contextual information (e.g. a sentence, paragraph, or document) in other tasks. Despite these strong results, we believe that there are still some open questions in this work. First, although local embeddings provide e\ufb00ectiveness gains, they can be quite ine\ufb03cient compared to global embeddings. We believe that there is opportunity to improve the e\ufb03ciency by considering o\ufb04ine computation of local embeddings at a coarser level than queries but more specialized than the corpus. If the retrieval algorithm is able to select the appropriate embedding at query time, we can avoid training the local embedding. Second, although our supporting experiments (Table 3, Figure 5) add some insight into our intuition, the results are not strong enough to provide a solid explanation. Further theoretical and empirical analysis is necessary. 8. RELATED WORK Topical adaptation of models. The shortcomings of learning a single global vector representation, especially for polysemic words, have been pointed out before [36]. The problem can be addressed by training a global model with multiple vector embeddings per word [35, 19] or topic-speci\ufb01c embeddings [27]. The number of senses for each word may be \ufb01xed [32], or determined using class labels [43]. However, to the best of our knowledge, this is the \ufb01rst time that training topic-speci\ufb01c word embeddings has been explored. Several methods exist in the language modeling community for topic-dependent adaptation of language models [6]. 
These can lead to performance improvements in tasks such as machine translation [51] and speech recognition [31]. Topicspeci\ufb01c data may be gathered in advance, by identifying corpus of topic-speci\ufb01c documents. It may also be gathered during the discourse, using multiple hypotheses from N-best lists as a source of topic-speci\ufb01c language. Then a topic-speci\ufb01c language model is trained (or the global model is adapted) online using the topic-speci\ufb01c training data. A topic-dependent model may be combined with the global model using linear interpolation [21] or other more sophisticated approaches [14, 24]. Similarly to the adaptation work, we use topic-speci\ufb01c documents to train a topicspeci\ufb01c model. In our case the documents come from a \ufb01rst round of retrieval for the user\u2019s current query, and the word embedding model is trained based on sentences from the topic-speci\ufb01c document set. Unlike the past work, we do not focus on interpolating the local and global models, although this is a promising area for future work. In the current study we focus on a direct comparison between the local-only and global-only approach, for improving retrieval performance. Word embeddings for IR. Information Retrieval has a long history of learning representations of words that are lowdimensional dense vectors. These approaches can be broadly classi\ufb01ed into two families based on whether they are learnt \fglobal local Figure 5: Global versus local embedding of highly relevant terms. Each point represents a candidate expansion term. Red points have high frequency in the relevant set of documents. White points have low or no frequency in the relevant set of documents. The blue point represents the query. Contours indicate distance from the query. based on a term-document matrix or term co-occurence data. Using the term-document matrix for embedding leads to several well-studied approaches such as LSA [10], PLSA [18], and LDA [7, 46]. The performance of these models varies depending on the task, for example they are known to perform poorly for retrieval tasks unless combined with lexical features [3]. Term-cooccurence based embeddings, such as word2vec [29, 28] and [34], have recently been remarkably popular for many natural language processing and logical reasoning tasks. However, there are relatively less known successful applications of these models in IR. Ganguly et. al. [16] used the word similarity in the word2vec embedding space as a way to estimate term transformation probabilities in a language modelling setting for retrieval. More recently, Nalisnick et. al. [30] proposed to model document aboutness by computing the similarity between all pairs of query and document terms using dual embedding spaces. Both these approaches estimate the semantic relatedness between two terms as the cosine distance between them in the embedding space(s). We adopt a similar notion of term relatedness but focus on demonstrating improved retrieval performance using locally trained embeddings. Local latent semantic analysis. Despite the mathematical appeal of latent semantic analysis, several experiments suggest that its empirical performance may be no better than that of ranking using standard term vectors [10, 13, 4]. In order to address the coarseness of corpus-level latent semantic analysis, Hull proposed restricting analysis to the documents relevant to a query [20]. 
This approach signi\ufb01cantly improved over corpus-level analysis for routing tasks, a result that has been reproduced in consequent research [37, 39]. Our work can be seen as an extension of these results to more recent techniques such as word2vec. 9." + }, + { + "url": "http://arxiv.org/abs/1507.03928v1", + "title": "Pseudo-Query Reformulation", + "abstract": "Automatic query reformulation refers to rewriting a user's original query in\norder to improve the ranking of retrieval results compared to the original\nquery. We present a general framework for automatic query reformulation based\non discrete optimization. Our approach, referred to as pseudo-query\nreformulation, treats automatic query reformulation as a search problem over\nthe graph of unweighted queries linked by minimal transformations (e.g. term\nadditions, deletions). This framework allows us to test existing performance\nprediction methods as heuristics for the graph search process. We demonstrate\nthe effectiveness of the approach on several publicly available datasets.", + "authors": "Fernando Diaz", + "published": "2015-07-14", + "updated": "2015-07-14", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "main_content": "INTRODUCTION Most information retrieval systems operate by performing a single retrieval in response to a query. E\ufb00ective results sometimes require several manual reformulations by the user [6, 25, 22] or semi-automatic reformulations assisted by the system [21, 36, 23]. Although the reformulation process can be important to the user (e.g. in order to gain perspective about the domain of interest), the process can also lead to frustration and abandonment [14]. In many ways, the core information retrieval problem is to improve the initial ranking and user satisfaction and, as a result, reduce the need for reformulations, manual or semiautomatic. While there have been several advances in learning to rank given a \ufb01xed query representation [29], there has been somewhat less attention, from a formal modeling perspective, given to automatically reformulating the query before presenting the user with the retrieval results. One notable exception is pseudo-relevance feedback (PRF), the technique of using terms found in the top retrieved documents to conduct a second retrieval [1, 9]. PRF is known to be a very strong baseline. However, it incurs a very high computational cost because it issues a second, much longer query for retrieval. In this paper, we present an approach to automatic query reformulation which combines the iterated nature of human query reformulation with the automatic behavior of PRF. We refer to this process as pseudo-query reformulation (PQR). Figure 1 graphically illustrates the intuition behind PQR. In this \ufb01gure, each query and its retrieved results are depicted as nodes in a graph. An edge exists between two nodes, qi and qj, if there is a simple reformulation from qi to qj; for example, a single term addition or deletion. This simulates the incremental query modi\ufb01cations a user might conduct during a session. The results in this \ufb01gure are colq0 q1 q2 q3 q4 q5 q6 q* Figure 1: Query reformulation as graph search. Nodes represent queries and associated retrieved results. Relevant documents are highlighted in red. Edges exist between nodes whose queries are simple reformulations of each other. The goal of pseudo-query reformulation is to, given a seed query q0 by a user, automatically navigate to a better query. ored so that red documents re\ufb02ect relevance. 
If we assume that a user is following a good reformulation policy, then, starting at q0, she will select reformulations (nodes) which incrementally increase the number of relevant documents. This is depicted as the path of shaded nodes in our graph. We conjecture that a user navigates from qi to qj by using insights from the retrieval results of qi (e.g. qj includes a highly discriminative term in the results for qi) or by incorporating some prior knowledge (e.g. qj includes a highly discriminative term in general). PQR is an algorithm which behaves in the same way: issuing a query, observing the results, inspecting possible reformulations, selecting a reformulation likely to be e\ufb00ective, and then iterating. Several properties make PQR attractive. First, PQR directly optimizes performance for short, unweighted keyword interaction. This is important for scenarios where a searcher, human or arti\ufb01cial, is constrained by an API such as those found in many search services provided by general web search engines or social media sites. This constraint prevents the use of massive query expansion techniques such as PRF. Even if very long queries were supported, most modern systems are optimized (in terms of e\ufb03ciency and e\ufb00ectiveness) for short queries, hurting the performance of massive query expansion. Second, our experiments demonstrate that PQR signi\ufb01cantly outperforms several baselines, including PRF. Finally, PQR provides a framework in which to evaluate performance prediction methods in a grounded retrieval task. arXiv:1507.03928v1 [cs.IR] 14 Jul 2015 \f2. RELATED WORK Pseudo-query reformulation draws together three areas of information retrieval: pseudo-relevance feedback, iterative query rewriting, and performance prediction. Previous research has combined elements of these, but not in the way described in our work. Kurland et al. present several heuristics for iteratively re\ufb01ning a language model query by navigating document clusters in a retrieval system [24]. The technique leverages specialized data structures storing document clusters derived from large scale corpus analysis. While related, the solution proposed by these authors violates assumptions in our problem de\ufb01nition. First, their solution assumes weighted language model style queries not supported by backends in our scenario. Second, their solution assumes access to the entire corpus as opposed to a search API. Using performance predictors in order to improve ranking has also been studied previously, although in a di\ufb00erent context. Sheldon et al. demonstrate how to use performance predictors in order to better merge result lists from pairs of reformulated queries [40]. This is, in spirit, quite close to our work and is a special case of PQR which considers only two candidate queries and a single iteration instead of hundreds of candidates over several iterations. In the context of learning to rank, performance predictors have been incorporated as ranking signals and been found to be useful [32]. From the perspective of query weighting, Lv and Zhai explored using performance predictors in order to set the optimal interpolation weight in pseudo-relevance feedback [31]. Similarly Xue and Croft have demonstrated how to use performance predictors in order to improve concept weighting in an inference network model[44, 43]. Again, while similar to our work in the use of performance predictors for query reformulation, we focus on the discrete, iterated representation. 
The work of Xue and Croft focuses on a single iteration and a weighted representation. More generally, there has been some interest in detecting the importance of query terms in a long queries or in expanded queries [3, 7, 27, 46, 2]. Representing related queries as graphs has been studied extensively. Early work by Mooers proposed treating the entire space of unweighted queries (i.e. length |V| boolean vectors) as a lattice [35]. In the context of web search, Boldi et al. studied within-session query reformulations as a graph [5]. Other work, such as spreading activation and inference networks as well as term-only graphs are less related although they use a similar formalism. 3. MOTIVATION As mentioned earlier, users often reformulate an initial query in response to the system\u2019s ranking [6, 25, 41, 22]. Reformulation actions include adding, deleting, and substituting query words, amongst other transformations. There is evidence that manual reformulation can improve the quality of a ranked list for a given information need [22, Table 5]. However, previous research has demonstrated that humans are not as e\ufb00ective as automatic methods in this task [15, 33, 39]. In order to estimate an upper bound on the potential improvement from reformulation, we propose a simulation of an optimal user\u2019s reformulation behavior. Our simulator is based on query-document relevance judgments, referred to as qrels. Previous research has used similar techniques to examine the optimality of human reformulation behavior [15, 33, 39]. In this section, we revisit these results with contemporary test collections and retrieval methods. Unlike this prior work, though, we are not interested in determining the human (in)ability to achieve optimal performance but in gauging the upper bound for PQR. We sketch our query reformulation simulator in Figure 2. The simulator is inspired by a model of optimal human search behavior and should not be considered model of any real user. Our recursive search algorithm uses as input: a reference query q (e.g. a TREC \u2018title\u2019 query), a set of qrels, r for q, a current depth, d, and a maximum depth, dmax. The process can be considered a depth-limited graph search by a oracle on the space of queries depicted in Figure 1. The simulated search begins by generating a set of candidate reformulations, Qq, from an initial query, q. The next step in our simulation selects the best reformulation from this set of candidates. We assume that the oracle can measure the performance \u00b5 of the set of candidate reformulations by running each query against the retrieval system and compute a metric such as NDCG with r. After selecting this query, we rerun the process on the best reformulation, q\u2217. Our search terminates after it reaches a speci\ufb01ed depth, dmax. We introduce dmax in order to limit computation and resource usage. Before describing this experiment and results in more detail, we want to make the assumptions of our model clear. First, the e\ufb00ectiveness of the query found by this simulation is constrained by the query representation. For example, if our query is an unweighted term vector, then, even if we could exhaustively evaluate all 2|V| possible queries, we may not \ufb01nd a query achieving the upper bound of the metric (i.e. 1 for most information retrieval metrics). Therefore, we refer to the representational upper bound as the best performance possible using a \ufb01xed query representation. 
The upper bound found by this simulation is also constrained by the fact that we are performing a local search. As such, we assume that a better query is reachable from q0 through a series of query reformulations. We do not want to claim that the representational upper bound is reachable or even that a very good query is reachable, only that a better query than q0 is reachable. Fortunately, the previously cited work in human and automatic query reformulation supports this claim. More subtly, we assume that these 'better queries' are reachable through a series of reformulations with increasing performance. If the better queries are reachable but cannot be navigated to by progressively getting better results, then we will not be able to attain better performance using relevance information. Unfortunately, this assumption has less justification and we must take it as is. Note that this assumption does not claim that all reformulations Qq0 are better than q0; only that there exists a better query that is 'closer' to even better queries. Because of these added constraints, we refer to the outcome of this process as the search-restricted representational upper bound. For a random sample of 50 judged training queries, we ran the simulator described in Figure 2 using the following methods. The set of candidates consists of all one-word deletions and 10 one-word additions taken from the 10 most frequent words in the retrieval results for q. We considered two implementations of ScoreQueries: oracle prediction and random prediction. Oracle prediction computes NDCG@30. We select this high-precision measure for two reasons. First, our simulation needs to operate quickly and retrieving shorter lists is much more efficient. Second, NDCG is superior at distinguishing high-precision runs compared to other measures such as mean average precision [37]. Random prediction scores reformulation candidates using a random scalar in the unit range. Starting at q0, we search up to a depth of four. Further details of our corpora and queries can be found in Section 7.

QRSim(q, d, dmax, r)   ⊲ q: current query; d: current depth; dmax: maximum depth; r: relevance judgments
  if d = dmax then
    return q
  Qq ← GenerateCandidateReformulations(q)
  µ ← ScoreQueries(Qq, r)
  q* ← argmax_{qi ∈ Qq} µ_{qi}
  if q* = q then
    return q
  else
    return QRSim(q*, d + 1, dmax, r)

Figure 2: Reformulation simulator. Given a query q and query-document relevance judgments r, this algorithm will perform gradient ascent on query performance, µ, over the space of query reformulations, Q. The oracle policy uses r to compute true reformulation performance in ScoreQueries. The random policy uses a random number generator for this function.

Table 1: NDCG@30 for random (random) and optimal (PQR*) pseudo-query reformulation compared to query likelihood (QL) and relevance model (RM3). Datasets are described in Section 7.1.
         trec12   robust   web
QL       0.4011   0.4260   0.1628
RM3      0.4578   0.4312   0.1732
random   0.3162   0.2765   0.0756
PQR*     0.6482   0.6214   0.3053

The results of these experiments (Table 1) demonstrate the range of performance for PQR. Our oracle simulator performs quite well, even given the limited depth of our search. Performance is substantially better than the baseline, with relative improvements greater than those in published literature. To some extent this should be expected since the oracle can leverage relevance information.
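For concreteness, the simulation loop of Figure 2 can be rendered in a few lines of Python. This is a minimal sketch rather than the authors' code: the retrieve, candidates, and ndcg_at_k helpers are stand-ins for the retrieval system, the candidate generator described above, and an NDCG@30 implementation, and keeping the current query in the scored pool is one interpretation of the q* = q stopping test.

```python
import random

def qr_sim(q, qrels, retrieve, candidates, score, depth=0, max_depth=4):
    """Depth-limited greedy reformulation search (Figure 2)."""
    if depth == max_depth:
        return q
    pool = candidates(q, retrieve(q)) | {q}              # candidate reformulations plus q itself
    mu = {qi: score(qi, retrieve(qi), qrels) for qi in pool}
    best = max(mu, key=mu.get)
    if best == q:                                        # no candidate improves on q
        return q
    return qr_sim(best, qrels, retrieve, candidates, score, depth + 1, max_depth)

def oracle_score(q, results, qrels):
    return ndcg_at_k(results, qrels, k=30)               # stand-in NDCG@30 helper

def random_score(q, results, qrels):
    return random.random()                               # the random policy ignores the results
```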
Surprisingly, though, the algorithm is able to achieve this performance increase by adding and removing a small set of up to four terms. The poor performance of the random policy suggests that oracle is not just using the terms selected by the initial retrieval to get its boost in performance. Keeping this search-restricted representational upper bound in mind, we would like to develop algorithms that can approximate the behavior of our optimal policy without having access to any qrels or an oracle. The closer our automatic reformulation is to oracle, the better our performance. 4. PROBLEM DEFINITION Let Q be the entire set of queries submittable to a retrieval system. In the case of unweighted keyword queries, this is all boolean vectors of dimension |V|. For each query q, we de\ufb01ne a set of reformulation candidates, Qq, consisting of all queries reachable by a single term addition or deletion. For example, the reformulation candidate set for the query [hello world] would include [hello], [world], [hello world program], [hello world song], amongst the O(|V|) other queries resulting from a single term addition. Our problem can be stated as follows: given an initial query, q0, and access to the candidate generation function, \ufb01nd a query q+ that performs better than q0. Performance here is measured by submitting a query to a \ufb01xed retrieval system and evaluating results according to a \ufb01xed metric (e.g. NDCG@30). As mentioned earlier, this can be considered a graph search problem where queries are nodes and edges exist between q and Qq. Importantly, our algorithm has access to the unweighted keyword retrieval system in order to generate features, but it never has access to any true relevance information or performance metric. Such retrieval services can be found in search APIs such as those provided by major search engines, social media sites, and distributed information retrieval services. 5. ALGORITHMS Conceptually, PQR follows the framework of the simulator from Figure 2. That is, the algorithm recursively performs candidate generation and candidate scoring within each recursion. In this section, we will describe candidate set generation (Section 5.1) and candidate scoring (Section 5.2) along with the graph search algorithm (Section 5.3). 5.1 Generating Candidates Our entire search space can be represented by a very large lattice of queries. Even if we were performing local graph search, the O(|V|) edges incident to any one node would make a single iteration computationally intractable. As a result, we need a method for pruning the full set of reformulation candidates to a smaller set that we can analyze in more detail. Fortunately, in many cases, we can establish heuristics so that we only consider those reformulations likely to improve performance. For example, reformulating the query [Master theorem] into [Master theorem yak] seems unlikely to improve performance if we believe yak is unlikely to occur in documents relevant to [Master theorem]. In our case, given qt, we consider the following candidates, a) all single term deletions from qt, and b) all single term additions from the n terms with the highest probability of occurring in relevant documents. Since we do not have access to the relevant documents at runtime, we approximate this distribution using the terms occurring in the retrieval for qt. Speci\ufb01cally, we select the top n terms in the relevance model, \u03b8Rt, associated with qt [26]. 
The relevance model is the retrieval score-weighted linear interpolation of retrieved document languages models. We adopt this approach for its computational ease and demonstrated e\ufb00ectiveness in pseudo-relevance feedback. 5.2 Scoring Candidates The candidate generation process described in Section 5.1 provides a crude method for pruning the search space. Based \fon our observations with the random and oracle policies in Section 3, we know that inaccurately scoring reformulation candidates can signi\ufb01cantly degrade the performance of a scoring algorithm. In this section, we model the oracle using established performance prediction signals. 5.2.1 Performance Prediction Signals Performance prediction refers to the task of ordering a set of queries without relevance information so that the better performing queries are ordered above worse performing queries. With some exception, the majority of work in this area has focused on ranking queries coming from di\ufb00erent information needs (i.e. one query per information need). We are interested in the slightly di\ufb00erent task of ranking many queries for a single information need. Despite the di\ufb00erence in problem setting, we believe that, with some modi\ufb01cations discussed in Section 5.2.2, performance predictors can help model the oracle or, more accurately, the true performance of the reformulation. A complete treatment of related work is beyond the scope of this paper but details of approaches can be found in published surveys (e.g. [16]). The set of performance predictors we consider can be broken into three sets: query signals, result set signals, and drift signals. Throughout this section, we will be describing signals associated with a candidate query q. Query signals refer to properties of the terms in q alone. These signals are commonly referred to as \u2018pre-retrieval\u2019 signals since they can be computed without performing a costly retrieval. Previous research has demonstrated that queries including non-discriminative terms may retrieve nonrelevant results. The inverse document frequency is one way to measure the discrimination ability of a term and has been used in previous performance prediction work [18]. Over all query terms in q, we consider the mean, maximum, and minimum IDF values. In addition to IDF, we use similarlymotivated signals such as Simpli\ufb01ed Clarity (SC) and Query Scope (QS) [19]. Result set signals measure the quality of the documents retrieved by the query. These signals are commonly referred to as \u2018post-retrieval\u2019 signals. These features include the well-known Query Clarity (QC) measure, de\ufb01ned as the Kullback-Leibler divergence between the language model estimated from the retrieval results, \u03b8Rt, and the corpus language model, \u03b8C [10]. In our work, we use B(Rt, \u03b8C), the Bhattacharyya correlation between the corpus language model and the query language model [4], de\ufb01ned as B(\u03b8i, \u03b8j) = X w\u2208V p p(w|\u03b8i) \u00d7 p(w|\u03b8j) (1) This measure is in the unit interval and with low values for dissimilar pairs of language models and high values for similar pairs of language models. The Bhattacharyya correlation has been used e\ufb00ectively other other retrieval tasks [12]. We use the Bhattacharyya correlation between these two distributions instead of the Kullback-Leibler divergence because the measure is bounded and, as a result, does not need to be rescaled across queries. 
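Since the same Bhattacharyya correlation is reused for this clarity-style signal, for the score autocorrelation below, and for the drift signals, a small helper makes the computation explicit. Language models are represented as sparse term-to-probability dictionaries; this is an illustrative sketch, not the authors' feature extractor.

```python
import math

def bhattacharyya(theta_i, theta_j):
    """B(theta_i, theta_j) = sum_w sqrt(p(w|theta_i) * p(w|theta_j)); bounded in [0, 1]."""
    shared = set(theta_i) & set(theta_j)   # terms absent from either model contribute zero
    return sum(math.sqrt(theta_i[w] * theta_j[w]) for w in shared)

def clarity_signal(theta_results, theta_corpus):
    """Bounded clarity-style signal: similarity of the result-set language model to the
    corpus language model (a low value suggests a focused, topic-specific result set)."""
    return bhattacharyya(theta_results, theta_corpus)
```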
We also use the score autocorrelation (SA), a measure of the consistency of scores of semantically related documents [11]. In our implementation, we again use the Bhattacharyya correlation to measure the similarity between all pairs of documents in Rt, as represented by their maximum likelihood language models. Drift signals compare the current query qt with its parent q0 qt-1 qt \u2022\u2022\u2022 (a) Initial Query q0 qt-1 qt \u2022\u2022\u2022 (b) Parent Query Figure 3: Drift signal classes. Signals for qt include comparisons with reference queries qt\u22121 and q0 to prevent query drift. qt\u22121 and the initial query q0 (Figure 3). These signals can serve to anchor our prediction and avoid query drift, situations where a reformulation candidate appears to be high quality but is topically very di\ufb00erent from the desired information need. One way to measure drift is to compute the di\ufb00erence in the query signals for these pairs. Speci\ufb01cally, we measure the aggregate IDF, SC, and QS values of the deleted, preserved, and introduced keywords. We also generate two signals comparing the results sets of these pairs of queries. The \ufb01rst measures the similarity of the ordering of retrieved documents. In order to do this, we compute the \u03c4-AP between the rankings [45]. The \u03c4-AP computes a position-sensitive version of Kendall\u2019s \u03c4 suitable for information retrieval tasks. The ranking of results for a reformulation candidate with a very high \u03c4-AP will be indistinguishable from those of the reference query; the ranking of results for a reformulation candidate with a very low \u03c4-AP will be quite di\ufb00erent from the reference query. Our second result set signal measures drift by inspecting the result set language models. Speci\ufb01cally, it computes B(\u03b8Rt\u22121, \u03b8Rt), the Bhattacharyya correlation between the result sets. 5.2.2 Performance Prediction Model With some exception, the majority of performance prediction work has studied predictors independently, without looking at a combinations of signals. Several approaches to combine predictors focus on regressing against the the absolute performance for a set of training queries [13, 17]. This is appropriate when the task is to rank queries from di\ufb00erent information needs but it may not be when the task is to predict the performance for reformulation candidates related to the same information need. In order to demonstrate the problem with regressing against the uncalibrated performance metric for all queries, it is worth inspecting the training data for such an algorithm. In Figure 4a, we overlay the distributions of performance metric values for 28 information needs. Each distribution is a kernel density estimate based on the performance metric values observed when following the graph search algorithm in Section 3. The \ufb01gure shows that the relative importance of a reformulation candidate depends strongly on the information need. Di\ufb00erent information needs\u2013as represented by di\ufb00erent initial queries\u2013have di\ufb00erent mean performance values and, at times, variances. In fact, the diversity of performance ranges varies dramatically based on the information need, its representation in the corpus, and its complexity; \f0.0 0.2 0.4 0.6 0.8 1.0 \u00b5i (a) uncentered scores -1.0 -0.5 0.0 0.5 1.0 \u00b5i \u2212\u00b50 (b) centered scores Figure 4: Distribution of NDCG@30 values for queries visited by the oracle policy for 28 training information needs. 
Note that the data for the \ufb01rst plot comes from the oracle policy while the data for the second plot comes from a pseudo-query reformulation policy. a good value for one information need may be terrible for another. Consider the situation where we need to rank a set of reformulation candidates. The actual value of the metric is less important than the relative value. One way to address the poorly-calibrated values is to center all performance metric values by subtracting the value of the original query. The result, a distribution over the relative improvements over q0, is presented in Figure 4b. This transform is reasonable for our task since it simpli\ufb01es the regression problem to one of predicting a relative improvement over the baseline as opposed to wasting modeling e\ufb00ort on predicting the absolute performance metric value. In addition, if the model is accurate, it could provide a convenient method for pruning large areas of the search space predicted to be inferior to q0. Inspecting Figure 4b, though, also suggests why a regression against relative performance which minimizes the mean squared error may be undesirable. The distribution is very peaked around the center and a model will be penalized for poor predictions of reformulation candidates with little or no impact on performance. In the worst case, the model will predict values close to zero for all reformulation candidates. Although binning or other techniques can be used to address this situation, we can address this unbalance by simplifying our problem further. Recall that we really only need a relative ordering of reformulation candidates. Therefore, we treat this as an ordinal regression problem. That is, we estimate a model which learns the correct ordering of reformulation candidates for a given information need. In practice, we train this model using true performance values of -1.0 -0.5 0.0 0.5 1.0 \u00b5i \u2212\u00b50 \u03c0\u2217 \u03c0 (a) oracle -1.0 -0.5 0.0 0.5 1.0 \u00b5i \u2212\u00b50 \u03c0random \u03c0 (b) random Figure 5: Score distribution for di\ufb00erent data-gathering policies. The shaded area re\ufb02ect the distribution with respect to the exploration policy. The dashed line re\ufb02ects the distribution with respect to an example solution. The black area re\ufb02ects the over-representation by the exploration policy. candidates encountered throughout a search process started at q0; running this process over a number of training q0\u2019s results in a large set of training candidates. Precisely how this training set is collected will be described in the next section. Even though we are interested in \ufb01nding high-performing queries, we will not be biasing our pairwise loss toward the top of the ranked list of candidate queries. This is because our search algorithm is iterative and observes batches of reformulation candidates at a time, perhaps including highly performing queries, but often not. We need a model which is accurate for all reformulation candidates, not just the top performing ones. We are agnostic about the precise functional form of our model and opt for a linear ranking support vector machine [28] due to its training and evaluation speed, something we found necessary when conducting experiments at scale. 5.3 Searching Candidates Considering the reformulation graph in Figure 1, the previous two sections explained how to represent the edges (candidate generation) and predict the value of nodes (candidate scoring). 
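Before turning to the search procedure, a minimal sketch of the centering and pairwise (ordinal) reduction described above may be helpful. Here candidates is a hypothetical list of (feature_vector, metric_value) pairs gathered for one information need, and baseline_value is the metric value of the initial query q0; the resulting difference vectors can be fed to a linear ranking SVM.

import numpy as np

def pairwise_instances(candidates, baseline_value):
    # Center each candidate's metric on q0, then emit preference pairs;
    # a linear ranking SVM can be trained on the difference vectors.
    centered = [(np.asarray(x), m - baseline_value) for x, m in candidates]
    pairs = []
    for x_a, y_a in centered:
        for x_b, y_b in centered:
            if y_a > y_b:
                pairs.append((x_a - x_b, +1))
                pairs.append((x_b - x_a, -1))
    return pairs

Only the relative ordering of candidates within a single information need matters here, which is what the centering and the pairwise reduction encode.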
We still need to describe a process for searching for queries starting from q0. We approach this process as a heuristic search problem, using the predicted performance as our heuristic. Unfortunately, algorithms such as A\u2217cannot be reliably used because our heuristic is not admissible. Similarly, the noise involved in our performance prediction causes greedy algorithms such as beam search or best \ufb01rst search to su\ufb00er from local maxima. \fQuerySearch(q, d, b, dmax, m) q \u0003 current query d \u0003 current depth b \u0003 search breadth dmax \u0003 maximum depth m \u0003 number of return reformulations 1 if d = dmax 2 then 3 return q 4 Qq \u2190GenerateCandidates(q) 5 \u02dc \u00b5 \u2190PredictPerformance(Qq) 6 \u02dc Qq \u2190TopQueries(Qq, \u02dc \u00b5, b) 7 \u02c6 Qq \u2190TopQueries(Qq, \u02dc \u00b5, m) 8 for qi \u2208\u02dc Qq 9 do 10 \u02c6 Qq \u2190\u02c6 Qq \u222aQuerySearch(qi, d + 1, b, dmax, m) 11 \u02c6 \u00b5 \u2190PredictPerformance( \u02c6 Qq) 12 return TopQueries( \u02c6 Qq, \u02c6 \u00b5, m) (a) Query reformulation procedure. .35 .45 .30 .34 .40 .75 .40 .36 .45 .36 .61 .80 .55 .57 .43 .48 .20 .31 .35 .20 .19 .40 .43 .25 .30 .15 .20 .38 .33 (b) Illustration of the search process. Figure 6: The search procedure recursively explores the reformulation graph and returns the top m highest scoring reformulations inspected. In the illustration, numbers re\ufb02ect a query\u2019s predicted score. The bold nodes represent those nodes selected for expansion. The highlighted numbers represent the top m candidates visited throughout the search. Motivated by our search simulator (Figure 2), we propose an algorithm that recursively inspects n reformulation candidates at each qi up to a certain depth, dmax. We present this algorithm in Figure 6a. The algorithm di\ufb00ers from our simulation insofar as it executes several reformulation sessions simultaneously, keeping track of those reformulations with the highest predicted e\ufb00ectiveness. One attractive aspect of our algorithm is the broad coverage of the reformulation space unlikely to be visited in greedier algorithms. At termination, the algorithm selects a small number (m) of candidate queries visited for \ufb01nal retrieval. These m retrievals are merged using a Borda count algorithm with constituent rankings weighted by predicted performance. This process allows the algorithm to be more robust to errors in performance prediction. The total number of candidates evaluated (Line 4 of Figure 6a) throughout the search process is approximately, |C| \u2248 \u0016bdmax \u22121 b \u22121 \u0017 n (2) where the approximation error comes from varying initial query length. 6. TRAINING The e\ufb00ectiveness of the search algorithm (Section 5.3) critically depends on the reliability of the performance predictor (Section 5.2.2). Conversely, the distribution of instances supplied to the performance predictor depends on the decisions made by the search algorithm. Therefore, in order to train the performance prediction model, we need to gather example instances by executing a search and visiting nodes. Note that, for practical reasons, we cannot possibly gather training signals and targets for all queries in our search space. Even if we could, this set would probably not be representative of the instances the performance prediction model would observe in practice. For the same reason, we cannot use an arbitrary search policy in order to gather a smaller sample of instances. 
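For concreteness, here is a minimal Python rendering of the QuerySearch procedure in Figure 6a before we return to the data-gathering question. Queries are assumed to be tuples of terms, and generate_candidates and predict_performance are hypothetical callables standing in for Sections 5.1 and 5.2; this is a sketch under those assumptions, not the implementation used in the experiments.

def query_search(q, depth, b, d_max, m, generate_candidates, predict_performance):
    # Recursively expand the b most promising candidates at each node and
    # keep track of the m highest-scoring reformulations visited.
    if depth == d_max:
        return [q]
    candidates = generate_candidates(q)
    if not candidates:
        return [q]
    scores = predict_performance(candidates)
    order = sorted(range(len(candidates)), key=lambda i: scores[i], reverse=True)
    expand = [candidates[i] for i in order[:b]]   # breadth-b frontier
    visited = [candidates[i] for i in order[:m]]  # best m at this level
    for qi in expand:
        visited.extend(query_search(qi, depth + 1, b, d_max, m,
                                    generate_candidates, predict_performance))
    visited = list(dict.fromkeys(visited))        # de-duplicate, keep order
    final_scores = predict_performance(visited)
    best = sorted(range(len(visited)), key=lambda i: final_scores[i], reverse=True)
    return [visited[i] for i in best[:m]]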
To see why this is the case, consider gathering instances for every reformulation candidate inspected by the oracle algorithm described in Section 3. Even though there will be poorly performing queries in this set of examples, the distribution would over-represent e\ufb00ective queries because the oracle is guiding the search towards those reformulations. We demonstrate in Figure 5a where we plot the distribution of centered performance metric values of queries inspected by the oracle compared to a distribution of those inspected by a model used in our experiments. As expected, the oracle visits a larger number of e\ufb00ective queries on average compared to our example solution. A model trained on unrepresentative data may be less performant than one trained on data more representative of the queries it will encounter during testing. At the same time, although sampling with a random policy seems attractive, the distribution of queries inspected here will have the opposite problem. As shown in Figure 5b, these queries are will be overrepresent less e\ufb00ective than those visited by the example solution. The solution is to make gather a set of training instances for the performance prediction model which are representative of those visited by the search at test time. We accomplish this by gathering training instances using a datagathering policy that approximates the behavior of our \ufb01nal graph search. The training operates as follows. We \ufb01rst partition our training queries into several subsets, {t0, . . . , t5}; we also partition our validation queries into two subsets {v0, v1}; our testing queries are left aside for evaluation (Figure 7). We then iterate through the training subsets in order. For each subset, ti, we execute the search algorithm \ft0 t1 t2 t3 t4 t5 v1 T v0 Figure 7: Partitioning of training (ti), validation (vi), and testing data (T). in Figure 6a using the existing performance prediction model (or the oracle policy if i = 0). During the search, we record the feature vector and true performance of any encountered query. This set of |C| \u00d7 |ti| instances from ti, can then be used to train a performance prediction model. The regularization parameter of the SVM is tuned to select the model with the best performance on the validation set, v0. After this step, we move on to the next training subset, ti+1, using newly trained performance prediction model. As a result of this process, we iteratively accumulate a large set of training instance for the performance predictor representative of instances encountered during the search. Throughout the process we monitor performance on our second validation partition v1. This method of gathering training representative training data has previously been used in robotics [38, Algorithm 3.1] and natural language processing [20, Algorithm 2]. We found that making several passes over the training splits improved the model performance on v1. Therefore, we made several passes over the training splits and selected the model which performed best on v1 for \ufb01nal evaluation. However, reformulating exactly the same queries in ti may result in over\ufb01tting. To address this, after the \ufb01rst pass over ti, we deformed the queries using the following procedure. With equal probability, terms were randomly added or dropped from the original query. The source of added terms was the true relevance model for the training query. 
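The stopping criteria for these perturbations are given next; first, a minimal sketch of the iterative data-gathering loop itself. All helpers here (run_search, true_metric, train_ranking_svm, perturb) are hypothetical stand-ins: run_search executes the graph search under the current policy and yields (query, feature_vector) pairs, and true_metric evaluates a query against the qrels, which are available at training time.

def gather_and_train(train_subsets, oracle_policy, run_search, true_metric,
                     train_ranking_svm, perturb=None):
    # Collect training instances that resemble what the final search policy
    # will encounter, refitting the performance predictor after each subset.
    instances = []
    model = None
    for subset in train_subsets:
        policy = oracle_policy if model is None else model  # oracle only for t0
        for q0 in subset:
            start = perturb(q0) if perturb is not None else q0  # later passes only
            for q, features in run_search(start, policy):
                instances.append((features, true_metric(q) - true_metric(q0)))
        model = train_ranking_svm(instances)  # SVM regularization tuned on v0
    return model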
We applied these perturbations until the Jaccard correlation between the top ten results of the perturbed and unperturbed queries was less than 0.50 and while performance was no less than 75% of the performance of the unperturbed query. These conditions ensured that the query was di\ufb00erent (in terms of results) but still comparably performant with the unperturbed query. Similar perturbation processes have been used for computing query-dependent term similarity [8] and expanding digit recognition data [30]. 7. METHODS 7.1 Data We use three standard retrieval corpora for our experiments (Table 2). Two news corpora, trec12 and robust, consist of large archives of news articles. The trec12 dataset consists of the Tipster disks 1 and 2 with TREC ad hoc topics 51-200. The robust dataset consists of Tipster disks 4 and 5 with TREC ad hoc topics 301-450 and 601-700. Our web corpus consists of the Category B section of the Clue Web 2009 dataset with TREC Web topics 1-200. We tokenized all corpora on whitespace and then applied Krovetz stemming and removed words in the SMART stopword list.1 We further pruned the web corpus of all documents with a Waterloo spam score less than 70.2 We use TREC title queries 1ftp://ftp.cs.cornell.edu/pub/smart/english.stop 2https://plg.uwaterloo.ca/~gvcormac/clueweb09spam/ documents queries trec12 469,949 51-200 robust 528,155 301-450,601-700 web 29,038,227 1-200 Table 2: Experiment corpora and query sets. Documents marked as spam removed from web before indexing. in all of our experiments. We randomly partitioned the queries into three sets: 60% for training, 20% for validation, and 20% for testing. We repeated this random split procedure \ufb01ve times and present results averaged across the test set queries. 7.2 Implementation All indexing and retrieval was conducted using indri 5.7.3 Our SVM models were trained using liblinear 1.95.4 We evaluated \ufb01nal retrievals using NIST trec eval 9.0.5 In order to support large parameter sweeps, each query reformulation in PQR performed a re-ranking of the documents retrieved by q0 instead of a re-retrieval from the full index. Pilot experiments found that the e\ufb00ectiveness of re-retrieval was comparable with that of re-ranking though re-retrieval incurred much higher latency. 7.3 Parameters Aside from the performance prediction model, our algorithm has the following free parameters: the number of termaddition candidates per query (n), the number of candidates to selection per query (b), and the maximum search depth (dmax). Combined, the automatic reformulation and the multi-pass training resulted in computationally expensive processes whose runtime is sensitive to these parameters. Consequently, we \ufb01xed our parameter settings at relatively modest numbers (n = 10, b = 3, dmax = 4) and leave a more thorough analysis of sensitivity for an extended manuscript. Although these numbers may seem small, we remind the reader that this results in roughly |C| \u2248800 reformulations considered within the graph search for a single q0 (Equation 2). The number of candidates to merge (m) is tuned throughout training on the validation set v0 and ranges from \ufb01ve to twenty. The query likelihood baseline used Dirichlet smoothing with parameter tuned on the full training set using a range of values from 500 through 5000. The parameters of the relevance model baseline (RM3) were also tuned on the full training set. 
The range of feedback terms considered was {5, 10, 25, 50, 75, 100}; the range of feedback documents was {5, 25, 50, 75, 100}; the range of \u03bb was [0, 1] with a step size of 0.1. All runs, including baselines, optimized NDCG@30. 8. RESULTS We present the results for our experiments in Table 3. Our \ufb01rst baseline, query likelihood (QL) re\ufb02ects the performance of q0 alone and represents an algorithm which is representationally comparable with PQR insofar as it also 3http://www.lemurproject.org/indri/ 4http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/ #large_scale_ranksvm 5http://trec.nist.gov/trec_eval/ \fTable 3: Comparison of PQR to query likelihood (QL) and relevance model (RM3) baselines for our datasets. Statistically signi\ufb01cant di\ufb00erence with respect to QL (\u25a0: better; \u25a1: worse) and RM3 (\u2666: better; \u2666: worse) using a Student\u2019s paired t-test (p < 0.05 with a Bonferroni correction). The best performing run is presented in bold. All runs have parameters tuned for NDCG@30 on the validation set. NDCG@5 NDCG@10 NDCG@20 NDCG@30 NDCG MAP trec12 QL 0.5442 0.5278 0.5066 0.4835 0.5024 0.2442 RM3 0.6465\u25a0 0.6113\u25a0 0.5796\u25a0 0.5627\u25a0 0.5300\u25a0 0.2983\u25a0 random 0.5690 \u2666 0.5563 \u2666 0.5257 \u2666 0.5089 \u2666 0.5120\u25a0\u2666 0.2653\u25a0\u2666 PQR 0.6112\u25a0\u2666 0.5907\u25a0 0.5630\u25a0 0.5419\u25a0\u2666 0.5216\u25a0\u2666 0.2819\u25a0\u2666 robust QL 0.4874 0.4559 0.4306 0.4172 0.5419 0.2535 RM3 0.4888 0.4553 0.4284 0.4176 0.5462 0.2726\u25a0 random 0.4240\u25a1\u2666 0.3967\u25a1\u2666 0.3675\u25a1\u2666 0.3588\u25a1\u2666 0.5143\u25a1\u2666 0.2352\u25a1\u2666 PQR 0.5009 0.4713\u25a0\u2666 0.4438\u25a0\u2666 0.4315\u25a0\u2666 0.5498\u25a0 0.2736\u25a0 web QL 0.2206 0.2250 0.2293 0.2315 0.3261 0.1675 RM3 0.2263 0.2273 0.2274 0.2316 0.3300\u25a0 0.1736\u25a0 random 0.1559\u25a1\u2666 0.1562\u25a1\u2666 0.1549\u25a1\u2666 0.1537\u25a1\u2666 0.2790\u25a1\u2666 0.1157\u25a1\u2666 PQR 0.2528\u25a0\u2666 0.2501\u25a0\u2666 0.2493\u25a0\u2666 0.2435\u25a0 0.3300 0.1690 retrieves using a short, unweighted query. Our second baseline, the relevance model (RM3) re\ufb02ects the performance of a strong algorithm that also uses the retrieval results to improve performance, although with much richer representational power (the optimal number of terms often hover near 75-100). As expected, RM3 consistently outperforms QL in terms of MAP. And while the performance is superior across all metrics for trec12, RM3 is statistically indistinguishable from QL for higher precision metrics on our other two data sets. The random policy, which replaces our performance predictor with random scores, consistently underperforms both baselines for robust and web. Interestingly, this algorithm is statistically indistinguishable from QL for trec12, suggesting that this corpus may be easier than others. Next, we turn to the performance of PQR. Across all corpora and across almost all metrics, PQR signi\ufb01cantly outperforms QL. While this baseline might be considered low, it is a representationally fair comparison with PQR. So, this result demonstrates the ability of PQR to \ufb01nd more e\ufb00ective reformulations than q0. The underperformance of the random algorithm signi\ufb01es that the e\ufb00ectiveness of PQR is attributable to the performance prediction model as opposed to a merely walking on the reformulation graph. That said, PQR is statistically indistinguishable from QL for higher recall metrics on the web corpus (NDCG and MAP). 
In all likelihood, this results from the optimization of NDCG@30, as opposed to higher recall metrics. This outcome is ampli\ufb01ed when we compare PQR to RM3. For the robust and web datasets, we notice PQR signi\ufb01cantly outperforming RM3 for high precision metrics but showing weaker performance for high recall metrics. We point out that PQR performs weaker than RM3 for trec12. This might be explained by the easier nature of the corpus combined with the richer representation of the RM3 model. We can inspect the coe\ufb03cient values to determine the Table 4: Top \ufb01ve highest weighted signals for each experiment. For each run in each experiment, we ranked all signals by the magnitude of their associated weight in the linear model. We aggregated these rankings and present the signals ranked by frequency in the top \ufb01ve signals across runs. trec12 robust web B(\u03b8R0, \u03b8Rt) B(\u03b8R0, \u03b8Rt) \u03c4AP(R0, Rt) B(\u03b8Rt\u22121, \u03b8Rt) B(\u03b8Rt\u22121, \u03b8Rt) B(\u03b8R0, \u03b8Rt) \u03c4AP(R0, Rt) Clarity \u03c4AP(Rt\u22121, Rt) \u03c4AP(Rt\u22121, Rt) \u03c4AP(Rt\u22121, Rt) B(\u03b8Rt\u22121, \u03b8Rt) Clarity maxIDF Clarity importance of individual signals in performance prediction. In Table 4, we present the most important signals for each of our experiments. Because our results are averaged over several runs, we selected the signals most often occurring amongst the highest weighted in these runs, using the \ufb01nal selected model (see Section 6). Interestingly, many of the top ranked signals are our drift features which compare the language models and rankings of the candidate result set with those of its parent and the \ufb01rst query. This suggests that the algorithm is successfully preventing query drift by promoting candidates that retrieve results similar to the original and parent queries. On the other hand, the high weight for Clarity suggests that PQR is simultaneously balancing ranked list re\ufb01nement with ranked list anchoring. 9. DISCUSSION Although QL is the appropriate baseline for PQR, comparing PQR performance to that of RM3 helps us understand where improvements may be originating. The e\ufb00ectiveness of RM3 on trec12 is extremely strong, demonstrat\fing statistically superior performance to PQR on many metrics. At the same time, the absolute metrics for QL on these runs is also higher than on the other two collections. This suggests that part of the e\ufb00ectiveness of RM3 results from the strong initial retrieval (i.e. QL). As mentioned earlier, the strength of the random run separately provides evidence of the initial retrieval\u2019s strength. Now, if the initial retrieval uncovered signi\ufb01cantly more relevant documents, then RM3 will estimate a language model very close to the true relevance model, boosting performance. Since RM3 allows a long, rich, weighted query, it follows that it would outperform PQR\u2019s constrained representation. That said, it is remarkable that PQR achieves comparable performance to RM3 on many metrics with at most |q0| + dmax words. The weaker performance for high-recall metrics was somewhat disappointing but should be expected given our optimization target (NDCG@30). Post-hoc experiments demonstrated that optimizing for MAP boosted the performance of PQR to 0.1728 on web, resulting in statistically indistinguishable performance with RM3. 
Nevertheless, we are not certain that human query reformulation of the type encountered in general web search would improve high recall metrics since users in that context rarely inspect deep into the ranked list. One of the biggest concerns with PQR is e\ufb03ciency. Whereas our QL baseline ran in a 100-200 milliseconds, PQR ran in 10-20 seconds, even using the re-ranking approach. However, because of this approach, our post-retrieval costs scale modestly as corpus size grows, especially compared to massive query expansion techniques like RM3. To understand this observation, note that issuing a long RM3 query results in a huge slowdown in performance due to the number of postings lists that need to be evaluated and merged. We found that for the web collection, RM3 performed quite slow, often taking minutes to complete long queries. PQR, on the other hand, has the same overhead as RM3 in terms of an initial retrieval and fetching document vectors. After this step, though, PQR only needs to access the index for term statistic information, not a re-retrieval. Though even with our speedup, PQR is unlikely to be helpful for realtime, low-latency retrieval. However, there are several situations where such a technique may be permissible. For example, \u2018slow search\u2019 refers to search situations where users tolerate latency in order to receive better results [42]. Another situation is document \ufb01ltering, where the user has a standing query for a certain topic and the system can optimize its query representation during indexing lulls. More generally, this technique is also valuable for any distributed information retrieval problem with APIs constrained to unweighted queries. 10." + } + ], + "Bhaskar Mitra": [ + { + "url": "http://arxiv.org/abs/2104.09393v1", + "title": "Improving Transformer-Kernel Ranking Model Using Conformer and Query Term Independence", + "abstract": "The Transformer-Kernel (TK) model has demonstrated strong reranking\nperformance on the TREC Deep Learning benchmark -- and can be considered to be\nan efficient (but slightly less effective) alternative to other\nTransformer-based architectures that employ (i) large-scale pretraining (high\ntraining cost), (ii) joint encoding of query and document (high inference\ncost), and (iii) larger number of Transformer layers (both high training and\nhigh inference costs). Since, a variant of the TK model -- called TKL -- has\nbeen developed that incorporates local self-attention to efficiently process\nlonger input sequences in the context of document ranking. In this work, we\npropose a novel Conformer layer as an alternative approach to scale TK to\nlonger input sequences. Furthermore, we incorporate query term independence and\nexplicit term matching to extend the model to the full retrieval setting. We\nbenchmark our models under the strictly blind evaluation setting of the TREC\n2020 Deep Learning track and find that our proposed architecture changes lead\nto improved retrieval quality over TKL. 
Our best model also outperforms all\nnon-neural runs (\"trad\") and two-thirds of the pretrained Transformer-based\nruns (\"nnlm\") on NDCG@10.", + "authors": "Bhaskar Mitra, Sebastian Hofstatter, Hamed Zamani, Nick Craswell", + "published": "2021-04-19", + "updated": "2021-04-19", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.AI", + "cs.LG" + ], + "main_content": "INTRODUCTION In the inaugural year of the TREC Deep Learning track [10], ranking models using Transformers [57] demonstrated substantial improvements over traditional information retrieval (IR) methods. Several of these approaches\u2014e.g., [62, 65]\u2014employ BERT [17], with largescale pretraining, as their core architecture. Diverging from this trend, Hofst\u00e4tter et al. [21] propose the Transformer-Kernel (TK) model with few key distinctions: (i) TK uses a shallower model with Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org. Woodstock \u201918, June 03\u201305, 2018, Woodstock, NY \u00a9 2018 Association for Computing Machinery. ACM ISBN 978-1-4503-XXXX-X/18/06...$15.00 https://doi.org/10.1145/1122445.1122456 only two Transformer layers, (ii) there are no computation-intensive pretraining, and (iii) TK independently encodes the query and document allowing for offline precomputations for faster response times. Consequently, TK achieves competitive performance at a fraction of the training and inference cost of its BERT-based peers. Notwithstanding these efficiency gains, the TK model shares two critical drawbacks with other Transformer-based models. Firstly, the memory complexity of the self-attention layers is quadratic O(\ud835\udc5b2) with respect to the length \ud835\udc5bof the input sequence. This restricts the number of document terms we can inspect under fixed GPU memory budget. A trivial workaround involves inspecting only the first \ud835\udc58terms of the document. This approach can negatively impact retrieval quality and has been shown to under-retrieve longer documents [20]. Secondly, in any real IR system, it is impractical to exhaustively evaluate every document in the collection for every query\u2014and therefore these systems typically enforce some sparsity property to drastically narrow down the set of candidates. TK employs a nonlinear matching function over query-document pairs which makes it difficult to enforce such sparsity before model inference. This restricts TK\u2019s scope of application to late stage reranking of smaller candidate sets as identified by simpler retrieval models. So, in this work, we extend TK in the following ways: (1) To scale to long text, we replace the Transformer layers with novel Conformer layers whose memory complexity is O(\ud835\udc5b\u00d7 \ud835\udc51key), instead of O(\ud835\udc5b2), (2) To enable fast retrieval with TK, we incorporate query term independence (QTI) [42], and finally, (3) we complement TK\u2019s latent matching with lexical term matching as suggested previously by Mitra et al. 
[40, 41], which is known to be effective for full retrieval [19, 28, 41, 60]. We study the impact of aforementioned changes under the strictlyblind evaluation setting of the TREC 2020 Deep Learning track. 2 RELATED WORK Scaling self-attention to long text. The self-attention layer, as proposed by Vaswani et al. [57], can be described as follows: Self-Attention(\ud835\udc44, \ud835\udc3e,\ud835\udc49) = \u03a6(\ud835\udc44\ud835\udc3e\u22ba \u221a\ufe01 \ud835\udc51\ud835\udc58 ) \u00b7 \ud835\udc49 (1) Where, \ud835\udc44\u2208R\ud835\udc5b\u00d7\ud835\udc51key, \ud835\udc3e\u2208R\ud835\udc5b\u00d7\ud835\udc51key, and \ud835\udc49\u2208R\ud835\udc5b\u00d7\ud835\udc51value are the query, key, and value matrices\u2014and \ud835\udc51key and \ud835\udc51value are the dimensions of the key and value embeddings, respectively. Here, \ud835\udc5bis the length of the input sequence and \u03a6 denotes a softmax along the last tensor dimension. The quadratic O(\ud835\udc5b2) memory complexity of self-attention arXiv:2104.09393v1 [cs.IR] 19 Apr 2021 \fWoodstock \u201918, June 03\u201305, 2018, Woodstock, NY Bhaskar Mitra, Sebastian Hofst\u00e4tter, Hamed Zamani, and Nick Craswell is a direct consequence of the component \ud835\udc44\ud835\udc3e\u22bathat produces a \ud835\udc5b\u00d7\ud835\udc5b matrix. Recently, several approaches have been proposed to mitigate this quadratic complexity that broadly fall under: (i) Restricting self-attention to smaller local windows over the input [15, 49, 54, 63], or (ii) operating under the assumption that the attention matrix is low rank \ud835\udc5f[27, 52, 55, 59] and hence finding alternatives to explicitly computing the \ud835\udc44\ud835\udc3e\u22bamatrix, or (iii) hybrid approaches [3, 7, 61]. In IR, recently Hofst\u00e4tter et al. [20] extended TK to longer text using local self-attention. Other more general approaches to reducing the memory footprint, such as model parallelization [53] and gradient checkpointing [3] have also been explored. Full retrieval with deep models. Efficient retrieval using deep models is an important challenge in IR [36, 37]. One approach involves the dual encoder architecture where the query and document are encoded independently, and efficient retrieval is achieved by approximate nearest-neighbour search [1, 5, 25, 26, 29] or by employing inverted-index over latent representations [66]. Precise matching of terms or concepts may be difficult using query-independent latent document representations [30], and therefore these models are often combined with explicit term matching [40, 43]. An alternative approach assumes QTI in the design of the neural ranking model [42]. In these models, the estimated relevance score \ud835\udc46\ud835\udc5e,\ud835\udc51= \u00cd \ud835\udc61\u2208\ud835\udc5e\ud835\udc60\ud835\udc61,\ud835\udc51is the sum of the document scores w.r.t. individual query terms. Readers should note that QTI is already baked into several classical IR models, like BM25 [50]. Relevance models with QTI can be used to offline precompute all term-document scores, and subsequently efficient search is performed using inverted-index. Several recent neural IR models [12\u201314, 32, 33, 42] that incorporate QTI have obtained promising results under the full retrieval setting. Document expansion based methods [46, 48], using large neural language models, can also be classified as part of this approach, assuming the subsequent retrieval step employs a traditional QTI model like BM25. 
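As a small illustration of the query term independence assumption S_{q,d} = sum over query terms t of s_{t,d}, term-document scores from any QTI-compliant model can be precomputed offline and served from an inverted index at query time. The sketch below assumes a hypothetical term_doc_score function and, for sparsity, only stores scores for terms that actually occur in a document.

from collections import defaultdict

def build_impact_index(documents, term_doc_score):
    # Offline: precompute s_{t,d} for terms occurring in each document.
    index = defaultdict(list)  # term -> [(doc_id, score), ...]
    for doc_id, terms in documents.items():
        for t in set(terms):
            s = term_doc_score(t, terms)
            if s > 0:
                index[t].append((doc_id, s))
    return index

def score_query(query_terms, index):
    # Online: S_{q,d} is the sum of the precomputed per-term scores.
    totals = defaultdict(float)
    for t in query_terms:
        for doc_id, s in index.get(t, []):
            totals[doc_id] += s
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)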
In all these cases, the focus of the deep model is to estimate the relevance of the document w.r.t. individual terms in the vocabulary that can be precomputed during indexing. Another approach may involve neural query reformulation [31, 45, 56], although these methods typically underperform compared to the methods considered here. 3 CONFORMER-KERNEL WITH QTI Conformer. The quadratic memory complexity of self-attention layers w.r.t. the input length is a direct result of explicitly computing the attention matrix \ud835\udc44\ud835\udc3e\u22ba\u2208R\ud835\udc5b\u00d7\ud835\udc5b. In this work, we propose a new separable self-attention layer that avoids instantiating the full termterm attention matrix. Separable-Self-Attention(\ud835\udc44, \ud835\udc3e,\ud835\udc49) = \u03a6(\ud835\udc44) \u00b7 \ud835\udc34 (2) Where, \ud835\udc34= \u03a6(\ud835\udc3e\u22ba) \u00b7 \ud835\udc49. As previously, \u03a6 denotes softmax along the last dimension of the input tensor. Note that, however, in this separable self-attention mechanism, the softmax operation is employed twice: (i) \u03a6(\ud835\udc44) computes the softmax along the \ud835\udc51key dimension, and (ii) \u03a6(\ud835\udc3e\u22ba) computes the softmax along the \ud835\udc5bdimension. By computing \ud835\udc34\u2208R\ud835\udc51key\u00d7\ud835\udc51value first, we avoid explicitly computing the full term-term attention matrix. The memory complexity of the separable self-attention layer is O(\ud835\udc5b\u00d7 \ud835\udc51key), which is a significant improvement when\ud835\udc51key \u226a\ud835\udc5b. We modify the standard Transformer block as follows: (i) We replace the standard self-attention layer with the more memory efficient separable self-attention layer, and (ii) we apply grouped convolution before the separable self-attention layers to better capture the local context based on the window of neighbouring terms. We refer to this combination of grouped convolution and Transformer with separable self-attention as a Conformer. We incorporate Conformers into TK as a direct replacement for the Transformer layers and name the new architecture as a Conformer-Kernel (CK) model. In relation to handling long input sequences, we also replace the standard Kernel-Pooling with windowed Kernel-Pooling [20] in our proposed architecture. Query term independence. To incorporate QTI into CK, we make two simple modifications. Firstly, we simplify the query encoder by getting rid of the Transformer layers and only considering the non-contextualized embeddings for the query terms. Secondly, instead of applying the aggregation function over the full interaction matrix, we apply it to each row individually, which corresponds to individual query terms. The scalar outputs from the aggregation function are linearly combined to produce the final query-document score. Fig 1b shows the proposed CK-QTI architecture. Explicit term matching. We adopt the Duet [38\u201340, 44] framework wherein the term-document score is a linear combination of outputs from a latent and and an explicit matching models. \ud835\udc60\ud835\udc61,\ud835\udc51= \ud835\udc641 \u00b7 BN(\ud835\udc60(latent) \ud835\udc61,\ud835\udc51 ) + \ud835\udc642 \u00b7 BN(\ud835\udc60(explicit) \ud835\udc61,\ud835\udc51 ) + \ud835\udc4f (3) Where, {\ud835\udc641,\ud835\udc642,\ud835\udc4f} are learnable parameters and BN(\ud835\udc65) = (\ud835\udc65\u2212 E[\ud835\udc65])/( \u221a\ufe01 Var[\ud835\udc65]) denotes the BatchNorm operation [22]. 
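Returning briefly to Equation (2), the following is a minimal single-head PyTorch sketch of the separable self-attention layer, with the grouped convolution and the batch and multi-head dimensions omitted; it illustrates the formulation rather than reproducing the released model code.

import torch

def separable_self_attention(Q, K, V):
    # Q, K: [n, d_key]; V: [n, d_value]. The n-by-n attention matrix is never formed.
    Q_norm = torch.softmax(Q, dim=-1)      # softmax along the d_key dimension
    K_norm = torch.softmax(K, dim=0)       # softmax along the sequence (n) dimension
    A = K_norm.transpose(0, 1) @ V         # [d_key, d_value]
    return Q_norm @ A                      # [n, d_value]

The intermediate matrix A has shape d_key by d_value, so memory grows linearly with the sequence length n rather than quadratically.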
We employ CK and define a new lexical matching function modeled on BM25 to compute \ud835\udc60(latent) \ud835\udc61,\ud835\udc51 and \ud835\udc60(explicit) \ud835\udc61,\ud835\udc51 , respectively. \ud835\udc60(explicit) \ud835\udc61,\ud835\udc51 = IDF\ud835\udc61\u00b7 BS(TF\ud835\udc61,\ud835\udc51) BS(TF\ud835\udc61,\ud835\udc51) + ReLU(\ud835\udc64dlen \u00b7 BS(|\ud835\udc51|) + \ud835\udc4fdlen) + \ud835\udf16 (4) Where, IDF\ud835\udc61, TF\ud835\udc61,\ud835\udc51, and |\ud835\udc51| denote the inverse-document frequency of the term \ud835\udc61, the term-frequency of \ud835\udc61in document \ud835\udc51, and the length of the document, respectively. The \ud835\udc64dlen and \ud835\udc4fdlen are the only two leanrable parameters of this submodel and \ud835\udf16is a small constant added to prevent a divide-by-zero error. The BatchScale (BS) operation is defined as BS(\ud835\udc65) = \ud835\udc65/(E[\ud835\udc65] + \ud835\udf16). 4 EXPERIMENT DESIGN TREC 2020 Deep Learning Track. We evaluate CK under the strictly-blind TREC benchmarking setting1 by participating in the 2020 edition of the Deep Learning track [11], which: (a) provides stronger protection against overfitting that may result from the experimenter running multiple evaluations against the test set, and (b) is fairer to dramatically new approaches that may surface additional relevant documents not covered by pre-collected labels [64]. The 2020 track [11] uses the same training data as the previous year [10] originally derived from the MS MARCO dataset [2]. However, the track provides a new blind test set for the second year. We only focus on the document ranking task and point the reader to [11] for further benchmarking details. We report NDCG@10 [23], NCG@100 [51], AP [67], and RR [8] against this blind set. 1We exclude group name and run IDs here to anonymize for the blind-review process. \fImproving Transformer-Kernel Ranking Model Using Conformer and Query Term Independence Woodstock \u201918, June 03\u201305, 2018, Woodstock, NY d1 d2 d3 d4 d5 Embed Stacked Transformers q1 q2 q3 Stacked Transformers Aggregator with Kernel-Pooling Embed Embed Embed Embed Embed Embed Embed (a) Transformer-Kernel (TK) d1 d2 d3 d4 d5 Stacked Conformers q1 q2 q3 Aggregator with Windowed Kernel-Pooling Aggregator with Windowed Kernel-Pooling Aggregator with Windowed Kernel-Pooling + Embed Embed Embed Embed Embed Embed Embed Embed (b) NDRM1 variant of Conformer-Kernel (CK) with QTI Figure 1: A comparison of the TK and the proposed CK-with-QTI architectures. In addition to replacing the Transformer layers with Conformers, the latter also simplifies the query encoding to non-contextualized term embedding lookup and incorporates a windowed Kernel-Pooling based aggregation that is employed independently per query term. Model variants. We compare several variants of our model. The NDRM1 variant incorporates Conformer layers and QTI into TK [21]. Figure 1 visualizes the NDRM1 architecture. The NDRM2 model is a simple QTI-compliant explicit-term-matching model as described by Equation 4. A linear combination of NDRM1 and NDRM2 gives us the NDRM3 model. Because of the limit on the number of run submission to TREC, we only evaluate NDRM1 and NDRM3, although we confirm on the TREC 2019 test set that NDRM2 is competitive with a well-tuned BM25 baseline. The TREC 2020 Deep Learning track provided participants with a click log dataset called ORCAS [9]. 
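Returning to the explicit matching component for a moment, Equation (4) can be read as a BM25-style saturation with a learned document-length normalization. A minimal sketch follows, in which tf_t and doc_len are tensors over the documents scored in the current batch, idf_t is the precomputed inverse document frequency of term t, and w_dlen and b_dlen are the two learnable parameters; batch statistics stand in for the expectations in BatchScale.

import torch

def explicit_match_score(tf_t, doc_len, idf_t, w_dlen, b_dlen, eps=1e-6):
    def batch_scale(x):
        return x / (x.mean() + eps)        # BS(x) = x / (E[x] + eps)
    tf = batch_scale(tf_t.float())
    dlen = torch.relu(w_dlen * batch_scale(doc_len.float()) + b_dlen)
    return idf_t * tf / (tf + dlen + eps)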
We use clicked queries in the ORCAS data [9] as additional meta description for corresponding documents to complement the intrinsic document content (URL, title, and body). Unlike previous work [66] on fielded document representations, we simply concatenate the different fields. We test each variant under both the rerank and the fullrank settings. Model training. We consider the first 20 terms for every query and the first 4000 terms for every document. We pretrain the word embeddings using the word2vec [35] implementation in FastText [24]. We use a concatenation of the IN and OUT embeddings [41, 43] from word2vec to initialize the embedding layer parameters. The document encoder uses 2 Conformer layers and we set all hidden layer sizes to 256. We set the window size for the grouped convolution layers to 31 and the number of groups to 32. Correspondingly, we also set the number of attention heads to 32. We set the number of kernels \ud835\udc58to 10. For windowed Kernel-Pooling, we set the window size to 300 and the stride to 100. Finally, we set the dropout rate to 0.2. For further details, please refer to the publicly released model implementation in PyTorch.2 All models are trained on four Tesla P100 GPUs, with 16 GB memory each, using data parallelism. We train the model using the RankNet objective [4]. For every positively labeled query-document pair in the training data, we randomly sample one negative document from the provided top 100 candidates corresponding to the query and two negative documents from the full collection. In addition to making pairs between the positively labeled document and the three negative documents, we also create pairs between the negative document sampled from 2We will add a link to the repo here after completion of the blind-reviewing process. Table 1: Official TREC 2020 results. All metrics are computed at rank 100, except for NDCG which is computed at rank 10. Best and median runs are selected based on NDCG@10. Run description Subtask NDCG NCG AP RR Other TREC runs for comparison Best \u201ctrad\u201d run fullrank 0.5629 0.6299 0.3829 0.9195 Best TKL run rerank 0.5852 0.6283 0.3810 0.9296 Median \u201cnnlm\u201d run fullrank 0.5907 0.6669 0.4259 0.8916 Best \u201cnnlm\u201d run fullrank 0.6934 0.7718 0.5422 0.9476 Our models NDRM1 fullrank 0.5991 0.6280 0.3858 0.9333 NDRM1 rerank 0.6161 0.6283 0.4150 0.9333 NDRM3 rerank 0.6162 0.6283 0.4122 0.9333 NDRM3 fullrank 0.6162 0.6626 0.4069 0.9333 NDRM3 + ORCAS rerank 0.6217 0.6283 0.4194 0.9241 NDRM3 + ORCAS fullrank 0.6249 0.6764 0.4280 0.9444 the top 100 candidates and those sampled from the full collection, treating the former as more relevant. This can be interpreted as incorporating a form of weak supervision [16] as the candidates were previously generated using a traditional IR function. 5 RESULTS RQ1. Does CK-QTI improve reranking quality over TKL? According to the taxonomy proposed by Craswell et al. [10], CK-QTI and TKL runs are the only \u201cnn\u201d runs\u2014i.e., neural models that do not use pretrained transformers\u2014submitted to TREC 2020 Deep Learning track. TKL has previously been shown to outperform TK [20], and we confirmed with the submitting group that they considered these as well-tuned TKL runs. We also confirm that the related hyperparameters are comparable between the TKL runs and ours. Table 1 shows that in the same rerank setting, both NDRM1 and NDRM3 improve NDCG@10 over the best TKL run by 5.3%. 
The improvement from NDRM1 over TKL is statistically significant according to student\u2019s t-test (\ud835\udc5d< 0.05). However, similarly large improvement from NDRM3 over TKL is not stat. sig. likely due to small test set size. Even if we consider TK and CK to be comparable \fWoodstock \u201918, June 03\u201305, 2018, Woodstock, NY Bhaskar Mitra, Sebastian Hofst\u00e4tter, Hamed Zamani, and Nick Craswell 0 2500 5000 7500 10000 12500 15000 0 500 1000 1500 2000 2500 3000 3500 4000 4500 Document length Transformer Conformer Figure 2: Comparison of peak GPU Memory Usage in MB, across all four GPUs, when employing Transformers vs. Conformers. in results quality, the key motivation behind Conformers is their reduced GPU memory usage which we discuss next. RQ2. Does CK-QTI improve train-time GPU memory requirement over TKL? To demonstrate how the GPU memory consumption scales with respect to input sequence length, we plot the peak memory, across all four GPUs, for our proposed architecture using Transformer and Conformer layers, respectively, keeping all other hyperparameters and architecture choices fixed. Fig 2 shows the GPU memory requirement grows linearly with increasing sequence length for the Conformer, while quadratically when Transformers are employed. This is a significant improvement in GPU memory requirement over TK for longer text that could be further operationalized to improve training time convergence using larger batches or to incorporate longer input representations of documents. RQ3. How does CK-QTI perform in the full retrieval setting? To enable retrieval from the full collection, we incorporate two changes in TK: QTI and explicit term matching. QTI allows for precomputation of term-document scores and consequently fast retrieval using inverted-index data structures. The explicit term matching is expected to help with result quality under the full retrieval setting. In Table 1, we find that the NDRM3 variant\u2014that incorporates explicit term matching\u2014does indeed achieve 2.9% better NDCG@10 compared to the NDRM1 variant and 5.5% improvement in both AP and NCG@100. In contrast, both models achieve similar performance under the rerank setting. The candidate documents for reranking were generated by a first-stage BM25 ranker and hence explicit term matching signal is already part of this retrieval pipeline which may explain why we find no benefit from explicit term matching in reranking. These observations are supported by Kuzi et al. [28], who find that exact term matching are important for the fullrank setting. Also, NDRM1, in the absence of explicit term matching, achieves a lower NDCG@10 under the fullrank setting compared to the rerank setting. However, when explicit term matching is incorporated (i.e., NDRM3), the metrics are comparable under both settings. Interestingly, when we include the ORCAS data in the document representation, we see improvements under the fullrank setting compared to reranking across all metrics: 2.2% for RR, 2.1% for AP, and 0.5% for NDCG@10. We confirm that the NDCG@10 improvement from fullrank over rerank setting under the NDRM3 + ORCAS configuration is stat. sig. based on a student\u2019s 0.3 0.4 0.5 0.6 0.7 0.8 0.9 NDCG@10 best nnlm run best ck-qti run best tkl run best trad run nnlm ck-qti tkl trad (a) NDCG@10 0.5 0.6 0.7 0.8 0.9 NCG@100 best nnlm run best trad run best ck-qti run rerank runs nnlm trad ck-qti tkl (b) NCG@100 Figure 3: Comparing CK-QTI runs with runs submitted by other groups. 
The runs in each plot are sorted independently based on the corresponding metric. t-test (\ud835\udc5d< 0.05). Based on qualitative inspection of the queries, we find that exact term matching may be important for queries containing named entities\u2014e.g., \u201cwho is aziz hashim\u201d and \u201cwhy is pete rose banned from hall of fame\u201d\u2014where it is necessary to ensure that the retrieved documents are about the correct entity. Finally, with respect to the full retrieval setting, we note that NDRM3 with ORCAS improves NCG@100 by 7.7% over the provided candidates for the reranking setting, which puts it among the 10 top performing runs according to NCG@100 as seen in Fig 3. RQ4. How does CK-QTI compare to \u201ctrad\u201d and \u201cnnlm\u201d runs? In adhoc retrieval, a common strategy involves sequentially cascading multiple rank-and-prune stages [6, 18, 34, 47, 58] for better effectiveness-efficiency trade-offs. The multiple stages can improve result quality at additional computation costs. However, in our experiments under the full retrieval setting, we employ CK-QTI as a single stage retriever. Despite of this straightforward and efficient setup, we find that all three runs NDRM1, NDRM3, and NDRM3 + ORCAS achieve better NDCG@10 compared to the best non-neural (i.e., \u201ctrad\u201d) run. The improvements from NDRM3, both with and without the ORCAS-based document representation, is stat. sig. compared to the best \u201ctrad\u201d run based on student\u2019s t-test (\ud835\udc5d< 0.05). Additionally, NDRM3, with and without ORCAS, outperforms two-thirds of the \u201cnnlm\u201d runs that employ costly pretraining of Transformers. The \u201cnnlm\u201d runs that outperform CK-QTI not only employ cascades of multiple rank-and-prune stages but sometimes \fImproving Transformer-Kernel Ranking Model Using Conformer and Query Term Independence Woodstock \u201918, June 03\u201305, 2018, Woodstock, NY multiple of those stages employ costly models like BERT. In contrast, CK-QTI retrieves from the full collection in one-shot and its performance can be likely improved by additional reranking stages. 6" + }, + { + "url": "http://arxiv.org/abs/2012.11685v2", + "title": "Neural Methods for Effective, Efficient, and Exposure-Aware Information Retrieval", + "abstract": "Neural networks with deep architectures have demonstrated significant\nperformance improvements in computer vision, speech recognition, and natural\nlanguage processing. The challenges in information retrieval (IR), however, are\ndifferent from these other application areas. A common form of IR involves\nranking of documents--or short passages--in response to keyword-based queries.\nEffective IR systems must deal with query-document vocabulary mismatch problem,\nby modeling relationships between different query and document terms and how\nthey indicate relevance. Models should also consider lexical matches when the\nquery contains rare terms--such as a person's name or a product model\nnumber--not seen during training, and to avoid retrieving semantically related\nbut irrelevant results. In many real-life IR tasks, the retrieval involves\nextremely large collections--such as the document index of a commercial Web\nsearch engine--containing billions of documents. Efficient IR methods should\ntake advantage of specialized IR data structures, such as inverted index, to\nefficiently retrieve from large collections. 
Given an information need, the IR\nsystem also mediates how much exposure an information artifact receives by\ndeciding whether it should be displayed, and where it should be positioned,\namong other results. Exposure-aware IR systems may optimize for additional\nobjectives, besides relevance, such as parity of exposure for retrieved items\nand content publishers. In this thesis, we present novel neural architectures\nand methods motivated by the specific needs and challenges of IR tasks.", + "authors": "Bhaskar Mitra", + "published": "2020-12-21", + "updated": "2021-03-19", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.AI", + "cs.CL", + "cs.LG" + ], + "main_content": "Introduction 23 1.1 Contributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25 1.2 Evaluation tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28 1.2.1 Ad hoc retrieval . . . . . . . . . . . . . . . . . . . . . . . . 28 1.2.2 Question-answering . . . . . . . . . . . . . . . . . . . . . 29 1.3 Notation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30 1.4 Metrics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31 2 Motivation 35 2.1 Desiderata of IR models . . . . . . . . . . . . . . . . . . . . . . . 35 2.1.1 Semantic matching . . . . . . . . . . . . . . . . . . . . . . 36 2.1.2 Robustness to rare inputs . . . . . . . . . . . . . . . . . . . 37 2.1.3 Robustness to variable length text . . . . . . . . . . . . . . 38 2.1.4 Ef\ufb01ciency . . . . . . . . . . . . . . . . . . . . . . . . . . . 39 2.1.5 Parity of exposure . . . . . . . . . . . . . . . . . . . . . . 40 2.1.6 Sensitivity to context . . . . . . . . . . . . . . . . . . . . . 40 2.1.7 Robustness to corpus variance . . . . . . . . . . . . . . . . 40 2.2 Designing neural models for IR . . . . . . . . . . . . . . . . . . . . 42 3 Background 45 3.1 IR Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45 3.1.1 Traditional IR models . . . . . . . . . . . . . . . . . . . . 45 3.1.2 Anatomy of neural IR models . . . . . . . . . . . . . . . . 49 \f16 Contents 3.2 Unsupervised learning of term representations . . . . . . . . . . . . 52 3.2.1 A tale of two representations . . . . . . . . . . . . . . . . . 52 3.2.2 Notions of similarity . . . . . . . . . . . . . . . . . . . . . 55 3.2.3 Observed feature spaces . . . . . . . . . . . . . . . . . . . 57 3.2.4 Embeddings . . . . . . . . . . . . . . . . . . . . . . . . . 59 3.3 Term embeddings for IR . . . . . . . . . . . . . . . . . . . . . . . 65 3.3.1 Query-document matching . . . . . . . . . . . . . . . . . . 67 3.3.2 Query expansion . . . . . . . . . . . . . . . . . . . . . . . 74 3.4 Supervised learning to rank . . . . . . . . . . . . . . . . . . . . . . 75 3.4.1 Input features . . . . . . . . . . . . . . . . . . . . . . . . . 76 3.4.2 Loss functions . . . . . . . . . . . . . . . . . . . . . . . . 76 3.5 Deep neural networks . . . . . . . . . . . . . . . . . . . . . . . . . 87 3.5.1 Input text representations . . . . . . . . . . . . . . . . . . . 89 3.5.2 Architectures . . . . . . . . . . . . . . . . . . . . . . . . . 89 3.5.3 Neural toolkits . . . . . . . . . . . . . . . . . . . . . . . . 98 3.6 Deep neural models for IR . . . . . . . . . . . . . . . . . . . . . . 98 3.6.1 Document auto-encoders . . . . . . . . . . . . . . . . . . . 100 3.6.2 Siamese networks . . . . . . . . . . . . . . . . . . . . . . . 101 3.6.3 Interaction-based networks . . . . . . . . . . . . . . . . . . 103 3.6.4 Lexical matching networks . . . . . . . . . . . . . . . 
. . . 104 3.6.5 BERT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105 3.7 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105 4 Learning to rank with Duet networks 107 4.1 The Duet network . . . . . . . . . . . . . . . . . . . . . . . . . . . 110 4.1.1 Local subnetwork . . . . . . . . . . . . . . . . . . . . . . . 112 4.1.2 Distributed subnetwork . . . . . . . . . . . . . . . . . . . . 114 4.1.3 Optimization . . . . . . . . . . . . . . . . . . . . . . . . . 115 4.2 Experiments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116 4.2.1 Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116 4.2.2 Training . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118 \fContents 17 4.2.3 Baselines . . . . . . . . . . . . . . . . . . . . . . . . . . . 119 4.2.4 Evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . 122 4.3 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123 4.4 Further improvements . . . . . . . . . . . . . . . . . . . . . . . . . 125 4.4.1 Duet on MS MARCO . . . . . . . . . . . . . . . . . . . . 126 4.4.2 Duet on TREC Deep Learning track . . . . . . . . . . . . . 130 4.5 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135 4.6 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139 5 Retrieve, not just rerank, using deep neural networks 141 5.1 Query term independence assumption . . . . . . . . . . . . . . . . 142 5.2 Related work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144 5.3 Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145 5.4 Experiments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148 5.4.1 Task description . . . . . . . . . . . . . . . . . . . . . . . 148 5.4.2 Baseline models . . . . . . . . . . . . . . . . . . . . . . . 149 5.5 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149 5.6 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151 6 Stochastic learning to rank for target exposure 153 6.1 Related work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154 6.2 Expected exposure metrics . . . . . . . . . . . . . . . . . . . . . . 155 6.3 Optimizing for target exposure . . . . . . . . . . . . . . . . . . . . 158 6.3.1 Individual exposure parity . . . . . . . . . . . . . . . . . . 158 6.3.2 Group exposure parity . . . . . . . . . . . . . . . . . . . . 160 6.4 Experiments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160 6.4.1 Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160 6.4.2 Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162 6.4.3 Evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . 162 6.5 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162 6.6 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163 \f18 Contents 7 Learning to Rank for Query Auto-Completion 165 7.1 Query Auto-Completion for Rare Pre\ufb01xes . . . . . . . . . . . . . . 165 7.1.1 Related work . . . . . . . . . . . . . . . . . . . . . . . . . 166 7.1.2 Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168 7.1.3 Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169 7.1.4 Experiments . . . . . . . . . . . . . . . . . . . . . . . . . 171 7.1.5 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172 7.1.6 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . 
175 7.2 Session Context Modelling for Query Auto-Completion . . . . . . . 175 7.2.1 Related work . . . . . . . . . . . . . . . . . . . . . . . . . 177 7.2.2 Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179 7.2.3 Experiments . . . . . . . . . . . . . . . . . . . . . . . . . 184 7.2.4 Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187 7.2.5 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189 7.2.6 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . 190 7.2.7 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . 195 8 Benchmarking for neural IR 197 8.1 TREC Deep Learning track . . . . . . . . . . . . . . . . . . . . . . 198 8.2 Datasets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201 8.3 Results and analysis . . . . . . . . . . . . . . . . . . . . . . . . . . 202 8.4 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212 9 General Conclusions 215 9.1 A summary of our contributions . . . . . . . . . . . . . . . . . . . 217 9.2 The Future of neural IR . . . . . . . . . . . . . . . . . . . . . . . . 218 Appendices 221 A Published work 221 Bibliography 225 \fList of Figures 2.1 Zip\ufb01an distributions in search data . . . . . . . . . . . . . . . . . . 37 2.2 Document length distribution . . . . . . . . . . . . . . . . . . . . . 39 3.1 Anatomy of a neural IR model . . . . . . . . . . . . . . . . . . . . 49 3.2 Taxonomy of different neural approaches to IR . . . . . . . . . . . 50 3.3 Local and distributed representations of terms . . . . . . . . . . . . 52 3.4 Feature-based distributed representations . . . . . . . . . . . . . . . 53 3.5 Geometric interpretation of vector space representations . . . . . . 55 3.6 Notions of similarity in vector representations . . . . . . . . . . . . 58 3.7 Analogies using vector algebra . . . . . . . . . . . . . . . . . . . . 59 3.8 Architecture of the word2vec model . . . . . . . . . . . . . . . . . 62 3.9 Architecture of the paragraph2vec model . . . . . . . . . . . . . . . 66 3.10 Evidence of relevance from non-query terms . . . . . . . . . . . . . 67 3.11 Strengths and weaknesses of term embedding based matching . . . 72 3.12 Global vs. query-speci\ufb01c embeddings in query expansion . . . . . . 73 3.13 A simple neural network . . . . . . . . . . . . . . . . . . . . . . . 87 3.14 Demonstration of the need for hidden layers . . . . . . . . . . . . . 88 3.15 Input representations of text for DNNs . . . . . . . . . . . . . . . . 90 3.16 Shift-invariant neural architectures . . . . . . . . . . . . . . . . . . 92 3.17 Auto-encoder and Siamese Networks . . . . . . . . . . . . . . . . . 95 3.18 Variational autoencoder . . . . . . . . . . . . . . . . . . . . . . . . 96 3.19 Interaction matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . 103 3.20 Lexical and semantic term matching for ranking . . . . . . . . . . . 104 \f20 List of Figures 4.1 Word importance for the Duet . . . . . . . . . . . . . . . . . . . . 108 4.2 Architecture of the Duet network . . . . . . . . . . . . . . . . . . . 111 4.3 Interaction matrix of query-document exact matches . . . . . . . . 113 4.4 Effect of judged vs. random negatives on the Duet model . . . . . . 124 4.5 Duet with multiple \ufb01elds (DuetMF) . . . . . . . . . . . . . . . . . 132 4.6 Performance of Duet by segments . . . . . . . . . . . . . . . . . . 135 4.7 Principal component analysis of IR models . . . . . . . . . . . . . 136 4.8 Effect of training data on the performance of Duet . . . . . . . . . . 
138 5.1 Incorporating QTI assumption in black-box models . . . . . . . . . 147 6.1 Optimizing for target exposure . . . . . . . . . . . . . . . . . . . . 161 7.1 CDSSM architecture . . . . . . . . . . . . . . . . . . . . . . . . . 167 7.2 Candidate generation for QAC . . . . . . . . . . . . . . . . . . . . 170 7.3 QAC performance by pre\ufb01x popularity . . . . . . . . . . . . . . . . 174 7.4 Visualization similar intent transitions from search logs . . . . . . . 180 7.5 Popularity of intent transitions in the search logs . . . . . . . . . . . 182 7.6 CDSSM performance by query length . . . . . . . . . . . . . . . . 191 7.7 CDSSM performance by embedding size . . . . . . . . . . . . . . . 192 7.8 Exploring or struggling? . . . . . . . . . . . . . . . . . . . . . . . 193 8.1 Growth of neural IR papers . . . . . . . . . . . . . . . . . . . . . . 197 8.2 Comparison of nnlm, nn, and trad runs . . . . . . . . . . . . . . . . 204 8.3 Per query comparison for document retrieval task . . . . . . . . . . 206 8.4 Per query comparison for passage retrieval task . . . . . . . . . . . 207 8.5 Visualizing inter-run similarity using t-SNE . . . . . . . . . . . . . 208 8.6 Analysis of \u201cfullrank\u201d vs. \u201crerank\u201d settings . . . . . . . . . . . . . 209 8.7 Metrics agreement by group . . . . . . . . . . . . . . . . . . . . . 210 8.8 Metrics agreement for the document retrieval task . . . . . . . . . . 211 8.9 Metrics agreement for the passage retrieval task . . . . . . . . . . . 212 9.1 Duckupine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216 \fList of Tables 1.1 Notation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31 3.1 Toy corpus of short documents . . . . . . . . . . . . . . . . . . . . 56 3.2 Different notions of similarity in the word2vec embedding space . . 69 3.3 Different notions of similarity in the DSSM embedding space . . . . 102 4.1 Statistics of the datasets for the document ranking task . . . . . . . 117 4.2 Performance of the Duet model on the document ranking task . . . . 123 4.3 Performance of Duet on TREC CAR . . . . . . . . . . . . . . . . . 125 4.4 Duet on MS MARCO . . . . . . . . . . . . . . . . . . . . . . . . . 129 4.5 Duet on TREC 2019 Deep Learning track . . . . . . . . . . . . . . 134 5.1 Deep models with QTI assumption for re-ranking . . . . . . . . . . 150 5.2 Duet with QTI for full retrieval . . . . . . . . . . . . . . . . . . . . 151 6.1 Optimizing for exposure parity . . . . . . . . . . . . . . . . . . . . 163 7.1 Suf\ufb01x recommendation based QAC . . . . . . . . . . . . . . . . . 166 7.2 Typical and Topical CDSSM . . . . . . . . . . . . . . . . . . . . . 168 7.3 Popular query suf\ufb01xes . . . . . . . . . . . . . . . . . . . . . . . . 169 7.4 Results on QAC for rare pre\ufb01xes . . . . . . . . . . . . . . . . . . . 173 7.5 Clusters of intent transitions from search logs . . . . . . . . . . . . 181 7.6 Analogies using query embeddings . . . . . . . . . . . . . . . . . . 184 7.7 CDSSM results on session modelling for QAC . . . . . . . . . . . . 187 7.8 CDSSM performance by length of history . . . . . . . . . . . . . . 190 7.9 Win-loss analysis of CDSSM on session modelling for QAC . . . . 194 \f22 List of Tables 8.1 TREC 2019 Deep Learning track datasets . . . . . . . . . . . . . . 202 8.2 TREC 2019 Deep Learning track runs . . . . . . . . . . . . . . . . 
202 \fChapter 1 Introduction Over the last decade, there have been dramatic improvements in performance in computer vision, speech recognition, and machine translation tasks, witnessed in research and in real-world applications [20\u201324]. These breakthroughs were largely fuelled by recent advances in neural network models, usually with multiple hidden layers, known as deep architectures combined with the availability of large datasets [25] and cheap compute power for model training. Exciting novel applications, such as conversational agents [26, 27], have also emerged, as well as gameplaying agents with human-level performance [28, 29]. Work has now begun in the information retrieval (IR) community to apply these neural methods, leading to the possibility of advancing the state of the art or even achieving breakthrough performance as in these other \ufb01elds. Retrieval of information can take many forms [30]. Users can express their information need in the form of a text query\u2014by typing on a keyboard, by selecting a query suggestion, or by voice recognition\u2014or the query can be in the form of an image, or in some cases the need can be implicit. Retrieval can involve ranking existing pieces of content, such as documents or short-text answers, or composing new responses incorporating retrieved information. Both the information need and the retrieved results may use the same modality (e.g., retrieving text documents in response to keyword queries), or be different (e.g., image search using text queries). The information within the document text may be semi-structured, and the organization scheme may be shared between groups of documents in the collection\u2014e.g., \f24 Chapter 1. Introduction web pages from the same domain [31]. If the query is ambiguous, retrieval system may consider user history, physical location, temporal changes in information, or other context when ranking results. IR systems may also help users formulate their intent (e.g., via query auto-completion or query suggestion) and can extract succinct summaries of results that take the user\u2019s query into account. Neural IR refers to the application of shallow or deep neural networks to these retrieval tasks. We note that many natural language processing (NLP) tasks exist that are not IR. Machine translation of text from one human language to another is not an IR task. However, translation could be used in an IR system, to enable cross-language retrieval on a multilingual corpus [32]. Inferring attributes of a named entity [33], from text or graph-structured data, is not an IR task in itself. However, an IR system could use inferred entity attributes to enhance its performance on IR tasks. In general, many NLP tasks do not involve information access and retrieval, so are not IR tasks, but some can still be useful as part of a larger IR system. In this thesis, we focus on neural methods that employ deep architectures to retrieve and rank documents in response to a query, an important IR task. A search query may typically contain a few terms, while the document length, depending on the scenario, may range from a few terms to hundreds of sentences or more. Neural models for IR use learned latent representations of text, and usually contain a large number of parameters that need to be tuned. ML models with large set of parameters typically bene\ufb01t from large quantity of training data [34\u201338]. 
Unlike traditional learning to rank (LTR) approaches [39] that train ML models over a set of hand-crafted features, recent neural models for IR typically accept the raw text of a query and document as input. Learning suitable representations of text also demands large-scale datasets for training [7]. Therefore, unlike classical IR models, these neural approaches tend to be data hungry, with performance that improves with more training data. In other \ufb01elds, the design of neural network models has been informed by characteristics of the application and data. For example, the datasets and successful architectures are quite different in visual object recognition, speech recognition, \f1.1. Contributions 25 and game playing agents. While IR shares some common attributes with the \ufb01eld of NLP, it also comes with its own set of unique challenges. Effective IR systems must deal with query-document vocabulary mismatch problem, by modeling relationships between different query and document terms and how they indicate relevance. Models should also consider lexical matches when the query contains rare terms\u2014such as a person\u2019s name or a product model number\u2014not seen during training, and to avoid retrieving semantically related but irrelevant results. In many real-life IR tasks, the retrieval involves extremely large collections\u2014such as the document index of a commercial Web search engine\u2014containing billions of documents. Ef\ufb01cient IR methods should take advantage of specialized IR data structures, such as inverted index, to ef\ufb01ciently retrieve from large collections. Given an information need, the IR system also mediates how much exposure an information artifact receives by deciding whether it should be displayed, and where it should be positioned, among other results. Exposure-aware IR systems may optimize for additional objectives, besides relevance, such as parity of exposure for retrieved items and content publishers. In our work, we focus on methods using deep neural networks for document ranking, and to a lesser extent other retrieval tasks. We identify key challenges and principles which motivate our design of novel neural approaches to ranking. We study these proposed methods with respect to retrieval quality, query response time, and exposure disparity. In Section 1.1, we summarize our key contributions. The remainder of this chapter is dedicated to describing the problem formulation. In Section 1.2, we provide an overview of the IR tasks that we use for evaluation. In Section 1.3, we describe a set of common notations that we use in the remainder of this thesis. Finally, we describe relevant IR metrics in Section 1.4. 1.1 Contributions In this thesis, we explore neural network based approaches to IR. This section summarizes the key research contributions of this thesis by chapter. Where appropriate, we also cite the publications that forms the basis of that chapter. \f26 Chapter 1. Introduction \u2022 Chapter 1-3 are based on the book by Mitra and Craswell [1]\u2014and corresponding tutorials [2\u20136]. The current chapter introduces key IR tasks, evaluation metrics, and mathematical notations that are referenced throughout in this thesis. Chapter 2 presents our motivation for exploring neural IR methods. Chapter 3 provides a survey of existing literature on neural and traditional non-neural approaches to IR. Key concepts related to IR models and neural representation learning are explained. 
These three chapters have no novel theoretical or algorithmic contributions, but provides a detailed overview of the \ufb01eld that also serves as the background for the remaining sections. \u2022 Chapter 4 is based on [7\u201310] and emphasizes the importance of incorporating evidence based on both patterns of exact query term matches in the document as well as the similarity between query and document text based on learned latent representations for retrieval. We operationalize this principle by proposing a deep neural network architecture, called Duet, that jointly learns two deep neural networks focused on matching using lexical and latent representations of text, respectively. We benchmark the proposed model on: (i) Bing document ranking task, (ii) TREC Complex Answer Retrieval task, (iii) MS MARCO passage ranking task, and (iv) TREC 2019 Deep Learning track document and passage ranking tasks and demonstrate that estimating relevance by inspecting both lexical and latent matches performs better than considering only one of those aspects for retrieval. \u2022 Chapter 5 is based on [13, 40] and studies neural methods in the context of retrieval from the full collection, instead of just reranking. In particular, we study the impact of incorporating the query term independence (QTI) assumption in neural architectures. We \ufb01nd that incorporating QTI assumption in several deep neural ranking models results in minimal (or no) degradation in ranking effectiveness. However, under the QTI assumption, the learned ranking functions can be combined with specialised IR data structures, e.g., inverted index, for fast and scalable candidate generation in the \ufb01rst stage of retrieval. We benchmark on: (i) MS MARCO passage ranking task and \f1.1. Contributions 27 (ii) TREC 2019 Deep Learning track to demonstrate that neural methods can be employed for more effective but also ef\ufb01cient candidate generation. \u2022 Chapter 6 is based on [14] and studies learning to rank with neural networks in the context of stochastic ranking. Due to presentation bias, a static ranked list of results may cause large difference in exposure of items with similar relevance. We present a stochastic ranking framework that can optimize towards exposure targets under different constraints\u2014e.g., individual and group exposure parity. While the original study [14] is a collaborative project focusing on the expected exposure metric, this chapter summarizes our key contributions related to the framework of model optimization for individual and groupwise parity of expected exposure. \u2022 Chapter 7, based on [18, 19], looks at the application of neural IR beyond ad hoc retrieval\u2014to the query auto-completion (QAC) task. The ranking task, in case of QAC, poses challenges that are different from those in querydocument or query-passage matching. In this chapter, we study two applications of deep models for QAC: (i) Recommending completions for rare query pre\ufb01xes and (ii) modeling query reformulations for session context-aware QAC. \u2022 Chapter 8 summarizes \ufb01ndings from our recent efforts on large-scale benchmarking of deep neural IR methods at TREC [15]. The TREC Deep Learning track [15, 16] provides a strict blind evaluation for IR methods that take advantage of large supervised training datasets, and have been instrumental in demonstrating the superior retrieval quality for many recent neural methods proposed by the research community. 
\u2022 Finally, in Chapter 9, we conclude with a discussion on the future of neural IR research. In this chapter, we re\ufb02ect on the progress we have already made as a \ufb01eld and provide some personal perspectives on the road ahead. \f28 Chapter 1. Introduction 1.2 Evaluation tasks We focus on text retrieval in IR, where the user enters a text query and the system returns a ranked list of search results. Search results may be passages of text or full text documents. The system\u2019s goal is to rank the user\u2019s preferred search results at the top. This problem is a central one in the IR literature, with well-understood challenges and solutions. Text retrieval methods for full text documents and for short text passages have application in ad hoc retrieval systems and question answering systems, respectively. We describe these two tasks in this section. 1.2.1 Ad hoc retrieval Ranked document retrieval is a classic problem in information retrieval, as in the main task of the Text Retrieval Conference [41], and performed by commercial search engines such as Google, Bing, Baidu, and Yandex. TREC tasks may offer a choice of query length, ranging from a few terms to a few sentences, whereas search engine queries tend to be at the shorter end of the range. In an operational search engine, the retrieval system uses specialized index structures to search potentially billions of documents. The results ranking is presented in a search engine results page (SERP), with each result appearing as a summary and a hyperlink. The engine can instrument the SERP, gathering implicit feedback on the quality of search results such as click decisions and dwell times. A ranking model can take a variety of input features. Some ranking features may depend on the document alone, such as how popular the document is with users, how many incoming links it has, or to what extent document seems problematic according to a Web spam classi\ufb01er. Other features depend on how the query matches the text content of the document. Still more features match the query against document metadata, such as referred text of incoming hyperlink anchors, or the text of queries from previous users that led to clicks on this document. Because anchors and click queries are a succinct description of the document, they can be a useful source of ranking evidence, but they are not always available. A newly created document would not have much link or click text. Also, not every document is popular \f1.2. Evaluation tasks 29 enough to have past links and clicks, but it still may be the best search result for a user\u2019s rare or tail query. In such cases, when text metadata is unavailable, it is crucial to estimate the document\u2019s relevance primarily based on its text content. In the text retrieval community, retrieving documents for short-text queries by considering the long body text of the document is an important challenge. These ad hoc retrieval tasks have been an important part of the Text REtrieval Conference (TREC) [42], starting with the original tasks searching newswire and government documents, and later with the Web track1 among others. The TREC participants are provided a set of, say \ufb01fty, search queries and a document collection containing 500-700K newswire and other documents. Top ranked documents retrieved for each query from the collection by different competing retrieval systems are assessed by human annotators based on their relevance to the query. 
Given a query, the goal of the IR model is to rank documents with better assessor ratings higher than the rest of the documents in the collection. In Section 1.4, we describe standard IR metrics for quantifying model performance given the ranked documents retrieved by the model and the corresponding assessor judgments for a given query. 1.2.2 Question-answering Question-answering tasks may range from choosing between multiple choices (typically entities or binary true-or-false decisions) [43\u201346] to ranking spans of text or passages [47\u201351], and may even include synthesizing textual responses by gathering evidence from one or more sources [52, 53]. TREC question-answering experiments [47] has participating IR systems retrieve spans of text, rather than documents, in response to questions. IBM\u2019s DeepQA [51] system\u2014behind the Watson project that famously demonstrated human-level performance on the American TV quiz show, \u201cJeopardy!\u201d\u2014also has a primary search phase, whose goal is to \ufb01nd as many potentially answer-bearing passages of text as possible. With respect to the question-answering task, the scope of this thesis is limited to ranking answer containing passages in response to natural language questions or short query texts. Retrieving short spans of text pose different challenges than ranking docu1http://www10.wwwconference.org/cdrom/papers/317/node2.html \f30 Chapter 1. Introduction ments. Unlike the long body text of documents, single sentences or short passages tend to be on point with respect to a single topic. However, answers often tend to use different vocabulary than the one used to frame the question. For example, the span of text that contains the answer to the question \u201cwhat year was Martin Luther King Jr. born?\u201d may not contain the term \u201cyear\u201d. However, the phrase \u201cwhat year\u201d implies that the correct answer text should contain a year\u2014such as \u20181929\u2019 in this case. Therefore, IR systems that focus on the question-answering task need to model the patterns expected in the answer passage based on the intent of the question. The focus of this thesis is on ad hoc retrieval, and to a lesser extent on questionanswering. However, neural approaches have shown interesting applications to other existing retrieval scenarios, including query recommendation [54], modelling diversity [55], modelling user click behaviours [56], entity ranking [57, 58], knowledge-based IR [59], and even optimizing for multiple IR tasks [60]. In addition, recent trends suggest that advancements in deep neural networks methods are also fuelling emerging IR scenarios such as proactive recommendations [61\u201363], conversational IR [64, 65], and multi-modal retrieval [66]. Neural methods may have an even bigger impact on some of these other IR tasks. To demonstrate that neural methods are useful in IR\u2014beyond the document and passage ranking tasks\u2014 we also present, in this thesis, a brief study on employing deep models for the QAC task in Chapter 7. 1.3 Notation We adopt some common notation for this thesis shown in Table 1.1. We use lowercase to denote vectors (e.g., \u20d7 x) and upper-case for tensors of higher dimensions (e.g., X). The ground truth relq(d) in Table 1.1 may be based on either manual relevance annotations or be implicitly derived from user behaviour on SERP (e.g., from clicks). \f1.4. Metrics 31 Table 1.1: Notation used in this thesis. 
Single query: $q$
Single document: $d$
Set of queries: $Q$
Collection of documents: $D$
Term in query q: $t_q$
Term in document d: $t_d$
Full vocabulary of all terms: $T$
Set of ranked results retrieved for query q: $R_q$
Result tuple (document d at rank i): $\langle i,d \rangle$, where $\langle i,d \rangle \in R_q$
Relevance label of document d for query q: $rel_q(d)$
$d_i$ is more relevant than $d_j$ for query q: $rel_q(d_i) > rel_q(d_j)$, or $d_i \succ_q d_j$
Frequency of term t in document d: $tf(t,d)$
Number of documents that contain term t: $df(t)$
Vector representation of text z: $\vec{v}_z$
Probability function for an event E: $p(\mathcal{E})$

1.4 Metrics
A large number of IR studies [67-74] have demonstrated that users of retrieval systems tend to pay attention mostly to top-ranked results. IR metrics, therefore, focus on rank-based comparisons of the retrieved result set R to an ideal ranking of documents, as determined by manual judgments or implicit feedback from user behaviour data. These metrics are typically computed at a rank position, say k, and then averaged over all queries in the test set. Unless otherwise specified, R refers to the top-k results retrieved by the model. Next, we describe a few standard metrics used in IR evaluations.

Precision and recall. Precision and recall both compute the fraction of relevant documents retrieved for a query q, but with respect to the total number of documents in the retrieved set $R_q$ and the total number of relevant documents in the collection D, respectively. Both metrics assume that the relevance labels are binary.
$\text{Precision}_q = \frac{\sum_{\langle i,d \rangle \in R_q} rel_q(d)}{|R_q|}$ (1.1)
$\text{Recall}_q = \frac{\sum_{\langle i,d \rangle \in R_q} rel_q(d)}{\sum_{d \in D} rel_q(d)}$ (1.2)

Mean reciprocal rank (MRR). Mean reciprocal rank [75] is also computed over binary relevance judgments. It is given as the reciprocal rank of the first relevant document averaged over all queries.
$RR_q = \max_{\langle i,d \rangle \in R_q} \frac{rel_q(d)}{i}$ (1.3)

Mean average precision (MAP). The average precision [76] for a ranked list of documents R is given by,
$\text{AveP}_q = \frac{\sum_{\langle i,d \rangle \in R_q} \text{Precision}_{q,i} \times rel_q(d)}{\sum_{d \in D} rel_q(d)}$ (1.4)
where $\text{Precision}_{q,i}$ is the precision computed at rank i for the query q. The average precision metric is generally used when relevance judgments are binary, although variants using graded judgments have also been proposed [77]. The mean of the average precision over all queries gives the MAP score for the whole set.

Normalized discounted cumulative gain (NDCG). There are a few different variants of the discounted cumulative gain ($DCG_q$) metric [78] which can be used when graded relevance judgments are available for a query q—say, on a five-point scale between zero to four. One incarnation of this metric is as follows.
$DCG_q = \sum_{\langle i,d \rangle \in R_q} \frac{gain_q(d)}{\log_2(i+1)}$ (1.5)
The ideal DCG ($IDCG_q$) is computed the same way but by assuming an ideal rank order for the documents up to rank k. The normalized DCG ($NDCG_q$) is then given by,
$NDCG_q = \frac{DCG_q}{IDCG_q}$ (1.6)

Normalized cumulative gain (NCG). A metric related to NDCG but suitable for evaluating the quality of retrieval for first stage candidate generation methods is NCG—i.e., NDCG without the position discounting.
$CG_q = \sum_{\langle i,d \rangle \in R_q} gain_q(d)$ (1.7)
$NCG_q = \frac{CG_q}{ICG_q}$ (1.8)
NCG has been employed in the literature [15, 16, 79] to measure how much relevant items are recalled as part of candidate generation without paying attention to the exact order in which the candidates appear in the retrieved set.
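The rank-based metrics above map directly onto short utility functions. Below is a minimal Python sketch of these definitions; the function names, the toy label dictionary, and the binary interpretation of graded labels inside precision, recall, and average precision are illustrative assumptions, not part of the thesis.

```python
import math

def precision(ranked, rels, k):
    """Fraction of the top-k retrieved documents that are relevant (binary view of labels)."""
    topk = ranked[:k]
    return sum(rels.get(d, 0) > 0 for d in topk) / max(len(topk), 1)

def recall(ranked, rels, k):
    """Fraction of all relevant documents that appear in the top-k."""
    total_rel = sum(1 for r in rels.values() if r > 0)
    return sum(rels.get(d, 0) > 0 for d in ranked[:k]) / max(total_rel, 1)

def reciprocal_rank(ranked, rels):
    """1/rank of the first relevant document, or 0 if none is retrieved."""
    for i, d in enumerate(ranked, start=1):
        if rels.get(d, 0) > 0:
            return 1.0 / i
    return 0.0

def average_precision(ranked, rels):
    """Mean of the precision values at the ranks where relevant documents occur."""
    total_rel = sum(1 for r in rels.values() if r > 0)
    hits, score = 0, 0.0
    for i, d in enumerate(ranked, start=1):
        if rels.get(d, 0) > 0:
            hits += 1
            score += hits / i
    return score / max(total_rel, 1)

def dcg(ranked, rels, k):
    """Discounted cumulative gain with a log2 position discount (Equation 1.5)."""
    return sum(rels.get(d, 0) / math.log2(i + 1)
               for i, d in enumerate(ranked[:k], start=1))

def ndcg(ranked, rels, k):
    """DCG normalized by the DCG of an ideal ordering of the judged documents (Equation 1.6)."""
    ideal = sorted(rels, key=rels.get, reverse=True)
    idcg = dcg(ideal, rels, k)
    return dcg(ranked, rels, k) / idcg if idcg > 0 else 0.0

def ncg(ranked, rels, k):
    """Cumulative gain without position discounting, normalized by the ideal CG (Equations 1.7-1.8)."""
    cg = sum(rels.get(d, 0) for d in ranked[:k])
    icg = sum(sorted(rels.values(), reverse=True)[:k])
    return cg / icg if icg > 0 else 0.0

# Toy example with graded labels on a 0-4 scale.
labels = {"d1": 3, "d2": 0, "d3": 1}
ranking = ["d2", "d1", "d3"]
print(ndcg(ranking, labels, k=3), average_precision(ranking, labels))
```

In practice these per-query values are averaged over the whole query set, as the text notes; the sketch only shows the per-query computation.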
Chapter 2. Motivation
We expect a good retrieval system to exhibit certain general attributes. We highlight some of them in this chapter. The design of any neural methods for IR should be informed by these desired properties. We operationalize these intuitions later in Chapters 4-7. In this chapter, we also introduce a general taxonomy of neural approaches for document ranking by categorizing them based on the step of the retrieval process they influence. This discussion on a general taxonomy should serve as a common lens through which we can inspect both existing neural IR approaches as well as the new deep neural ranking models described in the rest of this thesis.

2.1 Desiderata of IR models
For any IR system, the relevance of the retrieved items to the input query is of foremost importance. But evaluating the effectiveness of an IR system in isolation, without considering critical dimensions such as the efficiency of the system or its robustness to collections with different properties, can be tantamount to a theoretical exercise without practical usefulness. An IR system mediates what information its users are exposed to and consume. It is, therefore, also important to quantify and limit any systematic disparity that the retrieval system may inadvertently cause with respect to exposure of information artifacts of similar relevance, or their publishers. These concerns not only serve as yardsticks for comparing the different neural and non-neural approaches but also guide our model designs. Where appropriate, we connect these motivations to our contributions in this area, some of which form the basis for subsequent chapters in this thesis.

2.1.1 Semantic matching
Most traditional approaches to ad hoc retrieval count repetitions of the query terms in the document text. Exact term matching between query and document text, while simple, serves as a foundation for many IR systems. Different weighting and normalization schemes over these counts lead to a variety of TF-IDF models, such as BM25 [80]. However, by only inspecting the query terms the IR model ignores all the evidence of aboutness from the rest of the document. So, when ranking for the query "Australia" only the occurrences of "Australia" in the document are considered—although the frequency of other terms like "Sydney" or "kangaroo" may be highly informative [81, 82]. In the case of the query "what channel are the seahawks on today", the query term "channel" provides hints to the IR model to pay attention to the occurrences of "ESPN" or "Sky Sports" in the document text—none of which appears in the query itself. For IR tasks, such as QAC, the lexical similarity between the input (e.g., the query prefix) and candidate items (e.g., the possible completions) is minimal. In such scenarios, understanding the relationship between the query prefix and suffix requires going beyond inspecting lexical overlap. Semantic understanding, however, goes further than mapping query terms to document terms [83].
A good IR model may consider the terms "hot" and "warm" related, as well as the terms "dog" and "puppy"—but must also distinguish that a user who submits the query "hot dog" is not looking for a "warm puppy" [84]. At the more ambitious end of the spectrum, semantic understanding would involve logical reasoning by the IR system—so for the query "concerts during SIGIR" it associates a specific edition of the conference (the upcoming one) and considers both its location and dates when recommending concerts nearby during the correct week. These examples motivate that IR models should have some latent representations of intent as expressed by the query and of the different topics in the document text—so that inexact matching can be performed that goes beyond lexical term counting.

[Figure 2.1: A log-log plot of frequency versus rank for query impressions and document clicks in the AOL query logs [85]. Panel (a) shows the distribution of query impressions (log10(query frequency) against log10(query ID)); panel (b) shows the distribution of document clicks (log10(document frequency) against log10(document ID)). The plots highlight that these quantities follow a Zipfian distribution.]

2.1.2 Robustness to rare inputs
Query frequencies in most IR tasks follow a Zipfian distribution [86] (see Figure 2.1). In the publicly available AOL query logs [85], for example, more than 70% of the distinct queries are seen only once in the period of three months from which the queries are sampled. In the same dataset, more than 50% of the distinct documents are clicked only once. Downey et al. [87] demonstrate that typical web search engines may struggle to retrieve these infrequently searched-for documents and perform poorer on queries containing terms that appear extremely rarely in its historical search logs. Improving the robustness of retrieval systems in the context of rare queries and documents is an important challenge in IR. Many IR models that learn latent representations of text from data often naively assume a fixed size vocabulary. These models perform poorly when the query consists of terms rarely (or never) seen during training. Even if the model does not assume a fixed vocabulary, the quality of the latent representations may depend heavily on how often the terms under consideration appear in the training dataset.
In contrast, exact matching models, like BM25 [80], can take advantage of the specificity [88-93] of rare terms to retrieve the few documents from the index that contains these terms. Kangassalo et al. [94] demonstrated that users of search systems also tend to consider term specificity in their query formulation process and may add rare terms to help discriminate between relevant and nonrelevant documents. Semantic understanding in an IR model cannot come at the cost of poor retrieval performance on queries containing rare terms. When dealing with a query such as "pekarovic land company" the IR model will benefit from considering exact matches of the rare term "pekarovic". In practice an IR model may need to effectively trade-off exact and inexact matching for a query term. However, the decision of when to perform exact matching can itself be informed by semantic understanding of the context in which the terms appear in addition to the terms themselves.

2.1.3 Robustness to variable length text
Depending on the task, the IR system may be expected to retrieve documents, passages, or even short sequences consisting of only a few terms. The design of a retrieval model for long documents is likely to share some similarities to a passage or short text retrieval system, but also be different to accommodate distinct challenges associated with retrieving long text. For example, the challenge of vocabulary mismatch, and hence the importance of semantic matching, may be amplified when retrieving shorter text [95-97]. Similarly, when matching the query against longer text, it is informative to consider the positions of the matches [98-100], but may be less so in the case of short text matching. When specifically dealing with long text, the compute and memory requirements may be significantly higher for machine learned systems (e.g., [101]) and require careful design choices for mitigation. Typical text collections contain documents of varied lengths (see Figure 2.2). Even when constrained to document retrieval, a good IR system must be able to deal with documents of different lengths without over-retrieving either long or short documents [102, 103]. Relevant documents may also contain irrelevant sections, and the relevant content may either be localized, or spread over multiple sections in the document [104]. Document length normalization is well-studied in the context of IR models (e.g., [92, 93, 102, 105, 106]), and this existing research should inform the design of any new IR models.

[Figure 2.2: Distribution of document length (in bytes) of Wikipedia featured articles as of June 30, 2014; a histogram of the number of articles per page-length bucket, from 0-10K to 250-260K bytes. Source: https://en.wikipedia.org/wiki/Wikipedia:Featured_articles/By_length.]

2.1.4 Efficiency
Efficiency is one of the salient points of any retrieval system [107-109]. A typical commercial Web search engine may deal with tens of thousands of queries per second—retrieving results for each query from an index containing billions of documents.
Search engines typically involve specialised data structures, such as inverted index, and large multi-tier architectures\u2014and the retrieval process generally consists of multiple stages of pruning the candidate set of documents [110\u2013112]. The IR model at the bottom of this telescoping setup may need to sift through billions of documents\u2014while the model at the top may only need to re-rank between tens of promising documents. The retrieval approaches that are suitable at one level of the stack may be highly impractical at a different step\u2014models at the bottom need to be fast but mostly focus on eliminating irrelevant or junk results, while models at the top tend to develop more sophisticated notions of relevance, and focus on distinguishing between documents that are much closer on the relevance scale. So far, much of the focus on neural IR approaches have been limited to re-ranking top-n documents which considerably constrains the impact of these methods. 1http://www.internetlivestats.com/one-second/#google-band \f40 Chapter 2. Motivation 2.1.5 Parity of exposure IR systems mediate what information its users are exposed to. Under presentation bias [67, 68, 72, 73, 113, 114], a static ranking may disproportionately distribute exposure between items of similar relevance raising concerns about producer-side fairness [115\u2013118]. Exposure optimization has been proposed as a means of achieving fairness in ranking for individuals [116] or groups de\ufb01ned by sensitive attributes such as gender or race [117, 118]. Stochastic ranking policies that optimize for individual or group parity of exposure in expectation may be more appropriate under these settings. 2.1.6 Sensitivity to context Retrieval in the wild can leverage many implicit and explicit context information [119\u2013132]. The query \u201cweather\u201d may refer to the weather in Seattle or in London depending on where the user is located. An IR model may retrieve different results for the query \u201cdecorations\u201d depending on the current season. The query \u201cgiants match highlights\u201d may be better disambiguated if the system knows whether the user is a fan of baseball or American football, whether she is located on the East or the West coast of USA, or if the model has knowledge of recent sport \ufb01xtures. In conversational IR systems [133], the correct response to the question \u201cWhen did she become the prime minister?\u201d would depend on disambiguating the correct entity based on the context of references made in the previous turns of the conversation. In proactive retrieval scenarios [134\u2013137], the retrieval can even be triggered based solely on implicit context without any explicit query submission from the user. Relevance in many applications is, therefore, situated in the user and task context, and is an important consideration in the design of IR systems. 2.1.7 Robustness to corpus variance An interesting consideration for IR models is how well they perform on corpora whose distributions are different from the data that the model was trained on. Models like BM25 [80] have very few parameters and often demonstrate reasonable performance \u201cout of the box\u201d on new corpora with little or no additional tuning of \f2.1. Desiderata of IR models 41 parameters. 
Supervised deep learning models containing millions (or even billions) of parameters, on the other hand, are known to be more sensitive to distributional differences between training and evaluation data, and have been shown to be especially vulnerable to adversarial inputs [138]. The application of unsupervised term embeddings on collections and tasks that are different from the original data the representations were trained on is common in the literature. While these can be seen as examples of successful transfer learning, we also \ufb01nd evidence [139] that term embeddings trained on collections distributionally closer to the test samples perform signi\ufb01cantly better. Some of the variances in performance of deep models on new corpora is offset by better retrieval on the test corpus that is distributionally closer to the training data, where the model may have picked up crucial corpus speci\ufb01c patterns. For example, it may be understandable if a model that learns term representations based on the text of Shakespeare\u2019s Hamlet is effective at retrieving passages relevant to a search query from The Bard\u2019s other works, but performs poorly when the retrieval task involves a corpus of song lyrics by Jay-Z. However, the poor performances on new corpus can also be indicative that the model is over\ufb01tting, or suffering from the Clever Hans2 effect [140]. For example, an IR model trained on recent news corpus may learn to associate \u201cTheresa May\u201d with the query \u201cuk prime minister\u201d and as a consequence may perform poorly on older TREC datasets where the connection to \u201cJohn Major\u201d may be more appropriate. ML models that are hyper-sensitive to corpus distributions may be vulnerable when faced with unexpected changes in distributions in the test data. This can be particularly problematic when the test distributions naturally evolve over time due to underlying changes in the user population or behaviour. The models may need to be re-trained periodically or designed to be invariant to such changes (e.g., [141]). While this list of desired attributes of an IR model is in no way complete, it serves as a reference for comparing many of the neural and non-neural approaches described in the rest of this thesis. 2https://en.wikipedia.org/wiki/Clever_Hans \f42 Chapter 2. Motivation 2.2 Designing neural models for IR In the previous section, we discuss several important desiderata of IR models. These expectations inform the design of neural architectures described in this thesis. Machine learning models\u2014including neural networks\u2014are employed for learning to rank [39] in IR, as we discuss in Section 3.4. However, unlike traditional LTR methods that depend on manually crafted features, the focus of our work is on neural ranking models that accept raw text as input\u2014and focus on learning latent representations of text appropriate for the ranking task. Learning good representations of text is key to effective semantic matching in IR and is a key ingredient for all methods proposed in Chapters 4-7. Section 2.1.2 highlights the importance of exact matching when dealing with rare terms. In Chapter 4 and Section 7.1, we operationalize this intuition and demonstrate that neural methods that combine lexical and semantic matching achive more robustness to rare inputs for different retrieval tasks. An important IR task, in the context of this thesis, is ranking documents that may be hundreds of sentences long. 
As far as we are aware, the Duet model\u2014 described in Chapter 4\u2014is the \ufb01rst to consider deep neural network based representation learning to rank documents. The different shift-invariant architectures discussed in Section 3.5.2 may also be appropriate for dealing with documents of different lengths. In more recent work [103], we have speci\ufb01cally emphasized on the challenges of dealing with long document text and demonstrated that without careful design neural models can under-retrieve longer documents. In most real IR tasks\u2014such as Web search\u2014retrieval involves collections with billions of documents. In traditional IR, ef\ufb01cient data structures such as inverted index [142] or pre\ufb01x-trees [143] are commonly employed. When designing neural ranking models, it is important to consider how they may interact with traditional IR data structures, such as inverted index. In Chapter 5, We propose a strategy that allows using deep networks\u2014in combination with standard inverted index\u2014to retrieve from the full collection using predominantly of\ufb02ine precomputation without sacri\ufb01cing fast query response time. We show that this strategy generalizes effec\f2.2. Designing neural models for IR 43 tively to several recent state-of-the-art deep architectures for IR. The learning to rank literature has traditionally focused on generating static rankings of items given user intent. In Chapter 6, we argue that stochastic ranking policies are crucial when optimizing for fair distribution of exposure over items (or groups of items) of similar relevance. We demonstrate that learning to rank models can be trained towards exposure parity objectives. Neural models can incorporate side-information on the task or user context. In Section 7.2, we explore neural representation learning in the context of session modeling for more effective QAC. Generalizing neural models across different corpora continues to be an important open problem. Neural models with large number of learnable parameters risk over\ufb01tting to the distributions observed in the training data. These models may underperform when the properties of the test dataset is signi\ufb01cantly different from the training corpus. While we do not discuss this particular topic in this thesis, we refer the interested reader to our recent work [141, 144] related to regularization of neural ranking models. \f\fChapter 3 Background In this chapter, we introduce the fundamentals of neural IR, in context of traditional retrieval research, with visual examples to illustrate key concepts and a consistent mathematical notation for describing key models. Section 3.1 presents a survey of IR models. Section 3.2 introduces neural and non-neural methods for learning term embeddings, without the use of supervision from IR labels, and with a focus on the notion of similarity. Section 3.3 surveys some speci\ufb01c approaches for incorporating such embeddings in IR. Section 3.4 introduces supervised learning to rank models. Section 3.5 introduces the fundamentals of deep models\u2014including standard architectures and toolkits\u2014before Section 3.6 surveys some speci\ufb01c approaches for incorporating deep neural networks (DNNs) in IR. 3.1 IR Models 3.1.1 Traditional IR models In this section, we introduce a few of the traditional IR approaches. The decades of insights from these IR models not only inform the design of our new neural based approaches, but these models also serve as important baselines for comparison. 
They also highlight the various desiderata that we expect the neural IR models to incorporate.

BM25
There is a broad family of statistical functions in IR that consider the number of occurrences of each query term in the document—i.e., term-frequency (TF)—and the corresponding inverse document frequency (IDF) of the same terms in the full collection (as an indicator of the informativeness of the term). One theoretical basis for such formulations is the probabilistic model of IR that yielded the BM25 [80] ranking function.
$\text{BM25}(q,d) = \sum_{t_q \in q} idf(t_q) \cdot \frac{tf(t_q,d) \cdot (k_1+1)}{tf(t_q,d) + k_1 \cdot \left(1 - b + b \cdot \frac{|d|}{avgdl}\right)}$ (3.1)
where avgdl is the average length of documents in the collection D, and $k_1$ and b are parameters that are usually tuned on a validation dataset. In practice, $k_1$ is sometimes set to some default value in the range [1.2, 2.0] and b as 0.75. The $idf(t)$ is computed as,
$idf(t) = \log \frac{|D| - df(t) + 0.5}{df(t) + 0.5}$ (3.2)
BM25 aggregates the contributions from individual terms but ignores any phrasal or proximity signals between the occurrences of the different query terms in the document. A variant of BM25 [145, 146] also considers documents as composed of several fields (such as title, body, and anchor texts).

Language modelling (LM)
In the language modelling based approach [90, 147, 148], documents are ranked by the posterior probability $p(d|q)$.
$p(d|q) = \frac{p(q|d) \cdot p(d)}{\sum_{\bar{d} \in D} p(q|\bar{d}) \cdot p(\bar{d})}$ (3.3)
$\propto p(q|d) \cdot p(d)$ (3.4)
$= p(q|d)$, assuming $p(d)$ is uniform (3.5)
$= \prod_{t_q \in q} p(t_q|d)$ (3.6)
$\hat{p}(\mathcal{E})$ is the maximum likelihood estimate (MLE) of the probability of event $\mathcal{E}$, and $p(q|d)$ indicates the probability of generating query q by randomly sampling terms from document d. In its simplest form, we can estimate $p(t_q|d)$ by,
$p(t_q|d) = \frac{tf(t_q,d)}{|d|}$ (3.7)
However, most formulations of language modelling based retrieval typically employ some form of smoothing [148] by sampling terms from both the document d and the full collection D. The two common smoothing methods are:
1. Jelinek-Mercer smoothing [149]
$p(t_q|d) = \lambda \frac{tf(t_q,d)}{|d|} + (1-\lambda) \frac{\sum_{\bar{d} \in D} tf(t_q,\bar{d})}{\sum_{\bar{d} \in D} |\bar{d}|}$ (3.8)
2. Dirichlet Prior Smoothing [150]
$p(t_q|d) = \left( tf(t_q,d) + \mu \frac{\sum_{\bar{d} \in D} tf(t_q,\bar{d})}{\sum_{\bar{d} \in D} |\bar{d}|} \right) \Big/ \left( |d| + \mu \right)$ (3.9)
Both TF-IDF and language modelling based approaches estimate document relevance based on the count of only the query terms in the document. The position of these occurrences and the relationship with other terms in the document are ignored.

Translation models
Berger and Lafferty [151] proposed an alternative method to estimate $p(t_q|d)$ in the language modelling based IR approach (Equation 3.6), by assuming that the query q is being generated via a "translation" process from the document d.
$p(t_q|d) = \sum_{t_d \in d} p(t_q|t_d) \cdot p(t_d|d)$ (3.10)
The $p(t_q|t_d)$ component allows the model to garner evidence of relevance from non-query terms in the document. Berger and Lafferty [151] propose to estimate $p(t_q|t_d)$ from query-document paired data similar to techniques in statistical machine translation [152, 153]—but other approaches for estimation have also been explored [154].
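To make these scoring functions concrete, here is a minimal Python sketch of BM25 (Equations 3.1-3.2) and Dirichlet-smoothed query likelihood (Equations 3.6 and 3.9) over a small in-memory index. The toy corpus, the data structures, and the parameter defaults (k1 = 1.2, b = 0.75, mu = 2000) are illustrative assumptions, not the configuration used in the thesis.

```python
import math
from collections import Counter

# Toy corpus: each document is a list of terms.
docs = {
    "d1": "the cat sat on the mat".split(),
    "d2": "my dog barks at the cat".split(),
}
tf = {d: Counter(terms) for d, terms in docs.items()}           # term frequencies per document
df = Counter(t for terms in docs.values() for t in set(terms))  # document frequencies
N = len(docs)
avgdl = sum(len(terms) for terms in docs.values()) / N          # average document length
coll_tf = Counter(t for terms in docs.values() for t in terms)  # collection term counts
coll_len = sum(len(terms) for terms in docs.values())

def bm25(query, d, k1=1.2, b=0.75):
    """BM25 score of document d for a list of query terms (Equations 3.1-3.2)."""
    score = 0.0
    for t in query:
        if tf[d][t] == 0:
            continue
        idf = math.log((N - df[t] + 0.5) / (df[t] + 0.5))
        num = tf[d][t] * (k1 + 1)
        den = tf[d][t] + k1 * (1 - b + b * len(docs[d]) / avgdl)
        score += idf * num / den
    return score

def query_likelihood_dirichlet(query, d, mu=2000):
    """Log query likelihood with Dirichlet prior smoothing (Equations 3.6 and 3.9)."""
    score = 0.0
    for t in query:
        p_coll = coll_tf[t] / coll_len
        if p_coll == 0:          # term unseen in the whole collection: skip it in this sketch
            continue
        p = (tf[d][t] + mu * p_coll) / (len(docs[d]) + mu)
        score += math.log(p)
    return score

q = "cat barks".split()
print(sorted(docs, key=lambda d: bm25(q, d), reverse=True))
print(sorted(docs, key=lambda d: query_likelihood_dirichlet(q, d), reverse=True))
```

In a real system the same statistics would come from an inverted index rather than Python dictionaries; the structure of the two scoring loops stays the same.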
To address this, Metzler and Croft [155] proposed a linear model over proximity-based features. DM(q,d) = (1\u2212\u03bbow \u2212\u03bbuw) \u2211 tq\u2208q log (1\u2212\u03b1d)t f(tq,d) |d| +\u03b1d \u2211\u00af d\u2208Dt f(tq, \u00af d) \u2211\u00af d\u2208D | \u00af d| ! +\u03bbow \u2211 cq\u2208ow(q) log (1\u2212\u03b1d)t f#1(cq,d) |d| +\u03b1d \u2211\u00af d\u2208Dt f#1(cq, \u00af d) \u2211\u00af d\u2208D | \u00af d| ! +\u03bbuw \u2211 cq\u2208uw(q) log (1\u2212\u03b1d)t f#uwN(cq,d) |d| +\u03b1d \u2211\u00af d\u2208Dt f#uwN(cq, \u00af d) \u2211\u00af d\u2208D | \u00af d| ! (3.11) Where, ow(q) and uw(q) are the set of all contiguous n-grams (or phrases) and the set of all bags of terms that can be generated from query q. t f#1 and t f#uwN are the ordered-window and unordered-window operators from Indri [156]. Finally, \u03bbow and \u03bbuw are the tuneable parameters of the model. Pseudo relevance feedback (PRF) PRF-based methods\u2014e.g., Relevance Models (RM) [157, 158]\u2014typically demonstrate strong performance at the cost of executing an additional round of retrieval. The set of ranked documents R1 from the \ufb01rst round of retrieval is used to select expansion terms to augment the query which is used to retrieve a new ranked set of documents R2 that is presented to the user. The underlying approach to scoring a document in RM is by computing the KL divergence [159] between the query language model \u03b8q and the document language model \u03b8d. score(q,d) = \u2212\u2211 t\u2208T p(t|\u03b8q)log p(t|\u03b8q) p(t|\u03b8d) (3.12) Without PRF, p(t|\u03b8q) = t f(t,q) |q| (3.13) \f3.1. IR Models 49 query text generate query representation doc text generate doc representation estimate relevance query vector doc vector point of query representation point of match point of doc representation Figure 3.1: Document ranking typically involves a query and a document representation steps, followed by a matching stage. Neural models can be useful either for generating good representations or in estimating relevance, or both. But under the RM3 [160] formulation the new query language model \u00af \u03b8q is estimated by, p(t| \u00af \u03b8q) = \u03b1 t f(t,q) |q| +(1\u2212\u03b1) \u2211 d\u2208R1 p(t|\u03b8d)p(d)\u220f \u00af t\u2208q p(\u00af t|\u03b8d) (3.14) Besides language models, PRF based query expansion has also been explored in the context of other retrieval approaches (e.g., [161, 162]). By expanding the query using the results from the \ufb01rst round of retrieval PRF based approaches tend to be more robust to the vocabulary mismatch problem plaguing many other traditional IR methods. 3.1.2 Anatomy of neural IR models Document ranking comprises of performing three primary steps\u2014generate a representation of the query that speci\ufb01es the information need, generate a representation of the document that captures the distribution over the information contained, and match the query and the document representations to estimate their mutual rele\f50 Chapter 3. 
Background query text doc text generate manually designed features deep neural network for matching (a) Learning to rank using manually designed features (e.g., Liu [39]) query text generate query term vector doc text generate doc term vector generate matching patterns query term vector doc term vector deep neural network for matching (b) Estimating relevance from patterns of exact matches (e.g., [163]) query text generate query embedding doc text generate doc embedding cosine similarity query embedding doc embedding (c) Learning query and document representations for matching (e.g., [164, 165]) query text query expansion using embeddings doc text generate doc term vector query likelihood query term vector doc term vector (d) Query expansion using neural embeddings (e.g., [139, 166]) Figure 3.2: Examples of different neural approaches to IR. In (a) and (b) the neural network is only used at the point of matching, whereas in (c) the focus is on learning effective representations of text using neural methods. Neural models can also be used to expand or augment the query before applying traditional IR techniques, as shown in (d). \f3.2. Unsupervised learning of term representations 51 vance. All existing neural approaches to IR can be broadly categorized based on whether they in\ufb02uence the query representation, the document representation, or in estimating relevance. A neural approach may impact one or more of these stages shown in Figure 3.1. Neural networks are useful as learning to rank models as we will discuss in Section 3.4. In these models, a joint representation of query and document is generated using manually designed features and the neural network is used only at the point of match to estimate relevance, as shown in Figure 3.2a. In Section 3.6.4, we will discuss DNN models, such as [7, 163], that estimate relevance based on patterns of exact query term matches in the document. Unlike traditional learning to rank models, however, these architectures (shown in Figure 3.2b) depend less on manual feature engineering and more on automatically detecting regularities in good matching patterns. More recent deep learning methods, such as [167], consume query and document as single concatenated sequence of terms, instead of representing them as separate term vectors. In contrast, many (shallow and deep) neural IR models depend on learning useful low-dimensional vector representations\u2014or embeddings\u2014of query and document text, and using them within traditional IR models or in conjunction with simple similarity metrics (e.g., cosine similarity). These models shown in Figure 3.2c may learn the embeddings by optimizing directly for the IR task (e.g., [164]), or in an unsupervised setting (e.g., [165]). Finally, Figure 3.2d shows IR approaches where the neural models are used for query expansion [139, 166]. While the taxonomy of neural approaches described in this section is rather simple, it does provide an intuitive framework for comparing the different neural approaches in IR and highlights the similarities and distinctions between these different techniques. \f52 Chapter 3. Background banana mango dog (a) Local representation banana mango dog fruit elongate ovate barks has tail (b) Distributed representation Figure 3.3: Under local representations the terms \u201cbanana\u201d, \u201cmango\u201d, and \u201cdog\u201d are distinct items. 
But distributed vector representations may recognize that \u201cbanana\u201d and \u201cmango\u201d are both fruits, but \u201cdog\u201d is different. 3.2 Unsupervised learning of term representations 3.2.1 A tale of two representations Vector representations are fundamental to both information retrieval and machine learning. In IR, terms are typically the smallest unit of representation for indexing and retrieval. Therefore, many IR models\u2014both non-neural and neural\u2014focus on learning good vector representations of terms. Different vector representations exhibit different levels of generalization\u2014some consider every term as a distinct entity while others learn to identify common attributes. Different representation schemes derive different notions of similarity between terms from the de\ufb01nition of the corresponding vector spaces. Some representations operate over \ufb01xed-size vocabularies, while the design of others obviate such constraints. They also differ on the properties of compositionality that de\ufb01nes how representations for larger units of information, such as passages and documents, can be derived from individual term vectors. These are some of the important considerations for choosing a term representation suitable for a speci\ufb01c task. Local representations Under local (or one-hot) representations, every term in a \ufb01xed size vocabulary T is represented by a binary vector \u20d7 v \u2208{0,1}|T|, where only one of the values in the vector is one and all the others are set to zero. Each position in the vector\u20d7 v corresponds to a term. The term \u201cbanana\u201d, under this representation, is given by a vector that has the value one in the position corresponding to \u201cbanana\u201d and zero everywhere else. Similarly, the terms \u201cmango\u201d and \u201cdog\u201d are represented by setting different positions in the vector to one. Figure 3.3a highlights that under this scheme each term is a unique entity, and \u201cbanana\u201d is as distinct from \u201cdog\u201d as \f3.2. Unsupervised learning of term representations 53 banana Doc 8 Doc 3 Doc 12 (a) In-document features banana like flies a fruit (b) Neighbouring-term features banana fruit-4 a-1 flies-3 like-2 fruit+1 (c) Neighbouring-term w/ distance features banana nan #ba ana na# ban (d) Character-trigraph features Figure 3.4: Examples of different feature-based distributed representations of the term \u201cbanana\u201d. The representations in (a), (b), and (c) are based on external contexts in which the term frequently occurs, while (d) is based on properties intrinsic to the term. The representation scheme in (a) depends on the documents containing the term while the scheme shown in (b) and (c) depends on other terms that appears in its neighbourhood. The scheme (b) ignores inter-term distances. Therefore, in the sentence \u201cTime \ufb02ies like an arrow; fruit \ufb02ies like a banana\u201d, the feature \u201cfruit\u201d describes both the terms \u201cbanana\u201d and \u201carrow\u201d. However, in the representation scheme of (c) the feature \u201cfruit\u22124\u201d is positive for \u201cbanana\u201d, and the feature \u201cfruit+1\u201d for \u201carrow\u201d. it is from \u201cmango\u201d. Terms outside of the vocabulary either have no representation or are denoted by a special \u201cUNK\u201d symbol under this scheme. Distributed representations Under distributed representations every term is represented by a vector \u20d7 v \u2208R|k|. 
\u20d7 v can be a sparse or a dense vector\u2014a vector of hand-crafted features or a latent representation in which the individual dimensions are not interpretable in isolation. The key underlying hypothesis for any distributed representation scheme, however, is that by representing a term by its attributes allows for de\ufb01ning some notion of similarity between the different terms based on the chosen properties. For example, in Figure 3.3b \u201cbanana\u201d is more similar to \u201cmango\u201d than \u201cdog\u201d because they are both fruits, but yet different because of other properties that are not shared between the two, such as shape. A key consideration in any feature based distributed representation is the \f54 Chapter 3. Background choice of the features themselves. One approach involves representing terms by features that capture their distributional properties. This is motivated by the distributional hypothesis [168] that states that terms that are used (or occur) in similar context tend to be semantically similar. Firth [169] famously purported this idea of distributional semantics1 by stating \u201ca word is characterized by the company it keeps\u201d. However, the distribution of different types of context may model different semantics of a term. Figure 3.4 shows three different sparse vector representations of the term \u201cbanana\u201d corresponding to different distributional feature spaces\u2014documents containing the term (e.g., LSA [170]), neighbouring terms in a window (e.g., HAL [171], COALS [172], and [173]), and neighbouring terms with distance (e.g., [174]). Finally, Figure 3.4d shows a vector representation of \u201cbanana\u201d based on the character trigraphs in the term itself\u2014instead of external contexts in which the term occurs. In Section 3.2.2 we will discuss how choosing different distributional features for term representation leads to different nuanced notions of semantic similarity between them. When the vectors are high-dimensional, sparse, and based on observable features we refer to them as observed (or explicit) vector representations [174]. When the vectors are dense, small (k \u226a|T|), and learnt from data then we instead refer to them as latent vector spaces, or embeddings. In both observed and latent vector spaces, several distance metrics can be used to de\ufb01ne the similarity between terms, although cosine similarity is commonly used. sim(\u20d7 vi,\u20d7 vj) = cos(\u20d7 vi,\u20d7 vj) = \u20d7 v \u22ba i \u20d7 vj \u2225\u20d7 vi\u2225\u2225\u20d7 vj\u2225 (3.15) Most embeddings are learnt from observed features, and hence the discussions in Section 3.2.2 about different notions of similarity are also relevant to the embedding models. In Section 3.2.3 and Section 3.2.4 we discuss observed and latent space representations. In the context of neural models, distributed representations generally 1Readers should take note that while many distributed representations take advantage of distributional properties, the two concepts are not synonymous. A term can have a distributed representation based on non-distributional features\u2014e.g., parts of speech classi\ufb01cation and character trigraphs in the term. \f3.2. Unsupervised learning of term representations 55 banana mango dog Figure 3.5: A vector space representation of terms puts \u201cbanana\u201d closer to \u201cmango\u201d because they share more common attributes than \u201cbanana\u201d and \u201cdog\u201d. refer to learnt embeddings. 
The idea of \u2018local\u2019 and \u2018distributed\u2019 representations has a speci\ufb01c signi\ufb01cance in the context of neural networks. Each concept, entity, or term can be represented within a neural network by the activation of a single neuron (local representation) or by the combined pattern of activations of several neurons (distributed representation) [175]. Finally, with respect to compositionality, it is important to understand that distributed representations of items are often derived from local or distributed representation of its parts. For example, a document can be represented by the sum of the one-hot vectors or embeddings corresponding to the terms in the document. The resultant vector, in both cases, corresponds to a distributed bag-of-terms representation. Similarly, the character trigraph representation of terms in Figure 3.4d is simply an aggregation over the one-hot representations of the constituent trigraphs. 3.2.2 Notions of similarity Any vector representation inherently de\ufb01nes some notion of relatedness between terms. Is \u201cSeattle\u201d closer to \u201cSydney\u201d or to \u201cSeahawks\u201d? The answer depends on the type of relationship we are interested in. If we want terms of similar type to be closer, then \u201cSydney\u201d is more similar to \u201cSeattle\u201d because they are both cities. However, if we are interested to \ufb01nd terms that co-occur in the same document or passage, then \u201cSeahawks\u201d\u2014Seattle\u2019s football team\u2014should be closer. The former represents a typical, or type-based notion of similarity while the latter exhibits a \f56 Chapter 3. Background Table 3.1: A toy corpus of short documents that we consider for the discussion on different notions of similarity between terms under different distributed representations. The choice of the feature space that is used for generating the distributed representation determines which terms are closer in the vector space, as shown in Figure 3.6. Sample documents doc 01 Seattle map doc 09 Denver map doc 02 Seattle weather doc 10 Denver weather doc 03 Seahawks jerseys doc 11 Broncos jerseys doc 04 Seahawks highlights doc 12 Broncos highlights doc 05 Seattle Seahawks Wilson doc 13 Denver Broncos Lynch doc 06 Seattle Seahawks Sherman doc 14 Denver Broncos Sanchez doc 07 Seattle Seahawks Browner doc 15 Denver Broncos Miller doc 08 Seattle Seahawks Ifedi doc 16 Denver Broncos Marshall more topical sense of relatedness. If we want to compare \u201cSeattle\u201d with \u201cSydney\u201d and \u201cSeahawks based on their respective vector representations, then the underlying feature space needs to align with the notion of similarity that we are interested in. It is, therefore, important for the readers to build an intuition about the choice of features and the notion of similarity they encompass. This can be demonstrated by using a toy corpus, such as the one in Table 3.1. Figure 3.6a shows that the \u201cin documents\u201d features naturally lend to a topical sense of similarity between the terms, while the \u201cneighbouring terms with distances\u201d features in Figure 3.6c gives rise to a more typical notion of relatedness. Using \u201cneighbouring terms\u201d without the inter-term distances as features, however, produces a mixture of topical and typical relationships. 
This is because when the term distances (denoted as superscripts) are considered in the feature de\ufb01nition then the document \u201cSeattle Seahawks Wilson\u201d produces the bag-offeatures {Seahawks+1,Wilson+2} for \u201cSeattle\u201d which is non-overlapping with the bag-of-features {Seattle\u22121,Wilson+1} for \u201cSeahawks\u201d. However, when the feature de\ufb01nition ignores the term-distances then there is a partial overlap between the bagof-features {Seahawks,Wilson} and {Seattle,Wilson} corresponding to \u201cSeattle\u201d and \u201cSeahawks\u201d, respectively. The overlap increases when a larger window-size over the neighbouring terms is employed pushing the notion of similarity closer to \f3.2. Unsupervised learning of term representations 57 a topical de\ufb01nition. This effect of the windows size on the latent vector space was reported by Levy and Goldberg [176] in the context of term embeddings. Readers should note that the set of all inter-term relationships goes beyond the two notions of typical and topical that we discuss in this section. For example, vector representations could cluster terms closer based on linguistic styles\u2014e.g., terms that appear in thriller novels versus in children\u2019s rhymes, or in British versus American English. However, the notions of typical and topical similarities frequently come up in discussions in the context of many IR and NLP tasks\u2014sometimes under different names such as Paradigmatic and Syntagmatic relations2 [178\u2013181]\u2014and the idea itself goes back at least as far as Saussure [182\u2013185]. 3.2.3 Observed feature spaces Observed feature space representations can be broadly categorized based on their choice of distributional features (e.g., in documents, neighbouring terms with or without distances, etc.) and different weighting schemes (e.g., TF-IDF, positive pointwise mutual information, etc.) applied over the raw counts. We direct the readers to [186, 187] which are good surveys of many existing observed vector representation schemes. Levy et al. [174] demonstrated that explicit vector representations are amenable to the term analogy task using simple vector operations. A term analogy task involves answering questions of the form \u201cman is to woman as king is to ____?\u201d\u2014the correct answer to which in this case happens to be \u201cqueen\u201d. In NLP, term analogies are typically performed by simple vector operations of the following form followed by a nearest-neighbour search, \u20d7 vSeahawks \u2212\u20d7 vSeattle +\u20d7 vDenver \u2248\u20d7 vBroncos (3.16) 2Interestingly, the notion of Paradigmatic (typical) and Syntagmatic (topical) relationships show up almost universally\u2014not just in text. In vision, for example, the different images of \u201cnose\u201d are typically similar to each other, while sharing topical relationship with images of \u201ceyes\u201d and \u201cears\u201d. Curiously, Barthes [177] extended the analogy to garments. Paradigmatic relationships exist between items of the same type (e.g., different style of boots) and the proper Syntagmatic juxtaposition of items from these different Paradigms\u2014from hats to boots\u2014forms a fashionable ensemble. \f58 Chapter 3. 
Background Seahawks Denver Broncos Doc 02 Doc 01 Seattle Doc 04 Doc 03 Doc 06 Doc 05 Doc 08 Doc 07 Doc 10 Doc 09 Doc 12 Doc 11 Doc 14 Doc 13 Doc 16 Doc 15 (a) \u201cIn-documents\u201d features Seahawks Denver Broncos Denver Seattle Seattle Broncos Seahawks weather map highlights jerseys Sherman Wilson Ifedi Browner Sanchez Lynch Marshall Miller (b) \u201cNeighbouring terms\u201d features Seahawks Denver Broncos Denver-1 Seattle-1 Seattle Broncos+1 Seahawks+1 weather+1 map+1 highlights+1 jerseys+1 Wilson+2 Wilson+1 Sherman+2 Sherman+1 Browner+2 Browner+1 Ifedi+2 Ifedi+1 Lynch+2 Lynch+1 Sanchez+2 Sanchez+1 Miller+2 Miller+1 Marshall+2 Marshall+1 (c) \u201cNeighbouring terms w/ distances\u201d features Figure 3.6: The \ufb01gure shows different distributed representations for the four terms\u2014 \u201dSeattle\u201d, \u201cSeahawks\u201d, \u201cDenver\u201d, and \u201cBroncos\u201d\u2014based on the toy corpus in Table 3.1. Shaded circles indicate non-zero values in the vectors\u2014the darker shade highlights the vector dimensions where more than one vector has a nonzero value. When the representation is based on the documents that the terms occur in then \u201cSeattle\u201d is more similar to \u201cSeahawks\u201d than to \u201cDenver\u201d. The representation scheme in (a) is, therefore, more aligned with a topical notion of similarity. In contrast, in (c) each term is represented by a vector of neighbouring terms\u2014where the distances between the terms are taken into consideration\u2014which puts \u201cSeattle\u201d closer to \u201cDenver\u201d demonstrating a typical, or type-based, similarity. When the inter-term distances are ignored, as in (b), a mix of typical and topical similarities is observed. Finally, it is worth noting that neighbouring-terms based vector representations leads to similarities between terms that do not necessarily occur in the same document, and hence the term-term relationships are less sparse than when only in-document features are considered. \f3.2. Unsupervised learning of term representations 59 Seahawks Denver Broncos Seattle Seahawks \u2013 Seattle + Denver Denver Seattle Broncos Seahawks weather map highlights jerseys Sherman Wilson Ifedi Browner Sanchez Lynch Marshall Miller Figure 3.7: A visual demonstration of term analogies via simple vector algebra. The shaded circles denote non-zero values. Darker shade is used to highlight the non-zero values along the vector dimensions for which the output of\u20d7 vSeahawks \u2212\u20d7 vSeattle + \u20d7 vDenver is positive. The output vector is closest to \u20d7 vBroncos as shown in this toy example. It may be surprising to some readers that the vector obtained by the simple algebraic operations\u20d7 vSeahawks \u2212\u20d7 vSeattle +\u20d7 vDenver produces a vector close to the vector\u20d7 vBroncos. We present a visual intuition of why this works in practice in Figure 3.7, but we refer the readers to [174, 188] for a more rigorous mathematical handling of this subject. 3.2.4 Embeddings While observed vector spaces based on distributional features can capture interesting relationships between terms, they have one big drawback\u2014the resultant representations are highly sparse and high-dimensional. The number of dimensions, for example, may be the same as the vocabulary size, which is unwieldy for most practical tasks. An alternative is to learn lower dimensional representations that retains useful attributes from the observed feature spaces. 
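Before moving on to learnt embeddings, the vector operations just described (the cosine similarity of Equation 3.15 and the analogy of Equation 3.16) can be made concrete with a small sketch. The four-dimensional attribute vectors below are hand-made toy values of our own, not learnt from any corpus, and the dimension names are purely illustrative.

```python
import numpy as np

# Hand-made toy attribute vectors; dimensions loosely mean
# [is a city, Seattle-related, Denver-related, is a football team].
vec = {
    "seattle":  np.array([1.0, 1.0, 0.0, 0.0]),
    "denver":   np.array([1.0, 0.0, 1.0, 0.0]),
    "seahawks": np.array([0.0, 1.0, 0.0, 1.0]),
    "broncos":  np.array([0.0, 0.0, 1.0, 1.0]),
    "wilson":   np.array([0.0, 1.0, 0.0, 0.5]),
    "miller":   np.array([0.0, 0.0, 1.0, 0.5]),
}

def cos(u, v):
    # Cosine similarity, as in Equation 3.15.
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Typical (type-based) relatedness: the two cities share the "is a city" attribute.
print(cos(vec["seattle"], vec["denver"]))    # higher
print(cos(vec["seattle"], vec["broncos"]))   # lower

# Term analogy via vector algebra, as in Equation 3.16; as is common practice,
# the three input terms are excluded from the nearest-neighbour search.
target = vec["seahawks"] - vec["seattle"] + vec["denver"]
candidates = [t for t in vec if t not in {"seahawks", "seattle", "denver"}]
print(max(candidates, key=lambda t: cos(vec[t], target)))   # -> "broncos" in this toy space
```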
An embedding is a representation of items in a new space such that the properties of, and the relationships between, the items are preserved. Goodfellow et al. [189] articulate that the goal of an embedding is to generate a simpler representation, where simplification may mean a reduction in the number of dimensions, a decrease in the sparseness of the representation, disentangling the principal components of the vector space, or a combination of these goals. In the context of term embeddings, the explicit feature vectors, like those discussed in Section 3.2.3, constitute the original representation. An embedding trained from these features assimilates the properties of the terms and the inter-term relationships observable in the original feature space.

Common approaches for learning embeddings include either factorizing the term-feature matrix (e.g., LSA [170]) or using gradient descent based methods that try to predict the features given the term (e.g., [190, 191]). Baroni et al. [192] empirically demonstrate that these feature-predicting models that learn lower dimensional representations, in fact, also perform better than explicit counting based models on different tasks, possibly due to better generalization across terms, although some counter-evidence to the claim of better performance from embedding models has also been reported in the literature [193]. The sparse feature spaces of Section 3.2.3 are easier to visualize and lead to more intuitive explanations, while their latent counterparts may be more practically useful. Therefore, it may be useful to think sparse, but act dense in many scenarios. In the rest of this section, we will describe some of these neural and non-neural latent space models.

Latent Semantic Analysis (LSA) LSA [170] involves performing singular value decomposition (SVD) [194] on a term-document (or term-passage) matrix X to obtain its low-rank approximation [195]. SVD on X involves solving

X = U \Sigma V^{\intercal} (3.17)

where U and V are orthogonal matrices and \Sigma is a diagonal matrix. (A matrix visualization of this decomposition can be found at https://en.wikipedia.org/wiki/Latent_semantic_analysis.)
Unsupervised learning of term representations 61 \u03c31,...,\u03c3l,\u20d7 u1,...,\u20d7 ul, and\u20d7 v1,...,\u20d7 vl are the singular values, and the left and the right singular vectors, respectively. The k largest singular values\u2014and corresponding singular vectors from U and V\u2014is the rank k approximation of X (Xk = Uk\u03a3kV T k ) and \u03a3k \u20d7 ti is the embedding for the ith term. While LSA operate on a term-document matrix, matrix factorization based approaches can also be applied to term-term matrices [172, 196, 197]. Probabilistic Latent Semantic Analysis (PLSA) PLSA [198] learns lowdimensional representations of terms and documents by modelling their cooccurrence p(t,d) as follows, p(t,d) = p(d) \u2211 c\u2208C p(c|d)P(t|c) (3.18) Where, C is the set of latent topics\u2014and the number of topics |C| is a hyperparameter of the model. Both p(c|d) and P(t|c) are modelled as multinomial distributions and their parameters are typically learned using the EM algorithm [199]. After learning the parameters of the model, a term ti can be represented as a distribution over the latent topics [p(c0|ti),..., p(c|C|\u22121|ti)]. In a related approach called Latent Dirichlet Allocation (LDA) [200], each document is represented by a Dirichlet prior instead of a \ufb01xed variable. Neural term embedding models are typically trained by setting up a prediction task. Instead of factorizing the term-feature matrix\u2014as in LSA\u2014neural models are trained to predict the term from its features. The model learns dense lowdimensional representations in the process of minimizing the prediction error. These approaches are based on the information bottleneck method [201]\u2014discussed more in Section 3.5.2\u2014with the low-dimensional representations acting as the bottleneck. The training data may contain many instances of the same term-feature pair proportional to their frequency in the corpus (e.g., word2vec [191]), or their counts can be pre-aggregated (e.g., GloVe [202]). \f62 Chapter 3. Background Win Wout ti ti+j (a) Skip-gram Win Wout ti+2 ti+1 ti-2 ti-1 ti* ti (b) Continuous bag-of-words (CBOW) Figure 3.8: The (a) skip-gram and the (b) continuous bag-of-words (CBOW) architectures of word2vec. The architecture is a neural network with a single hidden layer whose size is much smaller than that of the input and the output layers. Both models use one-hot representations of terms in the input and the output. The learnable parameters of the model comprise of the two weight matrices Win and Wout that corresponds to the embeddings the model learns for the input and the output terms, respectively. The skip-gram model trains by minimizing the error in predicting a term given one of its neighbours. The CBOW model, in contrast, predicts a term from a bag of its neighbouring terms. \f3.2. Unsupervised learning of term representations 63 Word2vec For word2vec [191, 203\u2013206], the features for a term are made up of its neighbours within a \ufb01xed size window over the text. The skip-gram architecture (see Figure 3.8a) is a simple one hidden layer neural network. 
Both the input and the output of the model are one-hot vectors and the loss function is as follows, Lskip\u2212gram = \u22121 |S| |S| \u2211 i=1 \u2211 \u2212c\u2264j\u2264+c,j\u0338=0 log(p(ti+ j|ti)) (3.19) where, p(ti+j|ti) = exp((Wout\u20d7 vti+ j)\u22ba(Win\u20d7 vti)) \u2211 |T| k=1 exp((Wout\u20d7 vtk)\u22ba(Win\u20d7 vti)) (3.20) S is the set of all windows over the training text and c is the number of neighbours we want to predict on either side of the term ti. The denominator for the softmax function for computing p(ti+ j|ti) sums over all the terms in the vocabulary. This is prohibitively costly and in practice either hierarchical-softmax [207] or negative sampling is employed, which we discuss more in Section 3.4.2. Note that the model has two different weight matrices Win and Wout that constitute the learnable parameters of the models. Win gives us the IN embeddings corresponding to the input terms and Wout corresponds to the OUT embeddings for the output terms. Generally, only Win is used and Wout is discarded after training. We discuss an IR application that makes use of both the IN and the OUT embeddings in Section 3.3.1. The continuous bag-of-words (CBOW) architecture (see Figure 3.8b) is similar to the skip-gram model, except that the task is to predict the middle term given all the neighbouring terms in the window. The CBOW model creates a single training sample with the sum of the one-hot vectors of the neighbouring terms as input and the one-hot vector \u20d7 vti\u2014corresponding to the middle term\u2014as the expected output. Contrast this with the skip-gram model that creates 2 \u00d7 c samples by individually pairing each neighbouring term with the middle term. During training, the skipgram model trains slower than the CBOW model [191] because it creates more training samples from the same windows of text. \f64 Chapter 3. Background LCBOW = \u22121 |S| |S| \u2211 i=1 log(p(ti|ti\u2212c,...,ti\u22121,ti+1,...,ti+c)) (3.21) Word2vec gained particular popularity for its ability to perform term analogies using simple vector algebra, similar to what we discussed in Section 3.2.3. For domains where the interpretability of the embeddings is important, Sun et al. [208] introduced an additional constraint in the loss function to encourage more sparseness in the learnt representations. Lsparse\u2212CBOW = Lsparse\u2212CBOW \u2212\u03bb \u2211 t\u2208T \u2225\u20d7 vt\u22251 (3.22) GloVe The skip-gram model trains on individual term-neighbour pairs. If we aggregate all the training samples such that xij is the frequency of the pair \u27e8ti,t j\u27e9in the training data, then the loss function changes to, Lskip\u2212gram = \u2212 |T| \u2211 i=1 |T| \u2211 j=1 xi jlog(p(tj|ti)) (3.23) = \u2212 |T| \u2211 i=1 xi |T| \u2211 j=1 xij xi log(p(tj|ti)) (3.24) = \u2212 |T| \u2211 i=1 xi |T| \u2211 j=1 \u00af p(t j|ti)log(p(t j|ti)) (3.25) = |T| \u2211 i=1 xiH( \u00af p(t j|ti), p(t j|ti)) (3.26) H(...) is the cross-entropy error between the actual co-occurrence probability \u00af p(tj|ti) and the one predicted by the model p(t j|ti). This is similar to the loss function for GloVe [202] if we replace the cross-entropy error with a squared-error and apply a saturation function f(...) over the actual co-occurrence frequencies. \f3.3. 
Term embeddings for IR 65 LGloVe = \u2212 |T| \u2211 i=1 |T| \u2211 j=1 f(xi j)(log(xij \u2212\u20d7 v \u22ba wi\u20d7 vw j))2 (3.27) (3.28) where, f(x) = \uf8f1 \uf8f4 \uf8f2 \uf8f4 \uf8f3 (x/xmax)\u03b1, ifx \u2264xmax 1, otherwise (3.29) GloVe is trained using AdaGrad [209]. Similar to word2vec, GloVe also generates two different (IN and OUT) embeddings, but unlike word2vec it generally uses the sum of the IN and the OUT vectors as the embedding for each term in the vocabulary. Paragraph2vec Following the popularity of word2vec [191, 203], similar neural architectures [181, 210\u2013214] have been proposed that trains on term-document cooccurrences. The training typically involves predicting a term given the ID of a document or a passage that contains the term. In some variants, as shown in Figure 3.9, neighbouring terms are also provided as input. The key motivation for training on term-document pairs is to learn an embedding that is more aligned with a topical notion of term-term similarity\u2014which is often more appropriate for IR tasks. The term-document relationship, however, tends to be more sparse [215]\u2014 including neighbouring term features may compensate for some of that sparsity. In the context of IR tasks, Ai et al. [213, 214] proposed a number of IR-motivated changes to the original Paragraph2vec [210] model training\u2014including, document frequency based negative sampling and document length based regularization. 3.3 Term embeddings for IR Traditional IR models use local representations of terms for query-document matching. The most straight-forward use case for term embeddings in IR is to enable \f66 Chapter 3. Background Wd,in Wt,out dj ti ti+2 ti+1 ti-2 ti-1 Wt,in Figure 3.9: The paragraph2vec architecture as proposed by Le and Mikolov [210] trains by predicting a term given a document (or passage) ID containing the term. By trying to minimize the prediction error, the model learns an embedding for the term as well as for the document. In some variants of the architecture, optionally the neighbouring terms are also provided as input\u2014as shown in the dotted box. inexact matching in the embedding space. In Section 2.1, we argued the importance of inspecting non-query terms in the document for garnering evidence of relevance. For example, even from a shallow manual inspection, it is possible to conclude that the passage in Figure 3.10a is about Albuquerque because it contains \u201cmetropolitan\u201d, \u201cpopulation\u201d, and \u201carea\u201d among other informative terms. On the other hand, the passage in Figure 3.10b contains \u201csimulator\u201d, \u201cinterpreter\u201d, and \u201cAltair\u201d which suggest that the passage is instead more likely related to computers and technology. In traditional term counting based IR approaches these signals are often ignored. Unsupervised term embeddings can be incorporated into existing IR approaches for inexact matching. These approaches can be broadly categorized as those that compare the query with the document directly in the embedding space; \f3.3. Term embeddings for IR 67 Albuquerque is the most populous city in the U.S. state of New Mexico. The high-altitude city serves as the county seat of Bernalillo County, and it is situated in the central part of the state, straddling the Rio Grande. The city population is 557,169 as of the July 1, 2014 population estimate from the United States Census Bureau, and ranks as the 32nd-largest city in the U.S. 
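To make this intuition concrete, the following small sketch gathers this kind of inexact-match evidence by listing, for each passage term, its cosine similarity to the query term. The embeddings here are random stand-ins purely to keep the example self-contained; meaningful evidence of the kind described above requires embeddings trained on a large corpus (e.g., LSA, word2vec, or GloVe vectors), and the helper names are our own.

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in term embeddings (random, for illustration only).
vocab = "albuquerque metropolitan population area simulator interpreter altair".split()
emb = {t: rng.normal(size=50) for t in vocab}

def cos(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def term_evidence(query_term, passage_terms, emb):
    # Rank the passage terms by similarity to the query term; with good embeddings,
    # topically related non-query terms surface as supporting evidence of relevance.
    return sorted(
        ((t, round(cos(emb[query_term], emb[t]), 3)) for t in passage_terms if t in emb),
        key=lambda pair: pair[1],
        reverse=True,
    )

passage_a = "albuquerque metropolitan population area".split()
passage_b = "simulator interpreter altair albuquerque".split()
print(term_evidence("albuquerque", passage_a, emb))
print(term_evidence("albuquerque", passage_b, emb))
```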
The Albuquerque metropolitan statistical area (or MSA) has a population of 907,301 according to the United States Census Bureau\u2019s most recently available estimate for 2015. (a) About Albuquerque Allen suggested that they could program a BASIC interpreter for the device; after a call from Gates claiming to have a working interpreter, MITS requested a demonstration. Since they didn\u2019t actually have one, Allen worked on a simulator for the Altair while Gates developed the interpreter. Although they developed the interpreter on a simulator and not the actual device, the interpreter worked \ufb02awlessly when they demonstrated the interpreter to MITS in Albuquerque, New Mexico in March 1975; MITS agreed to distribute it, marketing it as Altair BASIC. (b) Not about Albuquerque Figure 3.10: Two passages both containing exactly a single occurrence of the query term \u201cAlbuquerque\u201d. However, the passage in (a) contains other terms such as \u201cpopulation\u201d and \u201carea\u201d that are relevant to a description of the city. In contrast, the terms in passage (b) suggest that it is unlikely to be about the city, and only mentions the city potentially in a different context. and those that use embeddings to generate suitable query expansion candidates from a global vocabulary and then perform retrieval based on the expanded query. We discuss both these classes of approaches in the remainder of this section. 3.3.1 Query-document matching One strategy for using term embeddings in IR involves deriving a dense vector representation for the query and the document from the embeddings of the individual terms in the corresponding texts. The term embeddings can be aggregated in different ways, although using the average word (or term) embeddings (AWE) is quite common [165, 210, 216\u2013220]. Non-linear combinations of term vectors\u2014 such as using Fisher Kernel Framework [221]\u2014have also been explored, as well as other families of aggregate functions of which AWE has been shown to be a special case [222]. The query and the document embeddings themselves can be compared using a variety of similarity metrics, such as cosine similarity or dot-product. For example, \f68 Chapter 3. Background sim(q,d) = cos(\u20d7 vq,\u20d7 vd) = \u20d7 v \u22ba q\u20d7 vd \u2225\u20d7 vq\u2225\u2225\u20d7 vd\u2225 (3.30) where, \u20d7 vq = 1 |q| \u2211 tq\u2208q \u20d7 vtq \u2225\u20d7 vtq\u2225 (3.31) \u20d7 vd = 1 |d| \u2211 td\u2208d \u20d7 vtd \u2225\u20d7 vtd\u2225 (3.32) An important consideration here is the choice of the term embeddings that is appropriate for the retrieval scenario. While, LSA [170], word2vec [203], and GloVe [202] are commonly used\u2014it is important to understand how the notion of inter-term similarity modelled by a speci\ufb01c vector space may in\ufb02uence its performance on a retrieval task. In the example in Figure 3.10, we want to rank documents that contains related terms\u2014such as \u201cpopulation\u201d or \u201carea\u201d\u2014higher. These terms are topically similar to the query term \u201cAlbuquerque\u201d. Intuitively, a document about \u201cTucson\u201d\u2014which is typically similar to \u201cAlbuquerque\u201d\u2014is unlikely to satisfy the user intent. The discussion in Section 3.2.2 on how input features in\ufb02uence the notion of similarity in the learnt vector space is relevant here. Models, such as LSA [170] and Paragraph2vec [210], that consider termdocument pairs generally capture topical similarities in the learnt vector space. 
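A minimal sketch of the averaged-embedding matching in Equations 3.30 to 3.32 follows. It assumes pre-trained term embeddings are available as a dictionary; the vectors below are random stand-ins (so the printed scores are meaningless), and the function names are our own.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in pre-trained term embeddings; in practice these would come from
# LSA, word2vec, GloVe, or a similar model trained on a large corpus.
vocab = "albuquerque population area simulator interpreter altair city".split()
emb = {t: rng.normal(size=50) for t in vocab}

def avg_embedding(terms, emb):
    # Average of unit-normalized term vectors over the terms that have an
    # embedding, as in Equations 3.31 and 3.32.
    vectors = [emb[t] / np.linalg.norm(emb[t]) for t in terms if t in emb]
    return np.mean(vectors, axis=0)

def awe_score(query_terms, doc_terms, emb):
    # Cosine similarity between the aggregated query and document vectors (Equation 3.30).
    v_q = avg_embedding(query_terms, emb)
    v_d = avg_embedding(doc_terms, emb)
    return float(v_q @ v_d / (np.linalg.norm(v_q) * np.linalg.norm(v_d)))

query = ["albuquerque"]
doc_a = "albuquerque city population area".split()
doc_b = "simulator interpreter altair albuquerque".split()
print(awe_score(query, doc_a, emb), awe_score(query, doc_b, emb))
```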
On the other hand, word2vec [203] and GloVe [202] embeddings may incorporate a mixture of topical and typical notions of relatedness. The inter-term relationships modelled in these latent spaces may be closer to type-based similarities when trained with short window sizes or on short text, such as on keyword queries [165, 176]. In Section 3.2.4, we note that the word2vec model learns two different embeddings\u2014IN and OUT\u2014corresponding to the input and the output terms. In retrieval, if a query contains a term ti then\u2014in addition to the frequency of occurrences of ti in the document\u2014we may also consider the presence of a different term t j in the document to be a supporting evidence of relevance if the pair of terms \u27e8ti,t j\u27e9frequently co-occurs in the collection. As shown in Equation 3.19, \f3.3. Term embeddings for IR 69 Table 3.2: Different nearest neighbours in the word2vec embedding space based on whether we compute IN-IN, OUT-OUT, or IN-OUT similarities between the terms. The examples are from [165, 218] where the word2vec embeddings are trained on search queries. Training on short query text, however, makes the inter-term similarity more pronouncedly typical (where, \u201cYale\u201d is closer to \u201cHarvard\u201d and \u201cNYU\u201d) when both terms are represented using their IN vectors. In contrast, the IN-OUT similarity (where, \u201cYale\u201d is closer to \u201cfaculty\u201d and \u201calumni\u201d) mirrors more the topical notions of relatedness. yale seahawks IN-IN OUT-OUT IN-OUT IN-IN OUT-OUT IN-OUT yale yale yale seahawks seahawks seahawks harvard uconn faculty 49ers broncos highlights nyu harvard alumni broncos 49ers jerseys cornell tulane orientation packers n\ufb02 tshirts tulane nyu haven n\ufb02 packers seattle tufts tufts graduate steelers steelers hats in the skip-gram model this probability of co-occurrence p(tj|ti) is proportional to (Wout\u20d7 vt j)\u22ba(Win\u20d7 vti)\u2014i.e., the dot product between the IN embeddings of ti and the OUT embeddings of t j. Therefore, Nalisnick et al. [218] point out that when using word2vec embeddings for estimating the relevance of a document to a query, it is more appropriate to compute the IN-OUT similarity between the query and the document terms. In other words, the query terms should be represented using the IN embeddings and the document terms using the OUT embeddings. Table 3.2 highlights the difference between IN-IN or IN-OUT similarities between terms. The proposed Dual Embedding Space Model (DESM)4 [165, 218] estimates the query-document relevance as follows, DESMin\u2212out(q,d) = 1 |q| \u2211 tq\u2208q \u20d7 v \u22ba tq,in\u20d7 vd,out \u2225\u20d7 vtq,in\u2225\u2225\u20d7 vd,out\u2225 (3.33) \u20d7 vd,out = 1 |d| \u2211 td\u2208d \u20d7 vtd,out \u2225\u20d7 vtd,out\u2225 (3.34) 4The dual term embeddings trained on Bing queries is available for download at https://www. microsoft.com/en-us/download/details.aspx?id=52597 \f70 Chapter 3. Background An alternative to representing queries and documents as an aggregate of their term embeddings is to incorporate the term representations into existing IR models, such as the ones we discussed in Section 3.1. Zuccon et al. [154] proposed the Neural Translation Language Model (NTLM) that uses the similarity between term embeddings as a measure for term-term translation probability p(tq|td) in Equation 3.11. p(tq|td) = cos(\u20d7 vtq,\u20d7 vtd) \u2211t\u2208T cos(\u20d7 vt,\u20d7 vtd) (3.35) On similar lines, Ganguly et al. 
[223] proposed the Generalized Language Model (GLM) which extends the Language Model based approach in Equation 3.9 to, p(d|q) = \u220f tq\u2208q \u0012 \u03bb t f(tq,d) |d| +\u03b1 \u2211td\u2208d (sim(\u20d7 vtq,\u20d7 vtd)\u00b7t f(td,d)) \u2211td1\u2208d \u2211td2\u2208d sim(\u20d7 vtd1,\u20d7 vtd2)\u00b7|d|2 +\u03b2 \u2211\u00af t\u2208Nt (sim(\u20d7 vtq,\u20d7 v\u00af t)\u00b7\u2211\u00af d\u2208Dt f(\u00af t, \u00af d)) \u2211td1\u2208Nt \u2211td2\u2208Nt sim(\u20d7 vtd1,\u20d7 vtd2)\u00b7\u2211\u00af d\u2208D | \u00af d|\u00b7|Nt| +(1\u2212\u03b1 \u2212\u03b2 \u2212\u03bb)\u2211\u00af d\u2208Dt f(tq, \u00af d) \u2211\u00af d\u2208D | \u00af d| \u0013 (3.36) Where, Nt is the set of nearest-neighbours of term t. Ai et al. [214] incorporate paragraph vectors [210] into the query-likelihood model [90]. Another approach, based on the Earth Mover\u2019s Distance (EMD) [224], involves estimating similarity between pairs of documents by computing the minimum distance in the embedding space that each term in the \ufb01rst document needs to travel to reach the terms in the second document. This measure, commonly referred to as the Word Mover\u2019s Distance (WMD), was originally proposed by Wan et al. [225, 226], but used WordNet and topic categories instead of embeddings for de\ufb01ning the distance between terms. Term embeddings were later incorporated into the model by Kusner et al. [227, 228]. Finally, Guo et al. [229] incorporated similar notion of distance into the Non-linear Word Transportation (NWT) model that \f3.3. Term embeddings for IR 71 estimates relevance between a a query and a document. The NWT model involves solving the following constrained optimization problem, max \u2211 tq\u2208q log \u0012 \u2211 td\u2208u(d) f(tq,td)\u00b7max \u0000cos(\u20d7 vtq,\u20d7 vtd),0 \u0001id f(tq)+b \u0013 (3.37) subject to f(tq,td) \u22650, \u2200tq \u2208q,td \u2208d (3.38) and \u2211 tq\u2208q f(tq,td) = t f(td)+ \u00b5 \u2211\u00af d\u2208Dt f(tq, \u00af d) \u2211\u00af d\u2208D | \u00af d| |d|+ \u00b5 , \u2200td \u2208d (3.39) where, id f(t) = |D|\u2212d f(t)+0.5 d f(t)+0.5 (3.40) u(d) is the set of all unique terms in document d, and b is a constant. Another term-alignment based distance metric was proposed by Kenter and de Rijke [230] for computing short-text similarity. The design of the saliencyweighted semantic network (SWSN) is motivated by the BM25 [80] formulation. swsn(sl,ss) = \u2211 tl\u2208sl id f(tl)\u00b7 sem(tl,ss)\u00b7(k1 +1) sem(tl,ss)+k1 \u00b7 \u0010 1\u2212b+b\u00b7 |ss| avgsl \u0011 (3.41) where, sem(t,s) = max \u00af t\u2208s cos(\u20d7 vt,\u20d7 v\u00af t) (3.42) Here ss is the shorter of the two sentences to be compared, and sl the longer sentence. Figure 3.11 highlights the distinct strengths and weaknesses of matching using local and distributed representations of terms for retrieval. For the query \u201cCambridge\u201d, a local representation (or exact matching) based model can easily distinguish between the passage on Cambridge (Figure 3.11a) and the one on Oxford (Figure 3.11b). However, the model is easily duped by a non-relevant passage that has been arti\ufb01cially injected with the term \u201cCambridge\u201d (Figure 3.11c). The embedding space based matching, on the other hand, can spot that the other terms in the passage pro\f72 Chapter 3. Background the city ofcambridge is a university city and the county town of cambridgeshire , england . it lies in east anglia , on the river cam , about 50 miles ( 80 km ) north of london . 
according to the united kingdom census 2011 , its population was 123867 ( including 24488 students ) . this makescambridge the second largest city in cambridgeshire after peterborough , and the 54th largest in the united kingdom . there is archaeological evidence of settlement in the area during the bronze age and roman times ; under viking rulecambridge became an important trading centre . the \ufb01rst town charters were granted in the 12th century , although city status was not conferred until 1951 . (a) Passage about the city of Cambridge oxford is a city in the south east region of england and the county town of oxfordshire . with a population of 159994 it is the 52nd largest city in the united kingdom , and one of the fastest growing and most ethnically diverse . oxford has a broad economic base . its industries include motor manufacturing , education , publishing and a large number of information technology and sciencebased businesses , some being academic offshoots . the city is known worldwide as the home of the university of oxford , the oldest university in the englishspeaking world . buildings in oxford demonstrate examples of every english architectural period since the arrival of the saxons , including the mid18thcentury radcliffe camera . oxford is known as the city of dreaming spires , a term coined by poet matthew arnold . (b) Passage about the city of Oxford thecambridge ( giraffa camelopardalis ) is an african eventoed ungulate mammal , the tallest living terrestrial animal and the largest ruminant . its species name refers to its camellike shape and its leopardlike colouring . its chief distinguishing characteristics are its extremely long neck and legs , its hornlike ossicones , and its distinctive coat patterns . it is classi\ufb01ed under the family giraf\ufb01dae , along with its closest extant relative , the okapi . the nine subspecies are distinguished by their coat patterns . the scattered range of giraffes extends from chad in the north to south africa in the south , and from niger in the west to somalia in the east . giraffes usually inhabit savannas , grasslands , and open woodlands . (c) Passage about giraffes, but \u2019giraffe\u2019 is replaced by \u2019Cambridge\u2019 Figure 3.11: A visualization of IN-OUT similarities between terms in different passages with the query term \u201cCambridge\u201d. The visualization reveals that, besides the term \u201cCambridge\u201d, many other terms in the passages about both Cambridge and Oxford have high similarity to the query term. The passage (c) is adapted from a passage on giraffes by replacing all the occurrences of the term \u201cgiraffe\u201d with \u201ccambridge\u201d. However, none of the other terms in (c) are found to be relevant to the query term. An embedding based approach may be able to determine that passage (c) is non-relevant to the query \u201cCambridge\u201d, but fail to realize that passage (b) is also non-relevant. A term counting-based model, on the other hand, can easily identify that passage (b) is non-relevant but may rank passage (c) incorrectly high. \f3.3. Term embeddings for IR 73 (a) Global embedding (b) Local embedding Figure 3.12: A two-dimensional visualization of term embeddings when the vector space is trained on a (a) global corpus and a (b) query-speci\ufb01c corpus, respectively. The grey circles represent individual terms in the vocabulary. 
The white circle represents the query \u201cocean remote sensing\u201d as the centroid of the embeddings of the individual query terms, and the light grey circles correspond to good expansion terms for this query. When the representations are query-speci\ufb01c then the meaning of the terms are better disambiguated, and more likely to result in the selection of good expansion terms. vide clear indication that the passage is not about a city, but fails to realize that the passage about Oxford (Figure 3.11b) is inappropriate for the same query. Embedding based models often perform poorly when the retrieval is performed over the full document collection [165]. However, as seen in the example of Figure 3.11, the errors made by embedding based models and exact matching models may be different\u2014and the combination of the two is often preffered [165, 198, 214, 223]. Another technique is to use the embedding based model to re-rank only a subset of the documents retrieved by a different\u2014generally an exact matching based\u2014IR model. The chaining of different IR models where each successive model re-ranks a smaller number of candidate documents is called Telescoping [111]. Telescoping evaluations are common in the neural IR literature [7, 163\u2013165, 231] and the results are representative of performances of these models on re-ranking tasks. However, as Mitra et al. [165] demonstrate, good performances on re-ranking tasks may not be indicative how the model would perform if the retrieval involves larger document collections. \f74 Chapter 3. Background 3.3.2 Query expansion Instead of comparing the query and the document directly in the embedding space, an alternative approach is to use term embeddings to \ufb01nd good expansion candidates from a global vocabulary, and then retrieving documents using the expanded query. Different functions [139, 166, 232] have been proposed for estimating the relevance of candidate terms to the query\u2014all of them involves comparing the candidate term individually to every query term using their vector representations, and then aggregating the scores. For example, [139, 166] estimate the relevance of candidate term tc as, score(tc,q) = 1 |q| \u2211 tq\u2208q cos(\u20d7 vtc,\u20d7 vtq) (3.43) Term embedding based query expansion on its own performs worse than pseudorelevance feedback [166]. But like the models in the previous section, shows better performances when used in combination with PRF [232]. Diaz et al. [139] explored the idea of query-speci\ufb01c term embeddings and found that they are more effective in identifying good expansion terms than a global representation (see Figure 3.12). The local model proposed by Diaz et al. [139] incorporate relevance feedback in the process of learning the term embeddings\u2014a set of documents is retrieved for the query and a query-speci\ufb01c term embedding model is trained. This local embedding model is then employed for identifying expansion candidates for the query for a second round of document retrieval. Term embeddings have also been explored for re-weighting query terms [233] and \ufb01nding relevant query re-writes [211], as well as in the context of other IR tasks such as cross-lingual retrieval [217] and entity retrieval [57, 58]. In Section 3.6, we will discuss neural network models with deeper architectures and their applications to retrieval. \f3.4. 
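A small sketch of the expansion-term scoring in Equation 3.43 follows. It again assumes a dictionary of pre-trained term embeddings (random stand-ins below, so the selected terms are arbitrary here); each candidate term from a global vocabulary is scored by its mean cosine similarity to the query terms, and the top few are kept. The vocabulary and function names are illustrative choices of our own.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in pre-trained embeddings; following Diaz et al., these could instead be
# trained locally on documents retrieved for this specific query.
vocab = "ocean remote sensing satellite seabed radar cuisine football".split()
emb = {t: rng.normal(size=50) for t in vocab}

def cos(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def expansion_candidates(query_terms, emb, k=3):
    # Score each candidate term by its mean cosine similarity to the query terms
    # (Equation 3.43) and return the k best-scoring candidates.
    scores = {}
    for tc, vc in emb.items():
        if tc in query_terms:
            continue
        scores[tc] = float(np.mean([cos(vc, emb[tq]) for tq in query_terms if tq in emb]))
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(expansion_candidates(["ocean", "remote", "sensing"], emb))
```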
Supervised learning to rank 75 3.4 Supervised learning to rank Learning to rank (LTR) for IR uses training data relq(d), such as human relevance labels and click data, to train towards an IR objective. Unlike traditional IR approaches, these models typically have large number of learnable parameters that require many training samples to be tuned [35]. LTR models represent a rankable item\u2014e.g., a query-document pair\u2014as a feature vector\u20d7 x \u2208Rn. The ranking model f :\u20d7 x \u2192R is trained to map the vector to a real-valued score such that for a given query more relevant documents are scored higher and some chosen rank-based metric is maximized. The model training is said to be end-to-end if the parameters of f are learned all at once rather than in parts, and if the vector\u20d7 x contains simple features rather than models. Liu [39] categorizes the different LTR approaches based on their training objectives. \u2022 In the pointwise approach, the relevance information relq(d) is in the form of a numerical value associated with every query-document pair with input vector\u20d7 xq,d. The numerical relevance label can be derived from binary or graded relevance judgments or from implicit user feedback, such as a clickthrough rate. A regression model is typically trained on the data to predict the numerical value relq(d) given\u20d7 xq,d. \u2022 In the pairwise approach, the relevance information is in the form of preferences between pairs of documents with respect to individual queries (e.g., di \u227b q dj). The ranking problem in this case reduces to that of a binary classi\ufb01cation to predict the more relevant document. \u2022 Finally, the listwise approach involves directly optimizing for a rank-based metric such as NDCG\u2014which is more challenging because these metrics are often not continuous (and hence not differentiable) with respect to the model parameters. Many machine learning models\u2014including support vector machines [234], neural networks [235], and boosted decision trees [236]\u2014have been employed over \f76 Chapter 3. Background the years for the LTR task, and a correspondingly large number of different loss functions have been explored. 3.4.1 Input features Traditional LTR models employ hand-crafted features [39] for representing querydocument pairs in\u20d7 x. The design of these features typically encodes key IR insights and belong to one of the three categories. \u2022 Query-independent or static features (e.g., incoming link count and document length) \u2022 Query-dependent or dynamic features (e.g., BM25) \u2022 Query-level features (e.g., query length) In contrast, in recently proposed neural LTR models the deep architecture is responsible for feature learning5 from simple vector representations of the input which may resemble the schemes described in Section 3.5.1 (e.g., [164]) or the interaction-based representations that we discuss later in Section 3.6.3 (e.g., [7, 238]). These features, learnt from the query and document texts, can be combined with other features that may not be possible to infer from the content, such as document popularity [239]. 3.4.2 Loss functions In ad hoc retrieval, the LTR model needs to rank the documents in a collection D in response to a query. When training a neural model for this task, the ideal ranking of documents for a query q from the training dataset can be determined based on the relevance labels relq(d) associated with each document d \u2208D. 
In the pointwise approach, the neural model is trained to directly estimate relq(d), which can be a numeric value or a categorical label. 5In the literature, when the model is responsible for feature learning the task is sometimes categorized as \u201clearning to match\u201d [83, 237]. However, from a machine learning viewpoint, this distinction between whether\u20d7 x is a vector of hand-engineered features or a vector encoding of query-document text makes little difference to the LTR formulation described here. We, therefore, avoid making this distinction in favor of a more general de\ufb01nition. \f3.4. Supervised learning to rank 77 Regression loss Given \u20d7 xq,d, the task of estimating the relevance label relq(d) can be cast as a regression problem, and a standard loss function\u2014such as the square loss\u2014can be employed. Lsquared = \u2225relq(d)\u2212s(\u20d7 xq,d)\u22252 (3.44) Where, s(\u20d7 xq,d) is the score predicted by the model and relq(d) can either be the value of the relevance label [240] or the one-hot representation when the label is categorical [241]. Classi\ufb01cation loss When the relevance labels in the training data are categorical, it makes more sense to treat the label prediction problem as a multiclass classi\ufb01cation. The neural model under this setting, estimates the probability of a label y given\u20d7 xq,d. The probability of the correct label yq,d (= relq(d)) can be obtained by the softmax function, p(yq,d|q,d) = p(yq,d|\u20d7 xq,d) = e\u03b3\u00b7s \u0000\u20d7 xq,d ,yq,d \u0001 \u2211y\u2208Y e\u03b3\u00b7s(\u20d7 xq,d ,y) (3.45) The softmax function normalizes the score of the correct label against the set of all possible labels Y . The cross-entropy loss can then be applied [242] as follows, Lclassi\ufb01cation = \u2212log \u0010 p(yq,d|q,d) \u0011 = \u2212log \u0010 e\u03b3\u00b7s \u0000\u20d7 xq,d ,yq,d \u0001 \u2211y\u2208Y e\u03b3\u00b7s(\u20d7 xq,d ,y) \u0011 (3.46) However, a ranking model does not need to estimate the true relevance label accurately as long as it ranks the relevant documents D+ over all the other candidates in D. Typically, only a few documents from D are relevant to q. If we assume a binary notion of relevance, then the problem is similar to multi-label classi\ufb01cation\u2014or, multiclass classi\ufb01cation if we assume a single relevant document d+ per query\u2014 \f78 Chapter 3. Background where the candidate documents are the classes. Next, we discuss loss functions for LTR models that tries to predict the relevant document by maximizing p(d+|q). Note that this is different from the classi\ufb01cation loss in Equation 3.46 which maximizes p(yq,d|q,d). Contrastive loss In representation learning models, a relevant document should be closer to the query representation than a non-relevant document. The contrastive loss [243, 244]\u2014common in image retrieval\u2014learns the model parameters by minimizing the distance between a relevant pair, while increasing the distance between dissimilar items. LContrastive(q,d,yq,d) = (1\u2212yq,d)\u00b7Lpos(distq,d)+yq,d \u00b7Lneg(distq,d) (3.47) Contrastive loss assumes that the relevance label yq,d \u2208{0,1} is binary. For each training sample, either Lpos or Lneg is applied over the distance distq,d as predicted by the model. In particular, Hadsell et al. [244] use the following formulation of this loss function. LContrastive(q,d,yq,d) = (1\u2212yq,d)\u00b7 1 2 \u0000max(0,m\u2212distq,d) \u00012 (3.48) +yq,d \u00b7 1 2(distq,d)2 (3.49) Where, m is a margin. 
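A minimal numpy sketch of the contrastive loss in Equations 3.48 and 3.49, following the convention used above in which y_{q,d} = 1 marks a relevant (similar) pair and y_{q,d} = 0 a non-relevant one; the margin value is an arbitrary default.

import numpy as np

def contrastive_loss(dist, y, margin=1.0):
    # Contrastive loss as in Equations 3.48-3.49: relevant pairs (y=1) are pulled together,
    # non-relevant pairs (y=0) are pushed at least `margin` apart.
    dist = np.asarray(dist, dtype=float)
    y = np.asarray(y, dtype=float)
    loss_rel = 0.5 * dist ** 2                                   # applied when y = 1
    loss_nonrel = 0.5 * np.maximum(0.0, margin - dist) ** 2      # applied when y = 0
    return np.mean(y * loss_rel + (1.0 - y) * loss_nonrel)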
Cross-Entropy loss over documents The probability of ranking d+ over all the other documents in the collection D is given by the softmax function, p(d+|q) = e\u03b3\u00b7s \u0000q,d+\u0001 \u2211d\u2208D e\u03b3\u00b7s(q,d) (3.50) The cross-entropy (CE) loss then maximizes the difference between scores generated by the model for relevant and less relevant documents. \f3.4. Supervised learning to rank 79 LCE(q,d+,D) = \u2212log \u0010 p(d+|q) \u0011 (3.51) = \u2212log \u0010 e\u03b3\u00b7s \u0000q,d+\u0001 \u2211d\u2208D e\u03b3\u00b7s(q,d) \u0011 (3.52) However, when D is the full collection then computing the softmax (i.e. the denominator in Equation 3.52) is prohibitively expensive. Coincidentally, the CE loss is also useful for non-IR tasks, such as language modelling [190, 191], where the model needs to predict a single term from a large vocabulary given its neighbours as input. Several different approaches have been proposed in the LM literature to address this computational complexity that is relevant to our discussion. We brie\ufb02y describe some of these strategies here. Hierarchical softmax Instead of computing p(d+|q) directly, Goodman [245] groups the candidates D into a set of classes C, and then predicts the correct class c+ given q followed by predicting d+ given \u27e8c+,q\u27e9. p(d+|q) = p(d+|c+,x)\u00b7 p(c+|q) (3.53) The computational cost in this modi\ufb01ed approach is a function of |C|+|c+| which is typically much smaller than |D|. Further computational ef\ufb01ciency can be achieved by employing a hierarchy of such classes [207, 246]. The hierarchy of classes is typically based on either similarity between candidates [191, 247, 248], or frequency binning [249]. Zweig and Makarychev [250] and Grave et al. [251] have explored strategies for building the hierarchy that directly minimizes the computational complexity. Importance sampling (IS) An alternative to computing the exact softmax, is to approximately estimate it using sampling based approaches. Note, that we can rewrite Equation 3.52 as follows, \f80 Chapter 3. 
Background LCE(q,d+,D) = \u2212log \u0010 e\u03b3\u00b7s \u0000q,d+\u0001 \u2211d\u2208D e\u03b3\u00b7s(q,d) \u0011 (3.54) = \u2212\u03b3 \u00b7s \u0000q,d+\u0001 +log \u2211 d\u2208D e\u03b3\u00b7s(q,d) (3.55) To train a neural model using back-propagation, we need to compute the gradient \u2207\u03b8 of the loss LCE with respect to the model parameters \u03b8, \u2207\u03b8LCE(q,d+,Y) = \u2212\u03b3\u2207\u03b8 \u00b7s \u0000q,d+\u0001 +\u2207\u03b8log \u2211 d\u2208D e\u03b3\u00b7s(q,d) (3.56) = \u2212\u03b3\u2207\u03b8 \u00b7s \u0000q,d+\u0001 + \u2207\u03b8 \u2211d\u2208D e\u03b3\u00b7s(q,d) \u2211d\u2208D e\u03b3\u00b7s(q,d) (3.57) = \u2212\u03b3\u2207\u03b8 \u00b7s \u0000q,d+\u0001 + \u2211d\u2208D \u2207\u03b8e\u03b3\u00b7s(q,d) \u2211d\u2208D e\u03b3\u00b7s(q,d) (3.58) = \u2212\u03b3\u2207\u03b8 \u00b7s \u0000q,d+\u0001 + \u2211d\u2208D \u03b3 \u00b7e\u03b3\u00b7s(q,d)\u2207\u03b8s(q,d) \u2211d\u2208D e\u03b3\u00b7s(q,d) (3.59) = \u2212\u03b3\u2207\u03b8 \u00b7s \u0000q,d+\u0001 +\u03b3 \u2211 d\u2208D e\u03b3\u00b7s(q,d) \u2211d\u2208D e\u03b3\u00b7s(q,d)\u2207\u03b8s(q,d) (3.60) = \u2212\u03b3\u2207\u03b8 \u00b7s \u0000q,d+\u0001 +\u03b3 \u2211 d\u2208D p(d|q)\u2207\u03b8s(q,d) (3.61) As Sen\u00e9cal and Bengio [252] point out, the \ufb01rst component of the gradient \u03b3\u2207\u03b8s \u0000q,d+\u0001 is the positive reinforcement to the model for the correct candidate d+ and the second component \u03b3 \u2211d\u2208D p(d|q)\u2207\u03b8s(q,d) is the negative reinforcement corresponding to all the other (incorrect) candidates. The key idea behind sampling based approaches is to estimate the second component without computing the costly sum over the whole candidate set. In IS [253\u2013256], Monte-Carlo method is used to estimate the second component. Noise Contrastive Estimation (NCE) In NCE [257\u2013259], the task is modi\ufb01ed to that of a binary classi\ufb01cation. The model is trained to distinguish a sample drawn from a true distribution p(d|q) from a sample drawn from a noisy distribution \u02dc p(d). The training data contains k noisy samples for every true sample. Let, E and \u00af E \f3.4. Supervised learning to rank 81 indicate that a sample is drawn from the true and the noisy distributions, respectively. Then, p(E |q,d) = p(d|q) p(d|q)+k \u00d7 \u02dc p(d) (3.62) p( \u00af E |q,d) = k \u00d7 \u02dc p(d) p(d|q)+k \u00d7 \u02dc p(d) (3.63) We want our model to learn the true distribution p(d|q). Remember, that according to our model, p(d|q) = e\u03b3\u00b7s(q,d) \u2211\u00af d\u2208D e\u03b3\u00b7s(q, \u00af d) (3.64) = e\u03b3\u00b7s(q,d) z(q) (3.65) A key ef\ufb01ciency trick involves setting z(q) to 1 [258\u2013260]. Therefore, p(d|q) = e\u03b3\u00b7s(q,d) (3.66) Putting Equation 3.66 back in Equation 3.62 and 3.63. p(E |q,d) = e\u03b3\u00b7s(q,d) e\u03b3\u00b7s(q,d) +k \u00d7 \u02dc p(d) (3.67) p( \u00af E |q,d) = k \u00d7 \u02dc p(d) e\u03b3\u00b7s(q,d) +k \u00d7 \u02dc p(d) (3.68) Finally, the NCE loss is given by, \f82 Chapter 3. Background LNCE = \u2212\u2211 \u27e8x,d+\u27e9 \u0012 log p(E |x,d+)+ k \u2211 i=1 log p( \u00af E |x,y\u2212 i ) \u0013 (3.69) = \u2212\u2211 \u27e8x,d+\u27e9 \u0012 log e\u03b3\u00b7s(q,d+) e\u03b3\u00b7s(q,d+) +k \u00d7 \u02dc p(d+) + k \u2211 i=1 log k \u00d7 \u02dc p(y\u2212 i ) e\u03b3\u00b7s(q,d\u2212 i ) +k \u00d7 \u02dc p(y\u2212 i ) \u0013 (3.70) Note, that the outer summation iterates over all the positive \u27e8x,d+\u27e9pairs in the training data. Negative sampling (NEG) Mikolov et al. 
[203] modify the NCE loss by replacing k \u00d7 \u02dc p(d) with 1 in Equation 3.67 and 3.68. p(E |q,d) = e\u03b3\u00b7s(q,d) e\u03b3\u00b7s(q,d) +1 (3.71) = 1 1+e\u2212\u03b3\u00b7s(q,d) (3.72) p( \u00af E |q,d) = 1 1+e\u03b3\u00b7s(q,d) (3.73) which changes the NCE loss to the NEG loss. LNEG = \u2212\u2211 \u27e8x,d+\u27e9 \u0012 log 1 1+e\u2212\u03b3\u00b7s(q,d+) + k \u2211 i=1 log 1 1+e\u03b3\u00b7s(q,d\u2212 i ) \u0013 (3.74) BlackOut Related to both IS and NCE, is BlackOut [261]. It is an extension of the DropOut [262] method that is often employed to avoid over-\ufb01tting in neural models with large number of parameters. DropOut is typically applied to the input or hidden layers of the network and involves randomly dropping a subset of the neural units and their corresponding connections. BlackOut applies the same idea to the output layer of the network for ef\ufb01ciently computing the loss. We refer readers to [261] for more rigorous discussions on the relationship between IS, NCE, and DropOut. \f3.4. Supervised learning to rank 83 For document retrieval Huang et al. [164] approximate the cross-entropy loss of Equation 3.52 by replacing D with D\u2032\u2014where, D\u2032 = {d+}\u222aD\u2212and D\u2212is a \ufb01xed number of randomly sampled candidates. Mitra et al. [7] use a similar loss function but focus on the document re-ranking task where the neural model needs to distinguish the relevant documents from less relevant (but likely not completely non-relevant) candidates. Therefore, in their work the re-ranking model is trained with negative examples which comprise of documents retrieved by an existing IR system but manually judged as less relevant, instead of being sampled uniformly from the collection. IS, NCE, NEG, and these other sampling based approaches approximate the comparison with the full collection based on a sampled subset. For additional notes on these approaches, we refer the readers to [263\u2013265]. In a typical retrieval scenario, however, multiple documents may be relevant to the same query q, and the notion of relevance among this set of documents D+ may be further graded. Some LTR approaches consider pairs of documents for the same query and minimize the average number of inversions in ranking\u2014i.e., di \u227b q dj but d j is ranked higher than di. The pairwise loss employed in these approaches has the following form [266], Lpairwise = \u03d5(si \u2212s j) (3.75) where, some possible choices for \u03d5 include, \u2022 Hinge function \u03d5(z) = max(0,1\u2212z) [267, 268] \u2022 Exponential function \u03d5(z) = e\u2212z [269] \u2022 Logistic function \u03d5(z) = log(1+e\u2212z) [235] RankNet loss RankNet [235] is a pairwise loss function that has been a common choice for training neural LTR models and was also for many years an industry favourite, such as at the commercial Web search engine Bing.6 Under the RankNet 6https://www.microsoft.com/en-us/research/blog/ ranknet-a-ranking-retrospective/ \f84 Chapter 3. Background loss, the model is trained on triples \u27e8q,di,dj\u27e9consisting of a query q and a pair of documents di and d j with different relevance labels\u2014such that di is more relevant than dj (i.e., di \u227b q dj)\u2014and corresponding feature vectors \u27e8\u20d7 xi,\u20d7 xj\u27e9. The model f : Rn \u2192R, typically a neural network but can also be any other machine learning model whose output is differentiable with respect to its parameters, computes the scores si = f(\u20d7 xi) and s j = f(\u20d7 xj), where ideally si > s j. 
Given the scores \u27e8si,s j\u27e9, the probability that di would be ranked higher than dj is given by, pi j \u2261p(si > sj) \u2261 1 1+e\u2212\u03c3(si\u2212s j) (3.76) Where, the constant \u03c3 determines the shape of the sigmoid. During training, the probability of ranking di higher than dj for q is maximized. Let Sij \u2208{\u22121,0,+1} be the true preference label between di and d j for the training sample\u2014 denoting di is less, equal, or more relevant than dj, respectively. Then the desired probability of ranking di over d j is given by \u00af pij = 1 2(1+Si j). The cross-entropy loss L between the desired probability \u00af pi j and the predicted probability pij is given by, L = \u2212\u00af pijlog(pij)\u2212(1\u2212\u00af pi j)log(1\u2212pij) (3.77) = 1 2(1\u2212Sij)\u03c3(si \u2212s j)+log(1+e\u2212\u03c3(si\u2212s j)) (3.78) = log(1+e\u2212\u03c3(si\u2212sj)) if, di \u227b q d j(Si j = 1) (3.79) Note that L is differentiable with respect to the model output si and hence the model can be trained using gradient descent. We direct the interested reader to [270] for more detailed derivations for computing the gradients for RankNet. Readers should note the obvious connection between the CE loss described previously and the RankNet loss. If in the denominator of Equation 3.52, we only sum over a pair of relevant and non-relevant documents then it reduces to the logisticloss function of RankNet described in Equation 3.79. So, at the level of a single \f3.4. Supervised learning to rank 85 training sample, the key distinction between the two is whether we compare the relevant document to a single less relevant candidate or the full collection. However, in case of RankNet, it is important to consider how the pairs are sampled as the training is in\ufb02uenced by their distribution. The key limitation of pairwise objective functions is that the rank inversion of any pair of documents is considered equally harmful. This is, however, generally untrue for most IR metrics where a signi\ufb01cantly large penalty is associated with inversions at the top rank positions. For example, consider two different result lists for the same query\u2014result list A ranks two relevant documents at position one and 50, while result list B ranks the same two relevant documents at positions three and 40. While the result set A has more rank inversions compared to result set B (48 vs. 40), it would fare better on typical IR metrics, such as NDCG. Therefore, to optimize for a rank-based metric we need to incorporate listwise objectives\u2014that are sensitive to these differences\u2014in our model training. However, the rank-based metrics are generally non-continuous and non-differentiable, which makes them dif\ufb01cult to incorporate in the loss function. LambdaRank loss Burges et al. [271] make two key observations: (i) the gradient should be bigger for pairs of documents that produce a bigger impact in NDCG by swapping positions, and (ii) to train a model we don\u2019t need the costs themselves, only the gradients (of the costs w.r.t model scores). This leads to the LambdaRank loss which weights the gradients from the RankNet loss by the NDCG delta that would result from swapping the rank position of the pair of documents. \u03bbLambdaRank = \u03bbRankNet \u00b7|\u2206NDCG| (3.80) This formulation of LambdaRank can optimize directly for NDCG [272, 273], and any other IR measure by incorporating the corresponding delta change in Equation 3.80. \f86 Chapter 3. 
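The following sketch illustrates the RankNet loss of Equation 3.79 and the LambdaRank weighting of Equation 3.80 for a single pair where document i is more relevant than document j. It is a simplified illustration: in practice the |ΔNDCG| term is obtained by swapping the two documents in the current ranked list and measuring the resulting change in NDCG, and the per-pair lambdas are accumulated for each document before the gradient update.

import numpy as np

def ranknet_loss(s_i, s_j, sigma=1.0):
    # Pairwise RankNet loss (Equation 3.79) for a pair where document i is more relevant than j.
    return np.log1p(np.exp(-sigma * (s_i - s_j)))

def lambdarank_gradient(s_i, s_j, delta_ndcg, sigma=1.0):
    # LambdaRank-style gradient with respect to s_i (Equation 3.80): the RankNet gradient
    # scaled by the absolute NDCG change from swapping the two documents.
    lambda_ij = -sigma / (1.0 + np.exp(sigma * (s_i - s_j)))   # dL/ds_i for the RankNet loss
    return lambda_ij * abs(delta_ndcg)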
Background ListNet and ListMLE loss The probability of observing a particular rank order can be estimated from the individual document scores using different models [274\u2013276]. For example, according to the Luce model [274], given four items {d1,d2,d3,d4} the probability of observing a particular rank-order, say [d2,d1,d4,d3], is given by: p(\u03c0|s) = \u03d5(s2) \u03d5(s1)+\u03d5(s2)+\u03d5(s3)+\u03d5(s4) \u00d7 \u03d5(s1) \u03d5(s1)+\u03d5(s3)+\u03d5(s4) \u00d7 \u03d5(s4) \u03d5(s3)+\u03d5(s4) (3.81) Where, \u03c0 is a particular permutation and \u03d5 is a transformation (e.g., linear, exponential, or sigmoid) over the score si corresponding to item di. Using this model, we can compute the probability distribution over all possible permutations based on the model scores and the ground truth labels. The K-L divergence between these two distributions gives us the ListNet loss [277]. However, computing the probability distribution over all possible permutations is computationally expensive, even when restricted to only the top-K items. The ListMLE loss [278] instead computes the probability of the ideal permutation based on the ground truth. However, with categorical labels more than one ideal permutation may be possible which should be handled appropriately. Many of the challenges discussed in this section are common to both retrieval tasks as well as multiclass and multilabel classi\ufb01cation with extremely large number of classes\u2014often referred to as extreme classi\ufb01cation [279\u2013281]. Ad hoc retrieval can be posed as an extreme classi\ufb01cation task under a binary notion of relevance and a \ufb01xed collection constraint. New loss functions (e.g. the spherical loss family [282\u2013 284]) have been explored for these large scale classi\ufb01cation tasks which may be relevant for neural retrieval research. The problem of learning from sparse biased labels [285, 286] is also an important challenge in these frameworks. Finally, deep neural models for LTR with large number of parameters may require large training data for supervised learning. Alternative training schemes\u2014e.g., using weak supervision signals [287, 288] or adversarial learning [141, 289]\u2014are emerging. \f3.5. Deep neural networks 87 forward pass backward pass W1 W2 input actual output loss expected output (a) A neural network with a single hidden layer. non-linearity (tanh) input linear transform (W1, b1) non-linearity (tanh) linear transform (W2, b2) actual output forward pass backward pass expected output loss (b) The same neural network viewed as a chain of computational steps. Figure 3.13: Two different visualizations of a feed-forward neural network with a single hidden layer. In (a), the addition of the bias vector and the non-linearity function is implicit. Figure (b) shows the same network but as a sequence of computational nodes. Most neural network toolkits implement a set of standard computational nodes that can be connected to build more sophisticated neural architectures. 3.5 Deep neural networks Deep neural network models consist of chains of tensor operations. The tensor operation can range from parameterized linear transformations (e.g., multiplication with a weight matrix, or the addition of a bias vector) to elementwise application of non-linear functions, such as tanh or recti\ufb01ed linear units (ReLU) [290\u2013292]. Figure 3.13 shows a simple feed-forward neural network with fully-connected layers. 
For an input vector\u20d7 x, the model produces the output\u20d7 y as follows, \u20d7 y = tanh(W2 \u00b7tanh(W1 \u00b7\u20d7 x+\u20d7 b1)+\u20d7 b2) (3.82) The model training involves tuning the parameters W1,\u20d7 b1, W2, and\u20d7 b2 to minimize the loss between the expected output and the output predicted by the \ufb01nal layer. \f88 Chapter 3. Background Input features Hidden layers Label surface kerberos book library H1 H2 1 0 1 0 1 0 \u2713 1 1 0 0 0 0 \u2717 0 1 0 1 0 1 \u2713 0 0 1 1 0 0 \u2717 library book surface kerberos +0.5 +0.5 -1 -1 -1 -1 +1 +1 +0.5 +0.5 H1 H2 Figure 3.14: Consider a toy binary classi\ufb01cation task on a corpus of four short texts\u2014 \u201csurface book\u201d, \u201ckerberos library\u201d, \u201clibrary book\u201d, and \u201ckerberos surface\u201d\u2014 where the model needs to predict if the text is related to computers. The \ufb01rst two texts\u2014\u201cSurface Book\u201d and \u201ckerberos library\u201d\u2014are positive under this classi\ufb01cation, and the latter two negative. The input feature space consists of four binary features that indicate whether each of the four terms from the vocabulary is present in the text. The table shows that the speci\ufb01ed classes are not linearly separable with respect to the input feature space. However, if we add couple of hidden nodes, as shown in the diagram, then the classes can be linearly separated with respect to the output of the hidden layer. The parameters are usually trained discriminatively using backpropagation [293\u2013 295]. During forward-pass each layer generates an output conditioned on its input, and during backward pass each layer computes the error gradient with respect to its parameters and its inputs. The design of a DNN typically involves many choices of architectures and hyper-parameters. Neural networks with as few a single hidden layer\u2014but with suf\ufb01cient number of hidden nodes\u2014can theoretically approximate any function [296]. In practice, however, deeper architectures\u2014sometimes with as many as 1000 layers [297]\u2014have been shown to perform signi\ufb01cantly better than shallower networks. For readers who are less familiar with neural network models, we present a simple example in Figure 3.14 to illustrate how hidden layers enable these models to capture non-linear relationships. We direct readers to [298] for further discussions on how additional hidden layers help. The rest of this section is dedicated to the discussion of input representations and standard architectures for deep neural models. \f3.5. Deep neural networks 89 3.5.1 Input text representations Neural models that learn representations of text take raw text as input. A key consideration is how the text should be represented at the input layer of the model. Figure 3.15 shows some of the common input representations of text. Some neural models [256, 299\u2013301] operate at the character-level. In these models, each character is typically represented by a one-hot vector. The vector dimensions\u2014referred to as channels\u2014in this case equals the number of allowed characters in the vocabulary. These models incorporate the least amount of prior knowledge about the language in the input representation\u2014for example, these models are often required to learn about tokenization from scratch by treating space as just another character in the vocabulary. 
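As a small illustration of the character-level input scheme described above, the sketch below one-hot encodes a text over a fixed character vocabulary, optionally summing the character vectors into a single vector; the alphabet and the aggregation choice are illustrative assumptions.

import numpy as np

def char_one_hot(text, alphabet="abcdefghijklmnopqrstuvwxyz ", aggregate=None):
    # Encode text as a [chars x channels] matrix of one-hot character vectors.
    # With aggregate="sum" the character vectors are summed into a single vector.
    index = {c: i for i, c in enumerate(alphabet)}
    mat = np.zeros((len(text), len(alphabet)))
    for pos, ch in enumerate(text.lower()):
        if ch in index:                      # characters outside the alphabet are skipped
            mat[pos, index[ch]] = 1.0
    return mat.sum(axis=0) if aggregate == "sum" else mat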
The representation of longer texts, such as sentences, can be derived by concatenating or summing the character-level vectors as shown in Figure 3.15a. The input text can also be pre-tokenized into terms\u2014where each term is represented by either a sparse vector or using pre-trained term embeddings (Figure 3.15d). Terms may have a one-hot (or local) representation where each term has an unique ID (Figure 3.15b), or the term vector can be derived by aggregating one-hot vectors of its constituting characters (or character n-graphs) as shown in Figure 3.15c. If pre-trained embeddings are used for term representation, then the embedding vectors can be further tuned during training or kept \ufb01xed. Similar to character-level models, the term vectors are further aggregated (by concatenation or sum) to obtain the representation of longer chunks of text, such as sentences. While one-hot representations of terms (Figure 3.15b) are common in many NLP tasks, historically pre-trained embeddings (e.g., [302, 303]) and character n-graph based representations (e.g., [7, 164]) are more commonplace in IR. 3.5.2 Architectures In this section, we describe few standard neural architectures commonly used in IR. For broader overview of neural architectures and design patterns please refer to [21, 189, 293]. \f90 Chapter 3. Background d o g s h a v e o w n e r s c a t s h a v e s t a f f one-hot vectors concatenate channels [chars x channels] (a) Character-level input d o g s h a v e o w n e r s c a t s h a v e s t a f f one-hot vectors concatenate sum sum sum sum sum sum channels [words x channels] (b) Term-level input w/ bag-of-characters per term # d o g s # # h a v e # # o w n e r s # # c a t s # # h a v e # # s t a f f # one-hot vectors concatenate or sum sum sum sum sum sum sum channels [words x channels] or [1 x channels] (c) Term-level input w/ bag-of-trigraphs per term d o g s h a v e o w n e r s c a t s h a v e s t a f f pre-trained embeddings concatenate or sum channels [words x channels] or [1 x channels] (d) Term-level input w/ pre-trained term embeddings Figure 3.15: Examples of different representation strategies for text input to deep neural network models. The smallest granularity of representation can be a character or a term. The vector can be a sparse local representation, or a pre-trained embedding. \f3.5. Deep neural networks 91 Shift-invariant neural operations Convolutional [20, 292, 304, 305] and recurrent [306\u2013309] architectures are commonplace in many deep learning applications. These neural operations are part of a broader family of shift-invariant architectures. The key intuition behind these architectures stem from the natural regularities observable in most inputs. In vision, for example, the task of detecting a face should be invariant to whether the image is shifted, rotated, or scaled. Similarly, the meaning of an English sentence should, in most cases, stay consistent independent of which part of the document it appears in. Therefore, intuitively a neural model for object recognition or text understanding should not learn an independent logic for the same action applied to different parts of the input space. All shift-invariant neural operations fundamentally employ a window-based approach. A \ufb01xed size window moves over the input space with \ufb01xed stride in each step. A (typically parameterized) function\u2014referred to as a kernel, or a \ufb01lter, or a cell\u2014is applied over each instance of the window. 
The parameters of the cell are shared across all the instances of the input window. The shared parameters not only imply a smaller number of total parameters in the model, but also more supervision per parameter per training sample due to the repeated application. Figure 3.16a shows an example of a cell being applied on a sequence of terms\u2014 with a window size of three terms\u2014in each step. A common cell implementation involves multiplying with a weight matrix\u2014in which case the architecture in Figure 3.16a is referred as convolutional. An example of a cell without any parameters is pooling\u2014which consists of aggregating (e.g., by computing the max or the average per channel) over all the terms in the window. Note, that the length of the input sequence can be variable in both cases and the length of the output of a convolutional (or pooling) layer is a function of the input length. Figure 3.16b shows an example of global pooling\u2014where the window spans over the whole input\u2014being applied on top of a convolutional layer. The global pooling strategy is common for generating a \ufb01xed size output from a variable length input.7 In convolution or pooling, each window is applied independently. In con7It may be obvious, but worth pointing out, that a global convolutional layer is exactly the same as a fully-connected layer. \f92 Chapter 3. Background output (a) Convolution or pooling convolution pooling output (b) Convolution w/ global pooling output (c) Recurrent output (d) Recursive or tree output k0 v0 q kn vn a0 an on o0 ki vi oi ai (e) Attention k1 v1 qi kn vn a1 an on o1 ki vi oi ai outputn outputi output1 (f) Self-attention Figure 3.16: Standard shift-invariant neural architectures including convolutional neural networks (CNN), recurrent neural networks (RNN), pooling layers, treestructured neural networks, attention layer, and self-attention layer. trast, in the recurrent architecture of Figure 3.16e the cell not only considers the input window but also the output of the previous instance of the cell as its input. Many different cell architectures have been explored for recurrent neural networks \f3.5. Deep neural networks 93 (RNN)\u2014although Elman network [310], Long Short-Term Memory (LSTM) [309], and Gated Recurrent Unit (GRU) [311, 312] are commonly used. RNNs are typically applied to sequences but can also be useful for two (and higher) dimensional inputs [313]. One consideration when using convolutional or recurrent layers is how the window outputs are aggregated. Convolutional layers are typically followed by pooling or fully-connected layers that perform a global aggregation over all the window instances. While a fully-connected layer is aware of each window position, a global pooling layer is typically agnostic to it. However, unlike a fully-connected layer, a global max-pooling operation can be applied to a variable size input. Where a global aggregation strategy may be less appropriate (e.g., long sequences), recurrent networks with memory [314\u2013316] and/or attention [44, 317\u2013320] may be useful. Figure 3.16e shows tree-structured (or recursive) neural networks [321\u2013325] where the same cell is applied at multiple levels in a tree-like hierarchical fashion resulting in a recursive aggregation strategy. Finally, attention mechanisms\u2014in particular, self-attention [326]\u2014have demonstrated remarkable usefulness for many NLP and IR tasks. 
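Before looking at attention in more detail, the following is a minimal numpy sketch of the window-based, parameter-shared operations described above: a convolution-style cell applied to every window of terms, followed by global max pooling to obtain a fixed-size output from a variable-length input. The shapes, the tanh non-linearity, and the window size are illustrative choices.

import numpy as np

def conv1d_global_max(term_vectors, W, b, window=3):
    # Slide a window of `window` term vectors over the input, apply the same (shared)
    # linear cell W, b with a tanh non-linearity to every window, then aggregate with
    # global max pooling. Assumes len(term_vectors) >= window.
    outputs = []
    for start in range(len(term_vectors) - window + 1):
        x = np.concatenate(term_vectors[start:start + window])  # [window * dim]
        outputs.append(np.tanh(W @ x + b))                      # shared parameters per window
    return np.max(np.stack(outputs), axis=0)                    # fixed-size output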
In a typical attention setting, we have a set of n items that we can attend over and an input context, and we produce a probability distribution {a1,...,ai,...,an} of attending to each item as a function of similarity between a learned representation q of the context and learned representations ki of the items. The \ufb01nal output o is the aggregate of learned value vi corresponding to each item weighted by their attention probabilities. o = n \u2211 i \u03d5(q,ki) \u2211n j \u03d5(q,kj) \u00d7vi (3.83) In self-attention, we repeat the above process n times treating one of the n items themselves as the context in each case. Self-attention layers have been operationalized in Transformer-based [326] architectures, e.g., BERT [327]. \f94 Chapter 3. Background Auto-encoders The autoencoder architecture [294, 328, 329] is based on the information bottleneck method [201]. The goal is to learn a compressed representation \u20d7 x \u2208Rk of items from their higher-dimensional vector representations\u20d7 v \u2208RK, such that k \u226aK. The model has an hour-glass shape as shown in Figure 3.17a and is trained by feeding in the high-dimensional vector inputs and trying to re-construct the same representation at the output layer. The lower-dimensional middle layer forces the encoder part of the model to extract the minimal suf\ufb01cient statistics of\u20d7 v into \u20d7 x, such that the decoder part of the network can reconstruct the original input back from\u20d7 x. The model is trained by minimizing the reconstruction error between the input \u20d7 v and the actual output of the decoder \u20d7 v\u2032. The squared-loss is commonly employed. Lautoencoder(\u20d7 v,\u20d7 v\u2032) = \u2225\u20d7 v\u2212\u20d7 v\u2032\u22252 (3.84) Siamese networks Siamese networks were originally proposed for comparing \ufb01ngerprints [330] and signatures [331]. Yih et al. [332] later adapted the same architecture for comparing short texts. The siamese network, as seen in Figure 3.17b, resembles the autoencoder architecture (if you squint hard enough)\u2014but unlike the latter is trained on pairs of inputs \u27e8input1,input2\u27e9. The architecture consists of two models (model1 and model2) that project input1 and input2, respectively, to \u20d7 v1 and \u20d7 v2 in a common latent space. A pre-de\ufb01ned metric (e.g., cosine similarity) is used to then compute the similarity between \u20d7 v1 and \u20d7 v2. The model parameters are optimized such that \u20d7 v1 and \u20d7 v2 are closer when the two inputs are expected to be similar, and further away otherwise. One possible loss function is the logistic loss. If each training sample consist of a triple \u27e8\u20d7 vq, \u20d7 vd1, \u20d7 vd2\u27e9, such that sim(\u20d7 vq, \u20d7 vd1) should be greater than sim(\u20d7 vq, \u20d7 vd2), then we minimize, \f3.5. Deep neural networks 95 input output embedding encode decode (a) Autoencoder input1 input2 embedding1 model1 similarity function embedding2 model2 (b) Siamese network Figure 3.17: Both (a) the autoencoder and (b) the Siamese network architectures are designed to learn compressed representations of inputs. In an autoencoder the embeddings are learnt by minimizing the self-reconstruction error, whereas a Siamese network focuses on retaining the information that is necessary for determining the similarity between a pair of items (say, a query and a document). \f96 Chapter 3. 
Background input output \u03c3 encode sample sampled embedding decode \u03bc Figure 3.18: Instead of directly generating an encoded representation, variational autoencoders sample the latent vector from the generated vector of means \u00b5 and standard deviations \u03c3. This local variation forces the model to learn a smoother and more continuous latent space. Lsiamese(\u20d7 vq, \u20d7 vd1, \u20d7 vd2) = log \u0010 1+e\u2212\u03b3(sim(\u20d7 vq, \u20d7 vd1)\u2212sim(\u20d7 vq, \u20d7 vd2))\u0011 (3.85) Where, \u03b3 is a constant that is often set to 10. Typically, both the models\u2014model1 and model2\u2014share identical architectures, but can also choose to share the same parameters. In image retrieval, the contrastive loss [243, 244] is also used for training Siamese networks. It is important to note that, unlike the autoencoder, the minimal suf\ufb01cient statistics retained by a Siamese network is dictated by which information it deems important for determining the similarity between the paired items. Variational autoencoders (VAE) In Variational autoencoders [333, 334], the encoder part of the network generates two separate vectors\u2014the vector of means \u00b5 and the vector of standard deviations \u03c3. The latent representation \u20d7 x of the input is then generated by sampling a random variable xi with mean \u00b5i and standard deviation \u03c3i along each of the k latent dimensions. \f3.5. Deep neural networks 97 \u20d7 x = [x0 \u223cN(\u00b50,\u03c32 0),...,xi \u223cN(\u00b5i,\u03c32 i ),...,xk\u22121 \u223cN(\u00b5k\u22121,\u03c32 k\u22121)] (3.86) By sampling the latent representation, we expose the decoder to a certain degree of local variations in its input that should force the model to learn a smoother continuous latent space. The VAE is trained by jointly minimizing the reconstruction loss\u2014 similar to vanilla autoencoders\u2014and an additional component to the loss function which is the KL-divergence between the latent variable xi and a unit gaussian. LVAE = Lreconstruction +LKL\u2212divergence (3.87) = \u2225\u20d7 v\u2212\u20d7 v\u2032\u22252 + k \u2211 i \u03c32 i + \u00b52 i \u2212log(\u03c3i)\u22121 (3.88) Without the LKL\u2212divergence component the model can learn very different \u00b5 for different classes of inputs and minimize the \u03bb to be arbitrarily small such that the learnt latent space is no longer smooth or continuous. Readers should note that the sampling step is non-differentiable, but the model can be trained using the \u201creparameterization trick\u201d proposed by Kingma and Welling [333]. An important application of VAE is for the synthesis of new items (e.g., images [335] or text [336]) not observed in the training collection. Another class of techniques for synthesis includes the Generative Adversarial Networks. Generative Adversarial Networks (GAN) Goodfellow et al. [337] proposed a framework for training generative models under an adversarial setting. GANs typically consist of two separate neural networks\u2014a generator network and a discriminator network. The goal of the generator network is to synthesize new (fake) items that mimic similar distributions as items that exist in the training collection. The goal of the discriminator network is to correctly distinguish between a true item and an item produced by the generator. The generator is trained to maximize the probability of the discriminator wrongly classifying the true and the generated item\u2014 \f98 Chapter 3. 
Background which corresponds to a minimax two-player game. 3.5.3 Neural toolkits In recent years, the advent of numerous \ufb02exible toolkits [338\u2013345] has had a catalytic in\ufb02uence on the area of neural networks. Most of the toolkits de\ufb01ne a set of common neural operations that\u2014like Lego8 blocks\u2014can be composed to build complex network architectures. Each instance of these neural operations or computation nodes can have associated learnable parameters that are updated during training, and these parameters can be shared between different parts of the network if necessary. Every computation node under this framework must implement the appropriate logic for, \u2022 computing the output of the node given the input (forward-pass) \u2022 computing the gradient of the loss with respect to the inputs, given the gradient of the loss with respect to the output (backward-pass) \u2022 computing the gradient of the loss with respect to its parameters, given the gradient of the loss with respect to the output (backward-pass) A deep neural network, such as the one in Figure 3.13 or ones with much more complex architectures (e.g., [297, 346, 347]), can then be speci\ufb01ed by chaining instances of these available computation nodes, and trained end-to-end on large datasets using backpropagation over GPUs or CPUs. In IR, various application interfaces [348, 349] bind these neural toolkits with existing retrieval/indexing frameworks, such as Indri [156]. Refer to [350] for a comparison of different neural toolkits based on their speed of training using standard performance benchmarks. 3.6 Deep neural models for IR Traditionally, deep neural network models have much larger number of learnable parameters than their shallower counterparts. A DNN with a large set of parameters can easily over\ufb01t to smaller training datasets [351]. Therefore, during model design 8https://en.wikipedia.org/wiki/Lego \f3.6. Deep neural models for IR 99 it is typical to strike a balance between the number of model parameters and the size of the data available for training. Data for ad hoc retrieval mainly consists of, \u2022 Corpus of search queries \u2022 Corpus of candidate documents \u2022 Ground truth\u2014in the form of either explicit human relevance judgments or implicit labels (e.g., from clicks)\u2014for query-document pairs While both large scale corpora of search queries [85, 352] and documents [353\u2013 355] are publicly available for IR research, the amount of relevance judgments that can be associated with them are often limited outside of large industrial research labs\u2014mostly due to user privacy concerns. We note that we are interested in datasets where the raw text of the query and the document is available. Therefore, this excludes large scale public labelled datasets for learning-to-rank (e.g., [356]) that don\u2019t contain the textual contents. The proportion of labelled and unlabelled data that is available in\ufb02uences the level of supervision that can be employed for training these deep models. Most of the models we covered in Section 3.3 operate under the data regime where large corpus of documents or queries is available, but limited (or no) labelled data. Under such settings where no direct supervision or relevance judgments is provided, typically an unsupervised approach is employed (e.g., using auto-encoding [357] or masked language modeling [327]). 
The unlabelled document (or query) corpus is used to learn good text representations, and then these learnt representations are incorporated into an existing retrieval model or a query-document similarity metric. If small amounts of labelled data are available, then that can be leveraged to train a retrieval model with few parameters that in turn uses text representations that is pre-trained on larger unlabelled corpus. Examples of such semi-supervised training includes models such as [163, 238, 302]. In contrast, fully-supervised models\u2014 e.g., [7, 8, 164, 358, 359]\u2014optimize directly for the target task by training on large number of labelled query-document pairs. \f100 Chapter 3. Background It is also useful to distinguish between deep neural models that focus on ranking long documents, from those that rank short texts (e.g., for the questionanswering task, or for document ranking where the document representation is based on a short text \ufb01eld like title). The challenges in short text ranking are somewhat distinct from those involved in the ad hoc retrieval task [360]. When computing similarity between pairs of short-texts, vocabulary mismatches are more likely than when the retrieved items contain long text descriptions [95]. Neural models that perform matching in a latent space tend to be more robust towards the vocabulary mismatch problem compared to lexical term-based matching models. On the other hand, documents with long body texts may contain mixture of many topics and the query matches may be spread over the whole document. A neural document ranking model must effectively aggregate the relevant matches from different parts of a long document. In the rest of this section, we discuss several neural architectures and approaches to document ranking. 3.6.1 Document auto-encoders Salakhutdinov and Hinton [357] proposed Semantic Hashing\u2014one of the earliest deep neural models for ad hoc retrieval. The model is a deep autoencoder trained under unsupervised setting on unlabelled document collection. The model considers each document as a bag-of-terms and uses one-hot vector representation for the terms\u2014considering only top two thousand most frequent terms in the corpus after removing stopwords. Salakhutdinov and Hinton [357] \ufb01rst pre-train the model layer-by-layer, and then train it further end-to-end for additional tuning. After \ufb01ne tuning the output of the model are thresholded to generate binary vector encoding of the documents. Given a search query, a corresponding hash is generated, and the relevant candidate documents quickly retrieved that match the same hash vector. A standard IR model can then be employed to rank between the selected documents. Semantic hashing is an example of a document encoder based approach to IR. Variational autoencoders have also been explored [361] on similar lines. While vocabulary sizes of few thousand distinct terms may be too small for most practical IR tasks, a larger vocabulary or a different term representation strategy\u2014such as the \f3.6. Deep neural models for IR 101 character trigraph based representation of Figure 3.15c\u2014may be considered in practice. Another shortcoming of the autoencoder architecture is that it minimizes the document reconstruction error which may not align well with the goal of the target IR task. 
A better alternative may be to train on query-document paired data, where the choice of what constitutes the minimal sufficient statistics of the document is influenced by what is important for determining the relevance of the document to likely search queries. In line with this intuition, we next discuss the Siamese architecture based models.

3.6.2 Siamese networks

In recent years, several deep neural models based on the Siamese architecture have been explored, especially for short text matching. The Deep Semantic Similarity Model (DSSM) [164] is one such architecture that trains on query and document title pairs, where both pieces of text are represented as bags of character trigraphs. The DSSM architecture consists of two deep models—for the query and the document—with all fully-connected layers and cosine distance as the choice of similarity function in the middle. Huang et al. [164] proposed to train the model on clickthrough data where each training sample consists of a query q, a positive document d+ (a document that was clicked by a user on the SERP for that query), and a set of negative documents D− randomly sampled with uniform probability from the full collection. The model is trained by minimizing the cross-entropy loss,

L_{dssm}(q, d^+, D^-) = -\log \left( \frac{e^{\gamma \cdot \cos(\vec{q}, \vec{d}^+)}}{\sum_{d \in D} e^{\gamma \cdot \cos(\vec{q}, \vec{d})}} \right)   (3.89)
where, D = \{d^+\} \cup D^-   (3.90)

While DSSM [164] employs a deep fully-connected architecture for the query and the document models, more sophisticated architectures involving convolutional layers [231, 303, 362, 363], recurrent layers [364, 365], and tree-structured networks [324] have also been explored. The similarity function can also be parameterized and implemented as additional layers of the neural network as in [358]. Most of these models have been evaluated on the short text matching task, but Mitra et al. [7] recently reported meaningful performance on the long document ranking task from models like DSSM [164] and CDSSM [231] under telescoping evaluation. Mitra et al. [7] also show that sampling the negative documents uniformly from the collection is less effective than using documents that are closer to the query intent but judged as non-relevant by human annotators in similar evaluation settings.

Table 3.3: Comparing the nearest neighbours for "seattle" and "taylor swift" in the CDSSM embedding spaces when the model is trained on query-document pairs vs. query prefix-suffix pairs. The former resembles a topical notion of similarity between terms while the latter is more typical in the definition of inter-term similarities.

  seattle (Query-Document)   seattle (Prefix-Suffix)   taylor swift (Query-Document)   taylor swift (Prefix-Suffix)
  weather seattle            chicago                   taylor swift.com                lady gaga
  seattle weather            san antonio               taylor swift lyrics             meghan trainor
  seattle washington         denver                    how old is taylor swift         megan trainor
  ikea seattle               salt lake city            taylor swift twitter            nicki minaj
  west seattle blog          seattle wa                taylor swift new song           anna kendrick

Notions of similarity

It is important to emphasize that our earlier discussion in Section 3.2.2 on different notions of similarity between terms that can be learnt by shallow embedding models is also relevant in the context of these deeper architectures.
In the case of Siamese networks, such as the convolutional-DSSM (CDSSM) [231], the notion of similarity being modelled depends on the choice of the paired data that the model is trained on. When the CDSSM is trained on query and document title pairs [231] then the notion of similarity is more topical in nature. Mitra and Craswell [19] trained the same CDSSM architecture on query pre\ufb01x-suf\ufb01x pairs which, in contrast, captures a more typical notion of similarity, as shown in Table 7.2. In a related work, Mitra [18] demonstrated that the CDSSM model when trained on session-query pairs is amenable to vector-based text analogies. \f3.6. Deep neural models for IR 103 interaction matrix neural network query document Figure 3.19: Schematic view of an interaction matrix generated by comparing windows of text from the query and the document. A deep neural network\u2014such as a CNN\u2014operates over the interaction matrix to \ufb01nd patterns of matches that suggest relevance of the document to the query. \u20d7 vthings to do in london \u2212\u20d7 vlondon +\u20d7 vnew york \u2248\u20d7 vnew york tourist attractions (3.91) \u20d7 vuniversity of washington \u2212\u20d7 vseattle +\u20d7 vdenver \u2248\u20d7 vuniversity of colorado (3.92) \u20d7 vnew york +\u20d7 vnewspaper \u2248\u20d7 vnew york times (3.93) By modelling different notions of similarity these deep neural models tend to be more suitable for other IR tasks, such as query auto-completion [19] or sessionbased personalization [18]. 3.6.3 Interaction-based networks Siamese networks represent both the query and the document using single embedding vectors. Alternatively, we can individually compare different parts of the query with different parts of the document, and then aggregate these partial evidences of relevance. Especially, when dealing with long documents\u2014that may contain a mixture of many topics\u2014such a strategy may be more effective than trying to represent the full document as a single low-dimensional vector. Typically, in these approaches a sliding window is moved over both the query and the document text and each instance of the window over the query is compared (or \u201cinteracts\u201d) against \f104 Chapter 3. Background The President of the United States of America (POTUS) is the elected head of state and head of government of the United States. The president leads the executive branch of the federal government and is the commander in chief of the United States Armed Forces. Barack Hussein Obama II (born August 4, 1961) is an American politician who is the 44th and current President of the United States. He is the \ufb01rst African American to hold the of\ufb01ce and the \ufb01rst president born outside the continental United States. (a) Lexical model The President of the United States of America (POTUS) is the elected head of state and head of government of the United States. The president leads the executive branch of the federal government and is the commander in chief of the United States Armed Forces. Barack Hussein Obama II (born August 4, 1961) is an American politician who is the 44th and current President of the United States. He is the \ufb01rst African American to hold the of\ufb01ce and the \ufb01rst president born outside the continental United States. (b) Semantic model Figure 3.20: Analysis of term importance for estimating the relevance of a passage to the query \u201cUnited States President\u201d by a lexical and a semantic deep neural network model. 
The lexical model only considers the matches of the query terms in the document but gives more emphasis to earlier occurrences. The semantic model is able to extract evidence of relevance from related terms such as \u201cObama\u201d and \u201cfederal\u201d. each instance of the window over the document text (see Figure 3.19). The terms within each window can be represented in different ways including, one-hot vectors, pre-trained embeddings, or embeddings that are updated during the model training. A neural model (typically convolutional) operates over the generated interaction matrix and aggregates the evidence across all the pairs of windows compared. The interaction matrix based approach have been explored both for short text matching [302, 303, 366\u2013369], as well as for ranking long documents [7, 238, 370, 371]. 3.6.4 Lexical matching networks Much of the explorations in neural IR models have focused on learning good representations of text. However, these representation learning models tend to perform poorly when dealing with rare terms and search intents. In Section 2.1.2, we highlighted the importance of modelling rare terms in IR. Based on similar motivaions, Guo et al. [163] emphasized the importance of modelling lexical matches using deep neural networks, and proposed to use histogram-based features in their DNN model to capture lexical notion of relevance. Neural models that focus on lexical matching typically have fewer parameters, and can be trained under small data \f3.7. Conclusion 105 regimes\u2014unlike their counterparts that focus on learning representations of text. 3.6.5 BERT BERT-based [327] architectures have recently demonstrated signi\ufb01cant performance improvements on retrieval tasks [15, 16]. The model architecture comprises of stacked Transformer [326] layers. The query and document are concatenated and then tokenized as a single sequence of subword terms for input. The relevance estimation task is cast as a binary classi\ufb01cation problem\u2014i.e., given a query-document pair predict if they are relevant or nonrelevant\u2014although other training objectives have also been explored [372]. 3.7 Conclusion We surveyed a large body work in this section. We introduced the fundamentals of traditional IR models and representation learning with neural networks. We presented some of the recent (shallow and deep) neural approaches for document ranking and question-answer matching. Readers should note that this is an active area for research, and new architectures and learning methods are continuously emerging. So, it is likely that by the time this thesis is published, many of the methods described here may have already been superseded by more recent and advanced methods. In the subsequent chapters of this thesis, we will cover our contributions in the form of new neural models and approaches for some of these IR tasks. \f\fChapter 4 Learning to rank with Duet networks In traditional Web search, the query consists of only few terms but the body text of the documents may typically have tens or hundreds of sentences. In the absence of click information, such as for newly-published or infrequently-visited documents, the body text can be a useful signal to determine the relevance of the document for the query. Therefore, extending existing neural text representation learning approaches to long body text for document ranking is an important challenge in IR. 
However, as was noted previously [373], despite the recent surge in interests towards applying deep neural networks (DNN) for retrieval, their success on ad hoc retrieval tasks has been rather limited. Some papers [166, 238] report worse performance of neural embedding models when compared to traditional term-based approaches, such as BM25 [80]. Traditional IR approaches consider terms as discrete entities. The relevance of the document to the query is estimated based on, amongst other factors, the number of matches of query terms in the document, the parts of the document in which the matches occur, and the proximity between the matches. In contrast, latent semantic analysis (LSA) [170], probabilistic latent semantic analysis (PLSA) [198] and latent Dirichlet allocation (LDA) [200, 374] learn low-dimensional vector representations of terms, and match the query against the document in the latent semantic space. In Section 2.1, we emphasized the importance of both lexical and latent matching in IR. Lexical matching can be particularly important when the query terms are new or rare. On the other hand, matches between learned latent representations of query \f108 Chapter 4. Learning to rank with Duet networks The President of the United States of America (POTUS) is the elected head of state and head of government of the United States. The president leads the executive branch of the federal government and is the commander in chief of the United States Armed Forces. Barack Hussein Obama II (born August 4, 1961) is an American politician who is the 44th and current President of the United States. He is the \ufb01rst African American to hold the of\ufb01ce and the \ufb01rst president born outside the continental United States. (a) Local subnetwork The President of the United States of America (POTUS) is the elected head of state and head of government of the United States. The president leads the executive branch of the federal government and is the commander in chief of the United States Armed Forces. Barack Hussein Obama II (born August 4, 1961) is an American politician who is the 44th and current President of the United States. He is the \ufb01rst African American to hold the of\ufb01ce and the \ufb01rst president born outside the continental United States. (b) Distributed subnetwork Figure 4.1: Visualizing the drop in the local and the distributed subnetwork\u2019s retrieval score by individually removing each of the passage terms for the query \u201cunited states president\u201d. Darker green signi\ufb01es a bigger drop. The local subnetwork uses only exact term matches. The distributed subnetwork uses matches based on a learned representation. and document are important for addressing the vocabulary mismatch problem. Retrieval models can be classi\ufb01ed based on what representations of text they employ at the point of matching the query against the document. At the point of match, if each term is represented by a unique identi\ufb01er (local representation [175]) then the query-document relevance is a function of the pattern of occurrences of the exact query terms in the document. However, if the query and the document text is \ufb01rst projected into a continuous latent space, then it is their distributed representations that are compared. Along these lines, Guo et al. [163] classify recent DNNs for short-text matching as either interaction-focused [302, 303, 366] or representation-focused [164, 231, 303, 358, 362]. 
They claim that IR tasks are different from NLP tasks, and that it is more important to focus on exact matching for the former and on learning text embeddings for the latter. Mitra et al. [165], on the other hand, claim that models that compare the query and the document in the latent semantic space capture a different sense of relevance than models that focus on exact term matches, and therefore the combination of the two is more favourable. Our work is motivated by the latter intuition that it is important to match the query and the document using both local and distributed representations of text. We propose a \f109 novel ranking model comprised of two separate DNNs that model query-document relevance using local and distributed representations, respectively. The two DNNs, referred to henceforth as the local subnetwork and the distributed subnetwork, are jointly trained as part of a single model, that we name as the Duet network because the two subnetworks co-operate to achieve a common goal. Figure 4.1 demonstrates how each subnetwork models the same document given a \ufb01xed query. While the local subnetwork captures properties like exact match position and proximity, the distributed subnetwork detects synonyms (e.g., \u2018Obama\u2019), related terms (e.g., \u2018federal\u2019), and even well-formedness of content (e.g., \u2018the\u2019, \u2018of\u2019).1 In this chapter, we show that the combination of the two DNNs not only outperforms the individual subnetworks, but also demonstrates large improvements over traditional baselines and other previously proposed models based on DNNs on the document ranking task. Unlike other previous work [166, 238], our model signi\ufb01cantly outperforms classic IR approaches by using a DNN to learn text representation. Deep neural network models are known to bene\ufb01t from large training data, achieving state-of-the-art performance in areas where large scale training corpora are available [21, 256]. Some of the lack of positive results from neural models in ad hoc retrieval is likely due to the scarce public availability of large quantity of training data necessary to learn effective representations of text. In Section 4.5, we will present some analysis on the effect of training data on the performance of these DNN models. In particular, we found that\u2013unsurprisingly\u2013the performance of the distributed model improves drastically in the presence of more data. Unlike some previous work [164, 231, 362] that train on clickthrough data with randomly sampled documents as negative examples, we train our model on human-judged labels. Our candidate set for every query consists of documents that were retrieved by the commercial search engine Bing, and then labelled by crowdsourced judges. We found that training with the documents that were rated non-relevant by the human judges as the negative examples is more effective than randomly sampling negative 1While surprising, this last property is important for detecting quality web content [375]. \f110 Chapter 4. Learning to rank with Duet networks examples from the corpus. In Section 4.4 we present additional improvements to the Duet network benchmarked on the MS MARCO passage ranking task [52] and TREC 2019 Deep Learning track [15]. To summarize, the key contributions of this chapter are: 1. We propose a novel Duet network that jointly learns two deep neural networks that match query and document based on their lexical similarity and similarity in their learned latent representations, respectively. 2. 
We demonstrate that Duet outperforms previous state-of-the-art neural and traditional non-neural baselines. 3. We demonstrate that training with documents judged as non-relevant as the negative examples is more effective than randomly sampling them from the corpus. 4. We report additional improvements to the original Duet network evaluated on two recently released public benchmarks with sufficiently large training data.

4.1 The Duet network
Figure 4.2 provides a detailed schematic view of the Duet network. The distributed subnetwork projects the query and the document text into an embedding space before matching, while the local subnetwork operates over an interaction matrix comparing every query term to every document term. The final score under the Duet setting is the sum of scores from the local and the distributed subnetworks,

\text{Duet}(q, d) = \text{Duet}_{\text{local}}(q, d) + \text{Duet}_{\text{distrib}}(q, d) \qquad (4.1)

where both the query and the document are considered as ordered lists of terms, q = [tq1, ..., tq|q|] and d = [td1, ..., td|d|]. Each query term tq and document term td is represented by an m x 1 vector, where m is the dimensionality of the input representation of the text (e.g., the number of terms in the vocabulary for the local subnetwork).

Figure 4.2: The Duet network is composed of the local subnetwork (left) and the distributed subnetwork (right). The local subnetwork takes an interaction matrix of query and document terms as input, whereas the distributed subnetwork learns embeddings of the query and the document text before matching. The parameters of both subnetworks are optimized jointly during training. Hyperparameter values, such as nhidden and npool, are shown as configured for the document ranking task.

The query q and the document d are, in turn, represented by the matrices Xq and Xd, respectively, whose columns are the vectors of the corresponding query and document terms,

X_q = \left[\vec{v}_{t_{q_1}}, \ldots, \vec{v}_{t_{q_{|q|}}}\right], \qquad X_d = \left[\vec{v}_{t_{d_1}}, \ldots, \vec{v}_{t_{d_{|d|}}}\right] \qquad (4.2)

We fix the length of the inputs across all the queries and the documents such that we consider only the first nq terms in the query and the first nd terms in the document. If either the query or the document is shorter than these target dimensions, then the input vectors are padded with zeros. The truncation of the document body text to the first nd terms is performed only for our subnetwork and its variants, but not for the baseline models. For all the neural and the non-neural baseline models we consider the full body text.

4.1.1 Local subnetwork
Match positions of the query terms in the document not only reflect where potentially the relevant parts of the document are localized (e.g., title, first paragraph, closing paragraph) but also how clustered the individual query term matches are with each other. Figure 4.3 shows the position of matches on two different queries and a sample of relevant and non-relevant documents. In the first query, we see that the query term matches in the relevant document are much more clustered than in the non-relevant documents. We observe this behaviour also in the second query but in addition notice that the clustered matches are localized near the beginning of the relevant document. Match proximity serves as a foundation for traditional methods such as sequential dependence models [155].

Figure 4.3: Visualizing patterns of query term matches in documents for two example queries ("big deal derby carpet" and "rosario trainer"). Query terms are laid out along the vertical axis, and the document terms along the horizontal axis. The short vertical lines correspond to exact matches between pairs of query and document terms. For both queries, the first document was rated relevant by a human judge and the following two as non-relevant. The query term matches in the relevant documents are observed to be more clustered, and more localized near the beginning of the document.

The local subnetwork estimates document relevance based on patterns of exact matches of query terms in the document. To this end, each term is represented by its one-hot encoding in an mlocal-dimensional space, where mlocal is the size of the vocabulary. The subnetwork then generates the nd x nq binary matrix X = Xd^T Xq, capturing every exact match (and position) of query terms in the document. This interaction matrix is similar to the visual representation of term matches in Figure 4.3, and captures both the exact term matches and the match positions. It is also similar to the indicator matching matrix proposed previously by Pang et al. [302]. While the interaction matrix X perfectly captures every query term match in the document, it does not retain any information about the actual terms themselves. Therefore, the local subnetwork cannot learn term-specific properties from the training corpus, nor model interactions between dissimilar terms.
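As a concrete illustration of this input, a minimal NumPy sketch of assembling the binary interaction matrix X from term sequences is given below. The toy query, toy document, and the padding lengths are illustrative assumptions rather than the exact implementation used in this chapter:

import numpy as np

def interaction_matrix(query_terms, doc_terms, n_q=10, n_d=1000):
    """Binary matrix X with X[i, j] = 1 iff the j-th query term equals the
    i-th document term; rows/columns are zero-padded to fixed lengths."""
    X = np.zeros((n_d, n_q), dtype=np.float32)
    for i, d_term in enumerate(doc_terms[:n_d]):
        for j, q_term in enumerate(query_terms[:n_q]):
            if d_term == q_term:
                X[i, j] = 1.0
    return X

X = interaction_matrix(["united", "states", "president"],
                       "the president of the united states of america".split())
print(X.shape, int(X.sum()))  # (1000, 10) and 3 exact matches

The matrix deliberately records only where exact matches occur, which is why, as noted above, the local subnetwork cannot learn term-specific properties.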
The interaction matrix X is \ufb01rst passed through a convolutional layer with nhidden \ufb01lters, a kernel size of nd \u00d71, and a stride of 1. The output Zi corresponding to the ith convolutional window over X is a function of the match between the tqi term against all the terms in the document, \f114 Chapter 4. Learning to rank with Duet networks Zi = tanh \u0012 Xi \u22ba\u00b7W \u0013 (4.3) Where Xi is the row i of X, tanh is performed elementwise, and the nd \u00d7 nhidden matrix W contains the learnable parameters of the convolutional layer. The output Z of the convolutional layer is a matrix of dimension nhidden \u00d7nq. The output of the convolutional layer is then passed through two fully-connected layers, a dropout layer, and a \ufb01nal fully-connected layer that produces a single real-valued output. All the nodes in the local subnetwork uses the hyperbolic tangent function for nonlinearity. 4.1.2 Distributed subnetwork The distributed subnetwork learns dense lower-dimensional vector representations of the query and the document text, and then computes the positional similarity between them in the learnt embedding space. Instead of one-hot encoding of terms, as in the local subnetwork, we use a character n-graph based representation of each term in the query and document. Our n-graph based input encoding is motivated by the trigraph encoding proposed by Huang et al. [164], but unlike their approach we don\u2019t limit our input representation to n-graphs of a \ufb01xed length. For each term, we count all the n-graphs present for 1 \u2264n \u2264nmaxgraph. We then use this n-graph frequency vector of length mdistrib to represent the term. Instead of directly computing the interaction between the mdistrib \u00d7 nq matrix Xq and the mdistrib\u00d7nd matrix Xd, we \ufb01rst learn a series of nonlinear transformations to the character-based input. For both the query and the document, the \ufb01rst step is convolution. The mdistrib \u00d7 nwindow convolution window has \ufb01lter size of nhidden. It projects nwindow consecutive terms to a nhidden-dimensional vector, then takes a stride by 1 position, and projects the next nwindow terms, and so on. For the query, the convolution step generates a tensor of dimensions nhidden \u00d7 (nq \u2212nwindow + 1). For the document, it generates one of dimensions nhidden \u00d7(nd \u2212nwindow +1). Following this, we conduct a max-pooling step. For the query the pooling \f4.1. The Duet network 115 kernel dimensions are 1 \u00d7 (nq \u2212nwindow + 1). For the document, it is 1 \u00d7 npool. Thus, we get one nhidden-dimensional embedding \u20d7 vq for the query and a nhidden \u00d7 (nd \u2212nwindow \u2212npool + 2) matrix \u02dc Xd for the document. The document matrix \u02dc Xd can be interpreted as (nd \u2212nwindow \u2212npool + 2) separate embeddings, each corresponding to different equal-sized spans of text within the document. Our choice of a window-based max-pooling strategy, instead of global max-pooling as employed by CDSSM [362], is motivated by the fact that the window-based approach allows the model to distinguish between matches in different parts of the document. As posited in the previous section, a model that is aware of match positions may be more suitable when dealing with long documents, especially those containing mixture of many different topics. The output of the max-pooling layer for the query is then passed through a fully-connected layer. 
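As an aside, the character n-graph count featurization that produces the input representation for the distributed subnetwork can be sketched as follows. This is a minimal illustration: the tiny hand-picked n-graph vocabulary is an assumption made for the example, whereas in practice only the most frequent n-graphs in the collection are retained.

import numpy as np

def char_ngraphs(term, n_max=5):
    """All character n-graphs of the term for 1 <= n <= n_max."""
    return [term[i:i + n]
            for n in range(1, n_max + 1)
            for i in range(len(term) - n + 1)]

def ngraph_featurize(terms, vocab, n_max=5):
    """Count-based n-graph vector per term: an (m_distrib x num_terms) matrix."""
    index = {g: i for i, g in enumerate(vocab)}
    X = np.zeros((len(vocab), len(terms)), dtype=np.float32)
    for j, term in enumerate(terms):
        for g in char_ngraphs(term, n_max):
            if g in index:
                X[index[g], j] += 1.0
    return X

vocab = ["p", "r", "e", "s", "pre", "res", "ent", "dent"]  # toy n-graph vocabulary
print(ngraph_featurize(["president", "presidents"], vocab))

Each column of the resulting matrix corresponds to one term, mirroring the mdistrib x nq and mdistrib x nd inputs described above.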
For the document, the nhidden x (nd - nwindow - npool + 2) dimensional matrix output is operated on by another convolutional layer with a filter size of nhidden, kernel dimensions of nhidden x 1, and a stride of 1. The combination of these convolutional and max-pooling layers enables the distributed subnetwork to learn suitable representations of text for effective inexact matching. To perform the matching, we conduct the element-wise or Hadamard product between the embedded document matrix and the extended (broadcast) query embedding,

\tilde{X} = \big(\underbrace{\vec{v}_q, \ldots, \vec{v}_q}_{(n_d - n_{window} - n_{pool} + 2) \text{ times}}\big) \circ \tilde{X}_d \qquad (4.4)

After this, we pass the matrix through fully connected layers, and a dropout layer, until we arrive at a single score. Like the local subnetwork, we use the hyperbolic tangent function here for non-linearity.

4.1.3 Optimization
Each training sample consists of a query q, a relevant document d+ and a set of non-relevant documents D- = {d0, ..., dnneg}. We use a softmax function to compute the posterior probability of the positive document given a query based on the score,

p(d^+ \mid q) = \frac{e^{\text{Duet}(q, d^+)}}{\sum_{d \in D} e^{\text{Duet}(q, d)}} \qquad (4.5)

where

D = \{d^+\} \cup D^- \qquad (4.6)

We maximize the log likelihood log p(d+|q) using stochastic gradient descent.

4.2 Experiments
We conduct three experiments on a document ranking task to test: (1) the effectiveness of the Duet network compared to the local and distributed subnetworks separately, (2) the effectiveness of the Duet network compared to existing baselines for content-based web ranking, and (3) the effectiveness of training with judged negative documents compared to random negative documents. In addition, we also evaluate the effectiveness of the Duet model on the TREC Complex Answer Retrieval (TREC CAR) task [376]. In this section, we detail both the experiment setup and the corresponding baseline implementations.

4.2.1 Data
Document ranking task. The training dataset consists of 199,753 instances in the format described in Section 4.2.2. The queries in the training dataset are randomly sampled from Bing's search logs from a period between January 2012 and September 2014. Human judges rate the documents on a five-point scale (perfect, excellent, good, fair, and bad). The document body text is retrieved from Bing's Web document index. We use proprietary parsers for extracting the body text from raw HTML content. All query and document text is normalized by down-casing and removing all non-alphanumeric characters. We consider two different test sets, both sampled from Bing search logs. The weighted set consists of queries sampled according to their frequency in the search logs. Thus, frequent queries are well-represented in this dataset. These queries are sampled between October 2014 and December 2014. The unweighted set consists of queries sampled uniformly from the entire population of unique queries. The queries in this sample remove the bias toward popular queries found in the weighted set. The unweighted queries are sampled between January 2015 and June 2015.

Table 4.1: Statistics of the three sets randomly sampled from Bing's search logs for the document ranking task. The candidate documents are generated by querying Bing and then rated using human judges.

                     queries    documents    docs per query
  training           199,753    998,765      5
  weighted test      7,741      171,302      24.9
  unweighted test    6,808      71,722       10.6
Because all of our datasets are derived from sampling real query logs and because queries naturally repeat, there is some overlap in queries between the training and testing sets. Specifically, 14% of the testing queries in the weighted set occur in the training set, whereas only 0.04% of the testing queries in the unweighted set occur in the training set. We present both sets of results for those who may be in environments with repeated queries (as is common in production search engines) and for those who may be more interested in cold start situations or tail queries. Table 4.1 summarizes statistics for the two test sets.

TREC Complex Answer Retrieval task. The goal of the TREC CAR task is to, given a document title and a section heading from the same document as a query, retrieve and rank passages from a provided collection. In order to support this task, the TREC CAR organizers present a large training set derived from English Wikipedia. The mediawiki format of articles is parsed to extract the title, the section headings, and the corresponding passages. The collection is filtered to exclude pages which belong to frequent categories, such as people and events, and articles with fewer than five sections are discarded. For each heading, we construct a set that includes all passages from the page (in random order) as well as the same number of passages randomly drawn from other pages. This process yields a mean of 35 passages per section, which include: (1) passages from the correct section, (2) passages from the same page but from different sections, and (3) passages from other pages. The retrieval task involves ranking the correct passages (1) higher than the passages from the wrong section or article (2 and 3). We split the dataset for training and testing at a 4:1 ratio.

4.2.2 Training
Besides the architecture (Figure 4.2), our model has the following free parameters: (1) the maximum order of the character-based representation for the distributed subnetwork nmaxgraph, (2) the maximum number of query terms nq and document terms nd considered by the model, (3) the convolutional filter size nhidden and window size nwindow, (4) the window size for max-pooling on the document input for the distributed subnetwork npool, (5) the number of negative documents to sample at training time nneg, (6) the dropout rate, and (7) the learning rate. We use a maximum order of five for our character n-graphs in the distributed subnetwork. Instead of using the full 62,193,780-dimensional vector, we only consider the top 2,000 most frequent n-graphs, resulting in 36 unigraphs (a-z and 0-9), 689 bigraphs, 1149 trigraphs, 118 4-graphs, and eight 5-graphs. For both the document ranking and the TREC CAR tasks we limit the maximum number of query terms nq to 10, and fix the window size of the convolution nwindow to 3. The dropout rate is also set to 0.20 for both. For the document ranking task, we consider the first 1000 terms in the document. Correspondingly, the max-pooling window size npool is fixed at 100, and nhidden is set to 300. When training our model, we sample four negative documents for every relevant document. More precisely, for each query we generated a maximum of one training sample of each of the following forms: (1) one excellent document with four fair documents, (2) one excellent document with four bad documents, and (3) one good document with four bad documents.
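A minimal sketch of this sampling scheme, combined with the group-wise softmax objective of Equation 4.5 applied to the resulting one-positive-plus-four-negatives groups, is given below. It is PyTorch-based, and the string rating labels and the random tie-breaking among equally rated documents are illustrative assumptions:

import random
import torch
import torch.nn.functional as F

def build_training_samples(judged_docs):
    """judged_docs: list of (doc_id, rating) pairs for one query.
    Returns up to three (positive, [4 negatives]) samples, mirroring the
    (excellent, fair), (excellent, bad) and (good, bad) forms above."""
    by_rating = {}
    for doc_id, rating in judged_docs:
        by_rating.setdefault(rating, []).append(doc_id)
    samples = []
    for pos_rating, neg_rating in [("excellent", "fair"),
                                   ("excellent", "bad"),
                                   ("good", "bad")]:
        pos, neg = by_rating.get(pos_rating, []), by_rating.get(neg_rating, [])
        if pos and len(neg) >= 4:
            samples.append((random.choice(pos), random.sample(neg, 4)))
    return samples

def duet_group_loss(scores):
    """scores: tensor of shape (1 + n_neg,) with the positive document first.
    Negative log of the softmax posterior of the positive (Equation 4.5)."""
    return -F.log_softmax(scores, dim=0)[0]

# In practice the scores would come from the Duet network; a raw score
# vector stands in for them here.
print(duet_group_loss(torch.tensor([2.0, 0.5, 0.1, -0.3, 0.0])))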
Pilot experiments showed that treating documents judged as fair or bad as the negative examples result in signi\ufb01cantly better performance, than when the model is trained with randomly sampled negatives. For training, we discard all documents rated as perfect because a large portion of them fall under the navigational intent, which can be better satis\ufb01ed by historical click based ranking signals. When dealing with long documents, it is necessary to use a small minibatch size of 8 to \ufb01t the \f4.2. Experiments 119 whole data in GPU memory. For TREC CAR, the average size of passages is signi\ufb01cantly smaller than the documents in the previous ranking task. So we consider the \ufb01rst 100 terms in every passage and set npool and nhidden to 10 and 64, respectively. Because of the (1) smaller size of the input, (2) the smaller number of model parameters, as well as (3) the use of single negative documents, we increase the minibatch size to 1024. Finally, we choose 0.01 and 0.001 as the learning rates for the two tasks, respectively, based on corresponding validation sets. We implement our model using CNTK [339] and train the model with stochastic gradient descent based optimization (with automatic differentiation) on a single GPU.2 4.2.3 Baselines Document ranking task Exact term matching is effectively performed by many classic information retrieval models. We used the Okapi BM25 [80] and query likelihood (QL) [90] models as representative of this class of model. We use Indri3 for indexing and retrieval. Match positions are handled by substantially fewer models. Metzler\u2019s dependence model (DM) [155] provides an inference network approach to modeling term proximity. We use the Indri implementation for our experiments. Inexact term matching received both historic and modern treatments in the literature. Deerwester et al. [170] originally presented latent semantic analysis (LSA) as a method for addressing vocabulary mismatch by projecting terms and documents into a lower-dimension latent space. The dual embedding space model (DESM) [165, 218] computes a document relevance score by comparing every term in the document with every query term using pre-trained term embeddings. We used the same pre-trained term embeddings dataset that the authors made publicly available online for download4. These embeddings, for approximately 2.8M terms, were previously trained on a corpus of Bing queries. In particular, we use the 2A CNTK implementation of Duet is available at https://github.com/bmitra-msft/ NDRM/blob/master/notebooks/Duet.ipynb under the MIT license. 3http://www.lemurproject.org/indri/ 4https://www.microsoft.com/en-us/download/details.aspx?id=52597 \f120 Chapter 4. Learning to rank with Duet networks DESMIN-OUT model, which was reported to have the best performance on the retrieval task, as a baseline here. Both the deep structured semantic model (DSSM) [164] and its convolutional variant CDSSM [362] consider only the document title for matching with the query. While some negative results have been reported for title-based DSSM and CDSSM on the ad hoc document retrieval tasks [163, 238], we include document-based variants appropriately retrained on the same set of positive query and document pairs as our model. As with the original implementation we choose the non-relevant documents for training by randomly sampling from the document corpus. 
For the CDSSM model, we concatenate the trigraph hash vectors of the \ufb01rst n terms of the body text followed by a vector that is a sum of the trigraph hash vectors for the remaining terms. The choice of n is constrained by memory requirements, and we pick 499 for our experiments. The DRMM model [163] uses a DNN to perform term matching, with few hundred parameters, over histogram-based features. The histogram features, computed using exact term matching and pre-trained term embeddings based cosine similarities, ignoring the actual position of matches. We implemented the DRMMLCH\u00d7IDF variant of the model on CNTK [339] using term embeddings trained on a corpus of 341,787,174 distinct sentences randomly sampled from Bing\u2019s Web index, with a corresponding vocabulary of 5,108,278 terms. Every training sample for our model is turned into four corresponding training samples for DRMM, comprised of the query, the positive document, and each one of the negative documents. This guarantees that both models observed the exact same pairs of positive and negative documents during training. We adopted the same loss function as proposed by Guo et al. [163]. TREC Complex Answer Retrieval task We rank results using Okapi BM25 [80] with k1=1.2 and b=0.75. Porter stemming is applied to a Lucene 6.4.1 index and the query. In addition, we experiment with three different query expansion approaches (terms, entities, and passages) and three vector space representations of queries and \f4.2. Experiments 121 documents (tf-idf, GloVe embeddings, and RDF2Vec embeddings). Each of the possible combinations (e.g., term-expansion + tf-idf vectors, or passage-expansion + term embedding vectors) de\ufb01nes a query representation. Results are ranked according to the cosine similarity between the vector representations of the query and the document. We experiment with three different query expansion approaches: \u2022 Expansion terms (RM). Feedback terms are derived using pseudo relevance feedback and the relevance model [158]. We use Galago\u2019s implementation5 which is based on a Dirichlet smoothed language model for the feedback run. We achieve the best performance by expanding the query with top 10 terms extracted from the top 10 feedback documents. \u2022 Expansion entities. We also expand the query using supporting entities retrieved by a search for the query. Best performance is achieved using 10 entities for expansion. \u2022 Passage Rocchio. Inspired by the work of Banerjee and Mitra [377], we retrieve other passages, which have identical section heading to the heading part of our query, from the portion of the dataset reserved for training. For example, given a query such as \u201cUnited States demographic\u201d, with respect to the entity United States, we collect supporting passages from the pages of other entities (e.g.\u201e United Kingdom), that fall under the section titled \u201cDemographics\u201d. Headings are processed with tokenisation, stopword and digit removal, and stemming. We are able to retrieve at least one supporting passage for one-third of our queries. We obtain best performance from expanding the query with 5 passages. We investigate three vector representation schemes for the query and the passage: \u2022 Local representation. Each term in the vocabulary is represented by a one-hot vector. Queries and passages are represented as bag of terms, where the term frequencies are weighted by TF-IDF and are logarithmic L2-normalised. 5lemurproject.org/galago.php \f122 Chapter 4. 
• Term Embeddings. Under this scheme, each term is represented by its corresponding pre-trained GloVe [202] embedding. The query and passage vectors are obtained by averaging the term embeddings with TF-IDF weighting,

\vec{v}_q = \frac{1}{|q|} \sum_{t_q \in u(q)} \text{tf-idf}(t_q) \cdot \vec{v}_{t_q}

where u(q) is the set of unique terms in query q.

• Entity Embeddings. Queries and documents are represented as their mentioned DBpedia entities, using the entity linker TagMe [378] with default parameters. We obtain latent vector representations \vec{v}_e of each linked entity e using pre-computed RDF2Vec entity embeddings [379]. The query and passage representations are obtained from a weighted average of these entity vectors. Entity vectors are weighted by link(e), based on inlink and outlink statistics from the 2015-04 DBpedia Data Set [380],

\vec{v}_q = \frac{1}{|\text{ent}(q)|} \sum_{e \in \text{ent}(q)} \text{link}(e) \cdot \vec{v}_e

where ent(q) is the set of entities mentioned in query q.

Additionally, we combine the ranking scores of these different baselines with supervised machine learning [381]. We train a linear model using RankLib (lemurproject.org/ranklib.php), optimized for MAP and trained with coordinate ascent.

4.2.4 Evaluation
For the document ranking task, we report the normalized discounted cumulative gain (NDCG) metric computed at positions one and ten. All performance metrics are averaged over queries for each run. Whenever testing for significant differences in performance, we used the paired t-test with a Bonferroni correction. For the TREC CAR task, we report MRR, R-Prec, and MAP numbers for all the models.

Table 4.2: Performance on the document ranking task. All Duet runs significantly outperformed our local and distributed models (p < 0.05). All Duet runs also outperformed the non-neural and neural baselines; the difference between the Duet model and the best performing baseline per dataset and position is statistically significant (p < 0.05).

                           Weighted               Unweighted
                           NDCG@1    NDCG@10      NDCG@1    NDCG@10
  Non-neural baselines
    LSA                    22.4      44.2         31.9      62.7
    BM25                   24.2      45.5         34.9      63.3
    DM                     24.7      46.2         35.0      63.4
    QL                     24.6      46.3         34.9      63.4
  Neural baselines
    DRMM                   24.3      45.2         35.6      65.1
    DSSM                   25.8      48.2         34.3      64.4
    CDSSM                  27.3      48.2         34.3      64.0
    DESM                   25.4      48.3         35.0      64.7
  Our models
    Local model            24.6      45.1         35.0      64.4
    Distributed model      28.6      50.5         35.2      64.9
    Duet model             32.2      53.0         37.8      66.4

4.3 Results
Document ranking task. Table 4.2 reports NDCG-based evaluation results on the two test datasets for our model and all the baseline models. Our main observation is that Duet performs significantly better than the individual local and distributed models. This supports our underlying hypothesis that matching in a latent semantic space can complement exact term matches in a document ranking task, and hence a combination of the two is more appropriate. Note that the NDCG numbers for the local and the distributed subnetworks correspond to when these DNNs are trained individually, but for Duet the two DNNs are trained together as part of a single neural network. Among the baseline models, including both traditional and neural network based models, CDSSM and DESM achieve the highest NDCG at positions one and ten, respectively, on the weighted test set. On the unweighted test set DRMM is our best baseline model at both rank positions.
Duet demonstrates significant improvements over all these baseline models on both test sets and at both NDCG positions.

Figure 4.4: Duet demonstrates significantly better NDCG performance (p < 0.05) on both test sets, (a) the weighted set and (b) the unweighted set, when trained with judged non-relevant documents as the negative examples, instead of randomly sampling them from the document corpus. The distributed subnetwork also shows a statistically significant NDCG gain (p < 0.05) on the weighted set, and a non-statistically-significant NDCG gain on the unweighted set.

We also test our independent local and distributed models against their conceptually closest baselines. Because our local model captures both matching and proximity, we compared its performance to dependence models (DM). While the performance in terms of NDCG@1 is statistically indistinguishable, both NDCG@10 results are statistically significant (p < 0.05). We compared our distributed model to the best neural model for each test set and metric. We found no statistically significant difference except for NDCG@10 on the weighted set. We were interested in testing our hypothesis that training with labeled negative documents is superior to training with randomly sampled documents presumed to be negative. We conducted an experiment training with negative documents following each of the two protocols. Figure 4.4 shows the results of these experiments. We found that, across all our models, using judged non-relevant documents was more effective than randomly sampling documents from the corpus and considering them as negative examples. Very recently, Xiong et al. [382] have presented similar evidence on the importance of sampling negative documents that are closer in relevance to the query than documents sampled from the collection at uniform probability, and operationalized the idea in the form of active metric learning [383-385].

TREC Complex Answer Retrieval task. Results are presented in Table 4.3. Not all query expansion approaches and vector space representation methods improve over the BM25 baseline. The most promising results, among the baseline methods which employ cosine similarity as a ranking function, are obtained when the query is expanded with supporting textual paragraphs. This is an interesting finding that reconfirms the results of previous work on the automatic generation of Wikipedia articles based on their structural information [377, 386]. The learning to rank model is our best performing baseline.

Table 4.3: Duet outperforms (statistically significant at p < 0.05) the best baseline model on the TREC Complex Answer Retrieval task.

                                              MRR     R-Prec   MAP
  BM25             query only                 0.409   0.232    0.320
  tf-idf (cs)      query only                 0.383   0.212    0.350
                   query + RM1                0.384   0.205    0.324
                   query + Rocchio            0.466   0.286    0.400
  GloVe (cs)       query only                 0.387   0.210    0.329
                   query + RM1                0.339   0.177    0.289
                   query + Rocchio            0.410   0.236    0.349
  RDF2Vec (cs)     entity-query only          0.369   0.200    0.313
                   ent-query + ent-RM1        0.377   0.208    0.320
                   ent-query + ent-Rocchio    0.375   0.206    0.316
  Learning to Rank all (cs) scores            0.475   0.290    0.412
  Duet             query only                 0.553   0.359    0.470
Duet yields a substantial improvement over all presented approaches, including a 47% improvement in MAP over the BM25 baseline and a 14% improvement over the learning to rank model. 4.4 Further improvements In follow up work, we explore several additional modi\ufb01cations to the original Duet architecture and demonstrate through an ablation study that incorporating these changes results in signi\ufb01cant improvements on passage ranking. We evaluate the modi\ufb01ed Duet model on the MS MARCO passage ranking task [52] and the TREC \f126 Chapter 4. Learning to rank with Duet networks Deep Learning track [15]. In the context of the document ranking task at TREC, we further modify the architecture to incorporate multiple-\ufb01eld representation of documents. 4.4.1 Duet on MS MARCO In this section, we brie\ufb02y describe several modi\ufb01cations to the Duet architecture in the context of passage ranking. A public implementation of the updated Duet model using PyTorch [387] is available online7. 1. Word embeddings. We replace the character level n-graph encoding in the input of the distributed subnetwork with word embeddings. We see signi\ufb01cant reduction in training time given a \ufb01xed number of minibatches and a \ufb01xed minibatch size. This change primarily helps us to train on a signi\ufb01cantly larger amount of data under \ufb01xed training time constraints. We initialize the word embeddings using pre-trained GloVe [202] embeddings before training Duet. 2. Inverse document frequency weighting. In contrast to some of the other datasets on which Duet has been previously evaluated [7, 8], the MS MARCO dataset contains a relatively larger percentage of natural language queries and the queries are considerably longer on average. In traditional IR models, the inverse document frequency (IDF) [91] of a query term provides an effective mechanism for weighting the query terms by their discriminative power. In the original Duet architecture, the input to the local subnetwork corresponding to a query q and a document d is a binary interaction matrix X \u2208R|q|\u00d7|d| de\ufb01ned as follows: Xij = \uf8f1 \uf8f4 \uf8f2 \uf8f4 \uf8f3 1, if qi = d j 0, otherwise (4.7) 7https://github.com/dfcf93/MSMARCO/blob/master/Ranking/ Baselines/Duet.ipynb \f4.4. Further improvements 127 We incorporate IDF in Duet by weighting the interaction matrix by the IDF of the matched terms. We adopt the Robertson-Walker de\ufb01nition of IDF [388] normalized to the range [0,1]. X\u2032 ij = \uf8f1 \uf8f4 \uf8f2 \uf8f4 \uf8f3 IDF(qi), if qi = dj 0, otherwise (4.8) IDF(t) = log(N/nt) log(N) (4.9) Where, N is the total number of passages in the collection and nt is the number of passages in which the term t appears at least once. 3. Non-linear combination of local and distributed subnetworks. Zamani et al. [389] show that when combining different subnetworks in a neural ranking model, it is more effective if each subnetwork produce a vector output that are further combined by additional multi-layer perceptrons (MLP). In the original Duet, the local and the distributed subnetwork produce a single score that are linearly combined. In our updated architecture, both subnetworks produce a vector that are further combined by an MLP\u2014with two hidden layers\u2014to generate the estimated relevance score. 4. Recti\ufb01er Linear Units (ReLU). We replace the Tanh non-linearities in the original Duet with ReLU [390] activations. 5. Bagging. 
We observe some additional improvements from combining multiple Duet models, trained with different random seeds and on different random samples of the training data, using bagging [391].

Experiments. We evaluate the proposed modifications to Duet on the recently released MS MARCO passage ranking task [52]. The task requires a model to rank approximately a thousand passages for each query. The queries are sampled from Bing's search logs, and then manually annotated to restrict them to questions with specific answers. A BM25 [80] model is employed to retrieve the top thousand candidate passages for each query from the collection. For each query, zero or more candidate passages are deemed relevant based on manual annotations. The ranking model is evaluated on this passage re-ranking task using the mean reciprocal rank (MRR) metric [75]. Participants are required to submit the ranked list of passages per query for a development (dev) set and a heldout (eval) set. The ground truth annotations for the development set are available publicly, while the corresponding annotations for the evaluation set are heldout to avoid overfitting. A public leaderboard (http://www.msmarco.org/leaders.aspx) presents all submitted runs from different participants on this task.

The MS MARCO task provides a pre-processed training dataset, called "triples.train.full.tsv", where each training sample consists of a triple <q, p+, p->, where q is a query and p+ and p- are a pair of passages, with p+ being more relevant to q than p-. Similar to the original Duet, we employ the cross-entropy with softmax loss to learn the parameters of our network M,

\mathcal{L} = \mathbb{E}_{q, p^+, p^- \sim \theta}\left[\ell(M_{q,p^+} - M_{q,p^-})\right] \qquad (4.10)

where

\ell(\Delta) = \log(1 + e^{-\sigma \cdot \Delta}) \qquad (4.11)

and M_{q,p} is the relevance score for the pair <q, p> as estimated by the model M. Note that, by considering a single negative passage per sample, our loss is equivalent to the RankNet loss [235]. We use the Adam optimizer with default parameters and a learning rate of 0.001. We set sigma in Equation 4.11 to 0.1 and the dropout rate for the model to 0.5. We trim all queries and passages to their first 20 and 200 words, respectively. We restrict our input vocabulary to the 71,486 most frequent terms in the collection and set the size of all hidden layers to 300. We use minibatches of size 1024 and train the model for 1024 minibatches. Finally, for bagging we train eight different Duet networks with different random seeds and on different samples of the training data.

Table 4.4: Comparison of the different Duet variants and other state-of-the-art approaches from the public MS MARCO leaderboard. The updated Duet benefits significantly from the modifications proposed in this paper.

                                                                  MRR@10 (Dev)   MRR@10 (Eval)
  Other approaches
    BM25                                                          0.165          0.167
    Single CKNRM [392] model                                      0.247          0.247
    Ensemble of 8 CKNRM [392] models                              0.290          0.271
    IRNet (a proprietary deep neural model)                       0.278          0.281
    BERT [167]                                                    0.365          0.359
  Duet variants
    Single Duet w/o IDF weighting for interaction matrix          0.163          -
    Single Duet w/ Tanh non-linearity (instead of ReLU)           0.179          -
    Single Duet w/o MLP to combine local and distributed scores   0.208          -
    Single Duet                                                   0.243          0.245
    Ensemble of 8 Duet networks                                   0.252          0.253
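To make two of the modifications above concrete, namely the IDF-weighted interaction matrix of Equations 4.8 and 4.9 and the pairwise objective of Equations 4.10 and 4.11, a minimal sketch is given below. The toy document frequencies, the collection size, and the handling of unseen terms are illustrative assumptions:

import math
import numpy as np
import torch
import torch.nn.functional as F

def normalized_idf(term, doc_freq, num_docs):
    """Robertson-Walker IDF rescaled to [0, 1] (Equation 4.9)."""
    n_t = doc_freq.get(term, 0)
    if n_t == 0:
        return 1.0  # assumption for this sketch: unseen terms get maximum weight
    return math.log(num_docs / n_t) / math.log(num_docs)

def idf_interaction_matrix(query_terms, doc_terms, doc_freq, num_docs,
                           n_q=20, n_d=200):
    """X'[i, j] = IDF(q_j) if q_j == d_i, else 0 (Equation 4.8)."""
    X = np.zeros((n_d, n_q), dtype=np.float32)
    for i, d_term in enumerate(doc_terms[:n_d]):
        for j, q_term in enumerate(query_terms[:n_q]):
            if q_term == d_term:
                X[i, j] = normalized_idf(q_term, doc_freq, num_docs)
    return X

def pairwise_loss(score_pos, score_neg, sigma=0.1):
    """l(delta) = log(1 + exp(-sigma * delta)) (Equations 4.10-4.11)."""
    return F.softplus(-sigma * (score_pos - score_neg))

doc_freq = {"president": 120_000, "pekarovic": 3}  # toy collection statistics
X = idf_interaction_matrix(["pekarovic", "president"],
                           "pekarovic land company".split(),
                           doc_freq, num_docs=8_800_000)
print(X[:3, :2])
print(pairwise_loss(torch.tensor(1.2), torch.tensor(0.4)))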
We train and evaluate our models using a Tesla K40 GPU\u2014on which it takes a total of only 1.5 hours to train each single Duet model and to evaluate it on both dev and eval sets. Results Table 4.4 presents the MRR@10 corresponding to all the Duet variants we evaluated on the dev set. The updated Duet with all the modi\ufb01cations described in Section 4.4.1 achieves an MRR@10 of 0.243. We perform an ablation study by leaving out one of the three modi\ufb01cations\u2014(i) IDF weighting for interaction matrix, (ii) ReLU non-linearity instead of Tanh, and (iii) LP to combine local and distributed scores,\u2014out at a time. We observe a 33% degradation in MRR by not incorporating the IDF weighting alone. It is interesting to note that the Github implementations9 of the KNRM [393] and CKNRM [392] models also indicate that their MS MARCO submissions incorporated IDF term-weighting\u2014potentially indicating the value of IDF weighting across multiple architectures. Similarly, we also observe a 26% degradation in MRR by using Tanh non-linearity instead of ReLU. Using a linear combination of scores from the local and the distributed subnetwork 9https://github.com/thunlp/Kernel-Based-Neural-Ranking-Models \f130 Chapter 4. Learning to rank with Duet networks instead of combining their vector outputs using an MLP results in 14% degradation in MRR. Finally, we observe a 3% improvement in MRR by ensembling eight Duet networks using bagging. We also submit the individual Duet model and the ensemble of eight Duets for evaluation on the heldout set and observe similar numbers. We include the MRR numbers for other non-Duet based approaches that are available on the public leaderboard in Table 4.4. As of writing this paper, BERT [327] based approaches\u2014e.g., [167]\u2014are outperforming other approaches by a signi\ufb01cant margin. Among the non-BERT based approaches, a proprietary deep neural network\u2014called IRNet\u2014currently demonstrates the best performance on the heldout evaluation set. This is followed, among others, by an ensemble of CKNRM [392] models and the single CKNRM model. The single Duet model achieves comparable MRR to the single CKNRM model on the eval set. The ensemble of Duets, however, performs slightly worse than the ensemble of the CKNRM models on the same set. 4.4.2 Duet on TREC Deep Learning track The deep learning track at TREC 2019 makes large training datasets\u2014suitable for traininig deep models with large number of learnable parameters\u2014publicly available in the context of a document ranking and a passage ranking tasks. We benchmark Duet on both tasks. In the context of the document ranking task, we adapt Duet to ingest a \u201cmultiple \ufb01eld\u201d view of documents, based on \ufb01ndings from Zamani et al. [389]. We refer to this new architecture as Duet with Multiple Fields (DuetMF). We also combine the relevance estimates from DuetMF with several other traditional and neural retrieval methods in a learning-to-rank (LTR) [39] framework. For the passage ranking task, we submit a single run based on an ensemble of eight Duet models. The architecture and the training scheme resembles that described in Section 4.4.1. TREC 2019 deep learning track The TREC 2019 deep learning track introduces: (i) a document retrieval task and (ii) a passage retrieval task. For both tasks, participants are provided a set of candidates\u2014100 documents and 1000 passages, \f4.4. Further improvements 131 respectively\u2014per query that should be ranked. 
Participants can choose to either rerank the provided candidates or retrieve from the full collection. For the passage retrieval task, the track reuses the set of 500K+ manually-assessed binary training labels released as part of the Microsoft MAchine Reading COmprehension (MS MARCO) challenge [52]. For the document retrieval task, the passage-level labels are transferred to their corresponding source documents, producing a training dataset of close to 400K labels. For evaluation, a shared test set of 200 queries is provided for both tasks, of which two different overlapping sets of 43 queries were later selected for manual NIST assessments corresponding to the two tasks. Full details of all datasets are available on the track website (https://microsoft.github.io/TREC-2019-Deep-Learning/) and in the track overview paper [15].

Duet with Multiple Fields (DuetMF). Zamani et al. [389] study neural ranking models in the context of documents with multiple fields. In particular, they make the following observations:
Obs. 1: It is more effective to summarize the match between the query and individual document fields by a vector, as opposed to a single score, before aggregating to estimate full document relevance to the query.
Obs. 2: It is better to learn different query representations corresponding to each document field under consideration.
Obs. 3: Structured dropout (e.g., field-level dropout) is effective for regularization during training.
We incorporate all of these ideas in the updated Duet network as shown in Fig. 4.5. Documents in the deep learning track dataset contain three text fields: (i) URL, (ii) title, and (iii) body. We employ Duet to match the query against each individual document field. In line with Obs. 1 from [389], each field-specific Duet outputs a vector instead of a single score. We do not share the parameters of Duet between the field-specific instances, based on Obs. 2. Following Obs. 3, we introduce structured dropout at different stages of the model. We randomly drop out each of the local subnetworks for 50% of the training samples. Similarly, we also drop out different combinations of field-level subnetworks uniformly at random, taking care that at least one field-level model is always retained.

Figure 4.5: The modified Duet (DuetMF) that considers multiple document fields.

We consider the first 20 terms for queries and for document URLs and titles. For document body text, we consider the first 2000 terms. Similar to Section 4.4.1, we employ pretrained word embeddings as the input text representation for the distributed subnetworks. We train the word embeddings using a standard word2vec [203] implementation in FastText [394] on a combination of the MS MARCO document corpus and training queries. The query and document field embeddings are learned by deep convolutional-pooling layers.
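A minimal sketch of the field-level (structured) dropout just described, sampling a mask over the URL, title, and body subnetworks while guaranteeing that at least one field survives, is given below; aggregating the surviving field vectors by masked summation is an illustrative assumption of this sketch:

import random
import torch

FIELDS = ["url", "title", "body"]

def sample_field_mask(p_drop=0.5):
    """Drop each field-level subnetwork with probability p_drop, but always
    retain at least one field (re-sample if every field was dropped)."""
    while True:
        mask = [0.0 if random.random() < p_drop else 1.0 for _ in FIELDS]
        if any(mask):
            return torch.tensor(mask)

def aggregate_field_vectors(field_vectors, mask):
    """field_vectors: (num_fields, dim) match vectors, one per field.
    Masked fields contribute nothing to the aggregated representation."""
    return (field_vectors * mask.unsqueeze(1)).sum(dim=0)

field_vectors = torch.randn(len(FIELDS), 300)  # one match vector per field
mask = sample_field_mask()
print(mask, aggregate_field_vectors(field_vectors, mask).shape)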
We set the hidden layer size at all stages of the model to 300 and the dropout rate for different layers to 0.5. For training, we employ the RankNet loss [235] over <q, dpos, dneg> triples and the Adam optimizer [395], with a minibatch size of 128 and a learning rate of 0.0001. We sample dneg uniformly at random from the top 100 provided candidates that are not positively labeled. When employing structured dropout, the same sub-models are masked for both dpos and dneg. In light of the recent success of large pretrained language models, e.g., [167], we also experiment with an unsupervised pretraining scheme using the MS MARCO document collection. The pretraining is performed over <qpseudo, dpos, dneg> triples, where dpos and dneg are randomly sampled from the collection and a pseudo-query qpseudo is generated by picking the URL or the title of dpos randomly (with equal probability) and masking the corresponding field on the document side for both dpos and dneg. We see faster convergence during supervised training when the DuetMF model is pretrained in this fashion on the MS MARCO document collection. We posit that a more formal study should be performed in the future on pretraining Duet networks on large collections, such as Wikipedia and the BookCorpus [396].

In addition to the independent Duet model, we train a neural LTR model with two hidden layers, each with 1024 hidden nodes. The LTR run reranks a set of 100 document candidates retrieved by query likelihood (QL) [90] with Dirichlet smoothing (mu = 1250) [150]. Several ranking algorithms based on neural and inference networks act as features: (i) DuetMF, (ii) Sequential Dependence Model (SDM) [155], (iii) Pseudo-Relevance Feedback (PRF) [157, 158], (iv) BM25 [80], and (v) Dual Embedding Space Model (DESM) [165, 218]. We employ SDM with an order of 3, a combine weight of 0.90, an ordered window weight of 0.034, and an unordered window weight of 0.066 as our base candidate scoring function. We use these parameters to retrieve from the target corpus as well as from auxiliary corpora of English language Wikipedia (enwiki-20180901-pages-articles-multistream.xml.bz2) and LDC Gigaword (LDC2011T07). For PRF, initial retrievals from either the target, Wikipedia, or Gigaword corpora adopted the SDM parameters above, but are used to rank 75-word passages with a 25-word overlap. These passages are then interpolated using the top m passages and standard relevance modeling techniques, from which we select the top 50 words to use as an expanded query for the final ranking of the target candidates. We do not explicitly adopt RM3 [160] because our LTR model implicitly combines our initial retrieval score and the score from the expanded query. All code for the SDM and PRF feature computation is available at https://github.com/diazf/indri. We evaluate two different BM25 models with hyperparameters <k1 = 0.9, b = 0.4> and <k1 = 3.44, b = 0.87>.

Table 4.5: Official TREC 2019 Deep Learning track results. The recall metric is computed at position 100 for the document retrieval task and at position 1000 for the passage retrieval task.

  Run description           Subtask     MRR     NDCG@10   MAP     Recall
  Document retrieval task
    LTR w/ DuetMF           fullrank    0.876   0.578     0.237   0.368
    DuetMF model            rerank      0.810   0.533     0.229   0.387
  Passage retrieval task
    Ensemble of 8 Duets     rerank      0.806   0.614     0.348   0.694
Corresponding to each of the DuetMF, SDM, PRF, and BM25 runs, we generate two features based on the score and the rank that the model predicts for a document w.r.t. the target query. We generate eight features by comparing the query against two different document fields (title and body) and using different DESM similarity estimates (INxIN, INxOUT, OUTxIN, OUTxOUT). We add a couple of features based on query length and domain quality, where the latter is defined simply as a ratio between how often documents from a given domain appear in the positively labeled training data and in the overall document collection. Finally, for the passage ranking task, we adopt the exact same model and training procedure from Section 4.4.1. Our final submission is an ensemble of eight Duet networks. Table 4.5 summarizes the official evaluation results for all three runs.

Figure 4.6: NDCG performance of different models (local, distributed, and Duet) by (a) the length of the query and (b) how rare the rarest query term is in the training data. For the rare term analysis, we place all query terms into one of five categories based on their occurrence counts in the training data. We then categorize each query in the test dataset based on the category to which its rarest term belongs. We include a category for queries with at least one term which has no occurrences in the training data.

4.5 Discussion
Our results demonstrated that our joint optimization of local and distributed subnetworks provides substantial improvement over several state-of-the-art baselines. Although the independent models were competitive with existing baselines, the combination provided a significant boost. We also confirm that judged negative documents should be used when available. We speculate that training with topically-similar (but non-relevant) documents allows the model to better discriminate between the documents provided by an earlier retrieval stage, which are closer to each other w.r.t. relevance. This sort of staged ranking, first proposed by Cambazoglu et al. [397], is now a common web search engine architecture.

In Section 4.2.3 we described our baseline models according to which of the properties of effective retrieval systems they incorporate. It is reasonable to expect that models with certain properties are better suited to deal with certain segments of queries. For example, the relevant Web page for the query "what channel are the seahawks on today" may contain the name of the actual channel (e.g., "ESPN" or "FOX") and the actual date of the game, instead of the terms "channel" or "today". A retrieval model that only counts repetitions of query terms is likely to retrieve less relevant documents for this query, compared to a model that considers "ESPN" and "FOX" to be relevant document terms.

Figure 4.7: Principal component analysis of models based on retrieval performance across testing queries. Models using exact term matches (△), proximity (◦), and inexact matches (▽) are presented. Our models are presented as black squares.
In contrast, the query "pekarovic land company", which may be considered a tail navigational intent, is likely to be better served by a retrieval model that simply retrieves documents containing many matches for the term "pekarovic". A representation learning model is unlikely to have a good representation for this rare term, and may therefore be less equipped to retrieve the correct documents. These anecdotal examples agree with the results in Table 4.2, which show that on the weighted test set all the neural models whose main focus is on learning distributed representations of text (the Duet model, the distributed model, DESM, DSSM, and CDSSM) perform better than the models that only look at patterns of term matches (the local model and DRMM). We believe that this is because the DNNs can learn better representations for more frequent queries, and perform particularly well on this segment. Figure 4.6 provides further evidence towards this hypothesis by demonstrating that the distributed model has a larger NDCG gap over the local model for queries containing more frequent terms, and when the number of terms in the query is small. The Duet model, however, is found to perform better than both the local and the distributed models across all these segments.

To better understand the relationship of our models to existing baselines, we compared the per-query performance amongst all models. We conjecture that similar models should perform similarly on the same queries. We represented a retrieval model as a vector where each position of the vector contains the performance of the model on a different query. We randomly sample two thousand queries from our weighted test set and represent all ranking models as vectors of their NDCG values against these two thousand queries. We visualized the similarity between models by projecting the set of performance vectors using principal component analysis. The two-dimensional projection of this analysis is presented in Figure 4.7. The figure largely confirms our intuitions about the properties of retrieval models. Models that use only a local representation of terms are closer together in the projection, and further away from models that learn distributed representations of text. Interestingly, the plot does not separate models based on whether or not the underlying model is a neural network, with neural networks of different retrieval properties appearing in each of the three clusters.

Another interesting distinction between deep neural models and traditional approaches is the effect of the training data size on model performance. BM25 has very few parameters and can be applied to a new corpus or task with almost no training. On the other hand, DNNs like ours demonstrate significant improvements when trained with larger datasets. Figure 4.8 shows that the effect of training data size is particularly pronounced for Duet and the distributed subnetwork that learns representations of text. The trends in these plots indicate that training on even larger datasets may result in further improvements in model performance over what is reported here. We believe this should be a promising direction for future work.

A last consideration when comparing these models is runtime efficiency. Web search engines receive tens of thousands of queries per second. Running a deep neural model on raw body text at that scale is a hard problem.
Another interesting distinction between deep neural models and traditional approaches is the effect of the training data size on model performance. BM25 has very few parameters and can be applied to a new corpus or task with almost no training. On the other hand, DNNs like ours demonstrate significant improvements when trained with larger datasets. Figure 4.8 shows that the effect of training data size is particularly pronounced for Duet and the distributed subnetwork that learns representations of text. The trends in these plots indicate that training on even larger datasets may result in further improvements in model performance over what is reported here. We believe this should be a promising direction for future work.

Figure 4.8: We study the performance of our model variants when trained with datasets of different sizes. (a) Local subnetwork; (b) distributed subnetwork; (c) Duet. Each plot reports overall NDCG@10 against the number of training samples per epoch (2^7 to 2^17), for models trained for the same number of epochs and for the same number of total samples, with the QL baseline shown for reference. For every dataset size we train two models: one for exactly one epoch and another with multiple epochs such that the total number of training samples seen by the model during training is 131,072.

A last consideration when comparing these models is runtime efficiency. Web search engines receive tens of thousands of queries per second. Running a deep neural model on raw body text at that scale is a hard problem. The local subnetwork of our model operates on the term interaction matrix, which should be reasonable to generate using an inverted index. For the distributed model, it is important to note that the 300×899 dimensional matrix representation of the document, which is used to compute the Hadamard product with the query, can be pre-computed and stored as part of the document cache. At runtime, only the Hadamard product and the subsequent part of the network need to be executed. Such caching strategies, if employed effectively, can mitigate a large part of the runtime cost of running a DNN based document ranking model at scale. In Chapter 5, we will revisit the question of runtime efficiency, but in the context of a family of neural IR models.

4.6 Conclusion

We propose a novel ranking model composed of two separate deep subnetworks, one that matches using a local representation of text, and another that learns a distributed representation before matching. The Duet of these two subnetworks achieves better performance compared to the sub-models individually on the document ranking and passage ranking tasks, as well as significant improvements over other neural and traditional non-neural baselines. Our analysis indicates that the improvements over traditional methods are more substantial in the presence of larger training datasets.

Chapter 5
Retrieve, not just rerank, using deep neural networks

In response to short text queries, search engines attempt to retrieve the top few relevant results by searching through collections containing billions of documents [398], often in under a second [399]. Response time is a key consideration in web search. Even a 100ms latency has been shown to invoke negative user reactions [108, 109]. To achieve such short response times, these systems typically distribute the collection over multiple machines that can be searched in parallel [400]. Specialized data structures—such as inverted indexes [401, 402]—are used to dramatically cut down the number of documents that need to be evaluated for any specific query.
The index organization and query evaluation strategies, in particular, trade off retrieval effectiveness and efficiency during the candidate generation stage. However, unlike in late stage re-ranking, where machine learning (ML) models are commonplace [39, 403], candidate generation frequently employs traditional retrieval models with few learnable parameters. Query evaluation using state-of-the-art deep neural ranking models requires time and resource intensive computations. Typically these models also require both the query and the document as input to inspect the interactions between query and document terms. The study of these neural ranking methods has, therefore, been largely limited to late stage re-ranking. Efficient retrieval using these complex machine learned relevance estimators is an important challenge [404].

Recently, a few different attempts [405–407] have been made to leverage neural methods for retrieval over large collections. All of these studies focus on neural methods that compute the latent representations of documents independently of the query. This allows the document embeddings to be precomputed. At query evaluation time, only the query embedding is computed by evaluating the corresponding portion of the deep neural model. This is followed by an approximate nearest-neighbour search over the collection, using the precomputed document embeddings. These approaches typically achieve significantly poorer retrieval performance compared to traditional IR methods, and generally need to be combined with classical IR functions [405, 406]. In this chapter, we describe a different approach—one that assumes query term independence—to leverage state-of-the-art neural ranking models for retrieval over the full collection. Based on our initial study, we posit that there is a significant opportunity to use neural methods in combination with impact-ordered inverted indexes [408–410]. These data structures employ score quantization for efficient retrieval. In the second half of this chapter, we propose a method to learn appropriate quantization schemes that optimize for retrieval effectiveness.

5.1 Query term independence assumption

Many traditional IR ranking functions [80, 90, 147, 148, 151] and early word embedding based IR methods [218, 223, 230] manifest the query-term independence (QTI) property—i.e., documents can be scored independently w.r.t. each query term, and the scores then accumulated. Given a document collection, these term-document scores can be precomputed [409]. Specialized IR data structures, such as inverted indexes [401, 402], in combination with clever organization strategies (e.g., impact-ordering [408–410]), can take advantage of the simplicity of the accumulation function (typically a linear sum) to aggressively prune the set of documents that need to be assessed per query. This dramatically speeds up query evaluation, enabling fast retrieval from large collections containing billions of documents. Recent deep neural architectures—such as BERT [167], Duet (see Chapter 4), and CKNRM [392]—have demonstrated state-of-the-art performance on several IR tasks [1, 15]. However, the superior retrieval effectiveness comes at the cost of evaluating deep models with tens of millions to hundreds of millions of parameters at query evaluation time.
In practice, this limits the scope of these models to late stage re-ranking. Like traditional IR models, we can incorporate the QTI assumption into the design of the deep neural model, which would allow offline precomputation of all term-document scores. Query evaluation then involves only their linear combination, alleviating the need to run the computation intensive deep model at query evaluation time. We can further combine these precomputed machine-learned relevance estimates with an inverted index to retrieve from the full collection. This significantly increases the scope of potential impact of neural methods in the retrieval process. We study this approach in this work.

Of course, by operating independently per query term, the ranking model has access to less information compared to having the context of the full query. Therefore, we expect the ranking model to show some loss in retrieval effectiveness under this assumption. However, we trade this off against the expected gains in the efficiency of query evaluation and the ability to retrieve, and not just re-rank, using deep models.

The efficiency benefits of our proposed approach are two-fold. First and foremost, incorporating the QTI assumption allows the deep model evaluations to be performed at document indexing time, instead of at query evaluation time. While query evaluation has strict response time constraints [108, 109, 399], IR systems generally have more leeway dealing with heavy computation during the offline indexing process. Furthermore, the offline evaluation provides additional flexibility to group samples into large batches and can take advantage of large-scale parallelization by distributing the workload over large clusters of machines.

Secondly, the computational complexity involved in exhaustively evaluating every document in a collection D with respect to a set of queries Q, for typical deep ranking models that operate over individual query-document pairs, is O(|D| × |Q|). For models that incorporate the QTI assumption, the compute complexity changes to O(|D| × |T|), where T is the vocabulary of all indexed terms. While this may not look like an obvious improvement over the O(|D| × |Q|) complexity, we note that we rarely need to evaluate a document exhaustively with respect to every term in the vocabulary. In fact, we can rewrite the complexity for query term independent ranking models as O(|D| × k), where k is the maximum number of terms that are practically important to evaluate for any given document. We posit that k ≪ |T| and that we can employ efficient methods, including simple heuristics, to preselect candidate terms for a given document. The compute complexity can be further improved if, say, the costliest part of the model—e.g., the document encoder—needs to be evaluated only once per document and then only a small overhead is incurred for each of the k candidate terms. In that case, the compute complexity may be closer to O(|D|). A similar motivation has recently been operationalized in the Conformer-Kernel [11, 12] and DeepCT [411] architectures that incorporate the QTI assumption.

In this study, we incorporate the QTI assumption into three state-of-the-art neural ranking models—BERT, Duet, and CKNRM—and evaluate their effectiveness on the MS MARCO passage ranking task [52].
Surprisingly, we find that two of the models suffer no statistically significant adverse effect w.r.t. ranking effectiveness on this task under the query term independence assumption. While the performance of BERT degrades under the strong query term independence assumption, the drop in MRR is reasonably small and the model maintains a significant performance gap compared to other non-BERT based approaches. We conclude that, at least for a certain class of existing neural IR models, incorporating the query term independence assumption may result in significant efficiency gains in query evaluation at minimal (or no) cost to retrieval effectiveness.

5.2 Related work

Several neural IR methods—e.g., [163, 218, 223, 230]—already operate under the query term independence assumption. However, recent performance breakthroughs on many IR tasks have been achieved by neural models [7, 167, 302, 303, 392] that learn latent representations of the query or inspect interaction patterns between query and document terms. In this work, we demonstrate the potential to incorporate the query term independence assumption into these recent representation learning and interaction focused models.

Some neural IR models [164, 357, 412] learn (dense low-dimensional or sparse high-dimensional) vector representations of documents that can be computed independently of the query. The query-document relevance is then estimated as a simple similarity function (e.g., cosine or dot-product) of the learned representations. These models are also amenable to precomputation of document representations and fast retrieval using approximate nearest neighbor search [413]—or even traditional IR data structures [407]. However, these approaches do not work when the model architecture incorporates early interactions between query and document representations—e.g., [7, 167, 238, 392]. The approach proposed in this study allows for interactions between individual query terms and documents.

5.3 Model

IR functions that assume QTI observe the following general form:

S_{q,d} = \sum_{t \in q} s_{t,d}    (5.1)

where s \in \mathbb{R}^{|V| \times |C|}_{\geq 0} is the set of positive real-valued scores as estimated by the relevance model corresponding to documents d \in C in collection C w.r.t. terms t \in V in vocabulary V, and S_{q,d} denotes the aggregated score of document d w.r.t. query q. For example, in the case of BM25 [80]:

s_{t,d} = \mathrm{idf}_t \cdot \frac{\mathrm{tf}_{t,d} \cdot (k_1 + 1)}{\mathrm{tf}_{t,d} + k_1 \cdot \left(1 - b + b \cdot \frac{|d|}{\mathrm{avgdl}}\right)}    (5.2)

where tf and idf denote term-frequency and inverse document frequency, respectively, and k_1 and b are the free parameters of the BM25 model.

Deep neural models for ranking, in contrast, do not typically assume QTI. Instead, they learn complex matching functions to compare the candidate document to the full query. The parameters of such a model \phi are typically learned discriminatively by minimizing a loss function of the following form:

\mathcal{L} = \mathbb{E}_{q \sim \theta_q,\, d^+ \sim \theta_{d^+},\, d^- \sim \theta_{d^-}}\left[\ell(\Delta_{q,d^+,d^-})\right]    (5.3)

where,

\Delta_{q,d^+,d^-} = \phi_{q,d^+} - \phi_{q,d^-}    (5.4)

We use d^+ and d^- to denote a pair of relevant and non-relevant documents, respectively, w.r.t. query q. The instance loss \ell in Equation 5.8 can take different forms—e.g., ranknet [235] or hinge [267].
\ell_{\mathrm{ranknet}}(\Delta_{q,d^+,d^-}) = \log\left(1 + e^{-\sigma \cdot \Delta_{q,d^+,d^-}}\right)    (5.5)

\ell_{\mathrm{hinge}}(\Delta_{q,d^+,d^-}) = \max\{0,\, \epsilon - \Delta_{q,d^+,d^-}\}    (5.6)

Given a neural ranking model \phi, we define \Phi—the corresponding model under the QTI assumption—as:

\Phi_{q,d} = \sum_{t \in q} \phi_{t,d}    (5.7)

The new model \Phi preserves the same architecture as \phi but estimates the relevance of a document independently w.r.t. each query term, as shown in Figure 5.1. The parameters of \Phi are learned using the modified loss:

\mathcal{L} = \mathbb{E}_{q \sim \theta_q,\, d^+ \sim \theta_{d^+},\, d^- \sim \theta_{d^-}}\left[\ell(\delta_{q,d^+,d^-})\right]    (5.8)

where,

\delta_{q,d^+,d^-} = \sum_{t \in q} \left(\phi_{t,d^+} - \phi_{t,d^-}\right)    (5.9)

Figure 5.1: A visual representation of incorporating the QTI assumption into any relevance model. We treat the model in (a), an arbitrary relevance model, as a black-box and re-visualize the same model under the QTI assumption in (b).

Given collection C and vocabulary V, we precompute \phi_{t,d} for all t \in V and d \in C. In practice, the total number of combinations of t and d may be large, but we can enforce additional constraints on which \langle t, d \rangle pairs to evaluate, and assume no contributions from the remaining pairs. During query evaluation, we can look up the precomputed score \phi_{t,d} without dedicating any additional time and resources to evaluating the deep ranking model. We employ an inverted index, in combination with the precomputed scores, to perform retrieval from the full collection using the learned relevance function \Phi. We note that several IR data structures assume that \phi_{t,d} is always positive, which may not hold for an arbitrary neural architecture. But this can be rectified1 by applying a rectified linear unit [390] activation on the model's output.

1 Pun intended.
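To make the preceding definitions concrete, the sketch below traces query-term-independent scoring and the modified RankNet objective of Equations 5.5, 5.7, and 5.9. It is illustrative only: phi here is a toy stand-in for the deep per-term relevance model, and a real implementation would run inside an automatic differentiation framework rather than NumPy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a learned per-term relevance model phi(t, d). In this
# chapter, phi is a deep network (BERT, Duet, or CKNRM) evaluated on a single
# query term and a document; here it is a random-embedding dot product.
term_vecs = {t: rng.normal(size=8) for t in ["cheap", "flights", "seattle"]}

def phi(term, doc_vec):
    return float(term_vecs[term] @ doc_vec)

def score_qti(query_terms, doc_vec):
    # Equation 5.7: sum per-term scores computed independently of each other.
    return sum(phi(t, doc_vec) for t in query_terms)

def ranknet_loss_qti(query_terms, doc_pos, doc_neg, sigma=1.0):
    # Equations 5.5 and 5.9: the pairwise delta decomposes over query terms.
    delta = sum(phi(t, doc_pos) - phi(t, doc_neg) for t in query_terms)
    return float(np.log1p(np.exp(-sigma * delta)))

query = ["cheap", "flights", "seattle"]
d_pos, d_neg = rng.normal(size=8), rng.normal(size=8)
print(score_qti(query, d_pos), ranknet_loss_qti(query, d_pos, d_neg))
```

Because the per-term scores do not depend on the rest of the query, they can be computed once at indexing time and stored in an inverted index; query evaluation then reduces to the summation in score_qti.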
5.4 Experiments

5.4.1 Task description

We study the effect of the QTI assumption on deep neural IR models in the context of the MS MARCO passage ranking task [52]. We find this ranking task suitable for this study for several reasons. Firstly, with one million question queries sampled from Bing's search logs, 8.8 million passages extracted from web documents, and 400,000 positively labeled query-passage pairs for training, it is one of the few large datasets available today for benchmarking deep neural IR methods. Secondly, the challenge leaderboard2—with 18 entries as of March 3, 2019—is a useful catalog of approaches that show state-of-the-art performance on this task. Conveniently, several of these high-performing models include public implementations for ease of reproducibility. Two comparable benchmarks include the TREC CAR [8, 414] and the Google Natural Questions [415] datasets. However, we note that the queries in the former dataset are synthetically generated from Wikipedia page titles and section headings. The latter dataset was released fairly recently and does not list many IR methods that have been evaluated on that benchmark, limiting our options for selecting appropriate baselines for the study. Therefore, we adopt the MS MARCO benchmark for this work.

2 http://www.msmarco.org/leaders.aspx

The MS MARCO passage ranking task comprises one thousand passages per query that the IR model being evaluated should re-rank. Corresponding to every query, one or a few passages have been annotated by human editors as containing the answer relevant to the query. The rank list produced by the model is evaluated using the MRR metric against the ground truth annotations. We use the MS MARCO training dataset to train all baseline and treatment models, and report their performance on the publicly available development set, which we consider—and hereafter refer to—as the test set for our experiments. This test set contains about seven thousand queries, which we posit is sufficient for reliable hypothesis testing. Note that the thousand passages per query were originally retrieved using BM25 from a collection that is provided as part of the MS MARCO dataset. This allows us to also use this dataset in a retrieval setting, in addition to the re-ranking setting used for the official challenge. We take advantage of this in our study.

5.4.2 Baseline models

We begin by identifying models listed on the MS MARCO leaderboard that can serve as baselines for our work. We only consider models with public implementations. We find that a number of top performing entries—e.g., [167]—are based on the recently released large scale language model BERT [327]. The BERT based entries are followed in the ranking by Duet and CKNRM. Therefore, we limit this study to BERT, Duet, and CKNRM.

BERT. Nogueira and Cho [167] report state-of-the-art retrieval performance on the MS MARCO passage re-ranking task by fine-tuning BERT [327] pretrained models. In this study, we reproduce the results from their paper corresponding to the BERT Base model and use it as our baseline. Under the term independence assumption, we evaluate the BERT model once per query term, wherein we input the query term as sentence A and the passage as sentence B.

Duet. We employ the Duet variant described in Section 4.4.1 for this study.

CKNRM. The CKNRM model [392] combines kernel pooling based soft matching [393] with a convolutional architecture for comparing n-grams. CKNRM uses kernel pooling to extract ranking signals from interaction matrices of query and passage n-grams. Under the query term independence assumption, the model considers one query term at a time—i.e., the interactions between individual query unigrams and passage n-grams. We use a public implementation3 of the model in our study.

3 https://github.com/thunlp/Kernel-Based-Neural-Ranking-Models

5.5 Results

Table 5.1 compares the BERT, Duet, and CKNRM models trained under the query term independence assumption to their original counterparts on the passage re-ranking task.

Table 5.1: Comparing the ranking effectiveness of BERT, Duet, and CKNRM with the query term independence assumption (denoted "Term ind.") against their original counterparts (denoted "Full"). The differences between the median MRR for the "Full" and "Term ind." models are not statistically significant based on a Student's t-test (p < 0.05) for Duet and CKNRM. The difference in MRR is statistically significant based on a Student's t-test (p < 0.05) for BERT (single run). The BM25 baseline (single run) is included for reference.

Model   Variant     MRR@10 mean (± std. dev.)   MRR@10 median
BERT    Full        0.356                       0.356
        Term ind.   0.333                       0.333
Duet    Full        0.239 (±0.002)              0.240
        Term ind.   0.244 (±0.002)              0.244
CKNRM   Full        0.223 (±0.004)              0.224
        Term ind.   0.222 (±0.005)              0.221
BM25    -           0.167                       0.167
During this study, we observed some variance in relevance metrics corresponding to different training runs for the CKNRM model using different random seeds. To control for this variance we train eight different clones of the CKNRM model and report mean and median MRR. Similarly, the metrics corresponding to the Duet model are based on five separate training runs, although we observe negligible variance in the context of this architecture. For the BERT based models, due to the long training time, we only report results based on a single training and evaluation run. As Table 5.1 shows, we observe no statistically significant difference in effectiveness from incorporating the query term independence assumption in either Duet or CKNRM. The query term independent BERT model performs slightly worse than its original counterpart on MRR, but its performance is still superior to other non-BERT based approaches listed on the public leaderboard.

Note that all three models emphasize early interactions between query and document representations—unlike other prior work [164, 412] where the interaction is limited to the final stage. Under the QTI assumption, we allow early interaction between individual query terms and the document, but delay the full query-document interaction until the end. Our observation that delaying the query-document interaction has no significant impact on the effectiveness of these interaction-based models is a key finding of this study.

We posit that models with the query term independence assumption—even when slightly less effective compared to their full counterparts—are likely to retrieve better candidate sets for re-ranking. To substantiate this claim, we conduct a small-scale retrieval experiment based on a random sample of 395 queries from the test set. We use the Duet model with the query term independence assumption to precompute the term-passage scores, constrained to pairs where (i) the term appears at least once in the passage, and (ii) the term does not appear in more than 5% of the passage collection. Table 5.2 compares Duet and BM25 on their effectiveness as a first stage retrieval method in a potential telescoping setting [111]. We observe a 6.25% improvement in recall@1000 from Duet over the BM25 baseline. Performing similar retrieval from the full collection using the full Duet model, unlike its query-term-independent counterpart, is prohibitive because it involves evaluating the model on every passage in the collection against every incoming query.

Table 5.2: Comparing Duet (with the QTI assumption) and BM25 under the full retrieval setting. The differences in recall and MRR between Duet (term ind.) and BM25 are statistically significant according to a Student's t-test (p < 0.01).

Model              Recall@1000   MRR@10
BM25               0.80          0.169
BM25 + Duet        0.80          0.212
Duet (term ind.)   0.85          0.218

5.6 Conclusion

The emergence of compute intensive ranking models, such as BERT, motivates rethinking how these models should be evaluated in large scale IR systems. The approach proposed in this chapter moves the burden of model evaluation from the query evaluation stage to the document indexing stage. This may have further consequences on computational efficiency by allowing batched model evaluation that more effectively leverages GPU (or TPU) parallelization.
This preliminary study is based on three state-of-the-art deep neural models on a public passage ranking benchmark. The original design of all three models—BERT, Duet, and CKNRM—emphasizes early interactions between query and passage representations. However, we observe that limiting the interactions to the passage and individual query terms has a reasonably small impact on their effectiveness. These results are promising as they support the possibility of dramatically speeding up query evaluation for some deep neural models, and even employing them to retrieve from the full collection. The ability to retrieve—and not just re-rank—using deep models has significant implications for neural IR research. Any loss in retrieval effectiveness due to incorporating strong query term independence assumptions may be further recovered by additional stages of re-ranking in a telescoping approach [111].

This study is focused on the passage ranking task. The trade-off between effectiveness and efficiency may be different for document retrieval and other IR tasks. Traditional IR methods in more complex retrieval settings—e.g., when the document is represented by multiple fields [146]—also observe the query term independence assumption. So, studying the query term independence assumption in the context of corresponding neural models—e.g., [389]—may also be appropriate. We note these as important future directions for our research. The findings from this study may also be interpreted as pointing to a gap in our current state-of-the-art neural IR models, which do not take adequate advantage of term proximity signals for matching. This is another finding that may hold interesting clues for IR researchers who want to extract more retrieval effectiveness from deep neural methods.

Chapter 6
Stochastic learning to rank for target exposure

Retrieval systems mediate what information users are exposed to and consume. A typical large collection may contain several documents that are relevant, albeit to varying degrees, to a user's query. Because users rarely inspect all retrieved results exhaustively, the IR system must prioritize which documents are exposed more than others to maximize the chances of user satisfaction. The need for this prioritization is often operationalized by formulating retrieval as a ranking task, as we have also assumed in previous chapters. Consequently, the assumption that the system produces a ranked list of results is often also baked into the design of many information access interfaces. A common form of presentation involves displaying a vertical (or sometimes horizontal) result list. Even sophisticated visual interfaces, such as grid layouts, or non-visual interaction modes, as in the case of voice-based search, may assume that the backend retrieval system returns a ranked list of results which determines how prominently they should be displayed. Across these different modalities, the probability that the user inspects a certain result depends on its display position [73, 113] and size [114], among other factors, which in turn may be determined by the document's rank in the results list.

A static ordering by estimated relevance makes sense if we assume: (i) the IR system is only concerned with satisfying the user performing the search, and (ii) all relevant documents are equivalent from the user's perspective and therefore the user's interests are best served by ordering retrieved documents strictly by their estimated relevance.
In many real-life IR scenarios, however, the system must also care about document and producer-side fairness [115–117]. For example, in web search we may want the IR system to give equal exposure to documents of comparable relevance, which may directly impact their monetization and other value that producers can extract from content exposure. When documents correspond to different demographics like gender or race—e.g., candidate profiles on a job application website—parity of exposure across demographics may be important for fairness and legal reasons. In scenarios where the system produces a ranking of service providers, such as booking a hotel or hailing a ride [416], distributing exposure across multiple providers may be necessary to avoid producer starvation or overload. When retrieved documents have comparable relevance but contain different information, balanced exposure may increase diversity of consumption and help mitigate phenomena like filter bubbles [417].

In these scenarios, a single fixed ranking makes less sense. Instead, it may be more meaningful for the system to present different randomized permutations of comparably relevant documents to distribute exposure more fairly among them. Such stochastic ranking policies provide a framework for optimizing how exposure is distributed in expectation. In Chapters 4 and 5, we adopted the narrow view that it is sufficient to learn a relevance model whose estimates are appropriate for generating a single static ordering of results. In contrast, in this chapter we shift our focus to optimizing models that produce relevance estimates appropriate for generating different permutations of results that minimize the deviation of expected exposure from a specified target distribution. Our main contribution here is to adapt the learning to rank [39] framework for direct optimization towards target exposure.

6.1 Related work

In the learning to rank [39] literature, several optimization objectives have been proposed that can be broadly categorized into: (i) pointwise, (ii) pairwise, and (iii) listwise loss functions. Because the exposure of a document is a function of its rank in the result list, our optimization goals are better served by the listwise formulation. Several listwise loss functions [277, 278] operationalize the idea of deriving the probability of a rank ordering given the score distribution over documents using the Plackett-Luce model [274, 275]. It is noteworthy that enumerating all distinct document permutations can be computationally challenging even for a moderately sized set of candidates. More recently, Bruch et al. [418] demonstrated a mechanism for sampling rankings from the Plackett-Luce distribution using the reparameterization trick [395] that is amenable to gradient-based optimization. Their approach involves adding independently drawn noise samples from the Gumbel distribution [419] and then deriving the approximate rank of the document following the method proposed by Qin et al. [420] and Wu et al. [421]. While not developed in the context of deploying stochastic ranking models, we adopt a similar methodology in our framework. Our work is at the intersection of learning to rank optimization and expected exposure metrics.
For the latter, we operationalize the framework proposed by Diaz et al. [14]. In Section 6.2, we provide a brief primer on this topic.

6.2 Expected exposure metrics

Given an information need q, Diaz et al. [14] define the expected exposure \varepsilon_d of document d as:

\varepsilon_d = \mathbb{E}_{\sigma \sim \pi_q}\left[\mu(d|\sigma)\right]    (6.1)

where \sigma is a ranking of documents in the collection, sampled from \pi_q, a probability distribution over all possible permutations of documents conditioned on q. We use \mu(d|\sigma) to denote the conditional probability of exposure of document d given ranking \sigma. To compute \mu(d|\sigma), we can adopt any arbitrary user behavior model [422] that defines how the user interacts with the presented rank list. For example, the rank-biased precision (RBP) [423] metric assumes that a user's probability of visiting a position decreases exponentially with rank:

\mu_{\mathrm{RBP}}(d|\sigma) = \gamma^{(\rho_{\sigma,d} - 1)}    (6.2)

where \rho_{\sigma,d} is the rank of document d in \sigma, and \gamma is the patience parameter that controls how deep in the ranking the user is likely to inspect. We adopt this RBP user behavior model in this study but note that this analysis can be easily extended to more elaborate browsing models like the cascade model [424]. Plugging the RBP user model into Equation 6.1 we get:

\varepsilon_d = \mathbb{E}_{\sigma \sim \pi_q}\left[\gamma^{(\rho_{\sigma,d} - 1)}\right]    (6.3)

Diaz et al. [14] further define a metric that quantifies the deviation of the expected exposure vector \varepsilon, corresponding to all documents in the collection under a retrieval system, from a specified target distribution \varepsilon^*:

\mathrm{EE}(\pi, q) = \|\varepsilon - \varepsilon^*\|_2^2    (6.4)
               = \underbrace{\|\varepsilon\|_2^2}_{\text{EE-D}} - \underbrace{2\varepsilon^\intercal\varepsilon^*}_{\text{EE-R}} + \|\varepsilon^*\|_2^2    (6.5)

Equation 6.5 factorizes the expected exposure metric into expected exposure disparity (EE-D) and expected exposure relevance (EE-R). EE-D measures the inequity in exposure distribution over all documents, which we want to minimize when optimizing the parameters of the ranking policy. In contrast, EE-R quantifies how much of the exposure is on relevant documents, which a good ranking policy should maximize. This leads to a natural trade-off between disparity (EE-D) and relevance (EE-R), which often relates to the degree of randomization applied by a stochastic policy. A deterministic policy may achieve the highest relevance at the cost of high disparity. Similarly, a policy that randomly samples documents from the collection with uniform probability achieves the lowest disparity but also the lowest relevance. In our experiments, we plot a disparity-relevance curve by controlling the degree of randomization and report the area under this curve (EE-AUC).
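A minimal NumPy sketch of Equations 6.1 to 6.5 under the RBP user model is given below; the sampled rankings and the target vector are placeholders, and rankings are assumed to be arrays of document indices ordered from the top position down.

```python
import numpy as np

def expected_exposure(rankings, num_docs, gamma=0.5):
    # Equations 6.1-6.3: average RBP exposure of each document over sampled rankings.
    exposure = np.zeros(num_docs)
    for ranking in rankings:                       # ranking: document ids, best first
        ranks = np.full(num_docs, np.inf)          # documents not ranked get zero exposure
        ranks[ranking] = np.arange(1, len(ranking) + 1)
        exposure += gamma ** (ranks - 1)
    return exposure / len(rankings)

def ee_metric(exposure, target):
    # Equations 6.4-6.5: squared deviation, decomposed into EE-D and EE-R.
    ee_d = float(exposure @ exposure)              # disparity: to be minimized
    ee_r = float(exposure @ target)                # relevance: to be maximized
    ee = ee_d - 2.0 * ee_r + float(target @ target)
    return ee, ee_d, ee_r

# Placeholder example: three sampled permutations of five documents.
rankings = [np.array([0, 1, 2, 3, 4]), np.array([1, 0, 2, 4, 3]), np.array([0, 2, 1, 3, 4])]
target = np.array([0.75, 0.75, 0.25, 0.125, 0.125])   # illustrative target exposure
print(ee_metric(expected_exposure(rankings, 5), target))
```

Here the rankings stand in for draws from the stochastic policy being evaluated, and the target vector for the target exposure defined next.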
The target exposure \varepsilon^* specifies the ideal behaviour we desire from our retrieval system. One way to compute it is by assuming some oracle ranking policy. For example, in this work we adopt the principle of equal expected exposure defined by Diaz et al. [14]: given a fixed information need, no item should be exposed (in expectation) more or less than any other item of the same relevance. Under this ideal policy, documents always appear in the ranking above other documents of lower grades, and documents in the same grade are permuted with uniform probability.

Let m_g be the number of documents with relevance grade g and m_{>g} the number of documents with relevance grade strictly larger than g. Given an RBP browsing model, the optimal exposure for a document d with grade g is

\varepsilon^*_d = \frac{1}{m_g} \sum_{\rho \in [1, m_g]} \gamma^{(\rho + m_{>g} - 1)}    (6.6)
            = \frac{\gamma^{m_{>g}} \cdot (1 - \gamma^{m_g})}{m_g (1 - \gamma)}    (6.7)

We refer the reader to the original paper for a more detailed derivation and discussion of this individual exposure parity target.

If we associate the documents in our collection with a set A of k attributes, then we can also define a group notion of exposure parity. These attributes may reflect, for example, demographic information about the content producer or some topical grouping by content. Let A be an n × k binary matrix mapping each of the n documents in the collection to their group identity. We can then compute the total exposure for all documents with an attribute as \xi = A^\intercal \varepsilon. We recover equal exposure across groups by enforcing \xi to be uniform. We can replace \varepsilon with \xi in Equation 6.4 to define a measure of equal exposure across groups. Other notions of demographic and group fairness can be similarly derived.

6.3 Optimizing for target exposure

Following the Plackett-Luce model [274, 275], given some arbitrary score distribution y over documents, we can sample different rankings by iteratively sampling documents without replacement based on the following softmax distribution:

P_{\mathrm{PL}}(d|q) = \frac{\exp(y_d)}{\sum_{\bar{d}} \exp(y_{\bar{d}})}    (6.8)

Under the assumption of binary relevance and a perfect relevance estimator, Plackett-Luce randomization should perform optimally. However, learning to rank models are not perfect estimators of relevance. Therefore, we believe there should be some advantage to optimizing directly for expected exposure. We leverage recent results in the optimization of relevance-based objectives computed over a distribution over rankings [418]. Our results can be seen as an extension of this framework to individual and group exposure objectives. We focus on a shared model architecture with varying loss functions in order to measure differences due to the objective alone, instead of artifacts resulting from the functional form of the models. We begin by describing how we optimize for expected exposure before proceeding to our experiment design and empirical results.

6.3.1 Individual exposure parity

Although optimizing for pointwise or pairwise loss has been well-studied in the information retrieval community, directly optimizing for a metric based on a distribution over rankings has received less attention. We begin by defining an appropriate loss function for our model. Turning to Equation 6.5, we can drop the constant term and add a hyperparameter to trade off disparity and relevance:

\ell_\lambda(\varepsilon, \varepsilon^*) = \lambda \|\varepsilon\|_2^2 - (1 - \lambda)\,\varepsilon^\intercal \varepsilon^*    (6.9)

where \varepsilon^* is based on graded relevance.

Let f_\theta : D \rightarrow \mathbb{R} be a document scoring function parameterized by \theta. Given a query, y is an n × 1 vector of document scores for the entire collection such that y_d = f_\theta(d). Using a Plackett-Luce model, we can translate the raw scores into sampling probabilities as in Equation 6.8. This allows us to construct a ranking \sigma by sampling documents sequentially.
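A minimal sketch of this sequential sampling-without-replacement process, using NumPy and illustrative scores:

```python
import numpy as np

def sample_plackett_luce(scores, rng):
    # Iteratively sample documents without replacement; at each step the next
    # document is drawn from a softmax over the scores of the remaining ones.
    remaining = list(range(len(scores)))
    ranking = []
    while remaining:
        logits = np.array([scores[d] for d in remaining])
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        pick = rng.choice(len(remaining), p=probs)
        ranking.append(remaining.pop(pick))
    return np.array(ranking)

rng = np.random.default_rng(0)
scores = np.array([2.0, 1.0, 0.5, 0.0])   # illustrative document scores y
print(sample_plackett_luce(scores, rng))  # one sampled permutation
```

Repeated draws yield different permutations whose probabilities follow the Plackett-Luce distribution implied by Equation 6.8.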
Unfortunately, this sampling process is non-differentiable and therefore prohibitive for a large class of models, including those that learn by gradient descent. We address this by adopting the method proposed by Bruch et al. [418]. To construct a sampled ranking \sigma, we reparameterize the probability distribution by adding independently drawn noise samples G from the Gumbel distribution [419] to y and sorting documents by the "noisy" probability distribution \tilde{p}:

\tilde{p}(d_i) = \frac{\exp(y_{d_i} + G_i)}{\sum_{d_j \in D} \exp(y_{d_j} + G_j)}    (6.10)

where G_i is a sample from the Gumbel distribution:

G_i = -\log(-\log U_i)    (6.11)
U \sim \mathrm{Uniform}(0, 1)    (6.12)

Given the perturbed probability distribution \tilde{p}, we compute each document's smooth rank [420, 421] as

\sigma_d = \sum_{d' \in D \setminus \{d\}} \left(1 + \exp\left(\frac{\tilde{p}(d) - \tilde{p}(d')}{\tau}\right)\right)^{-1}    (6.13)

The smooth rank is sensitive to the temperature \tau [425]. At high temperatures the smooth rank is a poor approximation of the true rank, and at low temperatures it may result in vanishing gradients. To rectify this issue, we employ the straight-through estimator [426] to compute the true ranks in the forward pass but differentiate with respect to the smooth ranks during backpropagation. Using the estimated ranks and a specified user model, we compute the exposure for each document. For example, assuming RBP as the user model, the exposure of document d from a single ranking \sigma is given by \varepsilon_d = \gamma^{(\rho_{\sigma,d} - 1)}. We compute expected exposure by averaging over n_train different rankings, each generated by independently sampling different Gumbel noise in Equation 6.10. We use this expected exposure vector \varepsilon in Equation 6.9 to compute the loss that we minimize through gradient descent, as shown in Figure 6.1. The relevance grades are not used for training beyond computing the target exposure. We set \tau in Equation 6.13 to 0.1.

Figure 6.1: To sample multiple rankings proportional to the softmax distribution over document scores, we first add independently sampled noise from the Gumbel distribution to the scores and then estimate the corresponding smooth rank values. We then compute the exposure of each document for a given ranking based on a preselected user model. Next, we estimate the expected exposure of a document by averaging across multiple rankings. Finally, we compute the loss between the predicted and the target expected exposure vectors, which can then be minimized using gradient-based methods, as every step in the above process is differentiable.

6.3.2 Group exposure parity

We can also adapt this model to optimize for group-level exposure parity. To do so, we replace \|\varepsilon\|_2^2 with \|\xi\|_2^2 in Equation 6.9 to define an optimization objective that trades off relevance and group parity:

\ell_{\mathrm{group},\lambda} = \lambda \|\xi\|_2^2 - (1 - \lambda)\,\varepsilon^\intercal \varepsilon^*    (6.14)

This loss function assumes that the ideal policy distributes exposure equally across all groups. Optimization objectives corresponding to other group exposure criteria can be derived similarly in future work.
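Putting the pieces of this section together, the following NumPy sketch traces one forward computation of the individual-parity objective (Equations 6.9 to 6.13). It is illustrative only; in practice these operations are implemented in an automatic differentiation framework so that the straight-through estimator and backpropagation apply, and the group variant is obtained by swapping the disparity term as in Equation 6.14.

```python
import numpy as np

rng = np.random.default_rng(0)

def smooth_ranks(p_tilde, tau=0.1):
    # Equation 6.13: differentiable approximation of each document's rank
    # (zero-based: it counts documents with a higher perturbed probability).
    diff = (p_tilde[:, None] - p_tilde[None, :]) / tau
    pairwise = 1.0 / (1.0 + np.exp(diff))
    np.fill_diagonal(pairwise, 0.0)
    return pairwise.sum(axis=1)

def expected_exposure_from_scores(y, n_train=20, gamma=0.5, tau=0.1):
    exposure = np.zeros_like(y)
    for _ in range(n_train):
        gumbel = -np.log(-np.log(rng.uniform(size=y.shape)))   # Equations 6.11-6.12
        noisy = np.exp(y + gumbel)
        p_tilde = noisy / noisy.sum()                          # Equation 6.10
        ranks = smooth_ranks(p_tilde, tau)                     # straight-through would use
                                                               # true ranks in the forward pass
        exposure += gamma ** ranks                             # RBP exposure, gamma^(rank - 1)
    return exposure / n_train

def exposure_loss(y, target, lam=0.5):
    eps = expected_exposure_from_scores(y)
    return lam * float(eps @ eps) - (1.0 - lam) * float(eps @ target)   # Equation 6.9

y = np.array([1.2, 1.1, 0.3, -0.5])             # scores from f_theta (placeholder)
target = np.array([0.75, 0.75, 0.25, 0.125])    # target exposure (placeholder)
print(exposure_loss(y, target))
```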
6.4 Experiments

6.4.1 Models

We restrict our choice of baselines to neural networks so that the exposure-based optimization can be compared to baseline ranking loss functions with respect to the same model. Our base model consists of a fully-connected neural network with two hidden layers of 256 nodes per layer and rectified linear units as the activation function. We choose a learning rate of 0.001 and a dropout rate of 0.1, and perform early stopping for all models based on validation sets. Baseline stochastic rankings are derived by employing Plackett-Luce sampling over two deterministic policies (pointwise and pairwise models) with varying softmax temperatures to obtain different trade-off points between disparity and relevance. We set n_train to 20 for our model and n_test to 50 for all models.

We consider three training objectives in our experiments. The pointwise model [240] minimizes the squared error between the model prediction and the true relevance. The pairwise model [235] minimizes misclassified preferences using a cross-entropy loss. The expected exposure model minimizes the loss in Equation 6.9 and, in our group parity experiments, Equation 6.14.

6.4.2 Data

Our experiments use the MSLR-WEB10k dataset [427], a learning-to-rank dataset containing ten thousand queries. We perform five-fold cross validation (60/20/20 split between training, validation, and testing sets). Each query-document pair is represented by a 136-dimensional feature vector and graded according to relevance on a five-point scale. For the group parity experiments, as there are no obviously appropriate group attributes in the MSLR-WEB10k dataset, we discretize the PageRank feature into the ranges <1000, 1000–10000, and ≥10000 and treat it as a group attribute. The choice of discretized PageRank as a group attribute is rather arbitrary, but we confirm that this discretization scheme is reasonable, as roughly 70% of the queries have at least one document corresponding to each group with a relevance grade greater than one.

6.4.3 Evaluation

We use γ = 0.50 for all of our experiments, consistent with standard TREC evaluation protocol. RBP is evaluated at depth 20.

6.5 Results

We present the results of our experiments in Table 6.1. In terms of expected exposure, we do not observe a difference in performance between the pointwise and pairwise models.

Table 6.1: Results for optimizing towards individual and group parity using different ranking objectives. We report average EE-AUC for both tasks; optimizing directly for individual and group parity using our proposed methods achieves the best performance in each case.

Loss function                              Individual parity AUC   Group parity AUC
Pointwise loss                             0.229                   0.112
Pairwise loss                              0.229                   0.108
Our methods
  Expected exposure loss (Eqn. 6.9)        0.238                   0.141
  Group parity loss (Eqn. 6.14)                                    0.178

However, directly optimizing for expected exposure resulted in a 3.9% improvement in EE-AUC over the pointwise and pairwise models.
We confirm that the differences in EE-AUC follow a normal distribution and accordingly perform a paired Student's t-test to check their statistical significance. The EE-AUC differences between our proposed method and the baselines are statistically significant (p < 0.01). In terms of group parity, we do observe a difference in performance between the pointwise and pairwise models. Moreover, directly optimizing for expected exposure results in improved performance, while directly optimizing for group parity further boosts performance. The gaps in EE-AUC between all pairs of models are statistically significant (p < 0.01). These results, while based on a limited study, indicate that direct optimization for expected exposure metrics is both viable in the learning to rank framework and useful for optimization under fairness constraints.

6.6 Conclusion

An exposure-based view of retrieval explicitly codifies the role that IR systems play as intermediaries in two-sided marketplaces consisting of users seeking information and documents (or their producers). Stochastic ranking policies allow for a more balanced distribution of exposure over multiple rankings. In this work, we demonstrate that these policies can be directly optimized to reduce deviation from a specified target exposure distribution. While our work is grounded in parity of individual and group exposure, the framework described is flexible enough to incorporate any arbitrary target exposure policy beyond fairness constraints—e.g., based on topical diversity considerations or to maximize monetization in the context of paid search.

Our definition of target exposure in this work is based on a universal notion of relevance. If the relevance of a document instead changes based on the searcher (i.e., personalization) or other context (e.g., location), then our framework needs to be appropriately extended. Exposure can also be nuanced by user attributes. For example, in commercial search, exposure to users with an intent to purchase may be weighted differently than exposure to users who may be casually browsing. We believe that there is a rich space for exploring different extensions of our proposed framework. Deploying stochastic ranking policies may also come with its own unique challenges. For example, randomized rankings may have unintended consequences on the system's caching mechanisms. It may also make it harder for users to re-find information [428] they have previously discovered for a query. More detailed studies are also necessary to understand the differential impact of stochastic policies on queries of varying difficulty, especially on queries for which the model's relevance estimates are highly uncertain.

Chapter 7
Learning to Rank for Query Auto-Completion

In this chapter, we discuss the application of deep architectures to the query auto-completion task, which presents different challenges than ad hoc retrieval. Query auto-completion helps the user of a search system formulate their information request by recommending queries based on their partially typed query. The query auto-completion system typically considers the user history, the task context, the location and temporal context, and other information to make more relevant recommendations. The ranking task in query auto-completion therefore involves ranking either query suffixes or full query candidates in response to a query prefix.
In this chapter, we discuss work in which we employ deep neural networks for that ranking task.

7.1 Query Auto-Completion for Rare Prefixes

As users enter their query into the search box, most modern search engines provide a ranked list of query suggestions based on the current prefix already typed by the user. In a typical approach used by many query auto-completion (QAC) systems, candidate queries are identified by doing an exact prefix lookup against a fixed set of popular queries, using a data structure such as a prefix tree [143]. The candidates are then ranked by their expected likelihood, which is typically computed as a function of their past popularity (commonly referred to as the MostPopularCompletion (MPC) model [429]). Such a system can only suggest queries with enough historic popularity to make it into the prefix tree.

We propose an additional candidate generation strategy for QAC based on mining popular query suffixes. Candidate suffixes are popular n-grams that appear at the ends of queries. By appending such n-gram suffixes to a user's query prefix we can generate synthetic suggestion candidates that have never been observed in the historical query logs. Table 7.1 contains examples of such suggestions. We further propose a supervised framework for ranking these synthetic queries alongside the traditional full-query suggestion candidates. We also explore new ranking signals in this framework, based on query n-gram statistics and a deep CDSSM [362].

Table 7.1: Synthetic QAC candidates generated by the suffix-based approach and ranked using only the CDSSM similarity feature. The CDSSM model projects both the prefix and the suffix to a common 128-dimensional space, allowing us to rank according to prefix-suffix cosine similarity. One of the lower quality synthetic candidates, "cheapest flights from seattle to airport", is ranked seventh in the second list.

Prefix: "what to cook with chicken and broccoli and"
  what to cook with chicken and broccoli and bacon
  what to cook with chicken and broccoli and noodles
  what to cook with chicken and broccoli and brown sugar
  what to cook with chicken and broccoli and garlic
  what to cook with chicken and broccoli and orange juice
  what to cook with chicken and broccoli and beans
  what to cook with chicken and broccoli and onions
  what to cook with chicken and broccoli and ham soup

Prefix: "cheapest flights from seattle to"
  cheapest flights from seattle to dc
  cheapest flights from seattle to washington dc
  cheapest flights from seattle to bermuda
  cheapest flights from seattle to bahamas
  cheapest flights from seattle to aruba
  cheapest flights from seattle to punta cana
  cheapest flights from seattle to airport
  cheapest flights from seattle to miami

7.1.1 Related work

Most modern browsers, search engines, text editors and command shells implement some form of an auto-completion feature to aid users in faster text entry. In Web search, pre-computed auto-completion systems are popular, where the suggestions are typically filtered by exact prefix matching from a pre-selected set of candidates and ranked according to past popularity.
Ranking suggestions by past frequency is commonly referred to as the MostPopularCompletion (MPC) model and can be regarded as a maximum likelihood approximator [429]. Given a prefix p and the set of all unique queries Q from the search logs,

\mathrm{MPC}(p) = \operatorname{argmax}_{\bar{q} \in pc(p)} \frac{\mathrm{lf}(\bar{q})}{\sum_{q_i \in Q} \mathrm{lf}(q_i)}    (7.1)

where pc(p) returns the set of queries that qualify as completions for the prefix p, and lf is the frequency of the query in the search logs.

Language modelling based approaches for sentence completion have been studied in the context of e-mail and document authoring [430–433]. In Web search, White and Marchionini [434] and Fan et al. [435] proposed models for term recommendations to aid users in their query formulation process. Bhatia et al. [436] extracted frequently occurring phrases from a document corpus and used them to generate suggestion candidates in the absence of a query log. Duan and Hsu [437] have studied the problem of online spelling correction for query auto-completion, and Hawking and Griffiths [438] have explored mechanisms for generating query suggestions in enterprise settings. Our proposed approach generates synthetic query suggestion candidates by combining the input prefix with popular query suffixes to augment the regular full-query QAC suggestions. Within our proposed supervised framework, we explore CDSSM [231, 362] as a ranking signal.

Table 7.2: Comparing the nearest neighbours for "seattle" and "taylor swift" in the CDSSM embedding spaces when the model is trained on query-document pairs vs. query prefix-suffix pairs. The former resembles a Topical notion of similarity between terms, while the latter is more Typical in the definition of inter-term similarities.

seattle (Query-Document)   seattle (Prefix-Suffix)   taylor swift (Query-Document)   taylor swift (Prefix-Suffix)
weather seattle            chicago                   taylor swift.com                lady gaga
seattle weather            san antonio               taylor swift lyrics             meghan trainor
seattle washington         denver                    how old is taylor swift         megan trainor
ikea seattle               salt lake city            taylor swift twitter            nicki minaj
west seattle blog          seattle wa                taylor swift new song           anna kendrick

7.1.2 Model

For document retrieval, Shen et al. [362] demonstrated that discriminatively training a deep neural network model with a convolutional-pooling structure on clickthrough data can be effective for modelling query-document relevance. We adopt the CDSSM by training on a prefix-suffix pairs dataset (instead of query-document titles). The training data for the CDSSM is generated by sampling queries from the search logs and splitting each query at every possible word boundary. For example, from the query "breaking bad cast" we generate the two pairs ("breaking", "bad cast") and ("breaking bad", "cast"). The architecture shown in Figure 7.1 is used on both the prefix and the suffix side of the CDSSM model.

Figure 7.1: Architecture of the CDSSM. The model has an input layer that performs the word hashing, a convolutional layer, a max pooling layer, and an output layer that produces the final semantic vector representation of the query.

It is important to emphasize that our earlier discussion in Section 3.2.2, on the different notions of similarity between terms that can be learnt by shallow embedding models, is also relevant in the context of these deeper architectures.
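The pair-generation step described above is simple enough to sketch directly; the code below is illustrative and omits the sampling and normalization applied to queries from the logs.

```python
def prefix_suffix_pairs(query):
    # Split the query at every possible word boundary, e.g. "breaking bad cast"
    # yields ("breaking", "bad cast") and ("breaking bad", "cast").
    terms = query.split()
    return [(" ".join(terms[:i]), " ".join(terms[i:])) for i in range(1, len(terms))]

print(prefix_suffix_pairs("breaking bad cast"))
# [('breaking', 'bad cast'), ('breaking bad', 'cast')]
```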
In the case of CDSSM [231], the notion of similarity being modelled depends on the choice of the paired data that the model is trained on. When the CDSSM is trained on query and document title pairs [231], the notion of similarity is more Topical in nature. However, when the same CDSSM architecture is trained on query prefix-suffix pairs—as described in this section—it captures a more Typical notion of similarity, as shown in Table 7.2.

7.1.3 Method

We propose two key ideas in this section. Firstly, we generate synthetic query suggestion candidates for QAC using popular query suffixes. Secondly, we introduce n-gram and CDSSM based features in a supervised learning setting to rank these synthetic suggestions alongside the full-query suggestion candidates.

Candidate Generation. From every query in the search engine logs we generate all possible n-grams from the end of the query. For example, from the query "bank of america" we generate the suffixes "america", "of america" and "bank of america". By aggregating across all queries we identify the most popular suffixes. Table 7.3 shows the most frequently observed query suffixes in the publicly available AOL logs [85].

Table 7.3: Most popular query suffixes extracted from the publicly available AOL logs.

Top suffixes   Top 2-word suffixes   Top 3-word suffixes
com            for sale              federal credit union
org            yahoo com             new york city
net            myspace com           in new york
gov            google com            or no deal
pictures       new york              disney channel com
lyrics         real estate           my space com
edu            of america            in new jersey
sale           high school           homes for sale
games          new jersey            department of corrections
florida        space com             chamber of commerce
for sale       aol com               bath and beyond
us             s com                 in las vegas

Next, for a given prefix we extract the end-term, as shown in Figure 7.2. We match all the suffixes that start with the end-term from our precomputed set. These selected suffixes are appended to the prefix to generate synthetic suggestion candidates. For example, the prefix "cheap flights fro" is matched with the suffix "from seattle" to generate the candidate "cheap flights from seattle". Note that many of these synthetic suggestion candidates are likely to not have been observed by the search engine before. We merge these synthetic suggestions with the set of candidates selected from the list of historically popular queries. This combined set of candidates is used for ranking, as we will describe in Section 7.1.4.

Figure 7.2: Examples of fully or partially typed end-terms extracted from the input prefixes (e.g., "cheapest flight fro" yields the end-term "fro", "cheapest flight from" yields "from", "cheapest flight from " with a trailing space yields "from ", and "cheapest flight from n" yields "n"). The end-term is used for selecting the set of candidate suffixes for the generation of synthetic query suggestions.
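A minimal sketch of this candidate generation step is shown below. The suffix list and the end-term handling are illustrative simplifications of the procedure described above, not the production implementation.

```python
def extract_end_term(prefix):
    # The end-term is the partially (or fully) typed last term of the prefix.
    # A prefix ending in a space is treated here as having an empty end-term,
    # which is a simplification of the cases illustrated in Figure 7.2.
    return "" if prefix.endswith(" ") else prefix.split()[-1]

def synthetic_candidates(prefix, popular_suffixes):
    end_term = extract_end_term(prefix)
    stem = prefix[: len(prefix) - len(end_term)]
    # Append every popular suffix that starts with the end-term.
    return [stem + s for s in popular_suffixes if s.startswith(end_term)]

# Illustrative suffix list; in practice these are the most frequent query
# suffixes mined from the background portion of the logs (cf. Table 7.3).
suffixes = ["from seattle", "for sale", "florida", "in new york"]
print(synthetic_candidates("cheap flights fro", suffixes))
# ['cheap flights from seattle']
```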
f_{ngram_i} = \sum_{g \in ng_i(q)} lf(g) \qquad (7.2)

where ng_i(q) is the set of all n-grams of length i in the query q, and lf(g) is the observed frequency of the n-gram g in the historical query logs. These n-gram features model the likelihood that the candidate suggestion is generated by the same language model as the queries in the search logs.

• CDSSM based features. Given a prefix p and a suggestion candidate c, we extract a normalized prefix p̄ by removing the end-term from the prefix. A normalized suffix s̄ is then extracted by removing p̄ from the query c. We then use the trained CDSSM model to project the normalized prefix and the normalized suffix to a common 128-dimensional space and compute the f_cdssm feature,

f_{cdssm}(\bar{p}, \bar{s}) = \text{cosine}(\vec{v}_{\bar{p}}, \vec{v}_{\bar{s}}) = \frac{\vec{v}_{\bar{p}}^{\,\intercal} \vec{v}_{\bar{s}}}{\|\vec{v}_{\bar{p}}\| \|\vec{v}_{\bar{s}}\|} \qquad (7.3)

where \vec{v}_{\bar{p}} and \vec{v}_{\bar{s}} are the CDSSM vector outputs corresponding to p̄ and s̄, respectively. Table 7.1 shows examples of synthetic suggestion candidates ranked by the f_cdssm feature alone.

• Other features. Other features used in our model include the frequency of the candidate query in the historical logs, length based features (length of the prefix, the suffix and the full suggestion in both characters and words), and a boolean feature that indicates whether the prefix ends with a space character.

7.1.4 Experiments

Our experiment setup is based on the learning to rank framework proposed by Shokouhi [439]. We generate all possible prefixes from each query impression to use for training, validation and testing. (Mitra et al. [72] showed that users engage with QAC more at word boundaries, but for simplicity we sample the prefixes with equal probability.) For each prefix we identify the set of candidate suggestions as described in Section 7.1.3. We associate a positive relevance judgment with the candidate that matches the original query from which the prefix was extracted. To accurately measure the coverage impact of our approach we retain all prefix impressions where the submitted query is not in the list of candidates available for ranking. We train LambdaMART [440] models for ranking the suggestions using the features described in Section 7.1.3. We limit our ranking task to instances where the prefix contains at least one complete word, since completion for very short prefixes is already well handled by our popularity-based features and we are focusing on rare prefixes. We always train 300 trees (with early stopping using a validation set) and evaluate the model performances on the test set using the mean reciprocal rank (MRR) metric.

We conduct all our experiments on the publicly available AOL query logs [85] and reproduce the same results on the large-scale query logs of the Bing search engine. We refer to these two datasets hereafter as the AOL testbed and the Bing testbed, respectively. The query impressions on both testbeds are divided into four temporally separate partitions (background, training, validation and test). On the AOL testbed we use all the data from 1 March, 2006 to 30 April, 2006 as the background data. We sample queries from the next two weeks for training, and from each of the following two weeks for validation and test, respectively.
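To make the features of Section 7.1.3 concrete, the sketch below computes the n-gram frequency features of Equation 7.2 and the cosine-based f_cdssm feature of Equation 7.3. The n-gram frequency dictionary and the embedding vectors are toy stand-ins for the precomputed log statistics and the trained CDSSM described above.

```python
import numpy as np

def ngrams(query, n):
    terms = query.split()
    return [" ".join(terms[i:i + n]) for i in range(len(terms) - n + 1)]

def ngram_features(query, ngram_freq, max_n=6):
    """f_ngram_i = sum of historical frequencies of all i-grams in the query (Eq. 7.2)."""
    return [sum(ngram_freq.get(g, 0) for g in ngrams(query, n)) for n in range(1, max_n + 1)]

def cdssm_feature(prefix_vec, suffix_vec):
    """f_cdssm = cosine similarity between prefix and suffix embeddings (Eq. 7.3)."""
    denom = np.linalg.norm(prefix_vec) * np.linalg.norm(suffix_vec)
    return float(np.dot(prefix_vec, suffix_vec) / denom) if denom > 0 else 0.0

# Toy usage with made-up frequencies and random 128-dimensional embeddings.
ngram_freq = {"cheap": 900, "flights": 750, "cheap flights": 420, "flights from seattle": 35}
print(ngram_features("cheap flights from seattle", ngram_freq))
print(cdssm_feature(np.random.rand(128), np.random.rand(128)))
```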
On the Bing testbed we sample data from the logs from April, 2015 and use the \ufb01rst week of data for background, the second week for training, the third for validation and the fourth for testing. We normalize all the queries in each of these datasets by removing any punctuation characters and converting them to lower case. For candidate generation, both the list of popular queries and suf\ufb01xes are mined from the background portion of the two testbeds. We use 724,340 and 1,040,674 distinct queries on the AOL testbed and the Bing testbed, respectively, as the set of full-query candidates. We evaluate our approach using 10K and 100K most frequent suf\ufb01xes. We limit the number of full-query candidates per pre\ufb01x to ten and compute the \ufb01nal reciprocal rank by considering only the top eight ranked suggestions per model. Finally, the CDSSM models are trained using 44,558,631 and 212,854,198 pre\ufb01x-suf\ufb01x pairs on the AOL and the Bing testbeds, respectively. 7.1.5 Results Table 7.4 summarizes the experiment results and clearly demonstrates the improvements from the synthetic suggestion over the MPC model. All the LambdaMART models with different feature sets when combined with the suf\ufb01x-based candidates show an improved MRR over the popularity based baseline. The models however perform no better, and in most cases worse, compared to the MPC baseline when only the full-query based candidates are considered. This is expected as the models \f7.1. Query Auto-Completion for Rare Pre\ufb01xes 173 Table 7.4: Comparison of all models on the AOL and the Bing testbeds. Due to the proprietary nature of the Bing dataset, we only report MRR improvements relative to the MPC model for this testbed. Statistically signi\ufb01cant differences by the t-test (p < 0.01) are marked with \"*\". Top three highest MRR values per testbed are bolded. AOL Bing Models MRR % Improv. % Improv. Full-query based candidates only MostPopularCompletion 0.1446 LambdaMART Model ( fngrami = no, fcdssm = no) 0.1445 -0.1 -1.7* LambdaMART Model ( fngrami = yes, fcdssm = no) 0.1427 -1.4* -1.2* LambdaMART Model ( fngrami = no, fcdssm = yes) 0.1445 -0.1 -1.2* LambdaMART Model ( fngrami = yes, fcdssm = yes) 0.1432 -1.0* -1.5* Full-query based candidates + Suf\ufb01x based candidates (Top 10K suf\ufb01xes) MostPopularCompletion 0.1446 LambdaMART Model ( fngrami = no, fcdssm = no) 0.2116 +46.3* +32.8* LambdaMART Model ( fngrami = yes, fcdssm = no) 0.2326 +60.8* +42.6* LambdaMART Model ( fngrami = no, fcdssm = yes) 0.2249 +55.5* +40.1* LambdaMART Model ( fngrami = yes, fcdssm = yes) 0.2339 +61.7* +43.8* Full-query based candidates + Suf\ufb01x based candidates (Top 100K suf\ufb01xes) MostPopularCompletion 0.1446 LambdaMART Model ( fngrami = no, fcdssm = no) 0.2105 +45.5* +39.9* LambdaMART Model ( fngrami = yes, fcdssm = no) 0.2441 +68.7* +54.2* LambdaMART Model ( fngrami = no, fcdssm = yes) 0.2248 +55.4* +48.9* LambdaMART Model ( fngrami = yes, fcdssm = yes) 0.2453 +69.6* +55.3* are trained with the suf\ufb01x-based candidates in the training data. The models with the fcdssm feature perform better than the corresponding models without the feature across all experiments. However, in general the fngrami features seems to be showing higher improvements compared to the CDSSM based feature. We hypothesize that the fcdssm feature is less precise than the fngrami features. 
For example, we can see in Table 7.1 that the CDSSM based feature ranks a suf\ufb01x highly that generates a semantically meaningless query suggestion \"cheapest \ufb02ight from seattle to airport\". While \"airport\" is a location that you can take a \ufb02ight to, in the context of the given pre\ufb01x it is clearly an inappropriate suggestion. It is possible that the pre\ufb01x-suf\ufb01x pairs based training of the CDSSM can be further improved. We believe that this is an important area for future investigations given that the CDSSM holds certain other advantages over n-gram models. For example, the \f174 Chapter 7. Learning to Rank for Query Auto-Completion MPC LambdaMART Overall Frequent Rare Unseen 0.14 0.25 0.28 0.29 0.29 0.33 0.00 0.18 0.0 0.1 0.2 0.3 0.4 0.5 MRR Figure 7.3: MRR improvements by historical popularity of the input pre\ufb01x on the AOL testbed. The LambdaMART model uses n-gram and fcdssm features and includes suf\ufb01x-based suggestion candidates. Any pre\ufb01x in the top 100K most popular pre\ufb01xes from the background data is considered as Frequent. There are 7622, 6917 and 14,135 pre\ufb01x impressions in the Frequent, Rare and Unseen segments, respectively. All reported differences in MRR with the MPC model are statistically signi\ufb01cant by the t-test (p < 0.01). CDSSM has limited storage requirements2, and because of the word hashing technique the CDSSM may be more robust to morphological variations and spelling errors in the input pre\ufb01x compared to the n-gram based models. Figure 7.3 analyses the improvements by segmenting the pre\ufb01xes based on their historical popularity. The improvements from the suf\ufb01x-based candidates are expectedly higher for the rarer pre\ufb01xes. Interestingly, the absolute MRR values for both models are higher for rare pre\ufb01xes than for the frequent ones. One factor in this is that rare pre\ufb01xes tend to be longer and therefore more speci\ufb01c, giving fewer candidates to rank and making it easier to achieve good MRR. 2The CDSSM model itself needs to be stored in memory but has no data storage requirements, unlike the n-gram models. \f7.2. Session Context Modelling for Query Auto-Completion 175 7.1.6 Conclusion We proposed a novel candidate generation technique for query auto-completion by mining and ranking popular query suf\ufb01xes. Our empirical study shows that this is an effective strategy for signi\ufb01cantly improving MRR for rare and unseen pre\ufb01xes. The supervised ranking framework proposed in this paper is generic and can be employed in any QAC system that combines multiple sources of candidates. We described features based on n-gram language models and convolutional neural networks with demonstrable improvements. While we have shown signi\ufb01cant improvements in MRR using synthetic candidate generation, we have not measured how often this approach generates semantically meaningless synthetic suggestions and have not quanti\ufb01ed the effect of showing synthetic suggestions to search users. A user study on this aspect is left as future work. There is also further scope for exploring other language models (such as recurrent neural networks) in the context of this task. 7.2 Session Context Modelling for Query AutoCompletion Short-term user history provides useful cues about the user intent that an IR system can consider to improve the relevance of retrieved results [132]. 
In QAC systems, in particular, when only a few characters have been entered the search engine has little understanding of the actual information need of the user and the generic suggestions provided by a non-contextual QAC system typically perform poorly [429]. The high ambiguity associated with short pre\ufb01xes makes QAC a particularly interesting candidate for leveraging any additional information available about the user\u2019s current task. The same study also showed that 49% of Web searches are preceded by a different search which can be used to gain additional insights into the user\u2019s current information need. The majority of previous work [123, 441] on using short-term user history for search personalization has been focused on modelling the topical relevance of the candidate results (documents or query suggestions) to the previous queries and \f176 Chapter 7. Learning to Rank for Query Auto-Completion viewed documents in the same search session. Using such implicit feedback has been shown to be a very attractive strategy for improving retrieval performance when the user intent is ambiguous. For example, knowing that the user\u2019s previous query was \"guardians of the galaxy\" can help to inform a QAC system to promote the query \"imdb\" in ranking over \"instagram\" when the user has just typed \"i\" in the search box. Query reformulation behaviours within search sessions have also been studied but are mostly limited to taxonomy based classi\ufb01cations [442, 443] and models based on syntactic changes [444]. A quick study of a sample of Bing\u2019s search engine logs reveal that users frequently search for \"san francisco 49ers\" and \"san francisco weather\" immediately after searching for \"san francisco\". Similarly, the query \"detroit\" is often followed by the queries \"detroit lions\" and \"detroit weather\". Intuitively, \"san francisco\" \u2192\"san francisco 49ers\" represents a similar shift in user\u2019s intent as \"detroit\" \u2192\"detroit lions\". We can see many such frequently occurring patterns of reformulations in large scale search logs. Modelling these reformulations using lexical matching alone is dif\ufb01cult. For example, we understand that \"movies\" \u2192\"new movies\" is not the same intent shift as \"york\" \u2192\"new york\" even though in both cases the same term was added to both the queries by the user. On the other hand, \"london\" \u2192\"things to do in london\" and \"new york\" \u2192\"new york tourist attractions\" are semantically similar although the two reformulations involve the addition of completely disjoint sets of new terms to the queries. In text processing, Mikolov et al. [204] demonstrated that the distributed representation of words learnt by continuous space language models are surprisingly good at capturing syntactic and semantic relationships between the words. Simple algebraic operations on the word vectors have been shown to produce intuitive results. For example,\u20d7 vking\u2212\u20d7 vman+\u20d7 vwoman results in a vector that is in close proximity to \u20d7 vqueen. In Section 7.2.2, we show that the embeddings learnt by CDSSM [362] exhibit similar favourable properties and hence provide an intuitive mechanism to represent query reformulations as the offsets between query vectors. Our empirical study, described in Section 7.2.3, demonstrate that the vector representations of queries and reformulations can be useful for capturing session \f7.2. 
Session Context Modelling for Query Auto-Completion 177 context for the retrieval of query suggestions. The CDSSM is trained to map queries (and documents) with similar intents to the same neighbourhood in the semantic space. Therefore they are suitable for measuring the topical similarity between candidate suggestions and the user\u2019s recent queries. In addition, our experiments show that the vector representation of the reformulation, from the user\u2019s previous query to the candidate suggestion, can also be a useful signal for predicting the relevance of the suggestion. We present our results in Section 7.2.5 that demonstrate that session context features based on these vector representations can signi\ufb01cantly improve the QAC ranking over the supervised ranking baseline proposed by Shokouhi [439]. The main contributions of the work described in this section are, \u2022 Demonstrating that query reformulations can be represented as lowdimensional vectors which map syntactically and semantically similar query changes close together in the embedding space. \u2022 Using features based on the distributed representations of queries and reformulations to improve upon a supervised ranking baseline for session contextaware QAC ranking. Our experiments on the large-scale query logs of the Bing search engine and the publicly available AOL query logs [85] show that these features can improve MRR by more than 10% on these testbeds. \u2022 Demonstrating that CDSSM trained on session query pairs performs signi\ufb01cantly better for the contextual QAC ranking task compared to the CDSSM model trained on clicked query-document pairs. Next, we review related work that are relevant to this study. 7.2.1 Related work In Web search, Bennett et al. [132] investigated the impact of short-term and longterm user behaviour on relevance prediction, and showed that short-term user history becomes more important as the session progresses. Li et al. [445] evaluated DSSM and CDSSM for modelling session context for Web search. Besides the primary IR task, QAC as opposed to Web ranking, our work differs from this study \f178 Chapter 7. Learning to Rank for Query Auto-Completion by going beyond computing the topical similarity using the existing models and explicitly modelling query reformulations as vectors. We also show the bene\ufb01ts of optimizing a CDSSM model directly for capturing session context by training on session query pairs. Yan et al. [446] proposed an approach that maps queries and clicks to latent search intents represented using Open Directory Project3 categories for making context-aware query recommendations. Cao et al. [441] and Liao et al. [447] have explored session context using latent concept clusters from click-through bipartite graphs, while Guo et al. [448] represented the user\u2019s previous queries using a regularized topic model. Zhang et al. [449] proposed a task-centric click model for characterizing user behaviour within a single search session. Cao et al. [450] learnt a variable length Hidden Markov Model from large scale search logs, whereas Boldi et al. [451] studied random walks on query-\ufb02ow graphs for improved recommendations. 
Previous studies on the relationships between neighbouring queries from a search session have been mostly focused on categorizing the reformulations based on broad manually de\ufb01ned taxonomies (e.g., generalization, specialization, error correction and parallel move) [452] or understanding the user goals behind common actions (e.g., addition, removal or substitution of terms) [453]. Motivated by the broad manually identi\ufb01ed reformulation categories Xiang et al. [131] and Jiang et al. [454] designed simple features for supervised retrieval models. Finally, Guan et al. [444] use reinforcement learning for modifying term weights in response to the observed modi\ufb01cations made to the query by the user. While clearly using session context for Web search is a well-studied topic, context-sensitive query auto-completion has been discussed less thoroughly in the literature. Weber and Castillo [455] and Shokouhi [439] showed how query distributions change across different user demographics and argued that QAC systems based on personalization features can signi\ufb01cantly outperform popularity-based baselines. Ranking suggestions based on temporal context has also been explored 3http://www.dmoz.org/ \f7.2. Session Context Modelling for Query Auto-Completion 179 [456, 457]. The two QAC related studies most relevant to our work have been done by Shokouhi [439] and Kharitonov et al. [458]. To capture short-term context, Shokouhi [439] relied on letter n-gram matches between the previous queries and the candidates, and trained a supervised ranking model for combining them with MPC and other non-contextual and user demographic features. Kharitonov et al. [458] proposed a uni\ufb01ed framework for contextualizing and diversifying the ranking of QAC suggestions. Their empirical evaluations show that by considering the user\u2019s previous query alone more than 96% of the improvements can be achieved, as compared to additionally considering the document examination history and diversi\ufb01cation context. Given the previous query, their proposed model computes the expected probability of a given completion as follows, p(q1|q0) = p(c = 0|q0)p(q1)+ p(c = 1|q0)p(q1|c = 1,q0) (7.4) Where c is an indicator variable whose value is 1 if the user continues the current task, and 0 otherwise. The two primary components of the above equation are P(q1) and P(q1|c = 1,q0), which correspond to the probability of observing the query q1 globally and in the context of the query q0, respectively, in the query logs. For our evaluation, we implement the supervised ranking framework proposed by Shokouhi and include the n-gram similarity, the query frequency and the query pairwise frequency features among others as described in Section 7.2.3. 7.2.2 Model We adopt the CDSSM architecture proposed by Shen et al. [362] for our study. Unless speci\ufb01ed otherwise, for all models in this paper the window size for the convolutional layer is set to three and the dimensions of the output vector to 32. The training data for the CDSSM models consists of source-target text pairs. The original DSSM [164] and CDSSM [362] models were trained on clickthrough data which consists of pairs of queries and document titles, corresponding to clicked \f180 Chapter 7. 
Learning to Rank for Query Auto-Completion \u22122 \u22121 0 1 2 \u22121.5 \u22121.0 \u22120.5 0.0 0.5 1.0 1.5 2.0 seattle denver san francisco new york chicago seattle seahawks denver broncos san francisco 49ers new york giants chicago bears seattle times denver post san francisco chronicle new york times chicago tribune Figure 7.4: A two-dimensional PCA projection of the 32 dimensional CDSSM output vectors shows how intuitively similar intent transitions, represented by the directed edges, are automatically modelled in the embedding space. The CDSSM model used for this illustration is trained on the symmetric session pairs dataset. results. In addition to clickthrough data, we also train the CDSSM models on sampled pairs of queries from search logs that were observed in succession during user sessions. In the rest of this paper, we refer to this as the session pairs dataset. For a pair of observed queries q1 and q2, if the dataset includes both the ordering q1 \u2192q2 and q2 \u2192q1 then we refer to it as the symmetric session pairs dataset, otherwise as asymmetric. The symmetric session pairs data is further randomly sub-sampled by half to keep the count of the training pairs in both the datasets comparable. The session pairs datasets are extracted from the exact same user sessions from which the clickthrough data is generated. While this does not imply that the actual count of training pairs in these two types of datasets are equal, it does make the comparison more meaningful as it assumes the same amount of raw log data is examined for training both the types of models. In practice, however, we did observe the data sizes to be comparable across all three datasets during this study. All the CDSSM models in this study are trained using mini-batch based stochastic gradient descent, as described by Shen et al. [362]. Each mini-batch consists of 1024 training samples (source-target pairs) and for each positive pair 100 negative targets are randomly sampled from the data for that source that were \f7.2. Session Context Modelling for Query Auto-Completion 181 Table 7.5: k-means clustering of 65K in-session query pairs observed in search logs. Examples from \ufb01ve of the top ten biggest clusters shown here. The \ufb01rst and the second clusters contain examples where the follow up query is a different formulation of the exact same intent. The third and the fourth clusters contain examples of narrowing intent, in particular the fourth cluster contains reformulations where the additional speci\ufb01cation is based on location disambiguation. Finally, the last cluster contains examples of intent jumps across tasks. 
soundcloud \u2192 www.soundcloud.com coasthills coop \u2192 www.coasthills.coop american express \u2192 www.barclaycardus.com login duke energy bill pay \u2192 www.duke-energy.com pay my bill cool math games \u2192 www.coolmath.com majesty shih tzu \u2192 what is a majesty shih tzu hard drive dock \u2192 what is a hard drive dock lugia in leaf green \u2192 where is lugia in leaf green red river log jam \u2192 what is th red river log jam prowl \u2192 what does prowl mean rottweiler \u2192 rottweiler facebook sundry \u2192 sundry expense elections \u2192 \ufb02orida governor race 2014 pleurisy \u2192 pleurisy shoulder pain elections \u2192 2014 rowan county election results cna classes \u2192 cna classes in lexington tennessee container services inc \u2192 container services ringgold ga enclosed trailers for sale \u2192 enclosed trailers for sale north carolina \ufb01rewood for sale \u2192 \ufb01rewood for sale in asheboro nc us senate race in colorado \u2192 us senate race in georgia siol \u2192 facebook cowboy bebop \u2192 facebook mr doob \u2192 google great west 100 west 29th \u2192 facebook avatar dragons \u2192 youtube not originally paired. The CDSSM models project the queries to an embedding space with \ufb01xed number of dimensions. The semantic similarity between two queries q1 and q2 in this semantic space is de\ufb01ned by, Sim(q1,q2) = cosine(\u20d7 vq1,\u20d7 vq2) = \u20d7 v \u22ba q1\u20d7 vq2 \u2225\u20d7 vq1\u2225\u2225\u20d7 vq2\u2225 (7.5) Where\u20d7 vq1 and\u20d7 vq2 are the CDSSM vector outputs corresponding to the two queries, \f182 Chapter 7. Learning to Rank for Query Auto-Completion 0.0 \u2212 0.1 0.1 \u2212 0.2 0.2 \u2212 0.3 0.3 \u2212 0.4 0.4 \u2212 0.5 0.5 \u2212 0.6 0.6 \u2212 0.7 0.0 0.2 0.4 0.6 0.8 1.0 new york \u2192things to do in new york things to do in new york \u2192new york Cosine Similarity Bins Ratio of Counts 0.0 \u2212 0.1 0.1 \u2212 0.2 0.2 \u2212 0.3 0.3 \u2212 0.4 0.4 \u2212 0.5 0.5 \u2212 0.6 0.6 \u2212 0.7 0.0 0.2 0.4 0.6 0.8 1.0 fcebook \u2192facebook facebook \u2192fcebook Cosine Similarity Bins Ratio of Counts Figure 7.5: Visualization of the cosine similarity scores of a given reformulation with respect to a set of 100,000 other reformulations randomly sampled from Bing\u2019s logs. The similarity scores are binned and the ratio of the counts are shown above. The counts corresponding to bins with cosine similarity greater than 0.7 were too small, hence excluded. respectively. A close examination of the CDSSM output vectors reveal that the learnt distributed representations hold useful information about inter-query relationships. Figure 7.4 illustrates how the offset vectors between pairs of queries, represented by the directed edges, are directionally similar in the embedding space for similar intent transitions. This matches the observations made by Mikolov et al. [191] on continuous space language models for text processing, and gives us an intuitively understandable representation of query reformulations as their offset vectors in the embedding space. More speci\ufb01cally, we de\ufb01ne the reformulation from query q1 to q2 as, \f7.2. Session Context Modelling for Query Auto-Completion 183 ref(q1,q2) =\u20d7 vq2 \u2212\u20d7 vq1 = \u20d7 vq2 \u2225\u20d7 vq2\u2225\u2212\u20d7 vq1 \u2225\u20d7 vq1\u2225 (7.6) Where \u20d7 vq1 and \u20d7 vq2 are the CDSSM vector embeddings of the two queries, respectively. This explicit vector representation provides a framework for studying frequently occurring query reformulation patterns. 
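A minimal sketch of Equations 7.5 and 7.6, assuming the CDSSM query embeddings are available as NumPy vectors; the random vectors below merely stand in for the output of a trained model.

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def similarity(v_q1, v_q2):
    """Cosine similarity between two query embeddings (Eq. 7.5)."""
    return float(np.dot(normalize(v_q1), normalize(v_q2)))

def reformulation(v_q1, v_q2):
    """Reformulation vector: offset between the unit-normalised embeddings (Eq. 7.6)."""
    return normalize(v_q2) - normalize(v_q1)

# Stand-in 32-dimensional embeddings for two in-session queries.
rng = np.random.default_rng(0)
v_prev, v_next = rng.normal(size=32), rng.normal(size=32)
print(similarity(v_prev, v_next))
print(reformulation(v_prev, v_next).shape)  # (32,)
```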
To illustrate this, we randomly sample approximately 65K pairs of queries that were observed in succession in Bing\u2019s logs. For each pair, we compute the offset vector using a CDSSM model. We then run a simple k-means clustering (k = 100) and examine the top clusters. Example reformulations from \ufb01ve of the biggest clusters are shown in Table 7.5. A further study of these reformulation vectors can reveal important insights about user behaviour, such as the popularity of certain reformulation patterns. For example, we randomly sampled 100,000 adjacent pairs of queries from Bing\u2019s logs that were observed in search sessions. Our analysis show that there are more pairs similar to the narrowing reformulation \"new york\" \u2192\"things to do in new york\" in the sampled set, than its inverse. Similarly, the misspelling \"fcebook\" followed by \"facebook\" is a more commonly observed pattern than the other way around, as illustrated in Figure 7.5. Next, we list qualitative examples in Table 7.6 to demonstrate the predictive aspect of these reformulation vectors. Similar to the analogy based test proposed by Mikolov et al. [191], these examples show that we can obtain intuitively understandable results by performing simple algebraic operations in the embedding space. For example, we compute the vector sum of the projections (normalized to their unit norm) of the queries \"new york\" and \"newspaper\". \u20d7 vtarget =\u20d7 vnewyork +\u20d7 vnewspaper = \u20d7 vnewyork \u2225\u20d7 vnewyork\u2225+ \u20d7 vnewspaper \u2225\u20d7 vnewspaper\u2225 (7.7) Then from a \ufb01xed set of candidates we \ufb01nd the query whose embedding has the highest cosine similarity with \u20d7 vtarget. For our analysis we picked the top one mil\f184 Chapter 7. Learning to Rank for Query Auto-Completion Table 7.6: Examples of simple syntactic and semantic relationships in the query embedding space. The nearest neighbour search is performed on a candidate set of one million most popular queries from one day of Bing\u2019s logs. Query vector Nearest neighbour \u20d7 vchicago +\u20d7 vnewspaper \u20d7 vchicago suntimes \u20d7 vnew york +\u20d7 vnewspaper \u20d7 vnew york times \u20d7 vsan francisco +\u20d7 vnewspaper \u20d7 vla times \u20d7 vbeyonce +\u20d7 vpictures \u20d7 vbeyonce images \u20d7 vbeyonce +\u20d7 vvideos \u20d7 vbeyonce videos \u20d7 vbeyonce +\u20d7 vnet worth \u20d7 vjaden smith net worth \u20d7 vwww.facebook.com \u2212\u20d7 vfacebook +\u20d7 vtwitter \u20d7 vwww.twitter.com \u20d7 vwww.facebook.com \u2212\u20d7 vfacebook +\u20d7 vgmail \u20d7 vwww.googlemail.com \u20d7 vwww.facebook.com \u2212\u20d7 vfacebook +\u20d7 vhotmail \u20d7 vwww.hotmail.xom \u20d7 vhow tall is tom cruise \u2212\u20d7 vtom cruise +\u20d7 vtom selleck \u20d7 vhow tall is tom selleck \u20d7 vhow old is gwen stefani \u2212\u20d7 vgwen stefani +\u20d7 vmeghan trainor \u20d7 vhow old is meghan trainor \u20d7 vhow old is gwen stefani \u2212\u20d7 vgwen stefani +\u20d7 variana grande \u20d7 vhow old is ariana grande 2014 \u20d7 vuniversity of washington \u2212\u20d7 vseattle +\u20d7 vchicago \u20d7 vchicago state university \u20d7 vuniversity of washington \u2212\u20d7 vseattle +\u20d7 vdenver \u20d7 vuniversity of colorado \u20d7 vuniversity of washington \u2212\u20d7 vseattle +\u20d7 vdetroit \u20d7 vnorthern illinois university lion most popular queries from one day of Bing\u2019s logs as the candidate set. In this query set, the closest query vector to \u20d7 vtarget corresponds to the query \"new york times\". 
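A sketch of this analogy-style lookup (Equation 7.7), assuming a candidate set of queries with precomputed CDSSM embeddings; the candidate list and the random embeddings are toy stand-ins, so the printed neighbour is arbitrary here.

```python
import numpy as np

def unit(v):
    return v / np.linalg.norm(v)

def nearest_query(target_vec, candidate_vecs):
    """Return the candidate query whose embedding has the highest cosine with the target."""
    target = unit(target_vec)
    scores = {q: float(np.dot(target, unit(v))) for q, v in candidate_vecs.items()}
    return max(scores, key=scores.get)

# Toy candidate set with random 32-d embeddings standing in for CDSSM outputs.
rng = np.random.default_rng(1)
candidates = {q: rng.normal(size=32) for q in ["new york times", "chicago suntimes", "la times"]}

# v_target = unit(v_"new york") + unit(v_"newspaper")  (Eq. 7.7)
v_new_york, v_newspaper = rng.normal(size=32), rng.normal(size=32)
v_target = unit(v_new_york) + unit(v_newspaper)
print(nearest_query(v_target, candidates))
```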
Similarly, the nearest neighbour search for \u20d7 vhow old is gwen stefani \u2212 \u20d7 vgwen stefani +\u20d7 vmeghan trainor yields a vector close to \u20d7 vhow old is meghan trainor. These examples show that the vector representation captures simple syntactic as well as semantic relationships. We intentionally also include some examples where the nearest neighbour search yields unexpected results (e.g.,\u20d7 vbeyonce+\u20d7 vnet worth) to highlight that these predictions are often noisy. 7.2.3 Experiments Our empirical evaluations are based on the learning to rank framework proposed by Shokouhi [439] for personalized query auto-completions. In this setup, we learn a supervised ranking model based on training data generated from implicit user feedback. The output of the CDSSM models, described in the previous section, are \f7.2. Session Context Modelling for Query Auto-Completion 185 used to generate additional features for this supervised ranking model. The baseline ranking model (henceforth referred to simply as the baseline model) contains both the non-contextual and the (non-CDSSM based) contextual features. We compare all models using the MRR metric, and the study is repeated on two different testbeds to further con\ufb01rm the validity of the results. Testbeds We conduct our experiments on a large scale search query dataset sampled from the logs of the Bing search engine. We also reproduce our results using the publicly available AOL query logs [85]. In the rest of this paper we refer to these two datasets as the Bing testbed and the AOL testbed, respectively. \u2022 Bing testbed Bing\u2019s logs contain a record of all the queries submitted by its users associated with the corresponding anonymized user IDs, timestamps and any clicked Web results4 (the URL and the displayed title). We sampled queries from these logs for the duration of the last week of October, 2014 and use this as the background data, for computing the feature values and training the CDSSM models. From the \ufb01rst week of November, we sampled 175,392 queries from two consecutive days for training the supervised ranking models, and from the following two individual days we sampled 79,000 queries for validation and 74,663 queries for testing, respectively. \u2022 AOL testbed This dataset contains queries sampled between 1 March, 2006 and 31 May, 2006. For each query, the data includes an anonymized user ID and a timestamp. If a result was clicked then the rank of the clicked item and the domain portion of its URL are also included. In aggregate, the data contains 16,946,938 query submissions and 36,389,567 document clicks by 657,426 users. We consider all queries before 1 May, 2006 as the background data. All queries from the next two weeks of data are used for training the supervised ranking models, and the remaining two sets, consisting of one week of data each, is used for validation and testing, respectively. 4For impressions with multiple clicked results we consider only the last clicked document. \f186 Chapter 7. Learning to Rank for Query Auto-Completion To have a separation of users in training and test datasets, on both the testbeds we use only the users with even user IDs for training and validation, and those with odd numbered user IDs for testing. Also, in all the datasets the queries are lowercased and the punctuations are removed. Learning to rank To generate the training, the validation and the test sets we sample query impressions from the corresponding portions of the logs. 
For each query impression, a pre\ufb01x is generated by splitting the query at a randomly selected position5. For each pre\ufb01x a positive relevance judgment is assigned to the suggestion candidate that matches the \ufb01nal submitted query and all the others are labelled as irrelevant. The training data collected in the above process consists of labelled pre\ufb01xquery pairs. With respect to the choice of learning-to-rank algorithms, we chose LambdaMART [440], a boosted tree version of LambdaRank [459], that won the Yahoo! Learning to Rank Challenge (2010) [460] and is considered as one of the state-of-the-art learning algorithms. We train 500 trees across all our experiments with the same set of \ufb01xed parameters tuned using standard training and validation on separate sets. We consider the top 10 million most popular queries in the background data as the pre-computed list of suggestion candidates and \ufb01lter out all the impressions where the \ufb01nal submitted query is not present in this list. For each impression in the training, the validation and the test sets we retain a maximum of 20 suggestion candidates the submitted query as the positive candidate and 19 other most frequently observed queries from the background data that starts with the same pre\ufb01x, as the negative examples. Furthermore, for each impression up to 10 previous queries from the same session are made available for computing the session context features. Similar to other previous work [87, 461] we de\ufb01ne the end of a session by a 30 minute window of user inactivity. For our \ufb01nal evaluation we report the Mean Reciprocal Rank of the submitted query averaged over all sampled impressions on each of the two testbeds. 5The pre\ufb01xes in our study are strictly shorter than the original query and limited to no more than 30 characters in length. \f7.2. Session Context Modelling for Query Auto-Completion 187 Table 7.7: Comparison of QAC ranking models trained with CDSSM based features against the MPC model and the supervised baseline ranker model. All the reported MRR improvements are statistically signi\ufb01cant by the t-test (p < 0.01) over the MPC baseline and the baseline model. Additionally, corresponding to each of the different CDSSM models, the ranking model containing both the similarity and the reformulation features shows statistically signi\ufb01cant (p < 0.01) improvements in MRR over the model containing only the similarity features on both the testbeds. The three highest MRR improvements per testbed are shown in bold below. Bing AOL Models % Improv. MRR % Improv. Baselines MostPopularCompletion 0.5110 Baseline Model +48.6 0.7983 +56.2 CDSSM (query-document pairs) All features +55.9 Reformulation features +54.3 Similarity features +55.3 CDSSM (Asymmetric session query pairs) All features +58.0 0.8775 +71.7 Reformulation features +57.4 0.8747 +71.2 Similarity features +54.2 0.8580 +67.9 CDSSM (Symmetric session query pairs) All features +59.0 0.8801 +72.2 Reformulation features +57.2 0.8744 +71.1 Similarity features +55.8 0.8636 +69.0 7.2.4 Features The baseline contextual and non-contextual features, as well as the features based on the CDSSM outputs are described in this section. \u2022 Non-contextual features The MostPopularCompletion (MPC) model is one of the baselines for our study. We also use the output of this model as a feature for the supervised ranking model. 
Other non-contextual features include the pre\ufb01x length (in characters), the suggestion length (in both characters and words), the vowels to alphabets ratio in the suggestion, and a boolean feature indicating whether the suggestion contains numeric characters. \u2022 N-gram similarity features We compute the character n-gram similarity (n=3) between the suggestion candidate and the previous queries from the same user session. This is an implementation of the short history features described by Shokouhi [439]. A maximum of 10 previous queries are considered. \f188 Chapter 7. Learning to Rank for Query Auto-Completion \u2022 Pairwise frequency feature From the background data, we generate the top 10 million most popular adjacent pairs of queries observed in search sessions. For a given impression, the previous query and the suggestion candidate pair is matched against this dataset and the corresponding frequency count is used as the feature value. If no matches are found, then the feature value is set to zero. \u2022 CDSSM topical similarity features The CDSSM models are trained as described in Section 7.2.2 using the background portion of the data on each testbed. The cosine similarity between the CDSSM vectors corresponding to the suggestion candidate and a maximum of previous 10 queries from the same session are computed and used as 10 distinct features in the QAC ranking model. Training on the session query pairs data produces a pair of pre-post CDSSM models. When trained on the asymmetric data, the premodel is used for projecting the user\u2019s previous queries and the postmodel is used for projecting the suggestion candidates for the cosine similarity computation. For the symmetric data however, both the preand the postmodels are equivalent, and hence we use only the premodel in our experiments. The AOL logs contains only the domain portion of the clicked results. Hence we are unable to get the corresponding document titles. Therefore we only train the session pairs based CDSSM models on this testbed and report those results in this paper. \u2022 CDSSM reformulation features We compute the n-dimensional (n=32) vector representation of the reformulation from the previous query to the suggestion candidate. The raw values from this vector are used as n distinct features into the supervised ranking model. For both the session pair based models, the premodel is used for projecting the suggestion candidates, as well as the previous query. \f7.2. Session Context Modelling for Query Auto-Completion 189 7.2.5 Results Table 7.7 compares the results of training the supervised QAC ranking model with the different CDSSM based session context features. Due to the proprietary nature of Bing\u2019s data, we report only relative improvements of each of the models over the MPC baseline for this testbed. On the AOL testbed, however, we report both the absolute MRR values and the relative improvements for all the models. On both the testbeds, the baseline model which also contains session context features (the n-gram similarity and the pairwise frequency) shows a large improvement over the MPC baseline, which is expected. All the models trained with the CDSSM based contextual features show further statistically signi\ufb01cant improvements over the baseline model. Both the CDSSM models trained on session pairs perform better than the models trained on clickthrough data, with the model trained on the symmetric session pairs performing slightly better overall. 
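For reference, the sketch below shows one way to compute per-impression reciprocal ranks, aggregate them into MRR, and compare two rankers with a paired t-test. The text reports t-test significance at p < 0.01; whether the test is paired over per-impression reciprocal ranks exactly as here is an assumption, and the ranked lists below are toy examples.

```python
from scipy import stats

def reciprocal_rank(ranked_suggestions, submitted_query):
    """Reciprocal rank of the submitted query within the ranked suggestion list (0 if absent)."""
    for rank, suggestion in enumerate(ranked_suggestions, start=1):
        if suggestion == submitted_query:
            return 1.0 / rank
    return 0.0

def mrr(per_impression_rrs):
    return sum(per_impression_rrs) / len(per_impression_rrs)

# Toy per-impression rankings from two models over the same three impressions.
impressions = ["facebook", "weather seattle", "new york times"]
model_a = [["facebook", "fandango"], ["weather radar", "weather seattle"], ["new york times"]]
model_b = [["fandango", "facebook"], ["weather seattle"], ["news", "new york times"]]

rr_a = [reciprocal_rank(r, q) for r, q in zip(model_a, impressions)]
rr_b = [reciprocal_rank(r, q) for r, q in zip(model_b, impressions)]
print(mrr(rr_a), mrr(rr_b))
print(stats.ttest_rel(rr_a, rr_b))  # paired t-test over per-impression reciprocal ranks
```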
Table 7.9 lists examples of cases from one of the test sets where the ranking model with the CDSSM based contextual features perform better compared to both the baselines. The supervised ranking models trained with both the CDSSM based similarity features and the CDSSM based reformulation features perform better than the corresponding models trained with the similarity features alone. The improvements are statistically signi\ufb01cant and demonstrate the additional information provided by the reformulation features to the ranking model over the CDSSM based similarity features. The reformulation features perform particularly superior when the CDSSM model has been trained on the session pairs dataset. Table 7.8 shows the impact of considering different number of previous queries in the session for computing the CDSSM based similarity features. The results indicate that considering the previous query alone achieves most of the improvements observed from these similarity features. We also compare the improvements from the different models based on the length of the input pre\ufb01xes. Bar-Yossef and Kraus [429] have previously reported that non-contextual QAC systems generally perform poorly when the user has typed only a few characters due to the obvious ambiguity in user intent. Figure 7.6 illus\f190 Chapter 7. Learning to Rank for Query Auto-Completion Table 7.8: Comparison of QAC ranking models with CDSSM similarity features computed considering different maximum number of previous queries in the same session. The results show that most of the improvements from short-term history similarity features can be achieved by considering just the immediately previous query. Bing AOL Models % Improv. MRR % Improv. Baselines MostPopularCompletion 0.5110 Baseline Model +48.6 0.7983 +56.2 CDSSM (Symmetric session query pairs) Previous 1 query +55.2 0.8631 +68.9 Previous 3 queries +56.1 0.8639 +69.1 Previous 5 queries +56.1 0.8642 +69.1 Previous 10 queries +55.8 0.8636 +69.0 trates this behaviour on the AOL testbed. Both the supervised ranking models, the baseline and the model with the CDSSM features, show signi\ufb01cantly large improvements over the MPC baseline on short pre\ufb01xes. After the user has typed a few more characters in the search box, the set of suggestion candidates reduce signi\ufb01cantly and the performance of the MPC model improves. Therefore the improvements on the longer pre\ufb01xes are smaller for both the supervised ranking models. The supervised ranking model with the CDSSM features, however, show statistically significant better MRR compared to both the MPC baseline and the supervised baseline ranking model on all the pre\ufb01x length based segments. Finally, Figure 7.7 shows that better MRR can be achieved by training the CDSSM model with a higher number of output dimensions. 7.2.6 Discussion We demonstrated signi\ufb01cant improvements in the query auto-completion ranking task using the CDSSM based session context features. We now discuss potential implications of these vector representations on session modelling and list some of the assumptions and limitations of the evaluation framework used in this study. Implications for session modelling The distributed representation of queries and query reformulations provides an interesting framework for thinking about sessions and task context. The sequence of queries (and documents) in a search session can \f7.2. 
Session Context Modelling for Query Auto-Completion 191 Long Medium Short All 0.0 0.2 0.4 0.6 0.8 1.0 MPC Baseline CLSM 0.91 0.84 0.82 0.84 0.74 0.6 0.9 0.85 0.28 0.87 0.8 0.51 MRR Figure 7.6: Comparison of the MPC model, the baseline ranker model and the experimental ranker model with the CDSSM based features (the CDSSM model considered here is trained on symmetric session pairs with all features) across different pre\ufb01x lengths on the AOL testbed. Pre\ufb01xes less than 4 characters are considered as short, 4 to 10 characters as medium, and greater than 10 characters as long. Both the supervised ranking models contain contextual features (CDSSM based or otherwise) and hence show large improvements on the short pre\ufb01xes where the ambiguity is maximum. Across all pre\ufb01x lengths the model with CDSSM based features out-perform the baseline ranking model. All reported differences in MRR are statistically signi\ufb01cant by the t-test (p < 0.01). be considered as a directed path in the embedding space. What are the common attributes shared by these session paths? What properties of these paths vary depending on the type of the user task or information need? These are examples of research questions that may be interesting to study under the distributed representation framework. Hassan et al. [462], for example, studied long search sessions and compared user behaviours when the user is struggling in their information task to when they are exploring. Features based on the CDSSM projections of queries and documents, such as the types of user reformulations in the session and the similarity between submitted queries and viewed documents, can be explored to improve the \f192 Chapter 7. Learning to Rank for Query Auto-Completion 50 55 60 65 70 \u25cf \u25cf \u25cf \u25cf \u25cf 23 24 25 26 27 Dimensions % MRR Improv. over MPC Figure 7.7: Evaluation of the impact of training the CDSSM models with different number of dimensions. Except for the pair of CDSSM models trained with 32 and 64 dimensions, all other reported differences in MRR are statistically signi\ufb01cant by the t-test (p < 0.01). prediction accuracy for such session classi\ufb01cation tasks. In this paper we have examined individual query reformulations. Studying reformulation chains may teach us further about how user intents evolve during a session and support the design of future models for session search. For example, White and Huang [463] have explored the value of search trails, over the origins and the destinations. While we have only examined the representation of queries and reformulations in this paper, CDSSM also allows for documents to be represented in the same embedding space. A uni\ufb01ed study of queries, reformulations and viewed (searched or browsed) documents using the vector representation framework is an area for future work. In the query change retrieval model (QCM) proposed by Guan et al. [444], we can explore using the reformulation vectors for representing the user agent\u2019s actions. Similarly, we may be able to gain further insights by conducting a similar study as Hollink et al. [453] by examining query changes under the vector representation framework. \f7.2. Session Context Modelling for Query Auto-Completion 193 Exploring Struggling 1 2 3 4 5 0.0 0.2 0.4 0.6 0.8 1.0 Avg. query similarity Figure 7.8: Average similarity between the \ufb01rst \ufb01ve queries to the \ufb01rst query in search sessions annotated by crowdsourcing judges as exploring or struggling. 
The similarity was computed using the distributed representation learnt by the CDSSM model trained on the symmetric session query pairs data. All differences are statistically signi\ufb01cant at the p < 0.05 level according to a two-tailed t-test. Generating a distributed representation of users based on their search and other online activities is also an interesting problem. Other potential directions for future studies using the vector framework includes examining how query reformulations differ based on the search expertise of the user and the kind of device the search is performed on. Assumptions and limitations We have based our empirical study on the supervised ranking framework proposed by Shokouhi [439]. In doing so, we inherit some of the assumptions in the designs of that framework. Firstly, we assume that the user has a pre-determined query in mind for input and would be satis\ufb01ed if it appears in the QAC suggestions list. However Hofmann et al. [73] have shown that due to the high examination bias towards top-ranked results, sub-optimal QAC ranking can negatively affect the quality of the query submitted by the user. As many Web search engines implement some form of an auto-completion feature, it is likely that \f194 Chapter 7. Learning to Rank for Query Auto-Completion Table 7.9: Examples from the win-loss analysis on one of the test sets. For a given pre\ufb01x and the previous query from the same user session, the top ranked suggestion by the different models are shown below. The actual submitted query is denoted by the checkmark (\u2713). The CDSSM features include both the similarity and the reformulation features and the CDSSM model is trained on the symmetric session pairs dataset. Previous query the \ufb01ghter airline tickets Pre\ufb01x amer amer MPC american express american express Supervised baseline american express american express Supervised \\w CDSSM Features american psycho movie \u2713 Previous query usairways 2007 toyota yaris Pre\ufb01x us us MPC us elections 2014 predictions us elections 2014 predictions Supervised baseline usps.com usaa Supervised \\w CDSSM Features usairways.com \u2713 used cars \u2713 those QAC systems in\ufb02uenced the actual query observed in the logs. We ignore this effect in the generation of our training and test sets. The generation of the pre\ufb01xes also assumes that each query was typed completely by the user in a strictly left-to-right progression and the user is equally likely to examine and engage with the QAC system after each character is typed. In practice, however, users are often aided in the query formulation process (partially or completely) by various features of the search engine, such as QAC or related query recommendations. Users also often correct already entered text during the query formulation process. In these cases the generation of all possible pre\ufb01xes from the submitted query does not accurately re\ufb02ect the actual pre\ufb01xes typed by the user. Li et al. [464] and Mitra et al. [72] have also shown that user engagement with QAC varies with different factors such as whether the user is at a word boundary or the distance of the next character to be typed on the keyboard. This suggests that pre\ufb01xes should be sampled with different importance depending on the likelihood that the user would examine the QAC suggestions for that pre\ufb01x. Li et al. 
[464] proposed a two-dimensional click model for QAC, demonstrating that in the presence of keystroke level logging of QAC sessions the click model can be used to \ufb01lter out \f7.2. Session Context Modelling for Query Auto-Completion 195 pre\ufb01x impressions with low expected probability of examination. However, as the testbeds we consider for this study do not all have the keystroke level granularity of records, we do not pursue this line of experimentation. Lastly, Shokouhi [439] generates all the possible pre\ufb01xes of each query in the log data. This results in an obvious over-representation of long pre\ufb01xes in the generated datasets. To avoid this issue we extract a single pre\ufb01x per query by splitting at a random position within the query. Despite the different underlying assumptions, the framework proposed by Shokouhi [439] provides a reasonable setup to learn a baseline context-aware ranking model for QAC, and hence we adopt it for this study. 7.2.7 Conclusion We have demonstrated that the distributed representation of queries by the CDSSM holds useful information about inter-query relationships. The reformulation vectors exhibit regularities that makes them interesting for modelling session context for query suggestion tasks. Our experiments show that using features based on the reformulation vectors improves MRR for QAC ranking over using features based on the query vectors alone. The best improvements, however, are achieved by the combination of features based on both these vector representations. We have also demonstrated that training the latent semantic models on session query pairs produces further improvements over the model trained on query-document pairs. While the biggest improvements are observed on short pre\ufb01xes, the ranking model containing the CDSSM based features perform better than the supervised ranking baseline on all the pre\ufb01x length based segments. We have also studied the effects of considering different number of previous queries within the session for context and the number of dimensions used to represent the query and reformulation vectors on the model performance. While we evaluate these models on the query autocompletion ranking task, the features we described in this paper may also be useful for generating context sensitive related query recommendations and query rewriting. Furthermore, by projecting documents to this same embedding space, future studies may be able to extend these contextual features to document ranking in Web search. \f196 Chapter 7. Learning to Rank for Query Auto-Completion Lastly, the reformulation vectors provide an interesting framework for studying sessions and intent progressions. We anticipate that these distributed representations of queries, documents and reformulations will become more frequently used as tools for future studies on search personalization and session search. \fChapter 8 Benchmarking for neural IR Neural IR is an emerging \ufb01eld. In recognition of the signi\ufb01cant impact of deep learning on other application areas, we organized a workshop titled Neu-IR [373, 465] (pronounced \u201cnew IR\u201d) at SIGIR 2016. The purpose was to provide a forum for new and early work relating to deep learning and other neural approaches to IR, and discuss the main challenges facing this line of research. Since then, research publication in the area has been increasing (see Figure 8.1 and [466]), along with relevant workshops [467\u2013469], tutorials [3\u20136, 470], and plenary talks [471, 472]. 
Figure 8.1: The percentage of neural IR papers at the ACM SIGIR conference, as determined by a manual inspection of the papers, shows a clear trend in the growing popularity of the field (rising from 1% of papers in 2014 to 4%, 8%, 23%, 42%, 58% and 79% in the years 2015 through 2020).

While there has been significant interest in deep learning for ad-hoc ranking [1], the work till recently has largely been done with small data, proprietary data or synthetic data. With small data, there has been some discussion about whether deep learning methods really outperform strong traditional IR baselines [473]. Using a proprietary set of document ranking data with 200,000 training queries we beat a traditional IR baseline in 2017, as reported in Chapter 4, but it was impossible for others to follow up on the work without a data release. Dietz et al. [414] have a TREC task with enough training data to investigate such findings, but on synthetic rather than human-labeled data. Since significant questions remain about baselines and the required volume of human-labeled data, we initiated an effort to benchmark IR models in the presence of large scale training data at TREC 2019. TREC provides a good forum to study such issues. The IR community can submit strong baselines at TREC and there is a blind one-shot evaluation to avoid overfitting. We present our findings from the TREC 2019 Deep Learning track [15] in this chapter.

8.1 TREC Deep Learning track

The TREC 2019 Deep Learning Track has two tasks: document retrieval and passage retrieval. Each task has a dataset that is new to TREC, although the passage task is similar to the MS MARCO passage ranking leaderboard [52], but with a new test set in the TREC version with more comprehensive labeling. Both tasks are ad-hoc retrieval, meaning that there is a fixed document set, and the goal of the information retrieval system is to respond to each new query with results that would satisfy the querying user's information need. Ad-hoc retrieval is a very common scenario in real-world search applications and in TREC.

The main goals of the track are: (i) to provide large reusable datasets for training and evaluation of deep learning and traditional ranking methods in a large training data regime, (ii) to perform a rigorous blind single-shot evaluation, where test labels don't even exist until after all runs are submitted, to compare different ranking methods, and (iii) to study this in both a traditional TREC setup with end-to-end retrieval and in a re-ranking setup that matches how some models may be deployed in practice.

Participants were allowed to submit up to three runs per task, although this was not strictly enforced. Participants were provided with an initial set of 200 test queries, and NIST later selected 43 queries during the pooling and judging process, based on budget constraints and with the goal of producing a reusable test collection. The same 200 queries were used for submissions in both tasks, while the selected 43 queries for each task were overlapping but not identical. When submitting each run, participants also indicated what external data, pretrained models and other resources were used, as well as information on what style of model was used.
Below we provide more detailed information about the document retrieval and passage retrieval tasks, as well as the datasets provided as part of these tasks. Document retrieval task The \ufb01rst task focuses on document retrieval\u2014with two subtasks: (i) Full retrieval and (ii) top-100 reranking. In the full retrieval subtask, the runs are expected to rank documents based on their relevance to the query, where documents can be retrieved from the full document collection provided. This subtask models the end-to-end retrieval scenario. Note, although most full retrieval runs had 1000 results per query, the reranking runs had 100, so to make the MAP and MRR results more comparable across subtasks we truncated full retrieval runs by taking the top-100 results per query by score. In the reranking subtask, participants were provided with an initial ranking of 100 documents, giving all participants the same starting point. The 100 were retrieved using Indri [156] on the full corpus with Krovetz stemming and stopwords eliminated. Participants were expected to rerank the candidates w.r.t. their estimated relevance to the query. This is a common scenario in many real-world retrieval systems that employ a telescoping architecture [111, 112]. The reranking subtask allows participants to focus on learning an effective relevance estimator, without the need for implementing an end-to-end retrieval system. It also makes the reranking \f200 Chapter 8. Benchmarking for neural IR runs more comparable, because they all rerank the same set of 100 candidates. For judging, NIST\u2019s pooling was across both subtasks, and they also identi\ufb01ed additional documents for judging via classi\ufb01er. Further, for queries with many relevant documents, additional documents were judged. These steps were carried out to identify a suf\ufb01ciently comprehensive set of relevant results, to allow reliable future dataset reuse. Judgments were on a four-point scale: [3] Perfectly relevant: Document is dedicated to the query, it is worthy of being a top result in a search engine. [2] Highly relevant: The content of this document provides substantial information on the query. [1] Relevant: Document provides some information relevant to the query, which may be minimal. [0] Irrelevant: Document does not provide any useful information about the query. Passage retrieval task Similar to the document retrieval task, the passage retrieval task includes (i) a full retrieval and (ii) a top-1000 reranking tasks. In the full retrieval subtask, given a query, the participants were expected to retrieve a ranked list of passages from the full collection based on their estimated likelihood of containing an answer to the question. Participants could submit up to 1000 passages per query for this end-to-end retrieval task. In the top-1000 reranking subtask, 1000 passages per query query were provided to participants, giving all participants the same starting point. The sets of 1000 were generated based on BM25 retrieval with no stemming as applied to the full collection. Participants were expected to rerank the 1000 passages based on their estimated likelihood of containing an answer to the query. In this subtask, we can compare different reranking methods based on the same initial set of 1000 candidates, with the same rationale as described for the document reranking subtask. For judging, NIST\u2019s pooling was across both subtasks, and they also identi\ufb01ed additional passages for judging via classi\ufb01er. 
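The pooling step itself is straightforward to sketch: the union of the top-k results from every submitted run forms the judging pool for each query. This is a minimal illustration only; the pool depth, the run representation, and the omission of NIST's classifier-selected additions are simplifying assumptions on our part.

```python
from collections import defaultdict

def build_pool(runs, depth=10):
    """runs: run name -> {qid: [docids ordered by rank]}.
    Returns qid -> set of docids to send for relevance judging."""
    pool = defaultdict(set)
    for ranking_by_query in runs.values():
        for qid, ranking in ranking_by_query.items():
            pool[qid].update(ranking[:depth])
    return pool
```

The pooled items are then graded by assessors on the four-point scales described in this chapter.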
Further, for queries with many rel\f8.2. Datasets 201 evant passages, additional passages were judged. These steps were carried out to identify a suf\ufb01ciently comprehensive set of relevant results, to allow reliable future dataset reuse. Judgments were on a four-point scale: [3] Perfectly relevant: The passage is dedicated to the query and contains the exact answer. [2] Highly relevant: The passage has some answer for the query, but the answer may be a bit unclear, or hidden amongst extraneous information. [1] Related: The passage seems related to the query but does not answer it. [0] Irrelevant: The passage has nothing to do with the query. 8.2 Datasets Both tasks have large training sets based on human relevance assessments, derived from MS MARCO. These are sparse, with no negative labels and often only one positive label per query, analogous to some real-world training data such as click logs. In the case of passage retrieval, the positive label indicates that the passage contains an answer to a query. In the case of document retrieval, we transferred the passage-level label to the corresponding source document that contained the passage. We do this under the assumption that a document with a relevant passage is a relevant document, although we note that our document snapshot was generated at a different time from the passage dataset, so there can be some mismatch. Despite this, in the document retrieval task machine learning models seem to bene\ufb01t from using the labels, when evaluated using NIST\u2019s non-sparse, non-transferred labels. This suggests the transferred document labels are meaningful for our TREC task. The passage corpus is the same as in MS MARCO passage retrieval leaderboard. The document corpus is newly released for use in TREC. Each document has three \ufb01elds: (i) URL, (ii) title, and (iii) body text. Table 8.1 provides descriptive statistics for the datasets. More details about the datasets\u2014including directions for download\u2014is available on the TREC 2019 Deep \f202 Chapter 8. Benchmarking for neural IR Table 8.1: Summary of statistics on TREC 2019 Deep Learning Track datasets. Document retrieval Passage retrieval File description # of records # of records Collection 3,213,835 8,841,823 Train queries 367,013 502,939 Train qrels 384,597 532,761 Validation queries 5,193 6,980 Validation qrels 5,478 7,437 Test queries 200 \u219243 200 \u219243 Table 8.2: Summary of statistics of runs for the two retrieval tasks at the TREC 2019 Deep Learning Track. Document retrieval Passage retrieval Number of groups 10 11 Number of total runs 38 37 Number of runs w/ category: nnlm 15 18 Number of runs w/ category: nn 12 8 Number of runs w/ category: trad 11 11 Number of runs w/ category: rerank 10 11 Number of runs w/ category: fullrank 28 26 Learning Track website1. Interested readers are also encouraged to refer to [52] for details on the original MS MARCO dataset. 8.3 Results and analysis Submitted runs A total of 15 groups participated in the TREC 2019 Deep Learning Track, with an aggregate of 75 runs submitted across both tasks. Based run submission surveys, we classify each run into one of three categories: \u2022 nnlm: if the run employs large scale pre-trained neural language models, such as BERT [327] or XLNet [474] \u2022 nn: if the run employs some form of neural network based approach\u2014e.g., Duet or using word embeddings [394]\u2014but does not fall into the \u201cnnlm\u201d category 1https://microsoft.github.io/TREC-2019-Deep-Learning/ \f8.3. 
Results and analysis 203 \u2022 trad: if the run exclusively uses traditional IR methods like BM25 [80] and RM3 [160]. We placed 33 (44%) runs in the \u201cnnlm\u201d category (32 using BERT and one using XLNet), 20 (27%) in the \u201cnn\u201d category, and the remaining 22 (29%) in the \u201ctrad\u201d category. We further categorize runs based on subtask: \u2022 rerank: if the run reranks the provided top-k candidates, or \u2022 fullrank: if the run employs their own phase 1 retrieval system. We \ufb01nd that only 21 (28%) submissions fall under the \u201crerank\u201d category\u2014while the remaining 54 (72%) are \u201cfullrank\u201d. Table 8.2 breaks down the submissions by category and task. We also encouraged some participants to run strong traditional IR baselines, and submit them as additional runs under the \u201cBASELINE\u201d group. Overall results Our main metric in both tasks is Normalized Discounted Cumulative Gain (NDCG)\u2014speci\ufb01cally, NDCG@10, since it makes use of our 4-level judgments and focuses on the \ufb01rst results that users will see. To analyse if any of the fullrank runs recall more relevant candidates in phase 1 compared to those provided for the reranking subtask, we also report Normalized Cumulative Gain (NCG) at rank 100 and 1000 for the document and passage ranking tasks, respectively. We choose to report NCG because it discriminates between recalling documents with different positive relevance grades and is a natural complement to NDCG, our main metric. Although NCG is not of\ufb01cially supported by trec_eval, we con\ufb01rm that it correlates strongly with the recall metric for these analysed runs. Deep learning vs. traditional ranking methods An important goal of this track is to compare the performance of different types of model, using large human-labeled training sets, for the core IR task of ad-hoc search. Indeed this is the \ufb01rst time a TREC-style blind evaluation has been carried out to compare state-of-the-art neural and traditional IR methods. \f204 Chapter 8. Benchmarking for neural IR 0.4 0.5 0.6 0.7 0.8 0.9 NDCG@10 best nnlm run best nn run best trad run nnlm nn trad (a) Document retrieval task 0.4 0.5 0.6 0.7 0.8 0.9 NDCG@10 best nnlm run best nn run best trad run nnlm nn trad (b) Passage retrieval task Figure 8.2: NDCG@10 results, broken down by run type. Runs of type \u201cnnlm\u201d, meaning they use language models such as BERT, performed best on both tasks. Other neural network models \u201cnn\u201d and non-neural models \u201ctrad\u201d had relatively lower performance. More iterations of evaluation and analysis would be needed to determine if this is a general result, but it is a strong start for the argument that deep learning methods may take over from traditional methods in IR applications. Figure 8.2a plots the NDCG@10 performance of the different runs for the document retrieval task, broken down by model type. In general, runs in the category \u201cnnlm\u201d outperform the \u201cnn\u201d runs, which outperform the \u201ctrad\u201d runs. The best performing run of each category is indicated, with the best \u201cnnlm\u201d and \u201cnn\u201d models outperforming the best \u201ctrad\u201d model by 29.4% and 14.8% respectively. The passage retrieval task reveals similar pattern. In Figure 8.2b, the gap between the best \u201cnnlm\u201d and \u201cnn\u201d runs and the best \u201ctrad\u201d run is larger, at 37.4% and 23.7% respectively. 
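As noted above, NCG is not officially supported by trec_eval. A minimal sketch of how we read the metric (cumulative gain of the retrieved top-k, normalized by the ideal cumulative gain at the same cutoff) is shown below; the dictionary-based run and qrels representations are assumptions for illustration, and this is not the track's official implementation.

```python
def ncg_at_k(run, qrels, k):
    """run: qid -> list of docids ordered by rank; qrels: qid -> {docid: grade}.
    Returns mean NCG@k across queries."""
    scores = []
    for qid, ranking in run.items():
        grades = qrels.get(qid, {})
        gain = sum(grades.get(doc, 0) for doc in ranking[:k])          # no rank discounting
        ideal = sum(sorted(grades.values(), reverse=True)[:k])         # best achievable gain at k
        scores.append(gain / ideal if ideal > 0 else 0.0)
    return sum(scores) / len(scores) if scores else 0.0
```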
One explanation for this could be that vocabulary mismatch between queries and relevant results is more likely in short text, so neural methods that can overcome such mismatch have a relatively greater advantage in passage retrieval. Another explanation could be that there is already a public leaderboard, albeit without test labels from NIST, for the passage task. Some TREC participants may have submitted neural models multiple times to the public leaderboard, and are well practiced for the passage ranking task. In query-level win-loss analysis for the document retrieval task (Figure 8.3) the best \u201cnnlm\u201d model outperforms the best \u201ctrad\u201d run on 36 out of 43 test queries (i.e., 83.7%). Passage retrieval shows a similar pattern in Figure 8.4. Neither task has a large class of queries where the \u201cnnlm\u201d model performs worse. However, \f8.3. Results and analysis 205 more iterations of rigorous blind evaluation with strong \u201ctrad\u201d baselines, plus more scrutiny of the benchmarking methods, would be required to convince us that this is true in general. Next, we analyze the runs by representing each run as a vector of 43 NDCG@10 scores. In this vector space, two runs are similar if their NDCG vectors are similar, meaning they performed well and badly on the same queries. Using t-SNE [475] we then plot the runs in two dimensions, which gives us a visualization where similar runs will be closer together and dissimilar results further apart. This method of visualizing inter-model similarity was \ufb01rst proposed by Mitra et al. [7] and we employ it to generate the plots in Figure 8.5. On both document and passage retrieval tasks, the runs appear to be \ufb01rst clustered by group\u2014see Figures 8.5b and 8.5d. This is expected, as different runs from the same group are likely to employ variations of the same approach. In Figures 8.5a and 8.5c, runs also cluster together based on their categorization as \u201cnnlm\u201d, \u201cnn\u201d, and \u201ctrad\u201d. End-to-end retrieval vs. reranking. Our datasets include top-k candidate result lists, with 100 candidates per query for document retrieval and 1000 candidates per query for passage retrieval. Runs that simply rerank the provided candidates are \u201crerank\u201d runs, whereas runs that perform end-to-end retrieval against the corpus, with millions of potential results, are \u201cfullrank\u201d runs. We would expect that a \u201cfullrank\u201d run should be able to \ufb01nd a greater number of relevant candidates than we provided, achieving higher NCG@k. A multi-stage \u201cfullrank\u201d run should also be able to optimize the stages jointly, such that early stages produce candidates that later stages are good at handling. According to Figure 8.6, \u201cfullrank\u201d did not achieve much better NDCG@10 performance than \u201crerank\u201d runs. While it was possible for \u201cfullrank\u201d to achieve better NCG@k, it was also possible to make NCG@k worse, and achieving signi\ufb01cantly higher NCG@k does not seem necessary to achieve good NDCG@10. Speci\ufb01cally, for the document retrieval task, the best \u201cfullrank\u201d run achieves only 0.9% higher NDCG@10 over the best \u201crerank\u2019 run. For the passage retrieval \f206 Chapter 8. 
Figure 8.3: Comparison of the best "nnlm" and "trad" runs on individual test queries for the document retrieval task. Queries are sorted by difference in mean performance between "nnlm" and "trad" runs. Queries on which "nnlm" wins with large margin are at the top. Figure 8.4: Comparison of the best "nnlm" and "trad" runs on individual test queries for the passage retrieval task.
Queries are sorted by difference in mean performance between \u201cnnlm\u201d and \u201ctrad\u201druns. Queries on which \u201cnnlm\u201d wins with large margin are at the top. \f208 Chapter 8. Benchmarking for neural IR latent dimension 1 latent dimension 2 nn nnlm trad (a) By model type on document retrieval task latent dimension 1 latent dimension 2 BASELINE BITEM_DL CCNU_IRGroup CMU IDST Microsoft TU-Vienna UCAS h2oloo srchvrs uogTr (b) By group name on document retrieval task latent dimension 1 latent dimension 2 nn nnlm trad (c) By model type on passage retrieval task latent dimension 1 latent dimension 2 BASELINE Brown CCNU_IRGroup ICTNET IDST Microsoft TREMA-UNH TU-Vienna TUA1 h2oloo srchvrs udel_fang (d) By group name on passage retrieval task Figure 8.5: Visualizing inter-run similarity using t-SNE. Each run is represented by a 43-dimensional vector of NDCG@10 performance on corresponding 43 test queries. The 43-dimensional vector is then reduced to two-dimensions and plotted using t-SNE. Runs that are submitted by the same group generally cluster together. Similarly, \u201cnnlm\u201d, \u201cnn\u201d, and \u201ctrad\u201d runs also demonstrate similarities. task, the difference is 3.6%. The best NCG@100 for the document retrieval task is achieved by a welltuned combination of BM25 [80] and RM3 [160] on top of document expansion using doc2query [476]\u2014which improves by 22.9% on the metric relative to the set of 100 candidates provided for the reranking task. For the passage retrieval task, the best NCG@1000 is 20.7% higher than that of the provided reranking candidate set. Given this was the \ufb01rst ever Deep Learning Track at TREC, we are not yet seeing a strong advantage of \u201cfullrank\u201d over \u201crerank\u201d. However, we hope that as the body of literature on neural methods for phase 1 retrieval (e.g., [13, 389, 405, 476]) \f8.3. Results and analysis 209 0.4 0.5 0.6 0.7 0.8 0.9 NDCG@10 best fullrank run best rerank run fullrank rerank (a) NDCG@10 for runs on the document retrieval task 0.4 0.5 0.6 0.7 0.8 0.9 NDCG@10 best fullrank run best rerank run fullrank rerank (b) NDCG@10 for runs on the passage retrieval task 0.1 0.2 0.3 0.4 0.5 0.6 0.7 NCG@100 fullrank rerank (c) NCG@100 for runs on the document retrieval task 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 NCG@1000 fullrank rerank (d) NCG@1000 for runs on the passage retrieval task Figure 8.6: Analyzing the impact of \u201cfullrank\u201d vs. \u201crerank\u201d settings on retrieval performance. Figure (a) and (b) show the performance of different runs on the document and passage retrieval tasks, respectively. Figure (c) and (d) plot the NCG@100 and NCG@1000 metrics for the same runs for the two tasks, respectively. The runs are ordered by their NDCG@10 performance along the x-axis in all four plots. We observe, that the best run under the \u201cfullrank\u201d setting outperforms the same under the \u201crerank\u201d setting for both document and passage retrieval tasks\u2014although the gaps are relatively smaller compared to those in Figure 8.2. If we compare Figure (a) with (c) and Figure (b) with (d), we do not observe any evidence that the NCG metric is a good predictor of NDCG@10 performance. grows, we would see a larger number of runs with deep learning as an ingredient for phase 1 in future editions of this TREC track. NIST labels vs. Sparse MS MARCO labels. Our baseline human labels from MS MARCO often have one known positive result per query. 
We use these labels for training, but they are also available for test queries. Although our of\ufb01cial evaluation uses NDCG@10 with NIST labels, we now compare this with reciprocal rank (RR) using MS MARCO labels, and MRR using NIST labels. Our goal is to understand how changing the labeling scheme and metric affects the overall results of the track, \f210 Chapter 8. Benchmarking for neural IR 0.45 0.50 0.55 0.60 0.65 0.70 NDCG@10 0.25 0.30 0.35 0.40 0.45 0.50 RR (MS) group IDST h2oloo TU-Vienna UCAS uogTr Microsoft srchvrs CMU BASELINE CCNU_IRGroup BITEM_DL (a) Document retrieval task. 0.45 0.50 0.55 0.60 0.65 0.70 0.75 NDCG@10 0.15 0.20 0.25 0.30 0.35 0.40 0.45 RR (MS) group IDST h2oloo Brown TUA1 udel_fang TU-Vienna ICTNET srchvrs Microsoft BASELINE CCNU_IRGroup TREMA-UNH (b) Passage retrieval task. Figure 8.7: Metrics agreement scatter plot, broken down by group. MRR (MS) is reciprocal rank calculated with the sparse MS MARCO labels, while NDCG@10 is calculated using NIST labels. \f8.3. Results and analysis 211 0.25 0.30 0.35 0.40 0.45 0.50 RR (MS) 0.75 0.80 0.85 0.90 0.95 1.00 RR = 0.68 0.2 0.4 0.6 RR (MS) 0.50 0.55 0.60 0.65 0.70 0.75 NDCG@10 = 0.69 0.8 1.0 RR = 0.73 0.4 0.6 0.8 NDCG@10 neural nnlm nn trad Figure 8.8: Metrics agreement analysis, broken down by model type, for the document retrieval task. Kendall correlation (\u03c4) indicates agreement between metrics on system ordering. MRR (MS) is calculated using MS MARCO sparse labels, while MRR and NDCG@10 are calculated using NIST labels. but if there is any disagreement we believe the NDCG results are more valid, since they evaluate the ranking more comprehensively and a ranker that can only perform well on labels with exactly the same distribution as the training set is not robust enough for use in real-world applications, where real users will have opinions that are not necessarily identical to the preferences encoded in sparse training labels. In Figure 8.8 and 8.9, We observe general agreement between results using MS MARCO and NIST labels\u2013i.e., runs that perform well on MS MARCO-style evaluation also tends to achieve good performance when evaluated under traditional TREC settings, and vice versa. This is good news, validating the MS MARCO leaderboard results are at least somewhat indicative of results that are found with pooled judging. \f212 Chapter 8. Benchmarking for neural IR 0.2 0.3 0.4 0.5 RR (MS) 0.6 0.7 0.8 0.9 RR = 0.82 0.2 0.4 RR (MS) 0.5 0.6 0.7 0.8 NDCG@10 = 0.68 0.6 0.8 1.0 RR = 0.77 0.4 0.6 0.8 NDCG@10 neural nnlm nn trad Figure 8.9: Metrics agreement analysis, broken down by model type, for the passage retrieval task. Kendall correlation (\u03c4) indicates agreement between metrics on system ordering. MRR (MS) is calculated using MS MARCO sparse labels, while MRR and NDCG@10 are calculated using NIST labels. 8.4 Conclusion The TREC 2019 Deep Learning Track introduced two large training datasets, for a document retrieval task and a passage retrieval task, generating two ad hoc test collections with good reusability. For both tasks, in the presence of large training data, non-neural network runs were outperformed by neural network runs. Among the neural approaches, the best-performing runs tended to use transfer learning, employing a pretrained language model such as BERT. In future it will be interesting to con\ufb01rm and extend these results, understanding what mix of data and multi-stage training lead to the best overall performance. 
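As an aside, the metric-agreement analysis above summarizes how strongly two evaluation settings agree on the ordering of submitted systems using Kendall's tau; a minimal sketch of that computation over per-run scores follows. The use of scipy and the dictionary inputs are illustrative assumptions.

```python
from scipy.stats import kendalltau

def ordering_agreement(metric_a, metric_b):
    """metric_a, metric_b: run name -> score under two evaluation settings
    (e.g., RR with sparse MS MARCO labels vs. NDCG@10 with NIST labels).
    Returns Kendall's tau over the runs common to both."""
    runs = sorted(set(metric_a) & set(metric_b))
    tau, _ = kendalltau([metric_a[r] for r in runs], [metric_b[r] for r in runs])
    return tau

# Example with made-up numbers:
# ordering_agreement({"runA": 0.52, "runB": 0.61}, {"runA": 0.30, "runB": 0.41})
```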
We compared reranking approaches to end-to-end retrieval approaches, and there was not a huge difference, with some runs performing well in both regimes. This is another result that would be interesting to track in future, since we would \f8.4. Conclusion 213 expect that end-to-end retrieval should perform better if it can recall documents that are unavailable in a reranking subtask. In the \ufb01rst year of the track there were not many non-neural runs, so it would be important in subsequent year\u2019s track to see more runs of all types, to further understand the relative performance of different approaches. Although the test collections are of high quality, meaning that they are likely to give meaningful results when reused, over\ufb01tting can still be a problem if the test set is used multiple times during the development of a new retrieval approach. The most convincing way to show that a new approach is good is to submit TREC runs. There is no chance of over\ufb01tting, or any kind of repeated testing, because the test labels are not generated until after the submission deadline. Through a combination of test collection reuse (from past years) and blind evaluation (submitting runs) the Deep Learning Track is offering a framework for studying ad hoc search in the large data regime. \f\fChapter 9 General Conclusions \u09b9\u09be\u0981\u09b8\u09bf\u099b\u09b2, \u09b8\u099c\u09be\u01af, (\u09ac\u00e2\u09be\u0995\u09b0\u09a3\u09ae\u09be\u09bf\u09a8\u09a8\u09be), \u09b9\u09c7\u09df\u0232\u0997\u09b2'\u09b9\u09be\u0981\u09b8\u099c\u09be\u01af' \u0232\u0995\u09ae\u09c7\u09a8\u09a4\u09be\u099c\u09be\u09bf\u09a8\u09a8\u09be\u09f7 Was a duck, porcupine (to grammar I bow not) Became Duckupine, but how I know not. \u2014 Sukumar Ray, Khichuri (Translation by Prasenjit Gupta) Unlike traditional IR methods, where relevance is estimated largely by counting occurrences of query terms in document text, the neural methods described in this thesis focus on learning useful text representations guided by optimization objectives that correspond to tasks such as ranking and language modeling. Based on the empirical evidence presented in this thesis\u2014and the substantial body of neural IR literature that has been emerging over the recent years\u2014it is safe to conclude that these representation learning methods are able to demonstrate sizeable improvements over traditional IR methods in the presence of large training corpora. Ongoing new research efforts in this area may be concerned with further improving result quality (effectiveness) while lowering compute and memory costs (ef\ufb01ciency), and even coming up with more elaborate measures of successful retrieval outcomes (e.g., exposure-based metrics) that these models can be optimized towards. However, this emerging family of neural methods may be causing more fundamental shifts in the \ufb01eld of IR. For example, we argue that after at least two decades \f216 Chapter 9. General Conclusions Figure 9.1: Sukumar Ray\u2019s illustration of a ``\u09b9\u09be\u0981\u09b8\u099c\u09be\u01af'' (pronounced: \u201chaashjaru\u201d) or a duckupine, a \ufb01ctional animal from his poem \u201cKhichuri\u201d. of largely unsuccessful attempts at leveraging models and artifacts from NLP to improve IR tasks [477\u2013481], we are now witnessing surprisingly huge bene\ufb01ts from applications of deep NLP models in retrieval. 
These new NLP artifacts, however, are not in the form thesauri or parts of speech tags, but rather in the form of pretrained language models and latent text representations. While, these black box language models may pick up certain linguistic regularities from training on large corpora, it is also possible, if not likely, that these learned latent representations encode relationships and attributes that are very different to our own notion of linguistic properties. By simply modeling observed regularities in unlabeled corpora, a language model may in fact learn that \u201cduck\u201d and \u201cporcupine\u201d are similar given they appear in similar contexts\u2014such as, \u201chow much does a duck weigh?\u201d and \u201chow much does a porcupine weigh?\u201d. If our goal is to maximize some averaged relevance metrics for a query auto-completion task, it may indeed be reasonable that \u201cduck\u201d and \u201cporcupine\u201d have similar latent representations. Similarly, the latent space may be able to encode seemingly nonsensical concepts such as a \u201cduckupine\u201d even if it has no meaningful counterpart in the real world, except may be in literary \ufb01ction (see Figure 9.1). This poses an interesting challenge for the research community. While, we are reasonably good at measuring how effective these black box models are at improving retrieval, it is signi\ufb01cantly harder to articulate exactly what knowledge and \f9.1. A summary of our contributions 217 world view these models encode (and do not encode), and even more dif\ufb01cult to quantify the progress the IR community is making with regards to better understanding of retrieval tasks from the application of these models. This is not to imply that the learned latent representations must be perfectly interpretable to qualify as scienti\ufb01c progress, but rather we are making a case for viewing the contributions of neural IR through a much broader lens that encourages its usage to aid the development of new IR theory and improved understanding of retrieval tasks. On that note, we conclude this thesis by summarize the contribution of our own work, as described in the earlier chapters, in Section 9.1, and identifying key future challenges and opportunities for the \ufb01eld in Section 9.2. 9.1 A summary of our contributions This thesis summarizes a substantial body of work on neural methods for text retrieval. We ground our contributions by presenting a thorough survey of the \ufb01eld. We highlight the challenges that are unique to IR and use them to motivate novel learning approaches and model architectures. We begin with Duet\u2014a neural model that gathers evidence of a document\u2019s relevance to a query by inspecting patterns of query term matches in the document as well as learning latent query and document representation for matching. The proposed model achieves state-of-the-art performance on several public and proprietary benchmarks\u2014on IR tasks that involve ranking long text documents or short passages. The performance of the model is particularly promising when large quantities of examples are available for training. The scope of impact of neural IR models is limited, if restricted only to late stage re-ranking. Therefore, we incorporate a query term independence assumption to re-design the Duet model. The re-architected model is amenable to full precomputation while retaining all the effectiveness of the original Duet architecture. 
This opens the opportunity to employ deep neural models, like Duet and BERTbased ranking, for ef\ufb01cient retrieval from the full collection. While, learning to rank methods traditionally focus on producing a static rank\f218 Chapter 9. General Conclusions ing, we also explore an optimization strategy for stochastic ranking. We argue that in real world retrieval systems, it makes sense to measure and optimize towards expected exposure of retrieved items, in the pursuit of fairness and diversity related outcomes. We demonstrate the usefulness of deep neural network based approaches to IR tasks beyond document and passage retrieval, such as query auto-completion and session modeling. Finally, we initiate a large-scale benchmarking effort for neural IR methods at TREC and report our key \ufb01ndings. The body of work described in this thesis was not conducted in isolation. We conducted several other studies, in collaboration, focused on neural IR that we do not describe here. These efforts focused on exploring schemes for explicit regularization [141, 144] during model training, studying reinforcement learning based approaches to retrieval [79], designing neural ranking models for structured documents [389], prototyping proactive retrieval systems [62], and even contributing to general purpose neural toolkits [339]. 9.2 The Future of neural IR An ideal IR model would be able to infer the meaning of a query from context. Given a query about the Prime Minister of UK, for example, it may be obvious from context whether it refers to John Major or Teresa May\u2014perhaps due to the time period of the corpus, or it may need to be disambiguated based on other context such as the other query terms or the user\u2019s short or long-term history. If the model learns a representation that encodes this context, perhaps making Prime Minister close to Teresa May in a latent space, it is like a library. To scale to a large corpus, this memorization would need to cover a massive number of connections between entities and contexts, which could potentially be limited by model capacity. Memorization could also cause update problems\u2014e.g., if there is a new Prime Minister but the model still refers to the old one\u2014or encode problematic societal biases [482]. To avoid these problems, another design could avoid memorizing connections in the corpus, and instead perform some per-query process that reads the \f9.2. The Future of neural IR 219 corpus and perhaps even reasons about the content, like a librarian. Many of the breakthroughs in deep learning have been motivated by the needs of speci\ufb01c application areas. Convolutional neural networks, for example, are commonly employed by the vision community, whereas recurrent architectures \ufb01nd more applications in speech recognition and NLP. It is likely that the speci\ufb01c nature of IR tasks and data will inform our choice of neural architectures and drive us towards new designs. Future IR explorations may also be motivated by developments in related areas, such as NLP. Neural architectures that have been evaluated on nonIR tasks [483\u2013487] can be investigated in the retrieval context. New methods for training neural IR models\u2014e.g., using reinforcement [79, 488, 489] or adversarial learning [141, 289]\u2014may also emerge as important directions for future explorations. 
In particular, large scale unsupervised training of language models\u2014e.g., BERT [327]\u2014have already demonstrated signi\ufb01cant jump in retrieval performance on public benchmarks [15]. However, given the pace at which the area of deep learning is growing, in terms of the number of new architectures and training regimes, we should be wary of the combinatorial explosion of trying every model on every IR task. We should not disproportionately focus on maximizing quantitative improvements and in the process, neglect theoretical understanding and qualitative insights. It would be a bad outcome for the \ufb01eld if these explorations do not grow our understanding of the fundamental principles of machine learning and information retrieval. Neural models should not be the hammer that we try on every IR task, or we may risk reducing every IR task to a nail.1 Rather, these new models should also be the lens through which researchers gain new insights into the underlying principles of IR tasks. This may imply that sometimes we prefer neural models that, if not interpretable, then at least are amenable to analysis and interrogation. We may elicit more insights from simpler models while more sophisticated models may achieve state-of-the-art performances. As a community, we may need to focus on both to achieve results that are both impactful as well as insightful. 1https://en.wikipedia.org/wiki/Law_of_the_instrument \f220 Chapter 9. General" + }, + { + "url": "http://arxiv.org/abs/2011.07368v2", + "title": "Conformer-Kernel with Query Term Independence at TREC 2020 Deep Learning Track", + "abstract": "We benchmark Conformer-Kernel models under the strict blind evaluation\nsetting of the TREC 2020 Deep Learning track. In particular, we study the\nimpact of incorporating: (i) Explicit term matching to complement matching\nbased on learned representations (i.e., the \"Duet principle\"), (ii) query term\nindependence (i.e., the \"QTI assumption\") to scale the model to the full\nretrieval setting, and (iii) the ORCAS click data as an additional document\ndescription field. We find evidence which supports that all three\naforementioned strategies can lead to improved retrieval quality.", + "authors": "Bhaskar Mitra, Sebastian Hofstatter, Hamed Zamani, Nick Craswell", + "published": "2020-11-14", + "updated": "2021-02-11", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.AI", + "cs.LG" + ], + "main_content": "Introduction The Conformer-Kernel (CK) model [Mitra et al., 2020] builds upon the Transformer-Kernel (TK) [Hofst\u00e4tter et al., 2019] architecture, that demonstrated strong competitive performance compared to BERT-based [Devlin et al., 2019] ranking methods, but notably at a fraction of the compute and GPU memory cost, at the TREC 2019 Deep Learning track [Craswell et al., 2020b]. Notwithstanding these strong results, the TK model suffers from two clear de\ufb01ciencies. Firstly, because the TK model employs stacked Transformers for query and document encoding, it is challenging to incorporate long body text into this model as the GPU memory requirement of Transformers\u2019 self-attention layers grows quadratically with respect to input sequence length. So, for example, to increase the limit on the maximum input sequence length by 4\u00d7 from 128 to 512 we would require 16\u00d7 more GPU memory for each of the self-attention layers in the model. 
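A small sketch makes this growth concrete: standard scaled dot-product self-attention materializes an n × n attention matrix, so quadrupling the sequence length multiplies that matrix's memory by sixteen. The embedding sizes and dtype below are illustrative assumptions, not the configuration of any model discussed here.

```python
import torch
import torch.nn.functional as F

def self_attention(q, k, v):
    # q, k: (n, d_key); v: (n, d_value)
    attn = F.softmax(q @ k.transpose(0, 1) / k.shape[-1] ** 0.5, dim=-1)  # (n, n) matrix
    return attn @ v, attn

d_key, d_value = 64, 64  # illustrative sizes
for n in (128, 512):
    q, k, v = torch.randn(n, d_key), torch.randn(n, d_key), torch.randn(n, d_value)
    _, attn = self_attention(q, k, v)
    print(n, attn.numel() * attn.element_size(), "bytes for the attention matrix")
# Going from n = 128 to n = 512 grows the attention matrix's memory by a factor of 16.
```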
Considering that documents can contain thousands of terms, this limits the model to inspecting only a subset of the document text which may have negative implications, such as poorer retrieval quality and under-retrieval of longer documents [Hofst\u00e4tter et al., 2020]. Secondly, the original TK model was designed for the reranking task and requires that every document in a given candidate set be evaluated individually with respect to the query. This is problematic if we want to use the model to retrieve from the full collection which may contain millions, if not billions, of documents. Zamani et al. [2018a] raised this concern for the \ufb01rst time and addressed it by learning sparse representations for query and documents for inverted indexing. Later, Mitra et al. [2019] proposed an alternative solution based on the query term independence (QTI) assumption, which was adopted by Mitra et al. [2020]. They replaced the Transformer layers with novel Conformer counterparts and incorporated the QTI assumption into the model design. In their original paper, Mitra et al. [2020] compared their model to other retrieval methods, under the full retrieval setting, based on the test set from the TREC 2019 Deep Learning track [Craswell et al., 2020b] for which both the queries and relevance labels are currently available publicly. This evaluation is less stringent than participating in the of\ufb01cial annual TREC benchmarking because: (a) it allows the experimenter to run multiple evaluations against the test set which may lead to over\ufb01tting, and (b) it uses pre-collected labels which may not cover additional relevant documents \u2217Work done while at Microsoft. arXiv:2011.07368v2 [cs.IR] 11 Feb 2021 \fd1 d2 d3 d4 d5 Stacked Conformers q1 q2 q3 Aggregator with Windowed Kernel-Pooling Aggregator with Windowed Kernel-Pooling Aggregator with Windowed Kernel-Pooling + Embed Embed Embed Embed Embed Embed Embed Embed Figure 1: The NDRM1 variant of the CK model with QTI. that a new model may surface and consequently under-report the performance of dramatically new approaches [Yilmaz et al., 2020]. Therefore, in this work, we evaluate the model under the stricter TREC benchmarking setting in the 2020 edition of the Deep Learning track [Craswell et al., 2020c]. 2 TREC 2020 Deep Learning track The TREC 2020 Deep Learning track [Craswell et al., 2020c] uses the same training data as the previous year [Craswell et al., 2020b], which was originally derived from the MS MARCO dataset [Bajaj et al., 2016]. However, the track provides a new blind test set for the second year. In our work, we only consider the document ranking task, although the track also allows participants to evaluate their models on passage ranking. The training data for the document ranking task consists of 384, 597 positively labeled query-document pairs. The test set comprised of 200 queries out of which 45 queries were selected by NIST for judging. We report four relevance metrics\u2014NDCG@10 [J\u00e4rvelin and Kek\u00e4l\u00e4inen, 2002], NCG@100 [Rosset et al., 2018], AP [Zhu, 2004], and RR [Craswell, 2009]\u2014computed over these 45 queries. Under the rerank setting, each model is expected to re-order a set of 100 candidate documents provided per query, and under the fullrank setting each model must retrieve a ranked list of maximum hundred documents from a collection containing 3, 213, 835 documents in response to each query. 
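For reference, a minimal sketch of two of the reported metrics, NDCG@10 over graded labels and reciprocal rank, computed from an in-memory run and qrels. This is a generic illustration rather than the track's official trec_eval invocation; both the gain convention and the relevance threshold used for RR are assumptions that may differ from the official setup.

```python
import math

def ndcg_at_k(ranking, grades, k=10):
    """One common NDCG formulation with linear gains (rel / log2(rank + 1));
    gain conventions vary across tools, so exact numbers may differ from trec_eval."""
    dcg = sum(grades.get(doc, 0) / math.log2(i + 2) for i, doc in enumerate(ranking[:k]))
    ideal = sum(g / math.log2(i + 2)
                for i, g in enumerate(sorted(grades.values(), reverse=True)[:k]))
    return dcg / ideal if ideal > 0 else 0.0

def reciprocal_rank(ranking, grades, min_grade=1, k=100):
    """RR of the first result whose grade meets min_grade; the threshold is an
    assumption here and may not match the track's binarization."""
    for i, doc in enumerate(ranking[:k]):
        if grades.get(doc, 0) >= min_grade:
            return 1.0 / (i + 1)
    return 0.0
```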
3 Conformer-Kernel with Query Term Independence The CK models combine novel Conformer layers with several other existing ideas from the neural information retrieval literature [Mitra, 2021, Mitra and Craswell, 2018, Guo et al., 2020]. We use the publicly available implementation2 of CK models in our work, and adopt the same model taxonomy as in the code to describe the different variants. The NDRM1 variant builds on the TK architecture [Hofst\u00e4tter et al., 2019] by incorporating two key changes: (i) It replaces the Transformer layers with Conformer layers, and (ii) factorizes the model to incorporate the QTI assumption. Figure 1 visualizes the NDRM1 architecture. Unlike other attempts [Hofst\u00e4tter et al., 2020] at extending the TK architecture to long text by treating the document as a collection of passages, the Conformer layer replaces the standard self-attention mechanism with a separable self-attention mechanism whose memory complexity of O(n \u00d7 dkey)\u2014where n is input sequence length and dkey is the size of the learned key embeddings\u2014is a signi\ufb01cant improvement over the quadratic O(n2) complexity of standard self-attention. Furthermore, the Conformer layer complements the self-attention with an additional convolutional layer to more accurately model local context within the text. Next, to incorporate query term independence, the model evaluates the relevance of the document to each query term independently and then linearly combines those relevance estimates to obtain the aggregated estimate for the full query. By incorporating 2https://github.com/bmitra-msft/TREC-Deep-Learning-Quick-Start 2 \fTable 1: Of\ufb01cial TREC results. All metrics are computed at a rank threshold of 100, unless explicitly speci\ufb01ed. Run description Run ID Subtask NDCG@10 NCG@100 AP RR NDRM1 ndrm1-full fullrank 0.5991 0.6280 0.3858 0.9333 NDRM1 ndrm1-re rerank 0.6161 0.6283 0.4150 0.9333 NDRM3 ndrm3-re rerank 0.6162 0.6283 0.4122 0.9333 NDRM3 ndrm3-full fullrank 0.6162 0.6626 0.4069 0.9333 NDRM3 + ORCAS ndrm3-orc-re rerank 0.6217 0.6283 0.4194 0.9241 NDRM3 + ORCAS ndrm3-orc-full fullrank 0.6249 0.6764 0.4280 0.9444 the QTI assumption, we can precompute all term-document scores at indexing time and employ an inverted index data structure to perform fast retrieval at query time. The NDRM2 model can be described as a learned relevance function that only inspects the count of exact matches of query terms in the document and bears a similar form as BM25 [Robertson et al., 2009]. Similar to BM25, the NDRM2 model is also compliant with the QTI assumption. A linear combination of NDRM1 and NDRM2 gives us the NDRM3 model. This strategy of combining an exact term matching subnetwork with a representation learning based matching subnetwork has been previously studied in the context of the Duet architecture [Mitra et al., 2017, Mitra and Craswell, 2019a, Nanni et al., 2017, Mitra and Craswell, 2019b], and have been reported to be speci\ufb01cally effective under the full retrieval setting [Mitra et al., 2016, 2020, Kuzi et al., 2020, Gao et al., 2020, Wrzalik and Krechel, 2020]. Because of the limit on the number of run submission to TREC, we only evaluate the NDRM1 and NDRM3 models in this work, although we have con\ufb01rmed on the TREC 2019 test set that the NDRM2 model is competitive with a well-tuned BM25 baseline. 
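A toy sketch of the two ideas just described, the query-term-independent factorization and the linear mix of latent and exact matching signals, is shown below. It is a schematic stand-in with simplified components (an embedding-similarity term scorer instead of Conformer layers with windowed kernel-pooling), not the released CK implementation.

```python
import torch
import torch.nn as nn

class ToyQTIScorer(nn.Module):
    """Schematic query-term-independent (QTI) ranker: the document is scored
    against each query term separately and the per-term scores are summed.
    Each per-term score mixes a latent signal (max embedding similarity over
    document tokens) with an exact-match count, echoing the NDRM1 + NDRM2
    combination that yields NDRM3. Purely illustrative."""

    def __init__(self, vocab_size, dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.mix = nn.Parameter(torch.tensor([1.0, 1.0]))  # weights for [latent, exact]

    def term_score(self, term_id, doc_ids):
        doc_vecs = self.embed(doc_ids)                                   # (doc_len, dim)
        term_vec = self.embed(term_id)                                   # (dim,)
        sims = torch.cosine_similarity(doc_vecs, term_vec.unsqueeze(0), dim=-1)
        latent = sims.max()                                              # best soft match
        exact = (doc_ids == term_id).float().sum()                       # exact-match count
        return self.mix[0] * latent + self.mix[1] * exact

    def forward(self, query_ids, doc_ids):
        # QTI: the full-query score is a sum of independent per-term scores.
        return torch.stack([self.term_score(t, doc_ids) for t in query_ids]).sum()

# scorer = ToyQTIScorer(vocab_size=30522)
# score = scorer(torch.tensor([101, 2054]), torch.tensor([2054, 2003, 1037, 2307]))
```

Because the full-query score is a sum of per-term scores, each term-document score can in principle be computed once offline and reused at query time.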
For the second edition of the TREC Deep Learning track, participants were also provided a click log dataset called ORCAS [Craswell et al., 2020a] that can be used in any way the participants deem appropriate. We use clicked queries in the ORCAS data as additional meta description for the corresponding documents to complement the intrinsic document content in the form of URL, title, and body text. While previous work [Zamani et al., 2018b] have explored using \ufb01elded document input representation in the context of deep neural ranking models, in this work we simply concatenate the text from different \ufb01elds to produce a \ufb02at unstructured input representation of the document that is fed into the model. We test each model variant under both the rerank and the fullrank settings of the document ranking task in the Deep Learning track. We use the same hyperparameters and other con\ufb01guration settings as prescribed by Mitra et al. [2020]. 4 Results This year at TREC, we focus our study on the following four research questions. RQ1. How does our best run perform competitively compared to other submissions? Table 1 summarizes the relevance metrics corresponding to all the submitted runs. According to the taxonomy proposed by Craswell et al. [2020b], the CK models can be described as \u201cnn\u201d models\u2014i.e., neural models without large scale pretraining as has been popularized by models like BERT [Devlin et al., 2019]. Figure 2 shows that our best run \u201cndrm3-orc-full\u201d was also the best performing \u201cnn\u201d run on both NDCG@10 and NCG@100. Furthermore, on NDCG@10 our best run outperforms two-third of the \u201cnnlm\u201d runs while also requiring signi\ufb01cantly less resources to train and evaluate compared to those models. It is also noteworthy, that \u201cndrm3-orc-full\u201d employs a single-stage ranking, whereas all the runs that outperform it implement some form of cascaded ranking [Wang et al., 2011, Matveeva et al., 2006] with multiple rank-and-prune stages. With respect to the full retrieval setting, we note that \u201cndrm3-orc-full\u201d improves NCG@100 by +0.0481 over the provided candidates for the reranking setting, which puts it among the 10 top performing runs according to NCG@100. Finally, Figure 3 shows the per-query performance of our best and worst performing runs compared to the median performance. This \ufb01gure provides further evidence that the CK models achieve competitive retrieval quality among all the track submissions this year. RQ2. Does explicit term matching improve retrieval quality? To shed light on this question, we compare the NDRM1 and the NDRM3 models, where the only difference between the two models is that the latter incorporates the explicit term matching signal while former does not. We \ufb01nd that under the reranking setting\u2014i.e., when comparing the \u201cndrm1-re\u201d and the \u201cndrm3-re\u201d runs\u2014there is no clear evidence that the explicit term matching is bene\ufb01cial. This is likely because the candidate documents for reranking were generated by a \ufb01rst-stage BM25 ranker and hence the explicit term matching signal is already part of the end-to-end retrieval stack. 
However, under the fullrank setting\u2014i.e., 3 \f0.3 0.4 0.5 0.6 0.7 0.8 0.9 NDCG@10 best nnlm run best msai run best other nn run best trad run nnlm msai other nn trad (a) NDCG@10 0.4 0.5 0.6 0.7 0.8 0.9 NCG@100 best nnlm run best trad run best msai run rerank runs nnlm trad msai other nn (b) NCG@100 Figure 2: Comparing our runs with runs submitted by other groups. We adopt the same \u201cnnlm\u201d, \u201cnn\u201d, and \u201ctrad\u201d taxonomy for models as in the track overview [Craswell et al., 2020c]. All our runs are \u201cnn\u201d runs under this classi\ufb01cation but we label them speci\ufb01cally as \u201cmsai\u201d to distinguish from \u201cother nn runs\u201d. The runs in each plot are sorted independently based on the corresponding metric. 4 \f0.0 0.2 0.4 0.6 0.8 1.0 NDCG@10 difference between a hotel and motel average salary for dental hygienist in nebraska how long does it take to remove wisdom tooth who is aziz hashim who said no one can make you feel inferior how many sons robert kraft has who is rep scalise? who was the highest career passer rating in the nfl who sings monk theme song average annual income data analyst what type of conflict does della face in o, henry the gift of the magi how often to button quail lay eggs difference between a company's strategy and business model is where is the show shameless filmed why did the ancient egyptians call their land kemet, or black land? what medium do radio waves travel through what is a alm who is thomas m cooley what metal are hip replacements made of do google docs auto save what is mamey why is pete rose banned from hall of fame what is chaff and flare does mississippi have an income tax how old is vanessa redgrave meaning of shebang dog day afternoon meaning average wedding dress alteration cost what is reba mcentire's net worth what does a psychological screening consist of for egg donors what is a statutory deed what is chronometer who invented it define: geon what amino produces carnitine why does lacquered brass tarnish how much would it cost to install my own wind turbine why do hunters pattern their shotguns? who killed nicholas ii of russia what temperature and humidity to dry sausage when did family feud come out? when did rock n roll begin? what is a nonconformity? earth science can fever cause miscarriage early pregnancy how much money do motivational speakers make definition of laudable ndrm3-orc-full ndrm1-full median Figure 3: Per-query comparison between our worst performing run (\u201cndrm1-full\u201d) and our best performing run (\u201cndrm3-orc-full\u201d) based on the NDCG@10 metric. Median NDCG@10 across all track submissions also shown for reference. 5 \f0.0 0.2 0.4 0.6 0.8 1.0 NCG@100 who sings monk theme song average salary for dental hygienist in nebraska what is chaff and flare how long does it take to remove wisdom tooth what is a nonconformity? earth science average wedding dress alteration cost what is a statutory deed average annual income data analyst who said no one can make you feel inferior can fever cause miscarriage early pregnancy how old is vanessa redgrave what is a alm what type of conflict does della face in o, henry the gift of the magi what temperature and humidity to dry sausage what medium do radio waves travel through what is reba mcentire's net worth how many sons robert kraft has why did the ancient egyptians call their land kemet, or black land? 
what metal are hip replacements made of who killed nicholas ii of russia where is the show shameless filmed why do hunters pattern their shotguns? when did family feud come out? what does a psychological screening consist of for egg donors difference between a company's strategy and business model is what is chronometer who invented it do google docs auto save difference between a hotel and motel definition of laudable does mississippi have an income tax what amino produces carnitine who is rep scalise? who is thomas m cooley how often to button quail lay eggs why is pete rose banned from hall of fame who was the highest career passer rating in the nfl meaning of shebang how much money do motivational speakers make when did rock n roll begin? why does lacquered brass tarnish who is aziz hashim dog day afternoon meaning what is mamey define: geon how much would it cost to install my own wind turbine ndrm3-full ndrm1-full Figure 4: Per-query comparison between the \u201cndrm1-full\u201d and the \u201cndrm3-full\u201d runs based on the NCG@100 metric. 6 \fwhen comparing the \u201cndrm1-full\u201d and the \u201cndrm3-full\u201d runs\u2014we see moderate improvements across all metrics: 2.9% improvement in NDCG@10 and 5.5% improvement in both AP and NCG@100. These observations are supported by Kuzi et al. [2020], who \ufb01nd that exact term matching are important for the fullrank setting, and also by Xiong et al. [2020] who observe that their proposed model which does not incorporate exact matching fare better in the rerank setting than on the fullrank subtask. Figure 4 compares how the \u201cndrm1-full\u201d and the \u201cndrm3-full\u201d runs perform on the 45 different queries in the test set. Based on a qualitative inspection of the queries, it appears that exact term matching may be important for queries containing named entities\u2014e.g., \u201cwho is aziz hashim\u201d and \u201cwhy is pete rose banned from hall of fame\u201d\u2014where it is necessary to ensure that the retrieved documents are about the correct entity. RQ3. How does the retrieval quality differ for our model between the fullrank and the rerank setting? As expected, we \ufb01nd that without exact term matching, the retrieval quality for CK models are lower under the fullrank setting compared to the rerank setting\u2014i.e., \u201cndrm1-re\u201d is better than \u201cndrm1-full\u201d. In contrast, when exact term matching is incorporated, the CK model achieves 5.5% improvement in NCG, which is a recall-oriented metric, in the fullrank setting (\u201cndrm3-full\u201d) compared to its counterpart under the rerank setting (\u201cndrm3-re\u201d). However, on all the other metrics we see no difference (NDCG@10 and RR) or small regression (1.3% for AP) under the fullrank setting. Finally, if we introduce the ORCAS data\u2014i.e., compare \u201cndrm3-orc-full\u201d and \u201cndrm3-orc-re\u201d\u2014we see improvements under the fullrank setting across all metrics: 7.7% for NCG@100, 2.2% for RR, 2.1% for AP, and 0.5% for NDCG@10. In adhoc retrieval, a common strategy involves sequentially cascading multiple rank-and-prune stages [Matveeva et al., 2006, Wang et al., 2011, Chen et al., 2017, Gallagher et al., 2019, Nogueira et al., 2019] for better effectivenessef\ufb01ciency trade-offs. Following a similar strategy, we may be able to improve on these results by introducing additional reranking stages on top of a \ufb01rst stage retrieval using query term independent CK models. 
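A minimal sketch of such a cascade is shown below: a cheap first stage produces candidates that a more expensive model then reranks. Both scoring functions are illustrative stand-ins, and a real first stage would score against an index rather than iterate over the collection as done here for brevity.

```python
def cascade_retrieve(query, collection, first_stage_score, rerank_score,
                     k_candidates=1000, k_final=100):
    """Two-stage rank-and-prune (telescoping) pipeline.

    first_stage_score(query, doc) is assumed cheap (e.g., a QTI model whose
    term-document scores were precomputed into an inverted index), while
    rerank_score(query, doc) can be a costlier neural model applied only to
    the surviving candidates."""
    candidates = sorted(collection, key=lambda d: first_stage_score(query, d),
                        reverse=True)[:k_candidates]
    reranked = sorted(candidates, key=lambda d: rerank_score(query, d),
                      reverse=True)[:k_final]
    return reranked
```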
We anticipate that this may be an interesting area for future exploration. RQ4. Does using ORCAS queries as an additional document description \ufb01eld improve retrieval quality? Finally, we want to study if the incorporation of click log datasets, such as ORCAS [Craswell et al., 2020a], can be bene\ufb01cial for retrieval quality. We \ufb01nd that on the rerank subtask, both NDCG@10 and AP improve by 0.9% and 1.7%, respectively, although RR degrades by 1%. On the fullrank subtask, the addition of ORCAS signal seems to improve all metrics: AP by 5.2%, NCG@100 by 2.1%, NDCG@10 by 1.4%, and RR by 1.2%. These results indicate that ORCAS, and other similar click log datasets, may be useful for achieving better retrieval relevance. 5" + }, + { + "url": "http://arxiv.org/abs/2007.10434v1", + "title": "Conformer-Kernel with Query Term Independence for Document Retrieval", + "abstract": "The Transformer-Kernel (TK) model has demonstrated strong reranking\nperformance on the TREC Deep Learning benchmark---and can be considered to be\nan efficient (but slightly less effective) alternative to BERT-based ranking\nmodels. In this work, we extend the TK architecture to the full retrieval\nsetting by incorporating the query term independence assumption. Furthermore,\nto reduce the memory complexity of the Transformer layers with respect to the\ninput sequence length, we propose a new Conformer layer. We show that the\nConformer's GPU memory requirement scales linearly with input sequence length,\nmaking it a more viable option when ranking long documents. Finally, we\ndemonstrate that incorporating explicit term matching signal into the model can\nbe particularly useful in the full retrieval setting. We present preliminary\nresults from our work in this paper.", + "authors": "Bhaskar Mitra, Sebastian Hofstatter, Hamed Zamani, Nick Craswell", + "published": "2020-07-20", + "updated": "2020-07-20", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.CL", + "cs.LG" + ], + "main_content": "Introduction In the inaugural year of the TREC Deep Learning track [Craswell et al., 2019], Transformer-based [Vaswani et al., 2017] ranking models demonstrated substantial improvements over traditional information retrieval (IR) methods. Several of these approaches\u2014e.g., [Yilmaz et al., 2019, Yan et al., 2019]\u2014employ BERT [Devlin et al., 2018], with large-scale pretraining, as their core architecture. Diverging from the trend of BERT-scale models, Hofst\u00e4tter et al. [2020b] propose the Transformer-Kernel (TK) model with few key distinctions: (i) TK uses a shallower model with only two Transformer layers, (ii) The parameters of the model are randomly initialized prior to training (skipping the computation-intensive pretraining step), and \ufb01nally (iii) TK encodes the query and the document independently of each other allowing for of\ufb02ine precomputations for faster response times. Consequently, TK achieves competitive performance at a fraction of the training and inference cost of its BERT-based peers. Notwithstanding these ef\ufb01ciency gains, the TK model shares two critical drawbacks with other Transformer-based models. Firstly, the memory complexity of the self-attention layers is quadratic O(n2) with respect to the length n of the input sequence. This restricts the number of document terms that the model can inspect under \ufb01xed GPU memory budget. A trivial workaround involves inspecting only the \ufb01rst k terms of the document. 
This approach can not only negatively impact retrieval quality, but has been shown to specifically under-retrieve longer documents [Hofst\u00e4tter et al., 2020a]. Secondly, in any real IR system, it is impractical to evaluate every document in the collection for every query\u2014and therefore these systems typically either enforce some sparsity property to drastically narrow down the set of documents that should be evaluated or find ways to prioritize the candidates for evaluation. TK employs a nonlinear matching function over query-document pairs. Therefore, it is not obvious how the TK function can be directly used to retrieve from the full collection without exhaustively comparing every document to the query. This restricts TK\u2019s scope of application to late stage reranking of smaller candidate sets that may have been identified by simpler retrieval models. In this work, we extend the TK architecture to enable direct retrieval from the full collection of documents. Towards that goal, we incorporate three specific changes: 1. To scale to long document text, we replace each instance of the Transformer layer with a novel Conformer layer whose memory complexity is O(n \u00d7 d_{key}), instead of O(n^2). 2. To enable fast retrieval with TK, we incorporate the query term independence assumption [Mitra et al., 2019] into the architecture. 3. And finally, as Mitra et al. [2016, 2017] point out, lexical term matching can complement latent matching models, and the combination can be particularly effective when retrieving from the full collection of candidates. So, we extend TK with an explicit term matching submodel to minimize the impact of false positive matches in the latent space. We describe the full model and present preliminary results from our work in this paper. 2 Related work 2.1 Scaling self-attention to long text The self-attention layer, as proposed by Vaswani et al. [2017], can be described as follows: Self-Attention(Q, K, V) = \\Phi(QK^{\\top} / \\sqrt{d_k}) \\cdot V (1), where Q \\in R^{n \\times d_{key}}, K \\in R^{n \\times d_{key}}, and V \\in R^{n \\times d_{value}} are the query, key, and value matrices\u2014and d_{key} and d_{value} are the dimensions of the key and value embeddings, respectively, and n is the length of the input sequence. We use \u03a6 to denote the softmax operation applied along the last dimension of the input matrix. The quadratic O(n^2) memory complexity of self-attention is a direct consequence of the component QK^{\\top} that produces a matrix of size n \u00d7 n. Recently, an increasing number of different approaches have been proposed in the literature to get around this quadratic complexity. Broadly speaking, most of these approaches can be classified as either: (i) Restricting self-attention to smaller windows over the input sequence which results in a memory complexity of O(n \u00d7 m), where m is the window size\u2014e.g., [Parmar et al., 2018, Dai et al., 2019, Yang et al., 2019, Sukhbaatar et al., 2019], or (ii) Operating under the assumption that the attention matrix is low rank r and hence finding alternatives to explicitly computing the QK^{\\top} matrix to achieve a complexity of O(n \u00d7 r)\u2014e.g., [Kitaev et al., 2019, Roy et al., 2020, Tay et al., 2020, Wang et al., 2020], or (iii) A hybrid of both approaches\u2014e.g., [Child et al., 2019, Beltagy et al., 2020, Wu et al., 2020]. In the IR literature, recently Hofst\u00e4tter et al.
[2020a] have extended the TK model to longer text using the local window-based attention approach. Other more general approaches to reducing the memory footprint of very deep models, such as model parallelization have also been extended to Transformer models [Shoeybi et al., 2019]. For more general primer on self-attention and Transformer architectures, we point the reader to Weng [2018, 2020]. 2.2 Full retrieval with deep models Ef\ufb01cient retrieval using complex machine learned relevance functions is an important challenge in neural IR [Mitra and Craswell, 2018, Guo et al., 2019]. One family of approaches involves the dual encoder architecture where the query and document are encoded independently of each other, and ef\ufb01cient retrieval is achieved using approximate nearestneighbour search [Lee et al., 2019, Chang et al., 2020, Karpukhin et al., 2020, Ahmad et al., 2019, Khattab and Zaharia, 2020] or by employing other data structures, such as learning an inverted index based on latent representations [Zamani et al., 2018]. Precise matching of terms or concepts may be dif\ufb01cult using query-independent latent document representations [Luan et al., 2020], and therefore these models are often combined with explicit term matching methods [Nalisnick et al., 2016, Mitra et al., 2017]. Xiong et al. [2020] have recently demonstrated that the training data distribution can also signi\ufb01cantly in\ufb02uence the performance of dual encoder models under the full retrieval setting. Auxilliary optimization objectives can also help guide the training of latent matching models to \ufb01nd solutions that emphasize more precise matching of terms and concepts [Rosset et al., 2019]. An alternative approach assumes query term independence (QTI) in the design of the neural ranking model [Mitra et al., 2019]. For these family of models, the estimated relevance score Sq,d is factorized as a sum of the estimated relevance of the document to each individual query term. Sq,d = X t\u2208q st,d (2) Readers should note that the QTI assumption is already baked into several classical IR models, like BM25 [Robertson et al., 2009]. Relevance models with QTI assumption can be used to precompute all term-document scores of\ufb02ine. The precomputed scores can be subsequently leveraged for ef\ufb01cient search using inverted-index data structures. 2 \fSeveral recent neural IR models [Mitra et al., 2019, Dai and Callan, 2019b,a, Mackenzie et al., 2020, Dai and Callan, MacAvaney et al., 2020] that incorporate the QTI assumption have obtained promising results under the full retrieval setting. Document expansion based methods [Nogueira et al., 2019b,a], using large neural language models, can also be classi\ufb01ed as part of this family of approaches, assuming the subsequent retrieval step employs a traditional QTI model like BM25. In all of these approaches, the focus of the machine learned function is to estimate the impact score of the document with respect to individual terms in the vocabulary, which can be precomputed of\ufb02ine during index creation. An obvious alternative to document expansion based methods is to use the neural model to reformulate the query [Nogueira and Cho, 2017, Van Gysel et al., 2017, Ma et al., 2020]\u2014although these approaches have not yet demonstrated retrieval performance that can be considered competitive to other methods considered here. 
Finally, when the relevance of items is known, or a reliable proxy metric exists, machine learned policies [Kraska et al., 2018, Oosterhuis et al., 2018, Rosset et al., 2018] can also be effective for efficient search over indexes but these methods are not directly relevant to our current discussion. 3 Conformer-Kernel with QTI We begin by briefly describing the original TK model as outlined in Fig 1a. The initial word embedding layer in TK maps both query and document to their respective sequences of term embeddings, which are then passed through stacked Transformer layers to derive contextualized vector representations for the query and document terms. Based on the contextualized term embeddings, TK creates an interaction matrix X, such that X_{ij} is the cosine similarity between the contextualized embeddings of the ith query term q_i and the jth document term d_j: X_{ij} = \\cos(\\vec{v}_{q_i}, \\vec{v}_{d_j}) (3). The Kernel-Pooling stage then creates k distinct features per query term as follows: K_{ik} = \\log \\sum_{j=1}^{|d|} \\exp(-(X_{ij} - \\mu_k)^2 / (2\\sigma^2)) (4). Finally, the query-document relevance is estimated by a nonlinear function\u2014typically implemented as stacked feedforward layers\u2014over these features. Next, we describe the proposed changes to this base architecture. 3.1 Conformer In Section 2.1, we note that the quadratic memory complexity of the self-attention layers w.r.t. the length of the input sequence is a direct result of explicitly computing the attention matrix QK^{\\top} \\in R^{n \\times n}. In this work, we propose a new separable self-attention layer that allows us to avoid instantiating the full term-term attention matrix as follows: Separable-Self-Attention(Q, K, V) = \\Phi(Q) \\cdot A (5), where A = \\Phi(K^{\\top}) \\cdot V (6). As previously, \u03a6 denotes the softmax operation along the last dimension of the input matrix. Note that, however, in this separable self-attention mechanism, the softmax operation is employed twice: (i) \u03a6(Q) computes the softmax along the d_{key} dimension, and (ii) \u03a6(K^{\\top}) computes the softmax along the n dimension. By computing A \\in R^{d_{key} \\times d_{value}} first, we avoid explicitly computing the full term-term attention matrix. The memory complexity of the separable self-attention layer is O(n \u00d7 d_{key}), which is a significant improvement when d_{key} \u226a n. We modify the standard Transformer block as follows: 1. We replace the standard self-attention layer with the more memory efficient separable self-attention layer. 2. Furthermore, we apply grouped convolution before the separable self-attention layers to better capture the local context based on the window of neighbouring terms.
Figure 1: A comparison of the TK and the proposed CK-with-QTI architectures. In addition to replacing the Transformer layers with Conformers, the latter also simplifies the query encoding to non-contextualized term embedding lookup and incorporates a windowed Kernel-Pooling based aggregation that is employed independently per query term.
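As an illustration of Equations (5)-(6) above, the following minimal PyTorch-style sketch shows how the separable self-attention avoids materializing the n-by-n attention matrix; the function name, tensor shapes, and softmax dimensions follow the description above, but this is an illustrative approximation rather than the released implementation.

import torch
import torch.nn.functional as F

def separable_self_attention(Q, K, V):
    # Q, K: [batch, n, d_key]; V: [batch, n, d_value]
    # Eq. (6): softmax over the sequence dimension of K^T, multiplied by V
    # gives A with shape [batch, d_key, d_value].
    A = torch.matmul(F.softmax(K, dim=1).transpose(1, 2), V)
    # Eq. (5): softmax over the d_key dimension of Q, multiplied by A
    # gives the output with shape [batch, n, d_value].
    return torch.matmul(F.softmax(Q, dim=-1), A)

# For a 4000-term document with d_key = d_value = 256, peak memory grows with
# n * d_key rather than n * n, since no full term-term matrix is ever created.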
We refer to this combination of grouped convolution and Transformer with separable self-attention as a Conformer. We incorporate the Conformer layer into TK as a direct replacement for the existing Transformer layers and name the new architecture as a Conformer-Kernel (CK) model. In relation to handling long input sequences, we also replace the standard Kernel-Pooling with windowed Kernel-Pooling [Hofst\u00e4tter et al., 2020a] in our proposed architecture. 3.2 Query term independence assumption To incorporate the QTI assumption into TK, we make a couple of simple modifications to the original architecture. Firstly, we simplify the query encoder by getting rid of all the Transformer layers and only considering the non-contextualized embeddings for the query terms. Secondly, instead of applying the aggregation function over the full interaction matrix, we apply it to each row of the matrix individually, which corresponds to individual query terms. The scalar outputs from the aggregation function are linearly combined to produce the final query-document score. These proposed changes are shown in Fig 1b. 3.3 Explicit term matching We adopt the Duet [Nanni et al., 2017, Mitra and Craswell, 2019b,a] framework wherein the term-document score is a linear combination of outputs from a latent and an explicit matching model: s_{t,d} = w_1 \\cdot \\mathrm{BN}(s^{(latent)}_{t,d}) + w_2 \\cdot \\mathrm{BN}(s^{(explicit)}_{t,d}) + b (7), where {w_1, w_2, b} are learnable parameters and BN denotes the BatchNorm operation [Ioffe and Szegedy, 2015]: \\mathrm{BN}(x) = (x - \\mathbb{E}[x]) / \\sqrt{\\mathrm{Var}[x]} (8). We employ the CK model to compute s^{(latent)}_{t,d} and define a new lexical matching function modeled on BM25 for s^{(explicit)}_{t,d}: s^{(explicit)}_{t,d} = \\mathrm{IDF}_t \\cdot \\mathrm{BS}(\\mathrm{TF}_{t,d}) / (\\mathrm{BS}(\\mathrm{TF}_{t,d}) + \\mathrm{ReLU}(w_{dlen} \\cdot \\mathrm{BS}(|d|) + b_{dlen}) + \\epsilon) (9), where IDF_t, TF_{t,d}, and |d| denote the inverse-document frequency of the term t, the term-frequency of t in document d, and the length of the document, respectively. The w_{dlen} and b_{dlen} are the only two learnable parameters of this submodel and \u03f5 is a small constant added to prevent a divide-by-zero error. The BatchScale (BS) operation is defined as follows: \\mathrm{BS}(x) = x / (\\mathbb{E}[x] + \\epsilon) (10). 4 Experiments 4.1 Task and data We conduct preliminary experiments on the document retrieval benchmark provided as part of the TREC Deep Learning track [Craswell et al., 2019]. The benchmark is based on the MS MARCO dataset [Bajaj et al., 2016] and provides a collection of 3,213,835 documents and a training dataset with 384,597 positively labeled query-document pairs. Recently, the benchmark also made available a click log dataset, called ORCAS [Craswell et al., 2020], that can be employed as an additional document description field. We refer the reader to the track website1 for further details about the benchmark. Because we are interested in the full ranking setting, we do not make use of the provided document candidates and instead use the proposed model to retrieve from the full collection. We compare different runs based on the following three metrics: mean reciprocal rank (MRR) [Craswell, 2009], normalized discounted cumulative gain (NDCG) [J\u00e4rvelin and Kek\u00e4l\u00e4inen, 2002], and normalized cumulative gain (NCG) [Rosset et al., 2018].
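The learned BM25-shaped explicit matching submodel of Equation (9) can be sketched as follows; this is a minimal illustration assuming precomputed IDF, term-frequency, and document-length inputs, with parameter and module names chosen here for readability rather than taken from the released code.

import torch
import torch.nn as nn

class LearnedBM25(nn.Module):
    # Sketch of Eq. (9): a BM25-like saturation curve whose document-length
    # penalty is parameterized by two learnable scalars (w_dlen, b_dlen).
    def __init__(self, eps=1e-6):
        super().__init__()
        self.w_dlen = nn.Parameter(torch.tensor(1.0))
        self.b_dlen = nn.Parameter(torch.tensor(0.0))
        self.eps = eps

    def batch_scale(self, x):
        # Eq. (10): divide by the batch mean.
        return x / (x.mean() + self.eps)

    def forward(self, idf, tf, doc_len):
        # idf, tf, doc_len: [batch] tensors, one entry per (term, document) pair.
        tf_s = self.batch_scale(tf)
        len_penalty = torch.relu(self.w_dlen * self.batch_scale(doc_len) + self.b_dlen)
        return idf * tf_s / (tf_s + len_penalty + self.eps)

The resulting explicit score would then be batch-normalized and linearly combined with the CK latent score as in Equation (7).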
1 https://microsoft.github.io/TREC-2020-Deep-Learning/
Table 1: Full retrieval results based on the TREC 2019 Deep Learning track test set.
Model | MRR | NDCG@10 | NCG@100
Non-neural baselines
BM25+RM3 run with best NDCG@10 | 0.807 | 0.549 | 0.559
Non-neural run with best NDCG@10 | 0.872 | 0.561 | 0.560
Neural baselines
DeepCT run with best NDCG@10 | 0.872 | 0.554 | 0.498
BERT-based document expansion + reranking run with best NCG@10 | 0.899 | 0.646 | 0.637
BERT-based document expansion + reranking run with best NDCG@10 | 0.961 | 0.726 | 0.580
Our models
Conformer-Kernel | 0.845 | 0.554 | 0.464
Conformer-Kernel + learned BM25 | 0.906 | 0.603 | 0.533
Conformer-Kernel + learned BM25 + ORCAS field | 0.898 | 0.620 | 0.547
4.2 Model training We consider the first 20 terms for every query and the first 4000 terms for every document. When incorporating the ORCAS data as an additional document field, we limit the maximum length of the field to 2000 terms. We pretrain the word embeddings using the word2vec [Mikolov et al., 2013a,b,c] implementation in FastText [Joulin et al., 2016]. We use a concatenation of the IN and OUT embeddings [Nalisnick et al., 2016, Mitra et al., 2016] from word2vec to initialize the embedding layer parameters. The document encoder uses 2 Conformer layers and we set all the hidden layer sizes to 256. We set the window size for the grouped convolution layers to 31 and the number of groups to 32. Correspondingly, we also set the number of attention heads to 32. We set the number of kernels k to 10. For windowed Kernel-Pooling, we set the window size to 300 and the stride to 100. Finally, we set the dropout rate to 0.2. For further details, please refer to the publicly released model implementation in PyTorch.2 All models are trained on four Tesla P100 GPUs, with 16 GB memory each, using data parallelism. We train the model using the RankNet objective [Burges et al., 2005]. For every positively labeled query-document pair in the training data, we randomly sample one negative document from the provided top 100 candidates corresponding to the query and two negative documents from the full collection. In addition to making pairs between the positively labeled document and the three negative documents, we also create pairs between the negative document sampled from the top 100 candidates and those sampled from the full collection, treating the former as more relevant. This can be interpreted as incorporating a form of weak supervision [Dehghani et al., 2017] as the top candidates were previously generated using a traditional IR function. 5 Results Table 1 presents our main experiment results. As specified earlier, we evaluate our models on the full ranking setting without any explicit reranking step. The full model\u2014with both Conformer-Kernel and explicit matching submodel\u2014performs significantly better on NDCG@10 and MRR compared to the best traditional runs from the 2019 edition of the track. The model also outperforms the DeepCT baseline which is a QTI-based baseline using BERT. The other BERT-based baselines outperform our model by significant margins. We believe this observation should motivate future exploration on how to incorporate pretraining in the Conformer-Kernel model. Finally, we also notice improvements from incorporating the ORCAS data as an additional document descriptor field.
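Referring back to the training procedure in Section 4.2 above, the pair construction with mixed negatives can be sketched as below; the function and variable names are illustrative only, and the sampling is simplified relative to the actual training pipeline.

import random

def build_training_pairs(positive_doc, top100_candidates, collection_ids):
    # One negative from the provided top-100 candidates, two from the full collection.
    neg_top = random.choice([d for d in top100_candidates if d != positive_doc])
    neg_collection = random.sample(collection_ids, 2)
    pairs = [(positive_doc, neg) for neg in [neg_top] + neg_collection]
    # Weak supervision: a top-100 candidate (retrieved by a traditional IR function)
    # is treated as more relevant than documents sampled from the full collection.
    pairs += [(neg_top, neg) for neg in neg_collection]
    return pairs  # each (more_relevant, less_relevant) pair feeds the RankNet objective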
To demonstrate how the GPU memory consumption scales with respect to input sequence length, we plot the peak memory, across all four GPUs, for our proposed architecture using Transformer and Conformer layers, respectively, keeping all other hyperparameters and architecture choices fixed. Fig 2 shows the GPU memory requirement grows linearly with increasing sequence length for the Conformer, while quadratically when Transformer layers are employed. 6 Discussion and future work The proposed CK-with-QTI architecture provides several advantages, with respect to inference cost, compared to its BERT-based peers. In addition to a shallower model and more memory-efficient Conformer layers, the model allows for offline pre-encoding of documents during indexing. It is notable that the document encoder, containing the stacked Conformer layers, is the computationally costliest part of the model.
2 https://github.com/bmitra-msft/TREC-Deep-Learning-Quick-Start
Figure 2: Comparison of peak GPU Memory Usage in MB, across all four GPUs, when employing Transformers vs. Conformers in our proposed architecture. For the Transformer-based model, we only plot till sequence length of 512, because for longer sequences we run out of GPU memory when using Tesla P100s with 16 GB of memory.
In the proposed architecture, the document encoder needs to be evaluated only once per every document in the collection. This is in contrast to once per every query-document pair in the case of BERT-based ranking models that accept a concatenation of query and document as input [Nogueira and Cho, 2019], and once per every term-document pair in the case of BERT-based ranking models with QTI [Mitra et al., 2019]. While the present study demonstrates promising progress towards using TK-style architectures for retrieval from the full collection, it is worthwhile to highlight several challenges that need further exploration. More in-depth analysis of the distribution of term-document scores is necessary, which may divulge further insights about how sparsity properties and discretization can be enforced for practical operationalization of these models. Large scale pretraining in the context of these models also presents itself as an important direction for future studies. Finally, for the full retrieval setting, identifying appropriate negative document sampling strategies during training poses an important challenge that can strongly help or curtail the success these models achieve on these tasks. In the first year of the TREC Deep Learning track, there was a stronger focus on the reranking setting\u2014although some submissions explored document expansion and other QTI-based strategies. We anticipate that in the 2020 edition of the track, we will observe more submissions using neural methods for the full retrieval setting, which may further improve the reusability of the TREC benchmark [Yilmaz et al., 2020] for comparing this emerging family of approaches, and provide additional insights for our line of exploration." + }, + { + "url": "http://arxiv.org/abs/1912.04471v1", + "title": "Duet at TREC 2019 Deep Learning Track", + "abstract": "This report discusses three submissions based on the Duet architecture to the\nDeep Learning track at TREC 2019.
For the document retrieval task, we adapt the\nDuet model to ingest a \"multiple field\" view of documents---we refer to the new\narchitecture as Duet with Multiple Fields (DuetMF). A second submission\ncombines the DuetMF model with other neural and traditional relevance\nestimators in a learning-to-rank framework and achieves improved performance\nover the DuetMF baseline. For the passage retrieval task, we submit a single\nrun based on an ensemble of eight Duet models.", + "authors": "Bhaskar Mitra, Nick Craswell", + "published": "2019-12-10", + "updated": "2019-12-10", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.CL", + "cs.LG" + ], + "main_content": "Introduction The Duet architecture was proposed by Mitra et al. [2017] for document ranking. Fig. 7 from The original paper show that the retrieval effectiveness of the model is still improving as the size of the training data approaches 217 samples. The training data employed in that paper is a proprietary dataset from Bing. A similar plot was later reproduced on a public benchmark by Nanni et al. [2017], but in the context of a passage ranking dataset with synthetic queries. Variations of the Duet model [Mitra and Craswell, 2019, Mitra et al., 2019, Cohen et al., 2018] have since then been evaluated on other public passage ranking datasets. However, the lack of large scale training data prevented the public evaluation of Duet for document ranking. The deep learning track at TREC 2019 makes large training datasets\u2014suitable for traininig deep models with large number of learnable parameters\u2014publicly available in the context of a document ranking and a passage ranking tasks. We benchmark the Duet model on both tasks. In the context of the document ranking task, we adapt the Duet model to ingest a \u201cmultiple \ufb01eld\u201d view of the documents, based on \ufb01ndings from Zamani et al. [2018]. We refer to this new architecture as Duet with Multiple Fields (DuetMF) in the paper. Furthermore, we combine the relevance estimates from DuetMF with several other traditional and neural retrieval methods in a learning-to-rank (LTR) [Liu, 2009] framework. For the passage ranking task, we submit a single run based on an ensemble of eight Duet models. The architecture and the training scheme resembles that of the \u201cDuet V2 (Ensembled)\u201d baseline listed on the MS MARCO leaderboard1. 2 TREC 2019 deep learning track The TREC 2019 deep learning track introduces: (i) a document retrieval task and (ii) a passage retrieval task. For both tasks, participants are provided a set of candidates\u2014100 documents and 1000 passages, respectively\u2014per query that should be ranked. Participants can choose to either rerank provided candidates or retrieve from the full collection. For the passage retrieval task, the track reuses the set of 500K+ manually-assessed binary training labels released as part of the Microsoft Machine Reading COmprehension (MS MARCO) challenge [Bajaj et al., 2016]. For the document retrieval task, the passage-level labels are transferred to their corresponding source documents\u2014producing a training dataset of size close to 400K labels. 1http://www.msmarco.org/leaders.aspx arXiv:1912.04471v1 [cs.IR] 10 Dec 2019 \fTable 1: Of\ufb01cial TREC results. The recall metric is computed at position 100 for the document retrieval task and at position 1000 for the passage retrieval task. 
Run description | Run ID | Subtask | MRR | NDCG@10 | MAP | Recall
Document retrieval task
LTR w/ DuetMF as feature | ms_ensemble | fullrank | 0.876 | 0.578 | 0.237 | 0.368
DuetMF model | ms_duet | rerank | 0.810 | 0.533 | 0.229 | 0.387
Passage retrieval task
Ensemble of 8 Duet models | ms_duet_passage | rerank | 0.806 | 0.614 | 0.348 | 0.694
For evaluation, a shared test set of 200 queries is provided for both tasks, of which two different overlapping sets of 43 queries were later selected for manual NIST assessments corresponding to the two tasks. Full details of all datasets are available on the track website2 and in the track overview paper [Craswell et al., 2019]. 3 Methods and results The Duet model proposed by Mitra et al. [2017] employs two deep neural networks trained jointly towards a retrieval task: (i) the \u201cdistributed\u201d sub-model learns useful representations of text for matching and (ii) the \u201clocal\u201d sub-model estimates relevance based on patterns of exact matches of query terms in the document. Mitra and Craswell [2019] propose several modifications to the original Duet model that show improved performance on the MS MARCO passage ranking challenge. We adopt the updated Duet model from Mitra and Craswell [2019] and incorporate additional modifications, in particular to consider multiple fields for the document retrieval task. Table 1 summarizes the official evaluation results for all three runs. Duet model with Multiple Fields (DuetMF) for document ranking. Zamani et al. [2018] study neural ranking models in the context of documents with multiple fields. In particular, they make the following observations: Obs. 1: It is more effective to summarize the match between query and individual document fields by a vector\u2014as opposed to a single score\u2014before aggregating to estimate full document relevance to the query. Obs. 2: It is better to learn different query representations corresponding to each document field under consideration. Obs. 3: Structured dropout (e.g., field-level dropout) is effective for regularization during training. We incorporate all of these ideas to modify the Duet model from Mitra and Craswell [2019]. The updated model is shown in Fig. 1. Documents in the deep learning track dataset contain three text fields: (i) URL, (ii) title, and (iii) body. We employ the Duet architecture to match the query against each individual document field. In line with Obs. 1 from [Zamani et al., 2018], the field-specific Duet architecture outputs a vector instead of a single score. We do not share the parameters of the Duet architectures between the field-specific instances based on Obs. 2. Following Obs. 3, we introduce structured dropouts at different stages of the model. We randomly drop out each of the local sub-models for 50% of the training samples. Similarly, we also drop out different combinations of field-level models uniformly at random\u2014taking care that at least one field-level model is always retained (see the sketch after this paragraph). We consider the first 20 terms for queries and for document URLs and titles. For document body text, we consider the first 2000 terms. Similar to Mitra and Craswell [2019], we employ pretrained word embeddings as the input text representation for the distributed sub-models. We train the word embeddings using a standard word2vec [Mikolov et al., 2013] implementation in FastText [Joulin et al., 2016] on a combination of the MS MARCO document corpus and training queries.
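One simple way to realize the field-level structured dropout described above is sketched below; the helper names and the exact masking probabilities are illustrative assumptions, not the submitted implementation.

import random
import torch

def field_level_dropout_mask(num_fields=3, p_local=0.5):
    # Keep a random non-empty subset of field-level sub-models, and drop each
    # local sub-model with probability p_local for this training sample.
    keep_fields = [random.random() < 0.5 for _ in range(num_fields)]
    if not any(keep_fields):
        keep_fields[random.randrange(num_fields)] = True  # at least one field retained
    keep_locals = [random.random() >= p_local for _ in range(num_fields)]
    return keep_fields, keep_locals

def combine_field_vectors(field_vectors, keep_fields):
    # field_vectors: list of [batch, hidden] tensors, one per document field;
    # masked fields contribute nothing to the aggregated document representation.
    kept = [v for v, keep in zip(field_vectors, keep_fields) if keep]
    return torch.stack(kept, dim=0).sum(dim=0)

The same masks would be applied to the positive and the negative document of a training pair, as noted in the training details that follow.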
Similar to previous work [Mitra et al., 2017, Mitra and Craswell, 2019], the query and document field embeddings are learned by deep convolutional-pooling layers. We set the hidden layer size at all stages of the model to 300 and the dropout rate for different layers to 0.5. For training, we employ the RankNet loss [Burges et al., 2005] over < q, dpos, dneg > triples and the Adam optimizer [Kingma and Ba, 2014]\u2014with a minibatch size of 128 and a learning rate of 0.0001 for training. We sample dneg uniformly at random from the top 100 candidates provided that are not positively labeled. When employing structured dropout, the same sub-models are masked for both dpos and dneg. In light of the recent success of large pretrained language models\u2014e.g., [Nogueira and Cho, 2019]\u2014we also experiment with an unsupervised pretraining scheme using the MS MARCO document collection. The pretraining is performed over < qpseudo, dpos, dneg > triples\u2014where dpos and dneg are randomly sampled from the collection and a pseudo-query qpseudo is generated by picking the URL or the title of dpos randomly (with equal probability) and masking the corresponding field on the document side for both dpos and dneg. We see faster convergence during supervised training when the DuetMF model is pretrained in this fashion on the MS MARCO document collection. We posit that a more formal study should be performed in the future on pretraining Duet models on large collections, such as Wikipedia and the BookCorpus [Zhu et al., 2015].
2 https://microsoft.github.io/TREC-2019-Deep-Learning/
Figure 1: The modified Duet model (DuetMF) that considers multiple document fields.
Learning-to-rank model for document ranking. We train a neural LTR model with two hidden layers\u2014each with 1024 hidden nodes. The LTR run reranks a set of 100 document candidates retrieved by query likelihood (QL) [Ponte and Croft, 1998] with Dirichlet smoothing (\u00b5 = 1250) [MacKay and Peto, 1995]. Several ranking algorithms based on neural and inference networks act as features: (i) DuetMF, (ii) Sequential Dependence Model (SDM) [Metzler and Croft, 2005], (iii) Pseudo-Relevance Feedback (PRF) [Lavrenko and Croft, 2001, Lavrenko, 2008], (iv) BM25 [Robertson et al., 2009], and (v) Dual Embedding Space Model (DESM) [Nalisnick et al., 2016, Mitra et al., 2016]. We employ SDM with an order of 3, a combine weight of 0.90, an ordered window weight of 0.034, and an unordered window weight of 0.066 as our base candidate scoring function. We use these parameters to retrieve from the target corpus as well as auxiliary corpora of English language Wikipedia (enwiki-20180901-pages-articles-multistream.xml.bz2) and LDC Gigaword (LDC2011T07). For PRF, initial retrievals\u2014from either of the target, Wikipedia, or Gigaword corpora\u2014adopted the SDM parameters above, however they are used to rank 75-word passages with a 25-word overlap.
These passages are then interpolated using the top m passages and standard relevance modeling techniques, from which we select the top 50 words to use as an expanded query for the \ufb01nal ranking of the target candidates. We do not explicitly adopt RM3 [Abdul-Jaleel et al., 2004] because our LTR model implicitly combines our initial retrieval score and score from the expanded query. All code for the SDM and PRF feature computation is available at https://github.com/diazf/indri. We evaluate two different BM25 models with hyperparameters < k1 = 0.9, b = 0.4 > and < k1 = 3.44, b = 0.87 >. 3 \fCorresponding to each of the DuetMF, SDM, PRF, and BM25 runs we generate two features based on the score and the rank that the model predicts for a document w.r.t. the target query. We generate eight features by comparing the query against two different document \ufb01elds (title and body) and using different DESM similarity estimates (INxIN, INxOUT, OUTxIN, OUTxOUT). Lastly, we add couple of features based on query length and domain quality\u2014where the latter is de\ufb01ned simply as a ratio between how often documents from a given domain appear in the positively labeled training data and in the overall document collection. Ensemble of Duet models for passage ranking. For the passage ranking task, we adopt the exact same model and training procedure from [Mitra and Craswell, 2019]. Our \ufb01nal submission is an ensemble of eight Duet models. 4 Discussion and conclusion One of the main goals of the deep learning track is to create a public reusable dataset for benchmarking the growing body of neural information retrieval literature [Mitra and Craswell, 2018]. We submit three runs based on the Duet architecture for the two\u2014document and passage\u2014retrieval tasks. Our main goal is to enrich the set of pooled documents for NIST assessments with documents that a Duet based architecture is likely to rank highly. As a secondary goal, we are also interested in benchmarking Duet against other state-of-the-art neural and traditional methods. A more detailed comparison of the performance of these Duet runs with other TREC submissions is provided in the track overview paper [Craswell et al., 2019]." + }, + { + "url": "http://arxiv.org/abs/1907.03693v1", + "title": "Incorporating Query Term Independence Assumption for Efficient Retrieval and Ranking using Deep Neural Networks", + "abstract": "Classical information retrieval (IR) methods, such as query likelihood and\nBM25, score documents independently w.r.t. each query term, and then accumulate\nthe scores. Assuming query term independence allows precomputing term-document\nscores using these models---which can be combined with specialized data\nstructures, such as inverted index, for efficient retrieval. Deep neural IR\nmodels, in contrast, compare the whole query to the document and are,\ntherefore, typically employed only for late stage re-ranking. We incorporate\nquery term independence assumption into three state-of-the-art neural IR\nmodels: BERT, Duet, and CKNRM---and evaluate their performance on a passage\nranking task. Surprisingly, we observe no significant loss in result quality\nfor Duet and CKNRM---and a small degradation in the case of BERT. However, by\noperating on each query term independently, these otherwise computationally\nintensive models become amenable to offline precomputation---dramatically\nreducing the cost of query evaluations employing state-of-the-art neural\nranking models. 
This strategy makes it practical to use deep models for\nretrieval from large collections---and not restrict their usage to late stage\nre-ranking.", + "authors": "Bhaskar Mitra, Corby Rosset, David Hawking, Nick Craswell, Fernando Diaz, Emine Yilmaz", + "published": "2019-07-08", + "updated": "2019-07-08", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.LG" + ], + "main_content": "Introduction Many traditional information retrieval (IR) ranking functions\u2014e.g., [Robertson et al., 2009, Ponte and Croft, 1998]\u2014 manifest the query-term independence property\u2014i.e., the documents can be scored independently w.r.t. each query term, and then the scores accumulated. Given a document collection, these term-document scores can be precomputed and combined with specialized IR data structures, such as inverted indexes [Zobel and Moffat, 2006], and clever organization strategies (e.g., impact-ordering [Anh et al., 2001]) to aggressively prune the set of documents that need to be assessed per query. This dramatically speeds up query evaluations enabling fast retrieval from large collections, containing billions of documents. Recent deep neural architectures\u2014such as BERT [Nogueira and Cho, 2019], Duet [Mitra et al., 2017], and CKNRM [Dai et al., 2018]\u2014have demonstrated state-of-the-art performance on several IR tasks. However, the superior retrieval effectiveness comes at the cost of evaluating deep models with tens of millions to hundreds of millions of parameters at query evaluation time. In practice, this limits the scope of these models to late stage re-ranking. Like traditional IR \u2217Both authors contributed equally to this research. \fA PREPRINT JULY 9, 2019 models, we can incorporate the query term independence assumption into the design of the deep neural model\u2014which would allow of\ufb02ine precomputation of all term-document scores. The query evaluation then involves only their linear combination\u2014alleviating the need to run the computation intensive deep model at query evaluation time. We can further combine these precomputed machine-learned relevance estimates with an inverted index, to retrieve from the full collection. This signi\ufb01cantly increases the scope of potential impact of neural methods in the retrieval process. We study this approach in this work. Of course, by operating independently per query term, the ranking model has access to less information compared to if it has the context of the full query. Therefore, we expect the ranking model to show some loss in retrieval effectiveness under this assumption. However, we trade this off with the expected gains in ef\ufb01ciency of query evaluations and the ability to retrieve, and not just re-rank, using these state-of-the-art deep neural models. In this preliminary study, we incorporate the query term independence assumption into three state-of-the-art neural ranking models\u2014BERT [Nogueira and Cho, 2019], Duet [Mitra et al., 2017], and CKNRM [Dai et al., 2018]\u2014and evaluate their effectiveness on the MS MARCO passage ranking task [Bajaj et al., 2016]. We surprisingly \ufb01nd that the two of the models suffer no statistically signi\ufb01cant adverse affect w.r.t. ranking effectiveness on this task under the query term independence assumption. While the performance of BERT degrades under the strong query term independence assumption\u2014the drop in MRR is reasonably small and the model maintains a signi\ufb01cant performance gap compared to other non-BERT based approaches. 
We conclude that at least for a certain class of existing neural IR models, incorporating query term independence assumption may result in significant efficiency gains in query evaluation at minimal (or no) cost to retrieval effectiveness. 2 Related work Several neural IR methods\u2014e.g., [Ganguly et al., 2015, Kenter and De Rijke, 2015, Nalisnick et al., 2016, Guo et al., 2016]\u2014already operate under query term independence assumption. However, recent performance breakthroughs on many IR tasks have been achieved by neural models [Hu et al., 2014, Pang et al., 2016, Mitra et al., 2017, Dai et al., 2018, Nogueira and Cho, 2019] that learn latent representations of the query or inspect interaction patterns between query and document terms. In this work, we demonstrate the potential to incorporate query term independence assumption in these recent representation learning and interaction focused models. Some neural IR models [Huang et al., 2013, Gao et al., 2011] learn low dimensional dense vector representations of query and document that can be computed independently during inference. These models are also amenable to precomputation of document representations\u2014and fast retrieval using approximate nearest neighbor search [Aum\u00fcller et al., 2017, Boytsov et al., 2016]. An alternative involves learning higher dimensional but sparse representations of query and document [Salakhutdinov and Hinton, 2009, Zamani et al., 2018a] that can also be employed for fast lookup. However, these approaches\u2014where the document representation is computed independently of the query\u2014do not allow for interactions between the query term and document representations. Early interaction between query and document representation is important to many neural architectures [Hu et al., 2014, Pang et al., 2016, Mitra et al., 2017, Dai et al., 2018, Nogueira and Cho, 2019]. The approach proposed in this study allows for interactions between individual query terms and documents. Finally, we refer the reader to [Mitra and Craswell, 2018] for a more general survey of neural methods for IR tasks. 3 Neural Ranking Models with Query Term Independence Assumption IR functions that assume query term independence observe the following general form: S_{q,d} = \\sum_{t \\in q} s_{t,d} (1), where s \\in R_{\\geq 0}^{|V| \\times |C|} is the set of positive real-valued scores as estimated by the relevance model corresponding to documents d \\in C in collection C w.r.t. terms t \\in V in vocabulary V\u2014and S_{q,d} denotes the aggregated score of document d w.r.t. query q. For example, in the case of BM25 [Robertson et al., 2009]: s_{t,d} = \\mathrm{idf}_t \\cdot \\frac{\\mathrm{tf}_{t,d} \\cdot (k_1 + 1)}{\\mathrm{tf}_{t,d} + k_1 \\cdot (1 - b + b \\cdot |d| / \\mathrm{avgdl})} (2), where tf and idf denote term-frequency and inverse document frequency, respectively\u2014and k_1 and b are the free parameters of the BM25 model. Deep neural models for ranking, in contrast, do not typically assume query term independence. Instead, they learn complex matching functions to compare the candidate document to the full query. The parameters of such a model \u03c6 are typically learned discriminatively by minimizing a loss function of the following form: L = \\mathbb{E}_{q \\sim \\theta_q, d^+ \\sim \\theta_{d^+}, d^- \\sim \\theta_{d^-}}[\\ell(\\Delta_{q,d^+,d^-})] (3), where \\Delta_{q,d^+,d^-} = \\phi_{q,d^+} - \\phi_{q,d^-} (4). We use d^+ and d^- to denote a pair of relevant and non-relevant documents, respectively, w.r.t. query q.
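As a small worked example of the per-term factorization in Equations (1)-(2) above, the sketch below scores a document by summing independent per-term BM25 contributions; the parameter values and the simple idf variant are illustrative assumptions rather than the exact formulation used in the experiments.

import math

def bm25_term_score(tf, df, doc_len, avgdl, num_docs, k1=0.9, b=0.4):
    # Eq. (2): contribution of a single term to a single document.
    idf = math.log(num_docs / df)
    return idf * tf * (k1 + 1) / (tf + k1 * (1 - b + b * doc_len / avgdl))

def qti_score(query_terms, term_stats, doc_len, avgdl, num_docs):
    # Eq. (1): under query term independence, the document score is the sum of
    # per-term scores, each of which could have been precomputed offline.
    return sum(
        bm25_term_score(term_stats[t]['tf'], term_stats[t]['df'], doc_len, avgdl, num_docs)
        for t in query_terms if t in term_stats
    )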
The instance loss \u2113 in Equation 3 can take different forms\u2014e.g., ranknet [Burges et al., 2005] or hinge [Herbrich et al., 2000] loss: \\ell_{ranknet}(\\Delta_{q,d^+,d^-}) = \\log(1 + e^{-\\sigma \\cdot \\Delta_{q,d^+,d^-}}) (5), and \\ell_{hinge}(\\Delta_{q,d^+,d^-}) = \\max\\{0, \\epsilon - \\Delta_{q,d^+,d^-}\\} (6). Given a neural ranking model \u03c6, we define \u03a6\u2014the corresponding model under the query term independence assumption\u2014as: \\Phi_{q,d} = \\sum_{t \\in q} \\phi_{t,d} (7). The new model \u03a6 preserves the same architecture as \u03c6 but estimates the relevance of a document independently w.r.t. each query term. The parameters of \u03a6 are learned using the modified loss: L = \\mathbb{E}_{q \\sim \\theta_q, d^+ \\sim \\theta_{d^+}, d^- \\sim \\theta_{d^-}}[\\ell(\\delta_{q,d^+,d^-})] (8), where \\delta_{q,d^+,d^-} = \\sum_{t \\in q} (\\phi_{t,d^+} - \\phi_{t,d^-}) (9). Given collection C and vocabulary V, we precompute \\phi_{t,d} for all t \\in V and d \\in C. In practice, the total number of combinations of t and d may be large but we can enforce additional constraints on which \u27e8t, d\u27e9 pairs to evaluate, and assume no contributions from remaining pairs. During query evaluation, we can look up the precomputed score \\phi_{t,d} without dedicating any additional time and resource to evaluate the deep ranking model. We employ an inverted index, in combination with the precomputed scores, to perform retrieval from the full collection using the learned relevance function \u03a6. We note that several IR data structures assume that \\phi_{t,d} be always positive, which may not hold for any arbitrary neural architecture. But this can be addressed by applying a rectified linear unit activation on the model\u2019s output. The remainder of this paper describes our empirical study and summarizes our findings. 4 Experiments 4.1 Task description We study the effect of the query term independence assumption on deep neural IR models in the context of the MS MARCO passage ranking task [Bajaj et al., 2016]. We find this ranking task to be suitable for this study for several reasons. Firstly, with one million question queries sampled from Bing\u2019s search logs, 8.8 million passages extracted from web documents, and 400,000 positively labeled query-passage pairs for training, it is one of the few large datasets available today for benchmarking deep neural IR methods. Secondly, the challenge leaderboard2\u2014with 18 entries as of March 3, 2019\u2014is a useful catalog of approaches that show state-of-the-art performance on this task. Conveniently, several of these high-performing models include public implementations for the ease of reproducibility. The MS MARCO passage ranking task comprises one thousand passages per query that the IR model, being evaluated, should re-rank. Corresponding to every query, one or few passages have been annotated by human editors as
2 http://www.msmarco.org/leaders.aspx
Table 1: Comparing ranking effectiveness of BERT, Duet, and CKNRM with the query independence assumption (denoted as \u201cTerm ind.\u201d) with their original counterparts (denoted as \u201cFull\u201d). The difference between the median MRR for \u201cfull\u201d and \u201cterm ind.\u201d models is not statistically significant based on a student\u2019s t-test (p < 0.05) for Duet and CKNRM. The difference in MRR is statistically significant based on a student\u2019s t-test (p < 0.05) for BERT (single run). The BM25 baseline (single run) is included for reference.
Model MRR@10 Mean (\u00b1 Std. dev) Median BERT Full 0.356 0.356 Term ind. 0.333 0.333 Duet Full 0.239 (\u00b10.002) 0.240 Term ind. 0.244 (\u00b10.002) 0.244 CKNRM Full 0.223 (\u00b10.004) 0.224 Term ind. 0.222 (\u00b10.005) 0.221 BM25 0.167 0.167 containing the answer relevant to the query. The rank list produced by the model is evaluated using the mean reciprocal rank (MRR) metric against the ground truth annotations. We use the MS MARCO training dataset to train all baseline and treatment models, and report their performance on the publicly available development set which we consider\u2014and hereafter refer to\u2014as the test set for our experiments. This test set contains about seven thousand queries which we posit is suf\ufb01cient for reliable hypothesis testing. Note that the thousand passages per query were originally retrieved using BM25 from a collection that is provided as part of the MS MARCO dataset. This allows us to also use this dataset in a retrieval setting\u2014in addition to the re-ranking setting used for the of\ufb01cial challenge. We take advantage of this in our study. 4.2 Baseline models We begin by identifying models listed on the MS MARCO leaderboard that can serve as baselines for our work. We only consider the models with public implementations. We \ufb01nd that a number of top performing entries\u2014e.g., [Nogueira and Cho, 2019]\u2014are based on recently released large scale language model called BERT [Devlin et al., 2018]. The BERT based entries are followed in ranking by the Duet [Mitra et al., 2017] and the Convolutional Kernelbased Neural Ranking Model (CKNRM) [Dai et al., 2018]. Therefore, we limit this study to BERT, Duet, and CKNRM. BERT Nogueira and Cho [2019] report state-of-the-art retrieval performance on the MS MARCO passage reranking task by \ufb01ne tuning BERT [Devlin et al., 2018] pretrained models. In this study, we reproduce the results from their paper corresponding to the BERT Base model and use it as our baseline. Under the term independence assumption, we evaluate the BERT model once per query term\u2014wherein we input the query term as sentence A and the passage as sentence B. Duet The Duet [Mitra et al., 2017] model estimates the relevance of a passage to a query by a combination of (i) examining the patterns of exact matches of query terms in the passage, and (ii) computing similarity between learned latent representations of query and passage. Duet has previously demonstrated state-of-the-art performance on TREC CAR [Nanni et al., 2017] and is an of\ufb01cial baseline for the MS MARCO challenge. The particular implementation of Duet listed on the leaderboard includes modi\ufb01cations3 to the original model [Mitra and Craswell, 2019]. We use this provided implementation for our study. Besides evaluating the model once per query term, no additional changes were necessary to its architecture under the query term independence assumption. CKNRM The CKNRM model combines kernel pooling based soft matching [Xiong et al., 2017] with a convolutional architecture for comparing n-grams. CKNRM uses kernel pooling to extract ranking signals from interaction matrices of query and passage n-grams. Under the query term independence assumption, the model considers one 3https://github.com/dfcf93/MSMARCO/blob/master/Ranking/Baselines/Duet.ipynb 4 \fA PREPRINT JULY 9, 2019 Table 2: Comparing Duet (with query term independence assumption) and BM25 under the full retrieval settings on a subset of MS MARCO dev queries. 
The differences in recall and MRR between Duet (term ind.) and BM25 are statistically signi\ufb01cant according to student\u2019s t-test (p < 0.01). Model Recall@1000 MRR@10 BM25 0.80 0.169 Duet (term ind.) 0.85 0.218 query term at a time\u2014and therefore we only consider the interactions between the query unigrams and passage ngrams. We base our study on the public implementation4 of this model. For all models we re-use the published hyperparameter values and other settings from the MS MARCO website. 5 Results Table 1 compares the BERT, the Duet, and the CKNRM models trained under the query term independence assumption to their original counterparts on the passage re-ranking task. We train and evaluate the Duet and the CKNRM based models \ufb01ve and eight times, respectively, using different random seeds\u2014and report mean and median MRR. For the BERT based models, due to long training time we only report results based on a single training and evaluation run. As table 1 shows, we observe no statistically signi\ufb01cant difference in effectiveness from incorporating the query term independence assumptions in either Duet or CKNRM. The query term independent BERT model performs slightly worse than its original counterpart on MRR but the performance is still superior to other non-BERT based approaches listed on the public leaderboard. We posit that models with query term independence assumption\u2014even when slightly less effective compared to their full counterparts\u2014are likely to retrieve better candidate sets for re-ranking. To substantiate this claim, we conduct a small-scale retrieval experiment based on a random sample of 395 queries from the test set. We use the Duet model with the query term independence assumption to precompute the term-passage scores constrained to (i) the term appears at least once in the passage, and (ii) the term does not appear in more than 5% of the passage collection. Table 2 compares Duet and BM25 on their effectiveness as a \ufb01rst stage retrieval method in a potential telescoping setting [Matveeva et al., 2006]. We observe a 6.25% improvement in recall@1000 from Duet over the BM25 baseline. To perform similar retrieval from the full collection using the full Duet model, unlike its query-term-independent counterpart, is prohibitive because it involves evaluating the model on every passage in the collection against every incoming query. 6 Discussion and conclusion The emergence of compute intensive ranking models, such as BERT, motivates rethinking how these models should be evaluated in large scale IR systems. The approach proposed in this paper moves the burden of model evaluation from the query evaluation stage to the document indexing stage. This may have further consequences on computational ef\ufb01ciency by allowing batched model evaluation that more effectively leverages GPU (or TPU) parallelization. This preliminary study is based on three state-of-the-art deep neural models on a public passage ranking benchmark. The original design of all three models\u2014BERT, Duet, and CKNRM\u2014emphasize on early interactions between query and passage representations. However, we observe that limiting the interactions to passage and individual query terms has reasonably small impact on their effectiveness. These results are promising as they support the possibility of dramatically speeding up query evaluation for some deep neural models, and even employing them to retrieve from the full collection. 
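The offline precomputation and inverted-index retrieval described above can be sketched as follows; the pruning rule mirrors the constraints mentioned in the retrieval experiment (the term must occur in the passage and in at most 5% of the collection), while the function and container names are illustrative assumptions rather than the authors' implementation.

from collections import defaultdict

def build_impact_index(model_score, vocabulary, collection, max_df_ratio=0.05):
    # Offline: run the QTI model once per (term, document) pair that survives the
    # constraints, and store the scores in an inverted-index-like structure.
    df = defaultdict(int)
    for doc_id, terms in collection.items():
        for t in set(terms):
            df[t] += 1
    index = defaultdict(dict)
    for doc_id, terms in collection.items():
        for t in set(terms):
            if t in vocabulary and df[t] <= max_df_ratio * len(collection):
                index[t][doc_id] = model_score(t, doc_id)  # deep model runs offline
    return index

def retrieve(index, query_terms, k=1000):
    # Online: query evaluation reduces to summing precomputed per-term scores.
    scores = defaultdict(float)
    for t in query_terms:
        for doc_id, s in index.get(t, {}).items():
            scores[doc_id] += s
    return sorted(scores.items(), key=lambda x: x[1], reverse=True)[:k]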
The ability to retrieve\u2014and not just re-rank\u2014using deep models has signi\ufb01cant implications for neural IR research. Any loss in retrieval effectiveness due to incorporating strong query term independence assumptions may be further recovered by additional stages of re-ranking in a telescoping approach [Matveeva et al., 2006]. This study is focused on the passage ranking task. The trade-off between effectiveness and ef\ufb01ciency may be different for document retrieval and other IR tasks. Traditional IR methods in more complex retrieval settings\u2014e.g., when the document is represented by multiple \ufb01elds [Robertson et al., 2004]\u2014also observe the query term independence assumption. So, studying the query term independence assumption in the context of corresponding neural models\u2014 e.g., [Zamani et al., 2018b]\u2014may also be appropriate. We note these as important future directions for our research. 4https://github.com/thunlp/Kernel-Based-Neural-Ranking-Models 5 \fA PREPRINT JULY 9, 2019 The \ufb01ndings from this study may also be interpreted as pointing to a gap in our current state-of-the-art neural IR models that do not take adequate advantage of term proximity signals for matching. This is another \ufb01nding that may hold interesting clues for IR researchers who want to extract more retrieval effectiveness from deep neural methods." + }, + { + "url": "http://arxiv.org/abs/1903.07666v1", + "title": "An Updated Duet Model for Passage Re-ranking", + "abstract": "We propose several small modifications to Duet---a deep neural ranking\nmodel---and evaluate the updated model on the MS MARCO passage ranking task. We\nreport significant improvements from the proposed changes based on an ablation\nstudy.", + "authors": "Bhaskar Mitra, Nick Craswell", + "published": "2019-03-18", + "updated": "2019-03-18", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.CL" + ], + "main_content": "Introduction In information retrieval (IR), traditional learning to rank [Liu, 2009] models estimate the relevance of a document to a query based on hand-engineered features. The input to these models typically includes, among others, features based on patterns of exact matches of query terms in the document. Recently proposed deep neural IR models [Mitra and Craswell, 2018], in contrast, accept the raw query and document text as input. The input text is represented as one-hot encoding of words (or sub-word components [Kim et al., 2016, Jozefowicz et al., 2016, Sennrich et al., 2015])\u2014and the deep neural models focus primarily on learning latent representations of text that are effective for matching query and document. Mitra et al. [2017] posit that deep neural ranking models should focus on both: (i) representation learning for text matching, as well as on (ii) feature learning based on patterns of exact matches of query terms in the document. They demonstrate that a neural ranking model called Duet1\u2014with two distinct sub-models that consider both matches in the term space (the local sub-model) and the learned latent space (the distributed submodel)\u2014is more effective at estimating query-document relevance. In this work, we evaluate a duet model on the MS MARCO passage ranking task [Bajaj et al., 2016]. We propose several simple modi\ufb01cations to the original Duet architecture and demonstrate through an ablation study that incorporating these changes results in signi\ufb01cant improvements on the passage ranking task. 
2 Passage re-ranking on MS MARCO The MS MARCO passage ranking task [Bajaj et al., 2016] requires a model to rank approximately a thousand passages for each query. The queries are sampled from Bing\u2019s search logs, and then manually annotated to restrict them to questions with specific answers. A BM25 [Robertson et al., 2009] model is employed to retrieve the top thousand candidate passages for each query from the collection. For each query, zero or more candidate passages are deemed relevant based on manual annotations. The ranking model is evaluated on this passage re-ranking task using the mean reciprocal rank (MRR) metric [Craswell, 2009]. Participants are required to submit the ranked list of passages per query for a development (dev) set and a heldout (eval) set. The ground truth annotations for the development set are available publicly, while the corresponding annotations for the evaluation set are heldout to avoid overfitting. A public leaderboard2 presents all submitted runs from different participants on this task. 1 While Mitra et al. [2017] propose a specific neural architecture, they refer more broadly to the family of neural architectures that operate on both term space and learned latent space as duet. We refer to the specific architecture proposed by Mitra et al. [2017] as Duet\u2014to distinguish it from the general family of such architectures that we refer to as duet (note the difference in capitalization). 2 http://www.msmarco.org/leaders.aspx 3 The updated Duet model In this section, we briefly describe several modifications to the Duet model. A public implementation of the updated Duet model using PyTorch [Paszke et al., 2017] is available online3. Word embeddings We replace the character level n-graph encoding in the input of the distributed model with word embeddings. We see significant reduction in training time given a fixed number of minibatches and a fixed minibatch size. This change primarily helps us to train on a significantly larger amount of data under fixed training time constraints. We initialize the word embeddings using pre-trained GloVe [Pennington et al., 2014] embeddings before training the Duet model. Inverse document frequency weighting In contrast to some of the other datasets on which the Duet model has been previously evaluated [Mitra et al., 2017, Nanni et al., 2017], the MS MARCO dataset contains a relatively larger percentage of natural language queries and the queries are considerably longer on average. In traditional IR models, the inverse document frequency (IDF) [Robertson, 2004] of a query term provides an effective mechanism for weighting the query terms by their discriminative power. In the original Duet model, the input to the local sub-model corresponding to a query q and a document d is a binary interaction matrix X \\in R^{|q| \\times |d|} defined as follows: X_{ij} = 1 if q_i = d_j, and 0 otherwise (1). We incorporate IDF in the Duet model by weighting the interaction matrix by the IDF of the matched terms. We adopt the Robertson-Walker definition of IDF [Jones et al., 2000] normalized to the range [0, 1]: X'_{ij} = \\mathrm{IDF}(q_i) if q_i = d_j, and 0 otherwise (2), with \\mathrm{IDF}(t) = \\log(N / n_t) / \\log(N) (3), where N is the total number of passages in the collection and n_t is the number of passages in which the term t appears at least once.
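A minimal sketch of the IDF-weighted interaction matrix of Equations (1)-(3) follows; it assumes precomputed document frequencies, and the function names are chosen here for illustration rather than taken from the public implementation.

import math

def normalized_idf(term, doc_freq, num_passages):
    # Eq. (3): Robertson-Walker IDF, normalized to the range [0, 1].
    return math.log(num_passages / doc_freq[term]) / math.log(num_passages)

def idf_weighted_interaction_matrix(query_terms, doc_terms, doc_freq, num_passages):
    # Eq. (2): X'_ij = IDF(q_i) where the i-th query term exactly matches the
    # j-th document term, and 0 otherwise (Eq. (1) is the unweighted special case).
    return [
        [normalized_idf(q, doc_freq, num_passages) if q == d else 0.0 for d in doc_terms]
        for q in query_terms
    ]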
[2018] show that when combining different sub-models in a neural ranking model, it is more effective if each sub-model produce a vector output that are further combined by additional multi-layer perceptrons (MLP). In the original Duet model, the local and the distributed submodels produce a single score that are linearly combined. In our updated architecture, both models produce a vector that are further combined by an MLP\u2014with two hidden layers\u2014to generate the estimated relevance score. Recti\ufb01er Linear Units (ReLU) We replace the Tanh non-linearities in the original Duet model with ReLU [Glorot et al., 2011] activations. Bagging We observe some additional improvements from combining multiple Duet models\u2014trained with different random seeds and on different random sample of the training data\u2014using bagging [Breiman, 1996]. 4 Experiments The MS MARCO task provides a pre-processed training dataset\u2014called \u201ctriples.train.full.tsv\u201d\u2014where each training sample consists of a triple \u27e8q, p+, p\u2212\u27e9, where q is a query and p+ and p\u2212are a pair of passages, with p+ being more relevant to q than p\u2212. Similar to the original Duet model, we employ the cross-entropy with softmax loss to learn the parameters of our model M: 3https://github.com/dfcf93/MSMARCO/blob/master/Ranking/Baselines/Duet.ipynb 2 \fTable 1: Comparison of the different Duet variants and other state-of-the-art approaches from the public MS MARCO leaderboard. The update Duet model\u2014referred to as Duet v2\u2014bene\ufb01ts signi\ufb01cantly from the modi\ufb01cations proposed in this paper. Model MRR@10 Dev Eval Other approaches BM25 0.165 0.167 Single CKNRM [Dai et al., 2018] model 0.247 0.247 Ensemble of 8 CKNRM [Dai et al., 2018] models 0.290 0.271 IRNet (a proprietary deep neural model) 0.278 0.281 BERT [Nogueira and Cho, 2019] 0.365 0.359 Duet variants Single Duet v2 w/o IDF weighting for interaction matrix 0.163 Single Duet v2 w/ Tanh non-linearity (instead of ReLU) 0.179 Single Duet v2 w/o MLP to combine local and distributed scores 0.208 Single Duet v2 model 0.243 0.245 Ensemble of 8 Duet v2 models 0.252 0.253 L = Eq,p+,p\u2212\u223c\u03b8[\u2113(Mq,p+ \u2212Mq,p\u2212)] (4) where, \u2113(\u2206) = log(1 + e\u2212\u03c3\u00b7\u2206) (5) Where, Mq,p is the relevance score for the pair \u27e8q, p\u27e9as estimated by the model M. Note, that by considering a single negative passage per sample, our loss is equivalent to the RankNet loss [Burges et al., 2005]. We use the Adam optimizer with default parameters and a learning rate of 0.001. We set \u03c3 in Equation 5 to 0.1 and dropout rate for the model to 0.5. We trim all queries and passages to their \ufb01rst 20 and 200 words, respectively. We restrict our input vocabulary to the 71, 486 most frequent terms in the collection and set the size of all hidden layers to 300. We use minibatches of size 1024 and train the model for 1024 minibatches. Finally, for bagging we train eight different Duet models with different random seeds and on different samples of the training data. We train and evaluate our models using a Tesla K40 GPU\u2014on which it takes a total of only 1.5 hours to train each single Duet model and to evaluate it on both dev and eval sets. 5 Results Table 1 presents the MRR@10 corresponding to all the Duet variants we evaluated on the dev set. The updated Duet model with all the modi\ufb01cations described in Section 3\u2014referred hereafter as Duet v2\u2014achieves an MRR@10 of 0.243. 
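To make the inverse document frequency weighting of the interaction matrix (Equations 1-3 above) concrete, here is a minimal NumPy sketch; the toy collection, tokenization, and function names are our own illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def idf(term, docs):
    """Robertson-Walker IDF from Eq. (3), normalized to the range [0, 1]."""
    n_t = sum(1 for d in docs if term in d)   # passages containing the term
    if n_t == 0:
        return 0.0
    return np.log(len(docs) / n_t) / np.log(len(docs))

def interaction_matrix(query, passage, docs, use_idf=True):
    """|q| x |d| matrix: X'_ij = IDF(q_i) if q_i == d_j, else 0 (Eqs. 1-2)."""
    X = np.zeros((len(query), len(passage)))
    for i, q_term in enumerate(query):
        w = idf(q_term, docs) if use_idf else 1.0
        for j, d_term in enumerate(passage):
            if q_term == d_term:
                X[i, j] = w
    return X

# Toy collection of tokenized passages (hypothetical example).
collection = [
    "the cat sat on the mat".split(),
    "a dog chased the cat".split(),
    "bananas are a fruit".split(),
]
q = "cat on mat".split()
print(interaction_matrix(q, collection[0], collection))
```

Weighting matched cells by IDF lets the local sub-model discount exact matches on very common query terms while preserving the evidence from rare, discriminative ones.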
We perform an ablation study by leaving out one of the three modifications at a time\u2014(i) the IDF weighting for the interaction matrix, (ii) the ReLU non-linearity instead of Tanh, and (iii) the MLP that combines the local and distributed scores. We observe a 33% degradation in MRR from leaving out the IDF weighting alone. It is interesting to note that the GitHub implementations4 of the KNRM [Xiong et al., 2017] and CKNRM [Dai et al., 2018] models also indicate that their MS MARCO submissions incorporated IDF term-weighting\u2014potentially indicating the value of IDF weighting across multiple architectures. Similarly, we observe a 26% degradation in MRR by using the Tanh non-linearity instead of ReLU. Using a linear combination of scores from the local and the distributed model, instead of combining their vector outputs using an MLP, results in a 14% degradation in MRR. Finally, we observe a 3% improvement in MRR by ensembling eight Duet v2 models using bagging. We also submit the individual Duet v2 model and the ensemble of eight Duet v2 models for evaluation on the heldout set and observe similar numbers. We include the MRR numbers for other non-Duet based approaches that are available on the public leaderboard in Table 1. As of writing this paper, BERT [Devlin et al., 2018] based approaches\u2014e.g., [Nogueira and Cho, 2019]\u2014are outperforming other approaches by a significant margin. Among the non-BERT based approaches, a proprietary deep neural model\u2014called IRNet\u2014currently demonstrates the best performance on the heldout evaluation set. This is followed, among others, by an ensemble of CKNRM [Dai et al., 2018] models and the single CKNRM model. The single Duet v2 model achieves comparable MRR to the single CKNRM model on the eval set. The ensemble of Duet v2 models, however, performs slightly worse than the ensemble of the CKNRM models on the same set. 4 https://github.com/thunlp/Kernel-Based-Neural-Ranking-Models 6 Discussion and conclusion In this paper, we describe several simple modifications to the original Duet model that result in significant improvements over the original architecture on the MS MARCO task. The updated architecture\u2014which we call Duet v2\u2014achieves comparable performance to other non-BERT based top performing approaches, as listed on the public MS MARCO leaderboard. We note that the Duet v2 model we evaluate contains significantly fewer learnable parameters\u2014approximately 33 million\u2014compared to other top performing approaches, such as BERT based models [Nogueira and Cho, 2019] and the single CKNRM model [Dai et al., 2018]\u2014both of which contain a few hundred million learnable parameters. Comparing the models based on the exact number of learnable parameters, however, may not be meaningful, as most of these parameters are due to the large vocabulary size in the input embedding layers. It is not clear how significantly the vocabulary size impacts model performance\u2014an aspect we may want to analyse in the future. It is worth emphasizing that, compared to other top performing approaches, training the Duet v2 model takes significantly fewer resources and less time\u20141.5 hours to train a single Duet model and to evaluate it on both dev and eval sets using a Tesla K40 GPU\u2014which may make the model an attractive starting point for new MS MARCO participants. The model performance on the MS MARCO task may be further improved by adding more depth and/or more careful hyperparameter tuning."
+ }, + { + "url": "http://arxiv.org/abs/1705.01509v1", + "title": "Neural Models for Information Retrieval", + "abstract": "Neural ranking models for information retrieval (IR) use shallow or deep\nneural networks to rank search results in response to a query. Traditional\nlearning to rank models employ machine learning techniques over hand-crafted IR\nfeatures. By contrast, neural models learn representations of language from raw\ntext that can bridge the gap between query and document vocabulary. Unlike\nclassical IR models, these new machine learning based approaches are\ndata-hungry, requiring large scale training data before they can be deployed.\nThis tutorial introduces basic concepts and intuitions behind neural IR models,\nand places them in the context of traditional retrieval models. We begin by\nintroducing fundamental concepts of IR and different neural and non-neural\napproaches to learning vector representations of text. We then review shallow\nneural IR methods that employ pre-trained neural term embeddings without\nlearning the IR task end-to-end. We introduce deep neural networks next,\ndiscussing popular deep architectures. Finally, we review the current DNN\nmodels for information retrieval. We conclude with a discussion on potential\nfuture directions for neural IR.", + "authors": "Bhaskar Mitra, Nick Craswell", + "published": "2017-05-03", + "updated": "2017-05-03", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "main_content": "Introduction Since the turn of the decade, there have been dramatic improvements in performance in computer vision, speech recognition, and machine translation tasks, witnessed in research and in real-world applications [112]. These breakthroughs were largely fuelled by recent advances in neural network models, usually with multiple hidden layers, known as deep architectures [8, 49, 81, 103, 112]. Exciting novel applications, such as conversational agents [185, 203], have also emerged, as well as game-playing agents with human-level performance [147, 180]. Work has now begun in the information retrieval (IR) community to apply these neural methods, leading to the possibility of advancing the state of the art or even achieving breakthrough performance as in these other \ufb01elds. Retrieval of information can take many forms. Users can express their information need in the form of a text query\u2014by typing on a keyboard, by selecting a query suggestion, or by voice recognition\u2014or the query can be in the form of an image, or in some cases the need can even be implicit. Retrieval can involve ranking existing pieces of content, such as documents or short-text answers, or composing new responses incorporating retrieved information. Both the information need and the retrieved results may use the same modality (e.g., retrieving text documents in response to keyword queries), or different ones (e.g., image search using text queries). Retrieval systems may consider user history, physical location, temporal changes in information, or other context when ranking results. They may also help users formulate their intent (e.g., via query auto-completion or query suggestion) and/or extract succinct summaries of results for easier inspection. Neural IR refers to the application of shallow or deep neural networks to these retrieval tasks. This tutorial serves as an introduction to neural methods for ranking documents in response to a query, an \u2217The author is a part-time PhD student at University College London. DRAFT. 
Copyright is held by the author(s). May, 2017. arXiv:1705.01509v1 [cs.IR] 3 May 2017 \f2014 2015 2016 2017 1 % 4 % 8 % 21 % 0 5 10 15 20 25 30 Year % of SIGIR papers related to neural IR Figure 1: The percentage of neural IR papers at the ACM SIGIR conference\u2014as determined by a manual inspection of the paper titles\u2014shows a clear trend in the growing popularity of the \ufb01eld. important IR task. A search query may typically contain a few terms, while the document length, depending on the scenario, may range from a few terms to hundreds of sentences or more. Neural models for IR use vector representations of text, and usually contain a large number of parameters that needs to be tuned. ML models with large set of parameters typically require a large quantity of training data [196]. Unlike traditional learning to rank (L2R) approaches that train ML models over a set of hand-crafted features, neural models for IR typically accept the raw text of a query and document as input. Learning suitable representations of text also demands large-scale datasets for training [141]. Therefore, unlike classical IR models, these neural approaches tend to be data-hungry, with performance that improves with more training data. Text representations can be learnt in an unsupervised or supervised fashion. The supervised approach uses IR data such as labeled query-document pairs, to learn a representation that is optimized end-toend for the task at hand. If suf\ufb01cient IR labels are not available, the unsupervised approach learns a representation using just the queries and/or documents. In the latter case, different unsupervised learning setups may lead to different vector representations, that differ in the notion of similarity that they capture between represented items. When applying such representations, the choice of unsupervised learning setup should be carefully considered, to yield a notion of text similarity that is suitable for the target task. Traditional IR models such as Latent Semantic Analysis (LSA) [48] learn dense vector representations of terms and documents. Neural representation learning models share some commonalities with these traditional approaches. Much of our understanding of these traditional approaches from decades of research can be extended to these modern representation learning models. In other \ufb01elds, advances in neural networks have been fuelled by speci\ufb01c datasets and application needs. For example, the datasets and successful architectures are quite different in visual object recognition, speech recognition, and game playing agents. While IR shares some common attributes with the \ufb01eld of natural language processing, it also comes with its own set of unique challenges. IR systems must deal with short queries that may contain previously unseen vocabulary, to match against documents that vary in length, to \ufb01nd relevant documents that may also contain large sections of irrelevant text. IR systems should learn patterns in query and document text that indicate relevance, even if query and document use different vocabulary, and even if the patterns are task-speci\ufb01c or context-speci\ufb01c. The goal of this tutorial is to introduce the fundamentals of neural IR, in context of traditional IR research, with visual examples to illustrate key concepts and a consistent mathematical notation for describing key models. Section 2 presents a survey of IR tasks, challenges, metrics and non-neural models. 
Section 3 provides a brief overview of neural IR models and a taxonomy for different neural approaches to IR. Section 4 introduces neural and non-neural methods for learning term embeddings, without the use of supervision from IR labels, and with a focus on the notion of similarity. Section 5 surveys some speci\ufb01c approaches for incorporating such embeddings in IR. Section 6 introduces the fundamentals of deep models that are used in IR so far, including popular architectures and toolkits. 2 \fSection 7 surveys some speci\ufb01c approaches for incorporating deep neural networks in IR. Section 8 is our discussion, including future work, and conclusion. Motivation for this tutorial Neural IR is an emerging \ufb01eld. Research publication in the area has been increasing (Figure 1), along with relevant workshops [42\u201344], tutorials [97, 119, 140], and plenary talks [41, 129]. Because this growth in interest is fairly recent, some researchers with IR expertise may be unfamiliar with neural models, and other researchers who have already worked with neural models may be unfamiliar with IR. The purpose of this tutorial is to bridge the gap, by describing the relevant IR concepts and neural methods in the current literature. 2 Fundamentals of text retrieval We focus on text retrieval in IR, where the user enters a text query and the system returns a ranked list of search results. Search results may be passages of text or full text documents. The system\u2019s goal is to rank the user\u2019s preferred search results at the top. This problem is a central one in the IR literature, with well understood challenges and solutions. This section provides an overview of those, such that we can refer to them in subsequent sections. 2.1 IR tasks Text retrieval methods for full text documents and for short text passages have application in ad hoc retrieval systems and question answering systems respectively. Ad-hoc retrieval Ranked document retrieval is a classic problem in information retrieval, as in the main task of the Text Retrieval Conference [205], and performed by popular search engines such as Google, Bing, Baidu, or Yandex. TREC tasks may offer a choice of query length, ranging from a few words to a few sentences, whereas search engine queries tend to be at the shorter end of the range. In an operational search engine, the retrieval system uses specialized index structures to search potentially billions of documents. The results ranking is presented in a search engine results page (SERP), with each result appearing as a summary and a hyperlink. The engine can instrument the SERP, gathering implicit feedback on the quality of search results such as click decisions and dwell times. A ranking model can take a variety of input features. Some ranking features may depend on the document alone, such as how popular the document is with users, how many incoming links it has, or to what extent document seems problematic according to a Web spam classi\ufb01er. Other features depend on how the query matches the text content of the document. Still more features match the query against document metadata, such as referred text of incoming hyperlink anchors, or the text of queries from previous users that led to clicks on this document. Because anchors and click queries are a succinct description of the document, they can be a useful source of ranking evidence, but they are not always available. A newly created document would not have much link or click text. 
Also, not every document is popular enough to have past links and clicks, but it still may be the best search result for a user\u2019s rare or tail query. In such cases, when text metadata is unavailable, it is crucial to estimate the document\u2019s relevance primarily based on its text content. In the text retrieval community, retrieving documents for short-text queries by considering the long body text of the document is an important challenge. The ad-hoc and Web tracks2 at the popular Text REtrieval Conference (TREC) [204] focus speci\ufb01cally on this task. The TREC participants are provided a set of, say \ufb01fty, search queries and a document collection containing 500-700K newswire and other documents. Top ranked documents retrieved for each query from the collection by different competing retrieval systems are assessed by human annotators based on their relevance to the query. Given a query, the goal of the IR model is to rank documents with better assessor ratings higher than the rest of the documents in the collection. In Section 2.4, we describe popular IR metrics for quantifying model performance given the ranked documents retrieved by the model and the corresponding assessor judgments for a given query. Question-answering Question-answering tasks may range from choosing between multiple choices (typically entities or binary true-or-false decisions) [78, 80, 165, 212] to ranking spans of text or 2http://www10.wwwconference.org/cdrom/papers/317/node2.html 3 \fpassages [3, 55, 162, 206, 221], and may even include synthesizing textual responses by gathering evidence from one or more sources [145, 154]. TREC question-answering experiments [206] has participating IR systems retrieve spans of text, rather than documents, in response to questions. IBM\u2019s DeepQA [55] system\u2014behind the Watson project that famously demonstrated human-level performance on the American TV quiz show, \"Jeopardy!\"\u2014also has a primary search phase, whose goal is to \ufb01nd as many potentially answer-bearing passages of text as possible. With respect to the question-answering task, the scope of this tutorial is limited to ranking answer containing passages in response to natural language questions or short query texts. Retrieving short spans of text pose different challenges than ranking documents. Unlike the long body text of documents, single sentences or short passages tend to be on point with respect to a single topic. However, answers often tend to use different vocabulary than the one used to frame the question. For example, the span of text that contains the answer to the question \"what year was Martin Luther King Jr. born?\" may not contain the term \"year\". However, the phrase \"what year\" implies that the correct answer text should contain a year\u2014such as \u20181929\u2019 in this case. Therefore, IR systems that focus on the question-answering task need to model the patterns expected in the answer passage based on the intent of the question. 2.2 Desiderata of IR models Before we describe any speci\ufb01c IR model, it is important for us to discuss the attributes that we desire from a good retrieval system. For any IR system, the relevance of the retrieved items to the input query is of foremost importance. But relevance measurements can be nuanced by the properties of robustness, sensitivity and ef\ufb01ciency that we expect the system to demonstrate. 
These attributes not only guide our model designs but also serve as yard sticks for comparing the different neural and non-neural approaches. Semantic understanding Most traditional approaches for ad-hoc retrieval count repititions of the query terms in the document text. Exact term matching between the query and the document text, while simple, serves as a foundation for many IR systems. Different weighting and normalization schemes over these counts leads to a variety of TF-IDF models, such as BM25 [166]. However, by only inspecting the query terms the IR model ignores all the evidence of aboutness from the rest of the document. So, when ranking for the query \u201cAustralia\u201d, only the occurrences of \u201cAustralia\u201d in the document are considered, although the frequency of other words like \u201cSydeny\u201d or \u201ckangaroo\u201d may be highly informative. In the case of the query \u201cwhat channel are the seahawks on today\u201d, the query term \u201cchannel\u201d implies that the IR model should pay attention to occurrences of \u201cESPN\u201d or \u201cSky Sports\u201d in the document text\u2014none of which appears in the query itself. Semantic understanding, however, goes beyond mapping query terms to document terms. A good IR model may consider the terms \u201chot\u201d and \u201cwarm\u201d related, as well as the terms \u201cdog\u201d and \u201cpuppy\u201d\u2014but must also distinguish that a user who submits the query \u201chot dog\u201d is not looking for a \"warm puppy\" [118]. At the more ambitious end of the spectrum, semantic understanding would involve logical reasons by the IR system\u2014so for the query \u201cconcerts during SIGIR\u201d it associates a speci\ufb01c edition of the conference (the upcoming one) and considers both its location and dates when recommending concerts nearby during the correct week. These examples motivate that IR models should have some latent representations of intent as expressed by the query and of the different topics in the document text\u2014so that inexact matching can be performed that goes beyond lexical term counting. Robustness to rare inputs Query frequencies in most IR setups follow a Zip\ufb01an distribution [216] (see Figure 2). In the publicly available AOL query logs [159], for example, more than 70% of the distinct queries are seen only once in the period of three months from which the queries are sampled. In the same dataset, more than 50% of the distinct documents are clicked only once. A good IR method must be able to retrieve these infrequently searched-for documents, and perform reasonably well on queries containing terms that appear extremely rarely, if ever, in its historical logs. Many IR models that learn latent representations of text from data often naively assume a \ufb01xed size vocabulary. These models perform poorly when the query consists of terms rarely (or never) seen in the training data. 
Even if the model does not assume a fixed vocabulary, the quality of the latent representations may depend heavily on how frequently the terms under consideration appear in the training dataset. Figure 2: A log-log plot of frequency versus rank for query impressions (log10(query frequency) against log10(query ID)) and document clicks (log10(document frequency) against log10(document ID)) in the AOL query logs [159]; panels (a) and (b) show the distribution of query impressions and of document clicks, respectively. The plots highlight that these quantities follow a Zipfian distribution. Exact matching models, like BM25 [166], on the other hand can precisely retrieve documents containing rare terms. Semantic understanding in an IR model cannot come at the cost of poor retrieval performance on queries containing rare terms. When dealing with a query such as \u201cpekarovic land company\u201d the IR model will benefit from considering exact matches of the rare term \u201cpekarovic\u201d. In practice an IR model may need to effectively trade off exact and inexact matching for a query term. However, the decision of when to perform exact matching can itself be informed by semantic understanding of the context in which the terms appear, in addition to the terms themselves. Robustness to corpus variance An interesting consideration for IR models is how well they perform on corpuses whose distributions are different from the data that the model was trained on. Models like BM25 [166] have very few parameters and often demonstrate reasonable performance \u201cout of the box\u201d on new corpuses with little or no additional tuning of parameters. Deep learning models containing millions (or even billions) of parameters, on the other hand, are known to be more sensitive to distributional differences between training and evaluation data, and have been shown to be especially vulnerable to adversarial inputs [194]. Some of the variance in performance of deep models on new corpuses is offset by better retrieval on test corpuses that are distributionally closer to the training data, where the model may have picked up crucial corpus-specific patterns. For example, it may be understandable if a model that learns term representations based on the text of Shakespeare\u2019s Hamlet is effective at retrieving passages relevant to a search query from The Bard\u2019s other works, but performs poorly when the retrieval task involves a corpus of song lyrics by Jay-Z. 
However, the poor performances on new corpus can also be indicative that the model is over\ufb01tting, or suffering from the Clever Hans3 effect [187]. For example, an IR model trained on recent news corpus may learn to associate \u201cTheresa May\u201d with the query \u201cuk prime minister\u201d and as a consequence may perform poorly on older TREC datasets where the connection to \u201cJohn Major\u201d may be more appropriate. ML models that are hyper-sensitive to corpus distributions may be vulnerable when faced with unexpected changes in distributions or \u201cblack swans\u201d4 in the test data. This can be particularly problematic when the test distributions naturally evolve over time due to underlying changes in the user population or behavior. The models, in these cases, may need to be re-trained periodically, or designed to be invariant to such changes. Robustness to variable length inputs A typical text collection contains documents of varied lengths (see Figure 3). For a given query, a good IR system must be able to deal with documents of different lengths without over-retrieving either long or short documents. Relevant documents may 3https://en.wikipedia.org/wiki/Clever_Hans 4https://en.wikipedia.org/wiki/Black_swan_theory 5 \f0\u221210K 10\u221220K 20\u221230K 30\u221240K 40\u221250K 50\u221260K 60\u221270K 70\u221280K 80\u221290K 90\u2212100K 100\u2212110K 110\u2212120K 120\u2212130K 130\u2212140K 140\u2212150K 150\u2212160K 160\u2212170K 170\u2212180K 180\u2212190K 190\u2212210K 210\u2212220K 220\u2212240K 240\u2212250K 250\u2212260K 0 200 400 600 800 Page length in bytes Number of articles Figure 3: Distribution of Wikipedia featured articles by document length (in bytes) as of June 30, 2014. Source: https://en.wikipedia.org/wiki/Wikipedia:Featured_articles/By_length. contain irrelevant sections, and the relevant content may either be localized in a single section of the document, or spread over different sections. Document length normalization is well-studied in the context of IR models (e.g., pivoted length normalization [181]), and this existing research should inform the design of any new IR models. Robustness to errors in input No IR system should assume error-free inputs\u2014neither when considering the user query nor when inspecting the documents in the text collection. While traditional IR models have typically involved speci\ufb01c components for error correction\u2014such as automatic spell corrections over queries\u2014new IR models may adopt different strategies towards dealing with such errors by operating at the character-level and/or by learning better representations from noisy texts. Sensitivity to context Retrieval in the wild can leverage many implicit and explicit context information.5 The query \u201cweather\u201d can refer to the weather in Seattle or in London depending on where the user is located. An IR model may retrieve different results for the query \u201cdecorations\u201d depending on the time of the year. The query \u201cgiants match highlights\u201d can be better disambiguated if the IR system knows whether the user is a fan of baseball or American football, whether she is located on the East or the West coast of USA, or if the model has knowledge of recent sport \ufb01xtures. In a conversational IR system, the correct response to the question \"When did she become the prime minister?\" would depend on disambiguating the correct entity based on the context of references made in the previous turns of the conversation. 
Relevance, therefore, in many applications is situated in the user and task context, and is an important consideration in the design of IR systems. Ef\ufb01ciency Ef\ufb01ciency of retrieval is one of the salient points of any retrieval system. A typical commercial Web search engine may deal with tens of thousands of queries per second6\u2014retrieving results for each query from an index containing billions of documents. Search engines typically involve large multi-tier architectures and the retrieval process generally consists of multiple stages of pruning the candidate set of documents [131]. The IR model at the bottom of this telescoping setup may need to sift through billions of documents\u2014while the model at the top may only need to re-rank between tens of promising documents. The retrieval approaches that are suitable at one level of the stack may be highly impractical at a different step\u2014models at the bottom need to be fast but mostly focus on eliminating irrelevant or junk results, while models at the top tend to develop more sophisticated notions of relevance, and focus on distinguishing between documents that are much closer on the relevance scale. So far, much of the focus on neural IR approaches have been limited to re-ranking top-n documents. 5As an extreme example, in the proactive retrieval scenario the retrieval can be triggered based solely on implicit context without any explicit query submission from the user. 6http://www.internetlivestats.com/one-second/#google-band 6 \fTable 1: Notation used in this tutorial. Meaning Notation Single query q Single document d Set of queries Q Collection of documents D Term in query q tq Term in document d td Full vocabulary of all terms T Set of ranked results retrieved for query q Rq Result tuple (document d at rank i) \u27e8i, d\u27e9, where \u27e8i, d\u27e9\u2208Rq Ground truth relevance label of document d for query q relq(d) di is more relevant than dj for query q relq(di) > relq(dj), or succinctly di \u227b q dj Frequency of term t in document d tf(t, d) Number of documents in D that contains term t d f(t) Vector representation of text z \u20d7 vz Probability function for an event E p(E) While this list of desired attributes of an IR model is in no way complete, it serves as a reference for comparing many of the neural and non-neural approaches described in the rest of this tutorial. 2.3 Notation We adopt some common notation for this tutorial shown in Table 1. We use lower-case to denote vectors (e.g., \u20d7 x) and upper-case for tensors of higher dimensions (e.g., X). The ground truth relq(d) in Table 1 may be based on either manual relevance annotations or be implicitly derived from user behaviour on SERP (e.g., from clicks). 2.4 Metrics A large number of IR studies [52, 65, 70, 84, 92, 93, 106, 144] have demonstrated that users of retrieval systems tend to pay attention mostly to top-ranked results. IR metrics, therefore, focus on rank-based comparisons of the retrieved result set R to an ideal ranking of documents, as determined by manual judgments or implicit feedback from user behaviour data. These metrics are typically computed at a rank position, say k, and then averaged over all queries in the test set. Unless otherwise speci\ufb01ed, R refers to the top-k results retrieved by the model. Next, we describe a few popular metrics used in IR evaluations. 
Precision and recall Precision and recall both compute the fraction of relevant documents retrieved for a query q, but with respect to the total number of documents in the retrieved set Rq and the total number of relevant documents in the collection D, respectively. Both metrics assume that the relevance labels are binary. Precisionq = P \u27e8i,d\u27e9\u2208Rq relq(d) |Rq| (1) Recallq = P \u27e8i,d\u27e9\u2208Rq relq(d) P d\u2208D relq(d) (2) Mean reciprocal rank (MRR) Mean reciprocal rank [40] is also computed over binary relevance judgments. It is given as the reciprocal rank of the \ufb01rst relevant document averaged over all queries. RRq = max \u27e8i,d\u27e9\u2208Rq relq(d) i (3) 7 \fMean average precision (MAP) The average precision [235] for a ranked list of documents R is given by, AvePq = P \u27e8i,d\u27e9\u2208Rq Precisionq,i \u00d7 relq(d) P d\u2208D relq(d) (4) where, Precisionq,i is the precision computed at rank i for the query q. The average precision metric is generally used when relevance judgments are binary, although variants using graded judgments have also been proposed [167]. The mean of the average precision over all queries gives the MAP score for the whole set. Normalized discounted cumulative gain (NDCG) There are few different variants of the discounted cumulative gain (DCGq) metric [90] which can be used when graded relevance judgments are available for a query q\u2014say, on a \ufb01ve-point scale between zero to four. A popular incarnation of this metric is as follows. DCGq = X \u27e8i,d\u27e9\u2208Rq 2relq(d) \u22121 log2(i + 1) (5) The ideal DCG (IDCGq) is computed the same way but by assuming an ideal rank order for the documents up to rank k. The normalized DCG (NDCGq) is then given by, NDCGq = DCGq IDCGq (6) 2.5 Traditional IR models In this section, we introduce a few of the traditionally popular IR approaches. The decades of insights from these IR models not only inform the design of our new neural based approaches, but these models also serve as important baselines for comparison. They also highlight the various desiderata that we expect the neural IR models to incorporate. TF-IDF There is a broad family of statistical functions in IR that consider the number of occurrences of each query term in the document (term-frequency) and the corresponding inverse document frequency of the same terms in the full collection (as an indicator of the informativeness of the term). One theoretical basis for such formulations is the probabilistic model of IR that yielded the popular BM25 [166] ranking function. BM25(q, d) = X tq\u2208q id f(tq) \u00b7 tf(tq, d) \u00b7 (k1 + 1) tf(tq, d) + k1 \u00b7 \u0010 1 \u2212b + b \u00b7 |d| avgdl \u0011 (7) where, avgdl is the average length of documents in the collection D, and k1 and b are parameters that are usually tuned on a validation dataset. In practice, k1 is sometimes set to some default value in the range [1.2, 2.0] and b as 0.75. The id f(t) is popularly computed as, id f(t) = log |D| \u2212d f(t) + 0.5 d f(t) + 0.5 (8) BM25 aggregates the contributions from individual terms but ignores any phrasal or proximity signals between the occurrences of the different query terms in the document. A variant of BM25 [229] also considers documents as composed of several \ufb01elds (such as, title, body, and anchor texts). 8 \fLanguage modelling (LM) In the language modelling based approach [79, 161, 230], documents are ranked by the posterior probability p(d|q). 
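Before moving on, here is a minimal Python sketch of the rank-based metrics defined above (precision, recall, reciprocal rank, and NDCG); the toy judgments are hypothetical, and the NDCG ideal ranking is simplified to a re-sort of the retrieved labels rather than the full judged set.

```python
import math

def precision_recall(ranked_rels, total_relevant, k):
    """ranked_rels: binary relevance labels of the retrieved documents, in rank order."""
    retrieved = ranked_rels[:k]
    p = sum(retrieved) / k
    r = sum(retrieved) / total_relevant if total_relevant else 0.0
    return p, r

def reciprocal_rank(ranked_rels):
    """1/rank of the first relevant document (Eq. 3); 0 if none is retrieved."""
    for i, rel in enumerate(ranked_rels, start=1):
        if rel:
            return 1.0 / i
    return 0.0

def dcg(graded_rels, k):
    """DCG@k with the (2^rel - 1) gain and log2(i + 1) discount from Eq. (5)."""
    return sum((2 ** rel - 1) / math.log2(i + 1)
               for i, rel in enumerate(graded_rels[:k], start=1))

def ndcg(graded_rels, k):
    ideal = dcg(sorted(graded_rels, reverse=True), k)
    return dcg(graded_rels, k) / ideal if ideal > 0 else 0.0

# Hypothetical judgments for one query: graded 0-4, binarized for P/R/RR.
graded = [3, 0, 2, 0, 1]
binary = [1 if g > 0 else 0 for g in graded]
print(precision_recall(binary, total_relevant=4, k=5))
print(reciprocal_rank(binary))
print(round(ndcg(graded, k=5), 3))
```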
p(d|q) = p(q|d).p(d) P \u00af d\u2208D p(q| \u00af d).p( \u00af d) \u221dp(q|d).p(d) (9) = p(q|d) , assuming p(d) is uniform (10) = Y tq\u2208q p(tq|d) (11) = Y tq\u2208q \u0012 \u03bb\u02c6 p(tq|d) + (1 \u2212\u03bb)\u02c6 p(tq|D) \u0013 (12) = Y tq\u2208q \u0012 \u03bbtf(tq, d) |d| + (1 \u2212\u03bb) P \u00af d\u2208D tf(tq, \u00af d) P \u00af d\u2208D | \u00af d| \u0013 (13) where, \u02c6 p(E) is the maximum likelihood estimate (MLE) of the probability of event E. p(q|d) indicates the probability of generating query q by randomly sampling terms from document d. For smoothing, terms are sampled from both the document d and the full collection D\u2014the two events are treated as mutually exclusive, and their probability is given by \u03bb and (1 \u2212\u03bb), respectively. Both TF-IDF and language modelling based approaches estimate document relevance based on the count of only the query terms in the document. The position of these occurrences and the relationship with other terms in the document are ignored. Translation models Berger and Lafferty [17] proposed an alternative method to estimate p(tq|d) in the language modelling based IR approach (Equation 11), by assuming that the query q is being generated via a \"translation\" process from the document d. p(tq|d) = X td\u2208d p(tq|td) \u00b7 p(td|d) (14) The p(tq|td) component allows the model to garner evidence of relevance from non-query terms in the document. Berger and Lafferty [17] propose to estimate p(tq|td) from query-document paired data similar to popular techniques in statistical machine translation [22, 23]\u2014but other approaches for estimation have also been explored [236]. Dependence model None of the three IR models described so far consider proximity between query terms. To address this, Metzler and Croft [132] proposed a linear model over proximity-based features. DM(q, d) = (1 \u2212\u03bbow \u2212\u03bbuw) X tq\u2208q log (1 \u2212\u03b1d)tf(tq, d) |d| + \u03b1d P \u00af d\u2208D tf(tq, \u00af d) P \u00af d\u2208D | \u00af d| ! + \u03bbow X cq\u2208ow(q) log (1 \u2212\u03b1d)tf#1(cq, d) |d| + \u03b1d P \u00af d\u2208D tf#1(cq, \u00af d) P \u00af d\u2208D | \u00af d| ! + \u03bbuw X cq\u2208uw(q) log (1 \u2212\u03b1d)tf#uwN(cq, d) |d| + \u03b1d P \u00af d\u2208D tf#uwN(cq, \u00af d) P \u00af d\u2208D | \u00af d| ! (15) where, ow(q) and uw(q) are the set of all contiguous n-grams (or phrases) and the set of all bags of terms that can be generated from query q. tf#1 and tf#uwN are the ordered-window and unorderedwindow operators from Indri [186]. Finally, \u03bbow and \u03bbuw are the tunable parameters of the model. 9 \fPseudo relevance feedback (PRF) PRF-based methods, such as Relevance Models (RM) [108, 109], typically demonstrate strong performance at the cost of executing an additional round of retrieval. The set of ranked documents R1 from the \ufb01rst round of retrieval is used to select expansion terms to augment the query for the second round of retrieval. The ranked set R2 from the second round are presented to the user. The underlying approach to scoring a document in RM is by computing the KL divergence [105] between the query language model \u03b8q and the document language model \u03b8d. 
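To ground the TF-IDF and language modelling ranking functions above, the following is a minimal sketch of BM25 (Equations 7-8) and Jelinek-Mercer smoothed query likelihood (Equation 13) over a toy tokenized collection; the parameter values and toy passages are illustrative assumptions only.

```python
import math
from collections import Counter

collection = [
    "the cat sat on the mat".split(),
    "a dog chased the cat across the yard".split(),
    "bananas and mangoes are tropical fruit".split(),
]
N = len(collection)
avgdl = sum(len(d) for d in collection) / N
df = Counter(t for d in collection for t in set(d))   # document frequencies

def bm25_idf(t):
    # IDF from Eq. (8); can be negative for terms in more than half the collection.
    return math.log((N - df[t] + 0.5) / (df[t] + 0.5))

def bm25(query, doc, k1=1.2, b=0.75):
    tf = Counter(doc)
    score = 0.0
    for t in query:
        if tf[t] == 0:
            continue
        score += bm25_idf(t) * tf[t] * (k1 + 1) / (
            tf[t] + k1 * (1 - b + b * len(doc) / avgdl))
    return score

cf = Counter(t for d in collection for t in d)        # collection term counts
total_terms = sum(len(d) for d in collection)

def query_likelihood(query, doc, lam=0.5):
    # Jelinek-Mercer smoothed log p(q|d), following Eq. (13).
    tf = Counter(doc)
    return sum(math.log(lam * tf[t] / len(doc) +
                        (1 - lam) * cf[t] / total_terms)
               for t in query if cf[t] > 0)           # skip terms unseen in the collection

q = "cat on mat".split()
for d in collection:
    print(round(bm25(q, d), 3), round(query_likelihood(q, d), 3))
```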
score(q, d) = \u2212 X t\u2208T p(t|\u03b8q)log p(t|\u03b8q) p(t|\u03b8d) (16) Without PRF, p(t|\u03b8q) = tf(t, q) |q| (17) But under the popular RM3 [2] formulation the new query language model \u00af \u03b8q is estimated by, p(t| \u00af \u03b8q) = \u03b1tf(t, q) |q| + (1 \u2212\u03b1) X d\u2208R1 p(t|\u03b8d)p(d) Y \u00af t\u2208q p(\u00af t|\u03b8d) (18) By expanding the query using the results from the \ufb01rst round of retrieval PRF based approaches tend to be more robust to the vocabulary mismatch problem plaguing many other traditional IR models. 2.6 Learning to rank (L2R) In learning to rank, a query-document pair is represented by a vector of numerical features \u20d7 x \u2208Rn, and a model f : \u20d7 x \u2192R is trained that maps the feature vector to a real-valued score. The training dataset for the model consists of a set of queries and a set of documents per query. Depending on the \ufb02avour of L2R, in addition to the feature vector, each query-document pair in the training data is augmented with some relevance information. Liu [121] categorized the different L2R approaches based on their training objectives. \u2022 In the pointwise approach, the relevance information relq(d) is in the form of a numerical value associated with every query-document pair with feature vector \u20d7 xq,d. The numerical relevance label can be derived from binary or graded relevance judgments or from implicit user feedback, such as clickthrough information. A regression model is typically trained on the data to predict the numerical value relq(d) given \u20d7 xq,d. \u2022 In the pairwise approach, the relevance information is in the form of preferences between pairs of documents with respect to individual queries (e.g., di \u227b q dj). The ranking problem in this case reduces to binary classi\ufb01cation for predicting the more relevant document. \u2022 Finally, the listwise approach involves directly optimizing for a rank-based metric\u2014which is dif\ufb01cult because these metrics are often not continuous (and hence not differentiable) with respect to the model parameters. The input features for L2R models typically belong to one of three categories. \u2022 Query-independent or static features (e.g., PageRank or spam score of the document) \u2022 Query-dependent or dynamic features (e.g., BM25) \u2022 Query-level features (e.g., number of words in query) Many machine learning models\u2014including support vector machines, neural networks, and boosted decision trees\u2014have been employed over the years for the learning to rank task, and a correspondingly 10 \fquery text generate query representation doc text generate doc representation estimate relevance query vector doc vector point of query representation point of match point of doc representation Figure 4: Document ranking typically involves a query and a document representation steps, followed by a matching stage. Neural models can be useful either for generating good representations or in estimating relevance, or both. large number of different loss functions have been explored. Next, we brie\ufb02y describe RankNet [26] that has been a popular choice for training neural L2R models and was also\u2014for many years\u2014an industry favourite, such as at the commercial Web search engine Bing.7 RankNet RankNet [26] is pairwise loss function. For a given query q, a pair of documents \u27e8di, dj\u27e9, with different relevance labels, such that di \u227b q dj, and feature vectors \u27e8\u20d7 xi, \u20d7 xj\u27e9, is chosen. 
The model f : Rn \u2192R, typically a neural network but can also be any other machine learning model whose output is differentiable with respect to its parameters, computes the scores si = f(\u20d7 xi) and sj = f(\u20d7 xj), such that ideally si > sj. Given the output scores \u27e8si, sj\u27e9from the model corresponding to the two documents, the probability that di would be ranked higher than dj is given by, pij \u2261p(di \u227b q dj) \u2261 1 1 + e\u2212\u03c3(si\u2212sj) (19) where, \u03c3 determines the shape of the sigmoid. Let Sij \u2208{\u22121, 0, +1} be the true preference label between di and dj for the training sample\u2014 denoting di is more, equal, or less relevant than dj, respectively. Then the desired probability of ranking di over dj is given by \u00af pij = 1 2(1 + Sij). The cross-entropy loss L between the desired probability \u00af pij and the predicted probability pij is given by, L = \u2212\u00af pijlog(pij) \u2212(1 \u2212\u00af pij)log(1 \u2212pij) (20) = 1 2(1 \u2212Sij)\u03c3(si \u2212sj) + log(1 + e\u2212\u03c3(si\u2212sj)) (21) = log(1 + e\u2212\u03c3(si\u2212sj)) if, documents are ordered such that di \u227b q dj(Sij = 1) (22) Note that L is differentiable with respect to the model output si and hence the model can be trained using gradient descent. We direct the interested reader to [27] for more detailed derivations for computing the gradients for RankNet and for the evolution to the listwise models LambdaRank [28] and LambdaMART [214]. 3 Anatomy of a neural IR model At a high level, document ranking comprises of performing three primary steps\u2014generate a representation of the query that speci\ufb01es the information need, generate a representation of the document 7https://www.microsoft.com/en-us/research/blog/ranknet-a-ranking-retrospective/ 11 \fquery text doc text generate manually designed features deep neural network for matching (a) Learning to rank using manually designed features (e.g., Liu [121]) query text generate query term vector doc text generate doc term vector generate matching patterns query term vector doc term vector deep neural network for matching (b) Estimating relevance from patterns of exact matches (e.g., [71, 141]) query text generate query embedding doc text generate doc embedding cosine similarity query embedding doc embedding (c) Learning query and document representations for matching (e.g., [88, 143]) query text query expansion using embeddings doc text generate doc term vector query likelihood query term vector doc term vector (d) Query expansion using neural embeddings (e.g., [51, 170]) Figure 5: Examples of different neural approaches to IR. In (a) and (b) the neural network is only used at the point of matching, whereas in (c) the focus is on learning effective representations of text using neural methods. Neural models can also be used to expand or augment the query before applying traditional IR techniques, as shown in (d). 12 \fbanana mango dog (a) Local representation banana mango dog fruit elongate ovate barks has tail (b) Distributed representation Figure 6: Under local representations the terms \u201cbanana\u201d, \u201cmango\u201d, and \u201cdog\u201d are distinct items. But distributed vector representations may recognize that \u201cbanana\u201d and \u201cmango\u201d are both fruits, but \u201cdog\u201d is different. that captures the distribution over the information contained, and match the query and the document representations to estimate their mutual relevance. 
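As a companion to the RankNet derivation in Section 2.6 above (Equations 19-22), here is a minimal PyTorch-style sketch of one pairwise training step; the small feed-forward scorer, feature dimensionality, and random batch are illustrative assumptions, not the original implementation.

```python
import torch
import torch.nn as nn

class Scorer(nn.Module):
    """f: R^n -> R; any model differentiable in its parameters works for RankNet."""
    def __init__(self, n_features=10, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, x):
        return self.net(x).squeeze(-1)

def ranknet_loss(s_i, s_j, sigma=1.0):
    # Eq. (22): pairs are ordered so that d_i is preferred over d_j (S_ij = 1).
    return torch.log1p(torch.exp(-sigma * (s_i - s_j))).mean()

model = Scorer()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Hypothetical batch: x_pos holds features of the preferred documents.
x_pos, x_neg = torch.randn(16, 10), torch.randn(16, 10)
loss = ranknet_loss(model(x_pos), model(x_neg))
opt.zero_grad()
loss.backward()
opt.step()
print(float(loss))
```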
All existing neural approaches to IR can be broadly categorized based on whether they in\ufb02uence the query representation, the document representation, or in estimating relevance. A neural approach may impact one or more of these stages shown in Figure 4. Neural networks are popular as learning to rank models discussed in Section 2.6. In these models, a joint representation of the query and the document is generated using manually designed features and the neural network is used only at the point of match to estimate relevance, as shown in Figure 5a. In Section 7.4, we will discuss deep neural network models, such as [71, 141], that estimate relevance based on patterns of exact query term matches in the document. Unlike traditional learning to rank models, however, these architectures (shown in Figure 5b) depend less on manual feature engineering and more on automatically detecting regularities in good matching patterns. In contrast, many (shallow and deep) neural IR models depend on learning good low-dimensional vector representations\u2014or embeddings\u2014of query and document text, and using them within traditional IR models or in conjunction with simple similarity metrics (e.g., cosine similarity). These models shown in Figure 5c may learn the embeddings by optimizing directly for the IR task (e.g., [88]), or separately in an unsupervised fashion (e.g., [143]). Finally, Figure 5d shows IR approaches where the neural models are used for query expansion [51, 170]. While the taxonomy of neural approaches described in this section is rather simple, it does provide an intuitive framework for comparing the different neural approaches in IR, and highlights the similarities and distinctions between these different techniques. 4 Term representations 4.1 A tale of two representations Vector representations are fundamental to both information retrieval and machine learning. In IR, terms are typically the smallest unit of representation for indexing and retrieval. Therefore, many IR models\u2014both neural and non-neural\u2014focus on learning good vector representations of terms. Different vector representations exhibit different levels of generalization\u2014some consider every term as distinct entities while others learn to identify common attributes. Different representation schemes derive different notions of similarity between terms from the de\ufb01nition of the corresponding vector spaces. Some representations operate over \ufb01xed-size vocabularies, while the design of others obviate such constraints. They also differ on the properties of compositionality that de\ufb01nes how representations for larger units of information, such as passages and documents, can be derived from individual term vectors. These are some of the important considerations for choosing a term representation suitable for a speci\ufb01c task. Local representations Under local (or one-hot) representations, every term in a \ufb01xed size vocabulary T is represented by a binary vector \u20d7 v \u2208{0, 1}|T |, where only one of the values in the vector is one and all the others are set to zero. Each position in the vector \u20d7 v corresponds to a term. The term \u201cbanana\u201d, under this representation, is given by a vector that has the value one in the position corresponding to \u201cbanana\u201d and zero everywhere else. Similarly, the terms \u201cmango\u201d and \u201cdog\u201d are represented by setting different positions in the vector to one. 
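A minimal sketch contrasting the local (one-hot) representation just described with a hand-crafted distributed representation in the spirit of Figure 6; the toy vocabulary and feature names are assumptions for illustration.

```python
import numpy as np

vocab = ["banana", "mango", "dog"]                  # toy fixed-size vocabulary

def one_hot(term):
    v = np.zeros(len(vocab))
    v[vocab.index(term)] = 1.0                      # single active position
    return v

# Hand-crafted distributed features: [fruit, elongate, ovate, barks, has-tail]
distributed = {
    "banana": np.array([1, 1, 0, 0, 0], dtype=float),
    "mango":  np.array([1, 0, 1, 0, 0], dtype=float),
    "dog":    np.array([0, 0, 0, 1, 1], dtype=float),
}

# Under one-hot vectors every pair of distinct terms is equally dissimilar;
# the distributed vectors let "banana" and "mango" share the fruit attribute.
print(one_hot("banana") @ one_hot("mango"))          # 0.0
print(distributed["banana"] @ distributed["mango"])  # 1.0
```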
13 \fbanana Doc 8 Doc 3 Doc 12 (a) In-document features banana like flies a fruit (b) Neighbouring-word features banana fruit-4 a-1 flies-3 like-2 fruit+1 (c) Neighbouring-word w/ distance features banana nan #ba ana na# ban (d) Character-trigraph features Figure 7: Examples of different feature-based distributed representations of the term \u201cbanana\u201d. The representations in (a), (b), and (c) are based on external contexts in which the term frequently occurs, while (d) is based on properties intrinsic to the term. The representation scheme in (a) depends on the documents containing the term, while the scheme shown in (b) and (c) depends on other terms that appears in its neighbourhood. The scheme (b) ignores inter-term distances. Therefore, in the sentence \u201cTime \ufb02ies like an arrow; fruit \ufb02ies like a banana\u201d, the feature \u201cfruit\u201d describes both the terms \u201cbanana\u201d and \u201carrow\u201d. However, in the representation scheme of (c) the feature \u201cfruit\u22124\u201d is positive for \u201cbanana\u201d, and the feature \u201cfruit+1\u201d for \u201carrow\u201d. Figure 6a highlights that under this scheme each term is a unique entity, and \u201cbanana\u201d is as distinct from \u201cdog\u201d as it is from \u201cmango\u201d. Terms outside of the vocabulary either have no representation, or are denoted by a special \u201cUNK\u201d symbol, under this scheme. Distributed representations Under distributed representations every term is represented by a vector \u20d7 v \u2208R|k|. \u20d7 v can be a sparse or a dense vector\u2014a vector of hand-crafted features or a learnt representation in which the individual dimensions are not interpretable in isolation. The key underlying hypothesis for any distributed representation scheme, however, is that by representing a term by its attributes allows for de\ufb01ning some notion of similarity between the different terms based on the chosen properties. For example, in Figure 6b \u201cbanana\u201d is more similar to \u201cmango\u201d than \u201cdog\u201d because they are both fruits, but yet different because of other properties that are not shared between the two, such as shape. A key consideration in any feature based distributed representation is the choice of the features themselves. A popular approach involves representing terms by features that capture their distributional properties. This is motivated by the distributional hypothesis [75] that states that terms that are used (or occur) in similar context tend to be semantically similar. Firth [56] famously purported this idea of distributional semantics8 by stating \u201ca word is characterized by the company it keeps\u201d. However, both distribution and semantics by themselves are not well-de\ufb01ned and under different context may mean very different things. Figure 7 shows three different sparse vector representations of the term \u201cbanana\u201d corresponding to different distributional feature spaces\u2014documents containing the term (e.g., LSA [48]), neighbouring words in a window (e.g., HAL [125], COALS [168], and [24]), and neighbouring words with distance (e.g., [117]). Finally, Figure 7d shows a vector representation of \u201cbanana\u201d based on the character trigraphs in the term itself\u2014instead of external contexts in which the term occurs. In Section 4.2 we will discuss how choosing different distributional features for term representation leads to different nuanced notions of semantic similarity between them. 
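The neighbouring-word feature spaces of Figures 7b and 7c can be computed in a few lines, as in the sketch below; the tokenized sentence from the caption and the window size of two are illustrative choices.

```python
from collections import Counter

# Toy text from the Figure 7 discussion (lower-cased, punctuation dropped).
tokens = "time flies like an arrow fruit flies like a banana".split()

def context_features(target, window=2, with_distance=False):
    """Sparse distributed representation of a term built from the neighbouring
    terms it co-occurs with, as in Figure 7b (without distances) and 7c (with)."""
    features = Counter()
    for i, t in enumerate(tokens):
        if t != target:
            continue
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j == i:
                continue
            offset = j - i
            key = f"{tokens[j]}{offset:+d}" if with_distance else tokens[j]
            features[key] += 1
    return dict(features)

print(context_features("banana"))                      # {'like': 1, 'a': 1}
print(context_features("banana", with_distance=True))  # {'like-2': 1, 'a-1': 1}
```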
8Readers should take note that while many distributed representations take advantage of distributional properties, the two concepts are not synonymous. A term can have a distributed representation based on non-distributional features\u2014e.g., parts of speech classi\ufb01cation and character trigraphs in the term. 14 \fbanana mango dog Figure 8: A vector space representation of terms puts \u201cbanana\u201d closer to \u201cmango\u201d because they share more common attributes than \u201cbanana\u201d and \u201cdog\u201d. When the vectors are high-dimensional, sparse, and based on distributional feature they are referred to as explicit vector representations [117]. On the other hand, when the vectors are dense, small (k \u226a|T|), and learnt from data then they are commonly referred to as embeddings. For both explicit and embedding based representations several distance metrics can be used to de\ufb01ne similarity between terms, although cosine similarity is commonly used. sim(\u20d7 vi,\u20d7 vj) = cos(\u20d7 vi,\u20d7 vj) = \u20d7 v \u22ba i \u20d7 vj \u2225\u20d7 vi\u2225\u2225\u20d7 vj\u2225 (23) Most embeddings are learnt from explicit vector space representations, and hence the discussions in 4.2 about different notions of similarity are also relevant to the embedding models. In Section 4.3 and 4.4 we brie\ufb02y discuss explicit and embedding based representations. With respect to compositionality, it is important to understand that distributed representations of items are often derived from local or distributed representation of its parts. For example, a document can be represented by the sum of the one-hot vectors or embeddings corresponding to the terms in the document. The resultant vector, in both cases, corresponds to a distributed bag-of-word representation. Similarly, the character trigraph representation of terms in Figure 7d is simply an aggregation over the one-hot representations of the constituent trigraphs. In the context of neural models, distributed representations generally refer to learnt embeddings. The idea of \u2018local\u2019 and \u2018distributed\u2019 representations has a speci\ufb01c signi\ufb01cance in the context of neural network models. Each concept, entity, or term can be represented within a neural network by the activation of a single neuron (local representation) or by the combined pattern of activations of several neurons (distributed representation) [82]. 4.2 Notions of similarity Any vector representation inherently de\ufb01nes some notion of relatedness between terms. Is \u201cSeattle\u201d closer to \u201cSydney\u201d or to \u201cSeahawks\u201d? The answer depends on the type of relationship we are interested in. If we want terms of similar type to be closer, then \u201cSydney\u201d is more similar to \u201cSeattle\u201d because they are both cities. However, if we are interested to \ufb01nd terms that co-occur in the same document or passage, then \u201cSeahawks\u201d\u2014Seattle\u2019s football team\u2014should be closer. The former represents a Typical, or type-based notion of similarity while the latter exhibits a more Topical sense of relatedness. If we want to compare \u201cSeattle\u201d with \u201cSydeny\u201d and \u201cSeahawks based on their respective vector representations, then the underlying feature space needs to align with the notion of similarity that we are interested in. It is, therefore, important for the readers to build an intuition about the choice of features and the notion of similarity they encompass. 
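As a brief aside before the toy-corpus demonstration that follows, the cosine similarity of Equation (23) and the sum-based composition described above can be sketched as follows; the attribute vectors loosely mirror Figure 6b and their values are hand-picked purely for illustration.

```python
import numpy as np

def cosine(v_i, v_j):
    """Cosine similarity of Eq. (23)."""
    return (v_i @ v_j) / (np.linalg.norm(v_i) * np.linalg.norm(v_j))

# Hand-crafted attribute vectors over [fruit, elongate, ovate, barks, has-tail],
# loosely mirroring the distributed representation of Figure 6b (toy values).
terms = {
    "banana": np.array([1.0, 1.0, 0.0, 0.0, 0.0]),
    "mango":  np.array([1.0, 0.0, 1.0, 0.0, 0.0]),
    "dog":    np.array([0.0, 0.0, 0.0, 1.0, 1.0]),
}
print(cosine(terms["banana"], terms["mango"]))   # 0.5 -> shared 'fruit' attribute
print(cosine(terms["banana"], terms["dog"]))     # 0.0 -> no shared attributes

# Compositionality: a distributed bag-of-words vector for a larger unit of text
# can be derived by summing the vectors of its constituent terms.
doc = terms["banana"] + terms["mango"]
print(cosine(doc, terms["dog"]))                 # still 0.0
```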
This can be demonstrated by using a toy corpus, such as the one in Table 2. Figure 9a shows that the \u201cin documents\u201d features naturally lend 15 \fTable 2: A toy corpus of short documents that we consider for the discussion on different notions of similarity between terms under different distributed representations. The choice of the feature space that is used for generating the distributed representation determines which terms are closer in the vector space, as shown in Figure 9. Sample documents doc 01 Seattle map doc 09 Denver map doc 02 Seattle weather doc 10 Denver weather doc 03 Seahawks jerseys doc 11 Broncos jerseys doc 04 Seahawks highlights doc 12 Broncos highlights doc 05 Seattle Seahawks Wilson doc 13 Denver Broncos Lynch doc 06 Seattle Seahawks Sherman doc 14 Denver Broncos Sanchez doc 07 Seattle Seahawks Browner doc 15 Denver Broncos Miller doc 08 Seattle Seahawks Ifedi doc 16 Denver Broncos Marshall to a Topical sense of similarity between the terms, while the \u201cneighbouring terms with distances\u201d features in Figure 9c gives rise to a more Typical notion of relatedness. Using \u201cneighbouring terms\u201d without the inter-term distances as features, however, produces a mixture of Topical and Typical relationships. This is because when the term distances are considered in feature de\ufb01nition then the document \u201cSeattle Seahawks Wilson\u201d produces the bag-of-features {Seahawks+1, Wilson+2} for \u201cSeattle\u201d which is non-overlapping with the bag-of-features {Seattle\u22121, Wilson+1} for \u201cSeahawks\u201d. However, when the feature de\ufb01nition ignores the term-distances then there is a partial overlap between the bag-of-features {Seahawks, Wilson} and {Seattle, Wilson} corresponding to \u201cSeattle\u201d and \u201cSeahawks\u201d. The overlap increases signi\ufb01cantly when we use a larger window-size for identifying neighbouring terms pushing the notion of similarity closer to a Topical de\ufb01nition. This effect of the windows size on the Topicality of the representation space was reported by Levy and Goldberg [115] in the context of learnt embeddings. Readers should take note that the set of all inter-term relationships goes far beyond the two notions of Typical and Topical that we discuss in this section. For example, vector representations could cluster terms closer based on linguistic styles\u2014e.g., terms that appear in thriller novels versus in children\u2019s rhymes, or in British versus American English. However, the notions of Typical and Topical similarities popularly come up in discussions in the context of many IR and NLP tasks\u2014sometimes under different names such as Paradigmatic and Syntagmatic relations9\u2014and the idea itself goes back at least as far as Saussure [30, 47, 74, 172]. 4.3 Explicit vector representations Explicit vector representations can be broadly categorized based on their choice of distributional features (e.g., in documents, neighbouring terms with or without distances, etc.) and different weighting schemes (e.g., TF-IDF, positive pointwise mutual information, etc.) applied over the raw counts. We direct the readers to [12, 199] which are good surveys of many existing explicit vector representation schemes. Levy et al. [117] demonstrated that explicit vector representations are amenable to the term analogy task using simple vector operations. 
A term analogy task involves answering questions of the form \u201cman is to woman as king is to ____?\u201d\u2014the correct answer to which in this case happens to be \u201cqueen\u201d. In NLP, term analogies are typically performed by simple vector operations of the following form followed by a nearest-neighbour search, 9Interestingly, the notion of Paradigmatic (Typical) and Syntagmatic (Topical) relationships show up almost universally\u2014not just in text. In vision, for example, the different images of \u201cnoses\u201d bear a Typical similarity to each other, while they share a Topical relationship with images of \u201ceyes\u201d or \u201cears\u201d. Curiously, Barthes [13] even extended this analogy to garments\u2014where paradigmatic relationships exist between items of the same type (e.g., between hats and between boots) and the proper Syntagmatic juxtaposition of items from these different Paradigms\u2014from hats to boots\u2014 forms a fashionable ensemble . 16 \fSeahawks Denver Broncos Doc 02 Doc 01 Seattle Doc 04 Doc 03 Doc 06 Doc 05 Doc 08 Doc 07 Doc 10 Doc 09 Doc 12 Doc 11 Doc 14 Doc 13 Doc 16 Doc 15 (a) \u201cIn-documents\u201d features Seahawks Denver Broncos Denver Seattle Seattle Broncos Seahawks weather map highlights jerseys Sherman Wilson Ifedi Browner Sanchez Lynch Marshall Miller (b) \u201cNeighbouring terms\u201d features Seahawks Denver Broncos Denver-1 Seattle-1 Seattle Broncos+1 Seahawks+1 weather+1 map+1 highlights+1 jerseys+1 Wilson+2 Wilson+1 Sherman+2 Sherman+1 Browner+2 Browner+1 Ifedi+2 Ifedi+1 Lynch+2 Lynch+1 Sanchez+2 Sanchez+1 Miller+2 Miller+1 Marshall+2 Marshall+1 (c) \u201cNeighbouring terms w/ distances\u201d features Figure 9: The \ufb01gure shows different distributed representations for the four terms\u2014\u201dSeattle\u201d, \u201cSeahawks\u201d, \u201cDenver\u201d, and \u201cBroncos\u201d\u2014based on the toy corpus in Table 2. Shaded circles indicate non-zero values in the vectors\u2014the darker shade highlights the vector dimensions where more than one vector has a non-zero value. When the representation is based on the documents that the terms occur in then \u201cSeattle\u201d is more similar to \u201cSeahawks\u201d than to \u201cDenver\u201d. The representation scheme in (a) is, therefore, more aligned with a Topical notion of similarity. In contrast, in (c) each term is represented by a vector of neighbouring terms\u2014where the distances between the terms are taken into consideration\u2014which puts \u201cSeattle\u201d closer to \u201cDenver\u201d demonstrating a Typical, or type-based, similarity. When the inter-term distances are ignored, as in (b), a mix of Typical and Topical similarities is observed. Finally, it is worth noting that neighbouring-terms based vector representations leads to similarities between terms that do not necessarily occur in the same document, and hence the term-term relationships are less sparse than when only in-document features are considered. 17 \fSeahawks Denver Broncos Seattle Seahawks \u2013 Seattle + Denver Denver Seattle Broncos Seahawks weather map highlights jerseys Sherman Wilson Ifedi Browner Sanchez Lynch Marshall Miller Figure 10: A visual demonstration of term analogies via simple vector algebra. The shaded circles denote non-zero values. Darker shade is used to highlight the non-zero values along the vector dimensions for which the output of \u20d7 vSeahawks \u2212\u20d7 vSeattle + \u20d7 vDenver is positive. The output vector is closest to \u20d7 vBroncos as shown in this toy example. 
⃗v_king − ⃗v_man + ⃗v_woman ≈ ⃗v_queen    (24)

It may be surprising to some readers that the vector obtained by the simple algebraic operations ⃗v_king − ⃗v_man + ⃗v_woman produces a vector close to the vector ⃗v_queen. We present a visual intuition of why this works in practice in Figure 10, but we refer the readers to [7, 117] for a more rigorous mathematical explanation.

4.4 Embeddings

While explicit vector representations based on distributional features can capture interesting notions of term-term similarity, they have one big drawback—the resultant vector spaces are highly sparse and high-dimensional. The number of dimensions is generally of the same order as the number of documents or the vocabulary size, which is unwieldy for most practical tasks. An alternative is to learn lower dimensional representations of terms from the data that retain similar attributes to the higher dimensional vectors.

An embedding is a representation of items in a new space such that the properties of, and the relationships between, the items are preserved. Goodfellow et al. [64] articulate that the goal of an embedding is to generate a simpler representation—where simplification may mean a reduction in the number of dimensions, an increase in the sparseness of the representation, disentangling the principal components of the vector space, or a combination of these goals. In the context of term embeddings, the explicit feature vectors—like those we discussed in Section 4.3—constitute the original representation. An embedding trained from these features assimilates the properties of the terms and the inter-term relationships observable in the original feature space.

The most popular approaches for learning embeddings include either factorizing the term-feature matrix (e.g., LSA [48]) or using gradient descent based methods that try to predict the features given the term (e.g., [15, 134]). Baroni et al. [11] empirically demonstrate that these feature-predicting models that learn lower dimensional representations, in fact, also perform better than explicit counting based models on different tasks—possibly due to better generalization across terms—although some counter-evidence to the claim of better performance from embedding models has also been reported in the literature [116]. The sparse feature spaces of Section 4.3 are easier to visualize and lead to more intuitive explanations—while their corresponding embeddings are more practically useful. Therefore, it makes sense to think sparse, but act dense in many scenarios. In the rest of this section, we will describe some of the popular neural and non-neural embedding models.

Latent Semantic Analysis (LSA) LSA [48] involves performing singular value decomposition (SVD) [63] on a term-document (or term-passage) matrix X—whose columns ⃗d_j correspond to documents and whose rows ⃗t_i^T correspond to terms—to obtain its low-rank approximation [130]. SVD on X involves finding a solution to X = UΣV^T, where U and V are orthogonal matrices and Σ is a diagonal matrix:^10

X = [⃗u_1, …, ⃗u_l] · diag(σ_1, …, σ_l) · [⃗v_1, …, ⃗v_l]^T    (25)

where σ_1, …, σ_l, ⃗u_1, …, ⃗u_l, and ⃗v_1, …, ⃗v_l are the singular values, the left singular vectors, and the right singular vectors, respectively. The k largest singular values, and the corresponding singular vectors from U and V, give the rank-k approximation of X (X_k = U_kΣ_kV_k^T). The embedding for the i-th term is given by Σ_k⃗t_i. While LSA operates on a term-document matrix, matrix factorization based approaches can also be applied to term-term matrices [25, 111, 168].

10 The matrix visualization of Equation (25) is adapted from https://en.wikipedia.org/wiki/Latent_semantic_analysis.

Neural term embedding models are typically trained by setting up a prediction task. Instead of factorizing the term-feature matrix—as in LSA—neural models are trained to predict the term from its features. Both the term and the features have one-hot representations in the input and the output layers, respectively, and the model learns dense low-dimensional representations in the process of minimizing the prediction error. These approaches are based on the information bottleneck method [197]—discussed in more detail in Section 6.2—with the low-dimensional representations acting as the bottleneck. The training data may contain many instances of the same term-feature pair proportional to their frequency in the corpus (e.g., word2vec [134]), or their counts can be pre-aggregated (e.g., GloVe [160]).

Word2vec For word2vec [61, 134, 136, 137, 169], the features for a term are made up of its neighbours within a fixed size window over the text from the training corpus. The skip-gram architecture (see Figure 11a) is a simple one-hidden-layer neural network. Both the input and the output of the model are in the form of one-hot vectors and the loss function is as follows,

L_skip-gram = −(1/|S|) Σ_{i=1}^{|S|} Σ_{−c≤j≤+c, j≠0} log p(t_{i+j}|t_i)    (26)

where,

p(t_{i+j}|t_i) = exp((W_out ⃗v_{t_{i+j}})^T (W_in ⃗v_{t_i})) / Σ_{k=1}^{|T|} exp((W_out ⃗v_{t_k})^T (W_in ⃗v_{t_i}))    (27)

S is the set of all windows over the training text and c is the number of neighbours we need to predict on either side of the term t_i. The denominator of the softmax function for computing p(t_{i+j}|t_i) sums over all the words in the vocabulary. This is prohibitively costly and in practice either hierarchical softmax [149] or negative sampling is employed. Also, note that the model has two different weight matrices W_in and W_out that are learnable parameters of the model. W_in gives us the IN embeddings corresponding to all the input terms and W_out the OUT embeddings corresponding to the output terms. Generally, only W_in is used and W_out is discarded after training, but we will discuss an IR application that makes use of both the IN and the OUT embeddings later in Section 5.1.

The continuous bag-of-words (CBOW) architecture (see Figure 11b) is similar to the skip-gram model, except that the task is to predict the middle term given the sum of the one-hot vectors of the neighbouring terms in the window. Given a middle term t_i and the set of its neighbours
19 \fWin Wout ti ti+j (a) Skip-gram Win Wout ti+2 ti+1 ti-2 ti-1 ti* ti (b) Continuous bag-of-words (CBOW) Figure 11: The (a) skip-gram and the (b) continuous bag-of-words (CBOW) architectures of word2vec. The architecture is a neural network with a single hidden layer whose size is much smaller than that of the input and the output layers. Both models use one-hot representations of terms in the input and the output. The learnable parameters of the model comprise of the two weight matrices Win and Wout that corresponds to the embeddings the model learns for the input and the output terms, respectively. The skip-gram model trains by minimizing the error in predicting a term given one of its neighbours. The CBOW model, in contrast, predicts a term from a bag of its neighbouring terms. 20 \f{ti\u2212c, . . . , ti\u22121, ti+1, . . . , ti+c}, the CBOW model creates a single training sample with the sum of the one-hot vectors of all the neighbouring terms as input and the one-hot vector \u20d7 vti, corresponding to the middle term, as the expected output. LCBOW = \u22121 |S| |S| X i=1 log(p(ti| X \u2212c\u2264j\u2264+c,j\u0338=0 ti+j)) (28) Contrast this with the skip-gram model that creates 2 \u00d7 c samples by individually pairing each of the neighbouring terms with the middle term. During training, given the same number of windows of text, the skip-gram model, therefore, trains orders of magnitude slower than the CBOW model [134] because it creates 2 \u00d7 c the number of training samples. Word2vec gained particular popularity for its ability to perform word analogies using simple vector algebra, similar to what we have already discussed in Section 4.3. For domains where the interpretability of the embeddings may be important, Sun et al. [191] introduced an additional constraint in the loss function to encourage more sparseness in the learnt representation. Lsparse\u2212CBOW = Lsparse\u2212CBOW \u2212\u03bb X t\u2208T \u2225\u20d7 vt\u22251 (29) GloVe The skip-gram model trains on individual term-neighbour pairs. If we aggregate all the training samples into a matrix X, such that xij is the frequency of the pair \u27e8ti, tj\u27e9in the training data, then the loss function changes to, Lskip\u2212gram = \u2212 |T | X i=1 |T | X j=1 xijlog(p(ti|tj)) (30) = \u2212 |T | X i=1 xi |T | X j=1 xij xi log(p(ti|tj)) (31) = \u2212 |T | X i=1 xi |T | X j=1 \u00af p(ti|tj)log(p(ti|tj)) (32) = |T | X i=1 xiH(\u00af p(ti|tj)log(p(ti|tj))) (33) H(. . . ) is the cross-entropy error between the actual co-occurrence probability \u00af p(ti|tj) and the one predicted by the model p(ti|tj). This is similar to the loss function for GloVe [160] if we replace the cross-entropy error with a squared-error and apply a saturation function f(. . . ) over the actual co-occurrence frequencies. LGloV e = \u2212 |T | X i=1 |T | X j=1 f(xij)(log(xij \u2212\u20d7 v \u22ba wi\u20d7 vwj))2 (34) GloVe is trained using AdaGrad [53]. Similar to word2vec, GloVe also generates two different (IN and OUT) embeddings, but unlike word2vec it generally uses the sum of the IN and the OUT vectors as the embedding for each term in the vocabulary. Paragraph2vec Following the popularity of word2vec [134, 136], similar neural architectures [4, 5, 68, 69, 110, 189] have been proposed that trains on term-document co-occurrences. The training typically involves predicting a term given the ID of a document or a passage that contains the term. In some variants, as shown in Figure 12, neighbouring terms are also provided as input. 
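Returning to word2vec, the skip-gram objective of Equations (26)–(27) can be sketched in a few dozen lines of numpy. The corpus, embedding size, learning rate, and epoch count below are toy values, and the full softmax is retained only because the vocabulary is tiny; as noted above, practical implementations use negative sampling or hierarchical softmax instead.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny illustrative corpus; window size c = 1 and all hyper-parameters are toy values.
corpus = ("seattle seahawks wilson seattle seahawks sherman "
          "denver broncos lynch denver broncos miller").split()
vocab = sorted(set(corpus))
t2i = {t: i for i, t in enumerate(vocab)}
V, k, c, lr = len(vocab), 8, 1, 0.1

W_in = rng.normal(scale=0.1, size=(V, k))    # IN embeddings
W_out = rng.normal(scale=0.1, size=(V, k))   # OUT embeddings

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# Skip-gram with the full softmax of Eq. (26)-(27).
for _ in range(200):
    for i, term in enumerate(corpus):
        for j in range(max(0, i - c), min(len(corpus), i + c + 1)):
            if j == i:
                continue
            t_in, t_out = t2i[term], t2i[corpus[j]]
            h = W_in[t_in]                     # hidden layer = IN embedding of the input term
            p = softmax(W_out @ h)             # Eq. (27)
            grad = p.copy()
            grad[t_out] -= 1.0                 # gradient of -log p(t_out | t_in) w.r.t. the scores
            grad_h = W_out.T @ grad
            W_out -= lr * np.outer(grad, h)
            W_in[t_in] -= lr * grad_h

def neighbours(term, n=2):
    """Nearest neighbours of a term in the learnt IN embedding space."""
    q = W_in[t2i[term]]
    sims = (W_in @ q) / (np.linalg.norm(W_in, axis=1) * np.linalg.norm(q))
    return [vocab[idx] for idx in np.argsort(-sims)[1:n + 1]]

print(neighbours("seattle"))
```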
21 \fWd,in Wt,out dj ti ti+2 ti+1 ti-2 ti-1 Wt,in Figure 12: The paragraph2vec architecture as proposed by Le and Mikolov [110] trains by predicting a term given a document (or passage) ID containing the term. By trying to minimize the prediction error, the model learns an embedding for the term as well as for the document. In some variants of the architecture, optionally the neighbouring terms are also provided as input\u2014as shown in the dotted box. The key motivation for training on term-document pairs is to learn an embedding that is more aligned with a Topical notion of term-term similarity\u2014which is often more appropriate for IR tasks. The term-document relationship, however, tends to be more sparse [219]\u2014including neighbouring term features may compensate for some of that sparsity. In the context of IR tasks, Ai et al. [4, 5] proposed a number of IR-motivated changes to the original Paragraph2vec [110] model training\u2014including, document frequency based negative sampling and document length based regularization. 5 Term embeddings for IR Traditional IR models use local representations of terms for query-document matching. The most straight-forward use case for term embeddings in IR is to enable inexact matching in the embedding space. In Section 2.2, we argued the importance of inspecting non-query terms in the document for garnering evidence of relevance. For example, even from a shallow manual inspection, it is easy to conclude that the passage in Figure 13a is about Albuquerque because it contains \u201cmetropolitan\u201d, \u201cpopulation\u201d, and \u201carea\u201d among other informative terms. On the other hand, the passage in Figure 13b contains \u201csimulator\u201d, \u201cinterpreter\u201d, and \u201cAltair\u201d which seems to suggest that the passage is instead more likely related to computers and technology. In traditional term counting based IR approaches these signals are often ignored. 22 \fAlbuquerque is the most populous city in the U.S. state of New Mexico. The high-altitude city serves as the county seat of Bernalillo County, and it is situated in the central part of the state, straddling the Rio Grande. The city population is 557,169 as of the July 1, 2014 population estimate from the United States Census Bureau, and ranks as the 32nd-largest city in the U.S. The Albuquerque metropolitan statistical area (or MSA) has a population of 907,301 according to the United States Census Bureau\u2019s most recently available estimate for 2015. (a) About Albuquerque Allen suggested that they could program a BASIC interpreter for the device; after a call from Gates claiming to have a working interpreter, MITS requested a demonstration. Since they didn\u2019t actually have one, Allen worked on a simulator for the Altair while Gates developed the interpreter. Although they developed the interpreter on a simulator and not the actual device, the interpreter worked \ufb02awlessly when they demonstrated the interpreter to MITS in Albuquerque, New Mexico in March 1975; MITS agreed to distribute it, marketing it as Altair BASIC. (b) Not about Albuquerque Figure 13: Two passages both containing exactly a single occurrence of the query term \u201cAlbuquerque\u201d. However, the passage in (a) contains other terms such as \u201cpopulation\u201d and \u201carea\u201d that are relevant to a description of the city. In contrast, the terms in passage (b) suggest that it is unlikely to be about the city, and only mentions the city potentially in a different context. 
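The kind of soft matching evidence illustrated in Figure 13 is straightforward to compute once term embeddings are available. In the sketch below, random vectors stand in for pre-trained embeddings (e.g., word2vec or GloVe) purely to keep the example self-contained, so the printed ranking is not meaningful; with real embeddings, terms such as "population" and "metropolitan" would be expected to rank highly for the query term.

```python
import numpy as np

rng = np.random.default_rng(1)

def load_embeddings(vocab, k=50):
    # Random stand-ins for pre-trained term embeddings; replace with real
    # vectors (e.g., word2vec or GloVe) to obtain meaningful rankings.
    return {t: rng.normal(size=k) for t in vocab}

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def soft_match_evidence(query_term, passage_terms, emb, top_n=5):
    """Rank passage terms by their embedding-space similarity to the query
    term -- the kind of soft evidence that exact term counting ignores."""
    q = emb[query_term]
    scored = [(t, cos(emb[t], q)) for t in set(passage_terms) if t != query_term]
    return sorted(scored, key=lambda pair: -pair[1])[:top_n]

passage = ("albuquerque is the most populous city in new mexico "
           "the metropolitan area population is large").split()
emb = load_embeddings(set(passage) | {"albuquerque"})
print(soft_match_evidence("albuquerque", passage, emb))
```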
Most existing shallow neural methods for IR focus on inexact matching using term embeddings. These approaches can be broadly categorized as those that compare the query with the document directly in the embedding space; and those that use embeddings to generate suitable query expansion candidates from a global vocabulary and then perform retrieval based on the expanded query. We discuss both these classes of approaches in the remainder of this section. 5.1 Query-document matching A popular strategy for using term embeddings in IR involves deriving a dense vector representation for the query and the document from the embeddings of the individual terms in the corresponding texts. The term embeddings can be aggregated in different ways, although using the average word (or term) embeddings (AWE) is quite popular [96, 101, 110, 143, 151, 190, 207]. Non-linear combinations of term vectors\u2014such as using Fisher Kernel Framework [35]\u2014have also been explored, as well as other families of aggregate functions of which AWE has been shown to be a special case [228]. The query and the document embeddings themselves can be compared using a variety of similarity metrics, such as cosine similarity or dot-product. For example, sim(q, d) = cos(\u20d7 vq,\u20d7 vd) = \u20d7 v \u22ba q \u20d7 vd \u2225\u20d7 vq\u2225\u2225\u20d7 vd\u2225 (35) where, \u20d7 vq = 1 |q| X tq\u2208q \u20d7 vtq \u2225\u20d7 vtq\u2225 (36) \u20d7 vd = 1 |d| X td\u2208d \u20d7 vtd \u2225\u20d7 vtd\u2225 (37) An important consideration here is the choice of the term embeddings that is appropriate for the retrieval scenario. While, LSA [48], word2vec [136], and GloVe [160] are popularly used\u2014it is important to understand how the notion of inter-term similarity modelled by a speci\ufb01c vector space may in\ufb02uence its performance on a retrieval task. In the example in Figure 13, we want to rank documents that contains related terms, such as \u201cpopulation\u201d or \u201carea\u201d higher\u2014these terms are Topically similar to the query term \u201cAlbuquerque\u201d. Intuitively, a document about \u201cTucson\u201d\u2014which is Typically similar to \u201cAlbuquerque\u201d\u2014is unlikely to satisfy the user intent. The discussion in Section 4.2 on how input features in\ufb02uence the notion of similarity in the learnt vector space is relevant here. Models, such as LSA [48] and Paragraph2vec [110], that consider term-document pairs generally capture Topical similarities in the learnt vector space. On the other hand, word2vec [136] and GloVe [160] embeddings may incorporate a mixture of Topical and Typical notions of relatedness. These 23 \fneural models behave more Typical when trained with short window sizes or on short text, such as on keyword queries [115] (refer to Section 4.2 for more details). In Section 4.4, we made a note that the word2vec model learns two different embeddings\u2014IN and OUT\u2014corresponding to the input and the output terms. Mitra et al. [143] point out that when using word2vec embeddings for IR it is more appropriate to represent the query terms using the IN embeddings and the document terms using the OUT embeddings of the trained model. In this Dual Embedding Space Model (DESM)11 [143, 151] the word2vec embeddings are trained on search queries, which empirically performs better than training on document body text. 
Training on short queries, however, makes the inter-term similarity more pronouncedly Typical (where, \u201cYale\u201d is closer to \u201cHarvard\u201d and \u201cNYU\u201d) when both terms are represented using their IN vectors\u2014better retrieval performance is achieved instead by using the IN-OUT similarity (where, \u201cYale\u201d is closer to \u201cfaculty\u201d and \u201calumni\u201d) that mirrors more the Topical notions of relatedness. DESMin\u2212out(q, d) = 1 |q| X tq\u2208q \u20d7 v \u22ba tq,in\u20d7 vd,out \u2225\u20d7 vtq,in\u2225\u2225\u20d7 vd,out\u2225 (38) \u20d7 vd,out = 1 |d| X td\u2208d \u20d7 vtd,out \u2225\u20d7 vtd,out\u2225 (39) An alternative to representing queries and documents as an aggregate of their term embeddings is to incorporate the term representations into existing IR models, such as the ones we discussed in Section 2.5. Zuccon et al. [236] proposed the Neural Translation Language Model (NTLM) that uses the similarity between term embeddings as a measure for term-term translation probability p(tq|td) in Equation 15. p(tq|td) = cos(\u20d7 vtq,\u20d7 vtd) P t\u2208T cos(\u20d7 vt,\u20d7 vtd) (40) On similar lines, Ganguly et al. [58] proposed the Generalized Language Model (GLM) which extends the Language Model based approach in Equation 13 to, p(d|q) = Y tq\u2208q \u0012 \u03bbtf(tq, d) |d| + \u03b1 P td\u2208d (sim(\u20d7 vtq,\u20d7 vtd) \u00b7 tf(td, d)) P td1\u2208d P td2\u2208d sim(\u20d7 vtd1 ,\u20d7 vtd2 ) \u00b7 |d|2 + \u03b2 P \u00af t\u2208Nt (sim(\u20d7 vtq,\u20d7 v\u00af t) \u00b7 P \u00af d\u2208D tf(\u00af t, \u00af d)) P td1\u2208Nt P td2\u2208Nt sim(\u20d7 vtd1 ,\u20d7 vtd2 ) \u00b7 P \u00af d\u2208D | \u00af d| \u00b7 |Nt| + (1 \u2212\u03b1 \u2212\u03b2 \u2212\u03bb) P \u00af d\u2208D tf(tq, \u00af d) P \u00af d\u2208D | \u00af d| \u0013 (41) where, Nt is the set of nearest-neighbours of term t. Ai et al. [4] incorporate paragraph vectors [110] into the query-likelihood model [161]. Another approach, based on the Earth Mover\u2019s Distance (EMD) [171], involves estimating similarity between pairs of documents by computing the minimum distance in the embedding space that each term in the \ufb01rst document needs to travel to reach the terms in the second document. This measure, commonly referred to as the Word Mover\u2019s Distance (WMD), was originally proposed by Wan et al. [210, 211], but used WordNet and topic categories instead of distributed representations for de\ufb01ning distance between terms. Term embeddings were later incorporated into the model by Kusner et al. [87, 104]. Finally, Guo et al. [72] incorporated similar notion of distance into the Non-linear Word Transportation (NWT) model that estimates relevance between a a query and a document. The NWT model involves solving the following constrained optimization problem, 11The dual term embeddings trained on Bing queries is available for download at https://www.microsoft. com/en-us/download/details.aspx?id=52597 24 \fmax X tq\u2208q log \u0012 X td\u2208u(d) f(tq, td) \u00b7 max \u0000cos(\u20d7 vtq,\u20d7 vtd), 0 \u0001id f(tq)+b \u0013 (42) subject to f(tq, td) \u22650, \u2200tq \u2208q, td \u2208d (43) and X tq\u2208q f(tq, td) = tf(td) + \u00b5 P \u00af d\u2208D tf(tq, \u00af d) P \u00af d\u2208D | \u00af d| |d| + \u00b5 , \u2200td \u2208d (44) where, id f(t) = |D| \u2212d f(t) + 0.5 d f(t) + 0.5 (45) u(d) is the set of all unique terms in document d, and b is a constant. 
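Going back to the Dual Embedding Space Model, a minimal sketch of the scoring in Equations (38)–(39) is given below; the toy vocabulary and the randomly initialized IN/OUT matrices are placeholders for embeddings trained as described above.

```python
import numpy as np

def desm_in_out(query_terms, doc_terms, W_in, W_out, t2i):
    """DESM score of Eq. (38)-(39): IN embeddings for the query terms compared
    against a normalized centroid of the OUT embeddings of the document terms."""
    d_vecs = np.stack([W_out[t2i[t]] / np.linalg.norm(W_out[t2i[t]]) for t in doc_terms])
    v_d = d_vecs.mean(axis=0)                              # Eq. (39)
    score = 0.0
    for t in query_terms:
        v_q = W_in[t2i[t]]
        score += (v_q @ v_d) / (np.linalg.norm(v_q) * np.linalg.norm(v_d))
    return score / len(query_terms)                        # Eq. (38)

# Illustrative usage with random IN/OUT matrices; in practice these would come
# from a trained word2vec model that keeps both weight matrices.
rng = np.random.default_rng(2)
vocab = ["albuquerque", "population", "area", "interpreter"]
t2i = {t: i for i, t in enumerate(vocab)}
W_in, W_out = rng.normal(size=(len(vocab), 16)), rng.normal(size=(len(vocab), 16))
print(desm_in_out(["albuquerque"], ["population", "area"], W_in, W_out, t2i))
```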
Another term-alignment based distance metric was proposed by Kenter and de Rijke [98] for computing short-text similarity. The design of the saliency-weighted semantic network (SWSN) is motivated by the BM25 [166] formulation. swsn(sl, ss) = X tl\u2208sl id f(tl) \u00b7 sem(tl, ss) \u00b7 (k1 + 1) sem(tl, ss) + k1 \u00b7 \u0010 1 \u2212b + b \u00b7 |ss| avgsl \u0011 (46) where, sem(t, s) = max \u00af t\u2208s cos(\u20d7 vt,\u20d7 v\u00af t) (47) Here ss is the shorter of the two sentences to be compared, and sl the longer sentence. Telescoping evaluation Figure 14 highlights the distinct strengths and weaknesses of matching using local and distributed representations of terms for retrieval. For the query \u201cCambridge\u201d, a local representation (or exact matching) based model can easily distinguish between the passage on Cambridge (Figure 14a) and the one on Oxford (Figure 14b). However, the model is easily duped by an non-relevant passage that has been arti\ufb01cially injected with the term \u201cCambridge\u201d (Figure 14d). The distributed representation based matching, on the other hand, can spot that the other terms in the passage provide clear indication that the passage is not about a city, but fails to realize that the the passage about Oxford (Figure 14b) is inappropriate for the same query. Embedding based models often perform poorly when the retrieval is performed over the full document collection [143]. However, as seen in the example of Figure 14, the errors made by embedding based models and exact matching models are typically different\u2014and the combination of the two performs better than exact matching models alone [4, 58, 143]. Another popular technique is to use the embedding based model to re-rank only a subset of the documents retrieved by a different\u2014 generally an exact matching based\u2014IR model. The chaining of different IR models where each successive model re-ranks a smaller number of candidate documents is called Telescoping [131]. Telescoping evaluations are popular in the neural IR literature [71, 88, 141, 143, 177] and the results are representative of performances of these models on re-ranking tasks. However, as Mitra et al. [143] demonstrate, good performances on re-ranking tasks may not be indicative how the model would perform if the retrieval involves larger document collections. 5.2 Query expansion Instead of comparing the query and the document directly in the embedding space, an alternative approach is to use term embeddings to \ufb01nd good expansion candidates from a global vocabulary, and then retrieving documents using the expanded query. Different functions [51, 170, 227] have been proposed for estimating the relevance of candidate terms to the query\u2014all of them involves comparing the candidate term individually to every query term using their vector representations, and then aggregating the scores. For example, [51, 170] estimate the relevance of candidate term tc as, score(tc, q) = 1 |q| X tq\u2208q cos(\u20d7 vtc,\u20d7 vtq) (48) 25 \fthe city ofcambridge is a university city and the county town of cambridgeshire , england . it lies in east anglia , on the river cam , about 50 miles ( 80 km ) north of london . according to the united kingdom census 2011 , its population was 123867 ( including 24488 students ) . this makescambridge the second largest city in cambridgeshire after peterborough , and the 54th largest in the united kingdom . 
there is archaeological evidence of settlement in the area during the bronze age and roman times ; under viking rulecambridge became an important trading centre . the \ufb01rst town charters were granted in the 12th century , although city status was not conferred until 1951 . (a) Passage about the city of Cambridge oxford is a city in the south east region of england and the county town of oxfordshire . with a population of 159994 it is the 52nd largest city in the united kingdom , and one of the fastest growing and most ethnically diverse . oxford has a broad economic base . its industries include motor manufacturing , education , publishing and a large number of information technology and sciencebased businesses , some being academic offshoots . the city is known worldwide as the home of the university of oxford , the oldest university in the englishspeaking world . buildings in oxford demonstrate examples of every english architectural period since the arrival of the saxons , including the mid18thcentury radcliffe camera . oxford is known as the city of dreaming spires , a term coined by poet matthew arnold . (b) Passage about the city of Oxford the giraffe ( giraffa camelopardalis ) is an african eventoed ungulate mammal , the tallest living terrestrial animal and the largest ruminant . its species name refers to its camellike shape and its leopardlike colouring . its chief distinguishing characteristics are its extremely long neck and legs , its hornlike ossicones , and its distinctive coat patterns . it is classi\ufb01ed under the family giraf\ufb01dae , along with its closest extant relative , the okapi . the nine subspecies are distinguished by their coat patterns . the scattered range of giraffes extends from chad in the north to south africa in the south , and from niger in the west to somalia in the east . giraffes usually inhabit savannas , grasslands , and open woodlands . (c) Passage about giraffes thecambridge ( giraffa camelopardalis ) is an african eventoed ungulate mammal , the tallest living terrestrial animal and the largest ruminant . its species name refers to its camellike shape and its leopardlike colouring . its chief distinguishing characteristics are its extremely long neck and legs , its hornlike ossicones , and its distinctive coat patterns . it is classi\ufb01ed under the family giraf\ufb01dae , along with its closest extant relative , the okapi . the nine subspecies are distinguished by their coat patterns . the scattered range of giraffes extends from chad in the north to south africa in the south , and from niger in the west to somalia in the east . giraffes usually inhabit savannas , grasslands , and open woodlands . (d) Passage about giraffes, but \u2019giraffe\u2019 is replaced by \u2019Cambridge\u2019 Figure 14: A visualization of IN-OUT similarities between terms in different passages with the query term \u201cCambridge\u201d. The visualization\u2014adapted from https://github.com/bmitra-msft/ Demos/blob/master/notebooks/DESM.ipynb\u2014reveal that, besides the term \u201cCambridge\u201d, many other terms in the passages about both Cambridge and Oxford have high similarity to the query term. The passage (d) is adapted from the passage (c) on giraffes by replacing all the occurrences of the term \u201cgiraffe\u201d with \u201ccambridge\u201d. However, none of the other terms in (d) are found to be relevant to the query term. 
An embedding based approach may be able to determine that passage (d) is non-relevant to the query \u201cCambridge\u201d, but fail to realize that passage (b) is also non-relevant. A term counting-based model, on the other hand, can easily identify that passage (b) is non-relevant, but may rank passage (d) incorrectly high. 26 \f(a) Global embedding (b) Local embedding Figure 15: A two-dimensional visualization of term embeddings when the vector space is trained on a (a) global corpus and a (b) query-speci\ufb01c corpus, respectively. The grey circles represent individual terms in the vocabulary. The white circle represents the query \u201c\u2018ocean remote sensing\u201d by averaging the embeddings of the individual terms in the query, and the light grey circles correspond to good expansion terms for this query. When the representations are query-speci\ufb01c then the meaning of the terms are better disambiguated, and more likely to result in the selection of good expansion terms. Term embedding based query expansion on its own performs worse than pseudo-relevance feedback [170]. But like the models in the previous section, shows better performances when used in combination with PRF [227]. Diaz et al. [51] explored the idea of query-speci\ufb01c term embeddings and found that they are much more effective in identifying good expansion terms than a global representation (see Figure 15). The local model proposed by Diaz et al. [51] incorporate relevance feedback in the process of learning the term embeddings\u2014a set of documents is retrieved for the query and a query-speci\ufb01c term embedding model is trained. This local embedding model is then employed for identifying expansion candidates for the query for a second round of document retrieval. Term embeddings have also been explored for re-weighting query terms [233] and \ufb01nding relevant query re-writes [69], as well as in the context of other IR tasks such as cross-lingual retrieval [207] and entity retrieval [200]. In the next section, we move on to neural network models with deeper architectures and their applications to retrieval. 6 Deep neural networks Deep neural network models consist of chains of tensor operations. The tensor operation can range from parameterized linear transformations (e.g., multiplication with a weight matrix, addition of a bias vector) to elementwise application of non-linear functions, such as tanh or recti\ufb01ed linear units (ReLU) [73, 89, 150]. Figure 16 shows a simple feed-forward neural network with fully-connected layers. For an input vector \u20d7 x, the model produces the output \u20d7 y as follows, \u20d7 y = tanh(W2 \u00b7 tanh(W1 \u00b7 \u20d7 x +\u20d7 b1) +\u20d7 b2) (49) The model training involves tuning the parameters W1,\u20d7 b1, W2, and\u20d7 b2 to minimize the loss between the expected output and the actual output of the \ufb01nal layer. The parameters are usually trained discriminatively using backpropagation [14, 77, 175]. During forward-pass each layer generates an output conditioned on its input, and during backward pass each layer computes the error gradient with respect to its parameters and its inputs. The design of a DNN typically involves many choices of architectures and hyper-parameters. Neural networks with as few a single hidden layer\u2014but with suf\ufb01cient number of hidden nodes\u2014can 27 \fforward pass backward pass W1 W2 input actual output loss expected output (a) A neural network with a single hidden layer. 
non-linearity (tanh) input linear transform (W1, b1) non-linearity (tanh) linear transform (W2, b2) actual output forward pass backward pass expected output loss (b) The same neural network viewed as a chain of computational steps. Figure 16: Two different visualizations of a feed-forward neural network with a single hidden layer. In (a), the addition of the bias vector and the non-linearity function is implicit. Figure (b) shows the same network but as a sequence of computational nodes. Most popular neural network toolkits implement a set of standard computational nodes that can be connected to build more sophisticated neural architectures. theoretically approximate any function [85]. In practice, however, deeper architectures\u2014sometimes with as many as 1000 layers [76]\u2014have been shown to perform signi\ufb01cantly better than shallower networks. For readers who are less familiar with neural network models, we present a simple example in Figure 17 to illustrate how hidden layers enable these models to capture non-linear relationships. We direct readers to [148] for further discussions on how additional hidden layers help. The rest of this section is dedicated to the discussion of input representations and popular architectures for deep neural models. 6.1 Input text representations Neural models that learn representations of text take raw text as input. A key consideration is how the text should be represented at the input layer of the model. Figure 18 shows some of the popular input representations of text. Some neural models [66, 94, 100, 192] operate at the character-level. In these models, each character is typically represented by a one-hot vector. The vector dimensions\u2014referred to as channels\u2014in this case equals the number of allowed characters in the vocabulary. These models incorporate the least amount of prior knowledge about the language in the input representation\u2014for example, these models are often required to learn about tokenization from scratch by treating space as just another character in the vocabulary. The representation of longer texts, such as sentences, can be derived by concatenating or summing the character-level vectors as shown in Figure 18a. The input text can also be pre-tokenized into terms\u2014where each term is represented by either a sparse vector or using pre-trained term embeddings (Figure 18d). Terms may have a one-hot (or local) representation where each term has an unique ID (Figure 18b), or the term vector can be derived by aggregating one-hot vectors of its constituting characters (or character n-graphs) as shown in Figure 18c. If pre-trained embeddings are used for term representation, then the embedding vectors can be further tuned during training, or kept \ufb01xed. 28 \fInput features Hidden layers Label surface kerberos book library H1 H2 1 0 1 0 1 0 \u2713 1 1 0 0 0 0 \u2717 0 1 0 1 0 1 \u2713 0 0 1 1 0 0 \u2717 library book surface kerberos +0.5 +0.5 -1 -1 -1 -1 +1 +1 +0.5 +0.5 H1 H2 Figure 17: Consider a toy binary classi\ufb01cation task on a corpus of four short texts\u2014\u201csurface book\u201d, \u201ckerberos library\u201d, \u201clibrary book\u201d, and \u201ckerberos surface\u201d\u2014where the model needs to predict if the text is related to computers. The \ufb01rst two texts\u2014\u201cSurface Book\u201d and \u201ckerberos library\u201d\u2014are positive under this classi\ufb01cation, and the latter two negative. 
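The character trigraph input of Figures 7d and 18c can be constructed as below; keeping the counts in a sparse per-term dictionary, with '#' marking term boundaries, is an implementation convenience of this sketch.

```python
from collections import Counter

def char_trigraphs(term):
    """Bag of character trigraphs for a single term, with '#' as the boundary
    marker, as in the input representation of Figures 7d and 18c."""
    padded = f"#{term}#"
    return Counter(padded[i:i + 3] for i in range(len(padded) - 2))

def text_to_trigraph_bags(text):
    """Term-level input where each term is represented by the aggregate of the
    one-hot vectors of its constituent trigraphs (kept sparse here)."""
    return [char_trigraphs(term) for term in text.split()]

print(char_trigraphs("banana"))
# Counter({'ana': 2, '#ba': 1, 'ban': 1, 'nan': 1, 'na#': 1})
print(text_to_trigraph_bags("dogs have owners"))
```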
The input feature space consists of four binary features that indicate whether each of the four terms from the vocabulary is present in the text. The table shows that the speci\ufb01ed classes are not linearly separable with respect to the input feature space. However, if we add couple of hidden nodes, as shown in the diagram, then the classes can be linearly separated with respect to the output of the hidden layer. Similar to character-level models, the term vectors are further aggregated (by concatenation or sum) to obtain the representation of longer chunks of text, such as sentences. While one-hot representations of terms (Figure 18b) are common in many NLP tasks, pre-trained embeddings (e.g., [86, 158]) and character n-graph based representations (e.g., [88, 141]) are more popularly employed in IR. 6.2 Popular architectures In this section, we describe few neural operations and architectures popular in IR. For broader overview of different neural architectures and design patterns please refer to [64, 112, 175]. Shift-invariant neural operations Convolutional [89, 103, 113, 114] and recurrent [67, 83, 135, 173] architectures are commonplace in most deep learning applications. These neural operations are part of a broader family of shift-invariant architectures. The key intuition behind these architectures stem from the natural regularities observable in most inputs. In vision, for example, the task of detecting a face should be invariant to whether the image is shifted, rotated, or scaled. Similarly, the meaning of an English sentence should, in most cases, stay consistent independent of which part of the document it appears in. Therefore, intuitively a neural model for object recognition or text understanding should not learn an independent logic for the same action applied to different parts of the input space. All shift-invariant neural operations fundamentally employ a window-based approach. A \ufb01xed size window is moved over the input space with \ufb01xed stride in each step. A (typically parameterized) function\u2014referred to as a kernel, or a \ufb01lter, or a cell\u2014is applied over each instance of the window. The parameters of the cell are shared across all the instances of the input window. The shared parameters not only implies less number of total parameters in the model\u201e but also more supervision per parameter per training sample due to the repeated application. Figure 19a shows an example of a cell being applied on a sequence of terms\u2014with a window size of three terms\u2014in each step. A popular cell implementation involves multiplying with a weight matrix\u2014in which case the architecture in Figure 19a is referred as convolutional. An example of a cell without any parameters is pooling\u2014which consists of aggregating (e.g., by computing the max or the average) over all the terms in the window12. Note, that the length of the input sequence can be variable in both cases and the length of the output of a convolutional (or pooling) layer is a function of the input length. Figure 19b shows an example of global pooling\u2014where the window spans over the 12If the input has multiple channels per term then the aggregation is performed per channel. 
29 \fd o g s h a v e o w n e r s c a t s h a v e s t a f f one-hot vectors concatenate channels [chars x channels] (a) Character-level input d o g s h a v e o w n e r s c a t s h a v e s t a f f one-hot vectors concatenate sum sum sum sum sum sum channels [words x channels] (b) Term-level input w/ bag-of-characters per term # d o g s # # h a v e # # o w n e r s # # c a t s # # h a v e # # s t a f f # one-hot vectors concatenate or sum sum sum sum sum sum sum channels [words x channels] or [1 x channels] (c) Term-level input w/ bag-of-trigraphs per term d o g s h a v e o w n e r s c a t s h a v e s t a f f pre-trained embeddings concatenate or sum channels [words x channels] or [1 x channels] (d) Term-level input w/ pre-trained term embeddings Figure 18: Examples of different representation strategies for text input to deep neural network models. The smallest granularity of representation can be a character or a term. The vector can be a sparse local representation, or a pre-trained embedding. 30 \foutput (a) Convolution or pooling convolution pooling output (b) Convolution w/ global pooling output (c) Recurrent output (d) Recursive or tree Figure 19: Popular shift-invariant neural architectures including convolutional neural networks (CNN), recurrent neural networks (RNN), pooling layers, and tree-structured neural networks. whole input\u2014being applied on top of a convolutional layer. The global pooling strategy is common for generating a \ufb01xed size output from a variable length input.13 In convolution or pooling, each window is applied independently. In contrast, in the recurrent architecture of Figure 19c the cell not only considers the input window but also the output of the previous instance of the cell as its input. Many different cell architectures have been explored for recurrent neural networks (RNN)\u2014although Elman network [54], Long Short-Term Memory (LSTM) [83], and Gated Recurrent Unit (GRU) [32, 34] are popular. RNNs are popularly applied to sequences, but can also be useful for two (and higher) dimensional inputs [209]. One consideration when using convolutional or recurrent layers is how the window outputs are aggregated. Convolutional layers are typically followed by pooling or fully-connected layers that perform a global aggregation over all the window instances. While a fully-connected layer is aware of each window position, a global pooling layer is typically agnostic to it. However, unlike a fullyconnected layer, a global max-pooling operation can be applied to a variable size input. Where a global aggregation strategy may be less appropriate (e.g., long sequences), recurrent networks with memory [18, 188, 213] and/or attention [33, 78, 126, 146, 217] may be useful. Finally, Figure 19c shows tree-structured (or recursive) neural networks [20, 62, 182, 183, 195] where the same cell is applied at multple levels in a tree-like hierarchical fashion. 13It is obvious, but may be still worth pointing out, that a global convolutional layer is exactly the same as a fully-connected layer. 31 \finput output embedding encode decode (a) Auto-encoder input1 input2 embedding1 model1 similarity function embedding2 model2 (b) Siamese network Figure 20: Both (a) the auto-encoder and (b) the Siamese network architectures are designed to learn compressed representations of inputs. 
In an auto-encoder the embeddings are learnt by minimizing the self-reconstruction error, whereas a Siamese network focuses on retaining the information that is necessary for determining the similarity between a pair of items (say, a query and a document). Auto-encoders The auto-encoder architecture [14, 16, 164] is based on the information bottleneck method [197]. The goal is to learn a compressed representation \u20d7 x \u2208Rk of items from their higherdimensional vector representations \u20d7 v \u2208RK, such that k \u226aK. The model has an hour-glass shape as shown in Figure 20a and is trained by feeding in the high-dimensional vector inputs and trying to re-construct the same representation at the output layer. The lower-dimensional middle layer forces the encoder part of the model to extract the minimal suf\ufb01cient statistics of \u20d7 v into \u20d7 x, such that the decoder part of the network can reconstruct the original input back from \u20d7 x. The model is trained by minimizing the reconstruction error between the input \u20d7 v and the actual output of the decoder \u20d7 v\u2032. The squared-loss is popularly employed. Lauto\u2212encoder(\u20d7 v, \u20d7 v\u2032) = \u2225\u20d7 v \u2212\u20d7 v\u2032\u22252 (50) Siamese networks Siamese networks were originally proposed for comparing \ufb01ngerprints [10] and signatures [21]. Yih et al. [222] later adapted the same architecture for comparing short texts. The siamese network, as seen in Figure 20b, resembles the auto-encoder architecture (if you squint hard enough!)\u2014but unlike the latter is trained on pairs of inputs \u27e8input1, input2\u27e9. The architecture consists of two models (model1 and model2) that project input1 and input2, respectively, to \u20d7 v1 and \u20d7 v2 in a common embedding space. A pre-de\ufb01ned metric (e.g., cosine similarity) is used to then compute the similarity between \u20d7 v1 and \u20d7 v2. The model parameters are optimized such that \u20d7 v1 and \u20d7 v2 are closer when the two inputs are expected to be similar, and further away otherwise. One possible loss function is the logistic loss. If each training sample consist of a triple \u27e8\u20d7 vq, \u20d7 vd1, \u20d7 vd2\u27e9, such that sim(\u20d7 vq, \u20d7 vd1) should be greater than sim(\u20d7 vq, \u20d7 vd2), then we minimize, Lsiamese(\u20d7 vq, \u20d7 vd1, \u20d7 vd2) = log \u0010 1 + e\u03b3(sim( \u20d7 vq, \u20d7 vd2)\u2212sim( \u20d7 vq, \u20d7 vd1))\u0011 (51) where, \u03b3 is a constant that is often set to 10. Typically both the models\u2014model1 and model2\u2014share identical architectures, but can also choose to share the same parameters. It is important to note that, unlike the auto-encoder, the minimal suf\ufb01cient statistics retained by a Siamese network is dictated by which information it deems important for determining the similarity between the paired items. 32 \f6.3 Neural toolkits In recent years, the advent of numerous \ufb02exible toolkits [1, 6, 31, 38, 91, 152, 198, 225] has had a catalytic in\ufb02uence on the area of neural networks. Most of the toolkits de\ufb01ne a set of common neural operations that\u2014like Lego14 blocks\u2014can be composed to build complex network architectures.15 Each instance of these neural operations or computation nodes can have associated learnable parameters that are updated during training, and these parameters can be shared between different parts of the network if necessary. 
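The Siamese objective of Equation (51) is easily expressed as a composition of a few such operations. The sketch below uses a single shared linear-plus-tanh projection for both sub-models; that architectural choice and the random inputs are illustrative assumptions, not a prescribed design.

```python
import numpy as np

def encode(x, W):
    """Shared projection (model1 = model2): one linear layer followed by tanh."""
    return np.tanh(W @ x)

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def siamese_loss(v_q, v_d_pos, v_d_neg, W, gamma=10.0):
    """Logistic loss of Eq. (51) over a triple <query, relevant doc, non-relevant doc>."""
    e_q, e_pos, e_neg = encode(v_q, W), encode(v_d_pos, W), encode(v_d_neg, W)
    margin = cosine(e_q, e_pos) - cosine(e_q, e_neg)
    return np.log1p(np.exp(-gamma * margin))

rng = np.random.default_rng(3)
W = rng.normal(scale=0.1, size=(32, 100))        # shared learnable parameters
v_q, v_pos, v_neg = rng.normal(size=(3, 100))    # toy input vectors
print(siamese_loss(v_q, v_pos, v_neg, W))
```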
Every computation node under this framework must implement the appropriate logic for, \u2022 computing the output of the node given the input (forward-pass) \u2022 computing the gradient of the loss with respect to the inputs, given the gradient of the loss with respect to the output (backward-pass) \u2022 computing the gradient of the loss with respect to its parameters, given the gradient of the loss with respect to the output (backward-pass) A deep neural network, such as the one in Figure 16 or ones with much more complex architectures (e.g., [76, 107, 193]), can then be speci\ufb01ed by chaining instances of these available computation nodes, and trained end-to-end on large datasets using backpropagation over GPUs or CPUs. In IR, various application interfaces [142, 201] bind these neural toolkits with existing retrieval/indexing frameworks, such as Indri [186]. Refer to [179] for a comparison of different neural toolkits based on their speed of training using standard performance benchmarks. 7 Deep neural models for IR Traditionally, deep neural network models have much larger number of learnable parameters than their shallower counterparts. A DNN with a large set of parameters can easily over\ufb01t to smaller training datasets [231]. Therefore, during model design it is typical to strike a balance between the number of model parameters and the size of the data available for training. Data for ad-hoc retrieval mainly consists of, \u2022 Corpus of search queries \u2022 Corpus of candidate documents \u2022 Ground truth\u2014in the form of either explicit human relevance judgments or implicit labels (e.g., from clicks)\u2014for query-document pairs While both large scale corpora of search queries [46, 159] and documents [9, 29, 45] are publicly available for IR research, the amount of relevance judgments that can be associated with them are often limited outside of large industrial research labs\u2014mostly due to user privacy concerns. We note that we are interested in datasets where the raw text of the query and the document is available. Therefore, this excludes large scale public labelled datasets for learning-to-rank (e.g., [122]) that don\u2019t contain the textual contents. The proportion of labelled and unlabelled data that is available in\ufb02uences the level of supervision that can be employed for training these deep models. Most of the models we covered in Section 5 operate under the data regime where large corpus of documents or queries is available, but limited (or no) labelled data. Under such settings where no direct supervision or relevance judgments is provided, typically an unsupervised approach is employed (e.g., [174]). The unlabelled document (or query) corpus is used to learn good text representations, and then these learnt representations are incorporated into an existing retrieval model or a query-document similarity metric. If small amounts of labelled data are available, then that can be leveraged to train a retrieval model with few parameters that in turn uses text representations that is pre-trained on larger unlabelled corpus. Examples of such semi-supervised training includes models such as [71, 157, 158]. In contrast, fully-supervised 14https://en.wikipedia.org/wiki/Lego 15http://www.inference.vc/content/images/2016/01/9k-.jpg 33 \fTable 3: Comparing the nearest neighbours for \"seattle\" and \"taylor swift\" in the CDSSM embedding spaces when the model is trained on query-document pairs vs. query pre\ufb01x-suf\ufb01x pairs. 
The former resembles a Topical notion of similarity between terms, while the latter is more Typical in the de\ufb01nition of inter-term similarities. seattle taylor swift Query-Document Pre\ufb01x-Suf\ufb01x Query-Document Pre\ufb01x-Suf\ufb01x weather seattle chicago taylor swift.com lady gaga seattle weather san antonio taylor swift lyrics meghan trainor seattle washington denver how old is taylor swift megan trainor ikea seattle salt lake city taylor swift twitter nicki minaj west seattle blog seattle wa taylor swift new song anna kendrick models such as [37, 88, 141, 176], optimize directly for the target task by training on large number of labelled query-document pairs. It is also useful to distinguish between deep neural models that focus on ranking long documents, from those that rank short texts (e.g., for the question-answering task, or for document ranking where the document representation is based on the title or on clicked queries). The challenges in short text ranking are somewhat distinct from those involved in the ad-hoc retrieval task [36]. When computing similarity between pairs of short-texts, vocabulary mismatches are more likely than when the retrieved items contain long text descriptions [133]. Neural models that perform matching in an embedding space tends to be more robust towards the vocabulary mismatch problem compared to lexical term-based matching models. On the other hand, documents with long body texts may contain mixture of many topics and the query matches may be spread over the whole document. A neural document ranking model (NDRM) must effectively aggregate the relevant matches from different parts of a long document. In the rest of this section, we discuss different types of NDRM architectures and approaches that have been explored in the literature. 7.1 Document auto-encoders Salakhutdinov and Hinton [174] proposed one of the earliest deep neural models for ad-hoc retrieval. The model is a deep auto-encoder trained on unlabelled document corpus. The model treats each document as a bag-of-terms and uses a one-hot vector for representing the terms themselves\u2014 considering only top two thousand most popular terms in the corpus after removing stopwords. Salakhutdinov and Hinton [174] \ufb01rst pre-train the model layer-by-layer, and then train it further end-to-end for additional tuning. The model uses binary hidden units and therefore the learnt vector representations of documents are also binary. The Semantic Hashing model generates a condensed binary vector representation (or a hash) of documents. Given a search query, a corresponding hash is generated and the relevant candidate documents quickly retrieved that match the same hash vector. A standard IR model can then be employed to rank between the selected documents. Semantic hashing is an example of a document encoder based approach to IR. The vocabulary size of two thousand distinct terms may be too small for most practical IR tasks. A larger vocabulary or a different term representation strategy\u2014such as the character trigraph based representation of Figure 18c\u2014may be considered. Another shortcoming of the auto-encoder architecture is that it minimizes the document reconstruction error which may not align exactly with the goal of the target IR task. 
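To make the hash-based candidate retrieval described above concrete before turning to that alternative, here is a minimal sketch in Python, not the original Semantic Hashing implementation: documents are reduced to short binary codes, the codes serve as keys into a hash table, and at query time documents whose codes fall within a small Hamming radius of the query code are returned as candidates for a standard IR model to re-rank. The random-projection encoder, the 16-bit code length, and the toy bag-of-terms documents are illustrative stand-ins for the trained binary auto-encoder.

import numpy as np
from collections import defaultdict

rng = np.random.default_rng(0)

# Stand-in encoder: a random projection followed by thresholding.
# In the actual model this would be the trained (binary) auto-encoder.
VOCAB = 2000   # top most popular terms, as in the text above
BITS = 16      # length of the binary code (arbitrary here)
projection = rng.normal(size=(VOCAB, BITS))

def encode(bow_vector):
    # Map a bag-of-terms vector to a binary code (tuple of 0/1 bits).
    return tuple((bow_vector @ projection > 0).astype(int))

# Index: hash table from binary code -> list of document ids.
index = defaultdict(list)
docs = {doc_id: rng.poisson(0.01, size=VOCAB) for doc_id in range(1000)}
for doc_id, bow in docs.items():
    index[encode(bow)].append(doc_id)

def retrieve(query_bow, radius=1):
    # Return candidate documents whose code is within `radius` bits of the query code.
    q_code = np.array(encode(query_bow))
    candidates = []
    for code, doc_ids in index.items():
        if np.sum(np.abs(np.array(code) - q_code)) <= radius:
            candidates.extend(doc_ids)
    return candidates  # to be re-ranked by a standard IR model

print(len(retrieve(rng.poisson(0.01, size=VOCAB))))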
A better alternative may be to train on query-document paired data where the choice of what constitutes as the minimal suf\ufb01cient statistics of the document is in\ufb02uenced by what is important for determining relevance of the document to likely search queries. In line with this intuition, we next discuss the Siamese architecture based models. 34 \finteraction matrix neural network query document Figure 21: Schematic view of an interaction matrix generated by comparing windows of text from the query and the document. A deep neural network\u2014such as a CNN\u2014operates over the interaction matrix to \ufb01nd patterns of matches that suggest relevance of the document to the query. 7.2 Siamese networks In recent years, several deep neural models based on the Siamese architecture have been explored especially for short text matching. The Deep Semantic Similarity Model (DSSM) [88] is one such architecture that trains on query and document title pairs where both the pieces of texts are represented as bags-of-character-trigraphs. The DSSM architecture consists of two deep models\u2014for the query and the document\u2014with all fully-connected layers and cosine distance as the choice of similarity function in the middle. Huang et al. [88] proposed to train the model on clickthrough data where each training sample consists of a query q, a positive document d+ (a document that was clicked by a user on the SERP for that query), and a set of negative documents D\u2212randomly sampled with uniform probability from the full collection. The model is trained my minimizing the cross-entropy loss after taking a softmax over the model outputs for all the candidate documents, Ldssm(q, d+, D\u2212) = \u2212log \u0010 e\u03b3\u00b7cos \u0000\u20d7 q, \u20d7 d+\u0001 P d\u2208D e\u03b3\u00b7cos \u0000\u20d7 q,\u20d7 d \u0001 \u0011 (52) where, D = {d+} \u222aD\u2212 (53) While, DSSM [88] employs deep fully-connected architecture for the query and the document models, more sophisticated architectures involving convolutional layers [59, 86, 177, 178], recurrent layers [155, 156], and tree-structured networks [195] have also been explored. The similarity function can also be parameterized and implemented as additional layers of the neural network as in [176]. Most of these models have been evaluated on the short text matching task, but Mitra et al. [141] recently reported meaningful performances on the long document ranking task from models like DSSM [88] and CDSSM [177]. Mitra et al. [141] also show that sampling the negative documents uniformly from the collection is less effective to using documents that are closer to the query intent but judged as non-relelvant by human annotators. Notions of similarity It is important to emphasize that our earlier discussion in Section 4.2 on different notions of similarity between terms that can be learnt by shallow embedding models is also relevant in the context of these deeper architectures. In the case of Siamese networks, such as the convolutional-DSSM (CDSSM) [177], the notion of similarity being modelled depends on the choice of the paired data that the model is trained on. When the CDSSM is trained on query and document title pairs [177] then the notion of similarity is more Topical in nature. Mitra and Craswell [139] trained the same CDSSM architecture on query pre\ufb01x-suf\ufb01x pairs which, in contrast, captures a more Typical notion of similarity, as shown in Table 3. 
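As a concrete reference for the training objective in Equation 52, the sketch below computes the softmax cross-entropy over scaled cosine similarities for one query, one clicked document, and a set of sampled negatives. It assumes the query and document embeddings have already been produced by the two sub-models; the smoothing factor gamma, the embedding size, and the toy vectors are illustrative assumptions.

import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def dssm_loss(q_vec, pos_doc_vec, neg_doc_vecs, gamma=10.0):
    # Softmax cross-entropy over scaled cosine similarities (Equation 52).
    # The candidate set D contains the clicked document d+ and the sampled negatives D-.
    sims = [cosine(q_vec, pos_doc_vec)] + [cosine(q_vec, d) for d in neg_doc_vecs]
    scaled = gamma * np.array(sims)
    # log-softmax of the positive document (index 0), written in a numerically stable form
    log_softmax = scaled - (np.max(scaled) + np.log(np.sum(np.exp(scaled - np.max(scaled)))))
    return -log_softmax[0]

# Toy example with 300-dimensional embeddings and four random negatives.
rng = np.random.default_rng(1)
q = rng.normal(size=300)
d_pos = q + 0.1 * rng.normal(size=300)       # a document close to the query
d_negs = [rng.normal(size=300) for _ in range(4)]
print(round(dssm_loss(q, d_pos, d_negs), 4))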
In a related work, Mitra [138] demonstrated that the CDSSM model when trained on session-query pairs is amenable to vector-based text analogies.
v(things to do in london) - v(london) + v(new york) ≈ v(new york tourist attractions) (54)
v(university of washington) - v(seattle) + v(denver) ≈ v(university of colorado) (55)
v(new york) + v(newspaper) ≈ v(new york times) (56)
By modelling different notions of similarity these deep neural models tend to be more suitable for other IR tasks, such as query auto-completion [139] or session-based personalization [138].
Figure 22: Analysis of term importance for estimating the relevance of a passage to the query "United States President" by (a) a lexical and (b) a semantic deep neural network model. The lexical model only considers the matches of the query terms in the document, but gives more emphasis to earlier occurrences. The semantic model is able to extract evidence of relevance from related terms such as "Obama" and "federal". (The example passage shown in both panels describes the President of the United States and Barack Obama.)
7.3 Interaction-based networks Siamese networks represent both the query and the document using single embedding vectors. Alternatively, we can individually compare different parts of the query with different parts of the document, and then aggregate this partial evidence of relevance. Especially when dealing with long documents, which may contain a mixture of many topics, such a strategy may be more effective than trying to represent the full document as a single low-dimensional vector. Typically, in these approaches a sliding window is moved over both the query and the document text and each instance of the window over the query is compared against each instance of the window over the document text (see Figure 21). The terms within each window can be represented in different ways, including one-hot vectors, pre-trained embeddings, or embeddings that are updated during the model training. A neural model (typically convolutional) operates over the generated interaction matrix and aggregates the evidence across all the pairs of windows compared. The interaction matrix based approach has been explored both for short text matching [86, 124, 158, 208, 220, 223] and for ranking long documents [141, 157]. 7.4 Lexical and semantic matching networks Much of the exploration in neural IR models has focused on learning good representations of text.
However, these representation learning models tend to perform poorly when dealing with rare terms and search intents. In Section 2.2, we highlighted the importance of modelling rare terms in IR. Based on similar motivaions, Guo et al. [71] and Mitra et al. [141] have recently emphasized the importance of modelling lexical matches using deep neural networks. Mitra et al. [141] argue that Web search is a \u201ctale of two queries\u201d. For the query \u201cpekarovic land company\u201d, it is easier to estimate relevance based on patterns of exact matches of the rare term \u201cpekarovic\u201d. On the other hand, a neural model focused on matching in the embedding space is unlikely to have a good representation for this rare term. In contrast, for the query \u201cwhat channel are the seahawks on today\u201d, the target document likely contains \u201cESPN\u201d or \u201cSky Sports\u201d\u2014not the term \u201cchannel\u201d. A representation learning neural model can associate occurrences of \u201cESPN\u201d in the document as positive evidence towards the document being relevant to the query. Figure 22 highlights the difference between the terms that in\ufb02uence 36 \fquery text generate query term vector doc text generate doc term vector generate interaction matrix query term vector doc term vector query text generate query embedding doc text generate doc embedding hadamard product query embedding doc embedding fully connected layers for matching fully connected layers for matching sum lexical matching model semantic matching model Figure 23: In the Duet architecture [141], the two sub-networks are jointly trained and the \ufb01nal output is a linear combination of the outputs of the lexical and the semantic matching sub-networks. The lexical matching sub-network (left) uses a convolutional model that operates over a binary interaction matrix.16The semantic matching sub-network (right) learns representations of query and document text for effective matching in the embedding space. Cross-entropy loss is used to train the network similar to other models in Section 7.2. the estimation of relevance of the same query-passage pair by a lexical matching and a semantic matching model. A good neural IR model should incorporate both lexical and semantic matching signals [141]. Guo et al. [71] proposed to use histogram-based features in their DNN model to capture lexical notion of relevance. Mitra et al. [141] leverage large scale labelled data from Bing to train a Duet architecture (Figure 23) that learns to identify good patterns of both lexical and semantic matches jointly. Neural models that focus on lexical matching typically have fewer parameters, and can be trained under small data regimes\u2014unlike their counterparts that focus on learning representations of text. Interestingly, a query level analysis seems to indicate that both traditional non-neural IR approaches and more recent neural methods tend to perform well on different segments of queries depending on whether they focus on lexical or semantic matching. Figure 24 plots a few of these models based on their per-query NDCG values on a test set. 8" + }, + { + "url": "http://arxiv.org/abs/1702.05042v1", + "title": "Luandri: a Clean Lua Interface to the Indri Search Engine", + "abstract": "In recent years, the information retrieval (IR) community has witnessed the\nfirst successful applications of deep neural network models to short-text\nmatching and ad-hoc retrieval. 
It is exciting to see the research on deep\nneural networks and IR converge on these tasks of shared interest. However, the\ntwo communities have less in common when it comes to the choice of programming\nlanguages. Indri, an indexing framework popularly used by the IR community, is\nwritten in C++, while Torch, a popular machine learning library for deep\nlearning, is written in the light-weight scripting language Lua. To bridge this\ngap, we introduce Luandri (pronounced \"laundry\"), a simple interface for\nexposing the search capabilities of Indri to Torch models implemented in Lua.", + "authors": "Bhaskar Mitra, Fernando Diaz, Nick Craswell", + "published": "2017-02-16", + "updated": "2017-02-16", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "main_content": "INTRODUCTION In recent years, deep neural networks (DNNs) have demonstrated early positive results on a variety of standard information retrieval (IR) tasks, including on short-text matching [10, 11, 14, 19, 21, 22] and ad-hoc retrieval [9, 17], and shown promising performances on exciting novel retrieval tasks such as multi-modal retrieval [15] and conversational IR [28, 30]. However, the two research communities focused on neural networks and on IR may have less in common when it comes to the choice of programming languages for implementing their respective toolsets. Popular neural network \u2217Te author is a part-time PhD student at UCL. Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for pro\ufb01t or commercial advantage and that copies bear this notice and the full citation on the \ufb01rst page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s). Under review for SIGIR\u201917, Tokyo, Japan \u00a9 2017 Copyright held by the owner/author(s). 123-4567-24-567/08/06...$15.00 DOI: 10.475/123 4 toolkits are ofen implemented in (or have bindings for) scripting languages, such as Python1 (e.g., TensorFlow [1], Teano [2], CNTK [29], Ca\ufb00e [13], MXNet [3], Chainer [25], and PyTorch2) or Lua [12] (e.g., Torch [4]) because of their rapid prototyping capabilities. In contrast, many of the popular indexing frameworks for IR are implemented in C++ (e.g., Indri [24]) or Java (e.g., Terrier [18] and Apache Lucene [16]) puting more emphasis on the speed of execution at runtime. Te open-source community has developed Python wrappers over the Indri [26] and the Apache Lucene [20] programming interfaces to expose the functionalities of these rich IR libraries to the programming language. However, there is still a gap that remains to be bridged for non-Python based deep learning toolkits, such as Torch. Torch3 is a numeric computing framework popular among the deep neural network community. It has been shown to be significantly faster compared to other toolkits such as TensorFlow on convolutional neural networks in multi-GPU environment [23]. It is implemented using the light-weight scripting language Lua.4 In this paper, we introduce Luandri (pronounced \u201claundry\u201d) \u2013 a Lua wrapper over the Indri search engine. In particular, Luandri exposes parts of the Indri query environment application programming interface (API) for document retrieval including support for the rich Indri query language. 
2 MOTIVATION Tere are a variety of scenarios in which a DNN model can bene\ufb01t from having access to a search engine during training and/or evaluation. Existing DNN models for ad-hoc retrieval [9, 17], for example, operate on query-document pairs to predict relevance. Running these models on the full corpus is prohibitively costly \u2013 therefore the evaluation of these models is ofen limited to re-ranking top-N candidate documents retrieved by a traditional IR model or a search engine. Typically, these candidate sets are retrieved o\ufb04ine in a process separate from the one in which the DNN is evaluated. However, if the search engine is accessible in the same language as the one in which the DNN is implemented, then the candidate generation step and the DNN-based re-ranking step can follow each other within the same process \u2013 removing the requirement to store large quantity of intermediate datasets containing the candidates to be ranked. DNN models train on labelled data, although in some cases labels can be inferred rather than explicit. For example, many DNN models for IR [9\u201311, 14, 22] use negative training examples that are sampled uniformly from the corpus. Recently Mitra et al. [17] reported that training with judged negative documents can yield 1htps://www.python.org/ 2htps://github.com/pytorch/pytorch 3htps://github.com/torch/torch7 4htps://www.lua.org \fUnder review for SIGIR\u201917, August 2017, Tokyo, Japan Bhaskar Mitra, Fernando Diaz, and Nick Craswell Code snippet 1: Sample Lua code for searching an Indri index using the Luandri API. Te Indri index builder application is used for generating the index beforehand. Te search query is written using the popular INQUERY structured operators that are supported natively by Indri for specifying matching constraints. Te runQery method in the Luandri API accepts the request as a Lua table and automatically converts it into the appropriate C++ request object that Indri natively expects. Similarly, the result object returned by Indri in C++ is automatically converted to a Lua table. 1 local luandri = paths.dofile ('luandri.lua ') 2 local query_environment = QueryEnvironment () 3 query_environment :addIndex(\"path_to_index_file \") 4 5 local request = { 6 query = '#syn( #od1(neural networks) #od1(deep learning )) #greater(year 2009) ', 7 resultsRequested = 10 8 } 9 local results = query_environment :runQuery(request).results 10 11 for k, v in pairs (results) do 12 print (v.docid .. '\\n' .. v.documentName .. '\\n' .. v.snippet .. '\\n') 13 end beter NDCG performance than training with uniform random negatives. Having access to a search engine during training could enable additional methods for generating negative samples, such as using documents that are retrieved by the engine but at lower ranks. Te lack of adequate labelled data available for training DNN models for ad-hoc retrieval has been a focus for the neural IR community [5]. It is possible that alternate strategies for supervision may be considered for training these deep models \u2013 including reinforcement learning [27] and training under adversarial setings [8] \u2013 which could also make use of retrieval from a full corpus during the model training. Diaz et al. [6] demonstrated a di\ufb00erent application of the traditional retrieval step in the neural IR model. Given a query, they retrieve a set of documents using Indri and use that to train a brand new distributed representation of words speci\ufb01c to that query at run time. 
Such models, with query-speci\ufb01c representation learning, can be implemented and deployed more easily if the machine learning framework has access to a search engine. Finally, Ghazvininejad et al. [7] proposed to \u201clookup\u201d external repositories of facts as part of solving larger tasks using neural network models. Empowering DNN models with access to a search engine may be an exciting area for future exploration. In all these scenarios, it is useful for a search engine, such as Indri, to be accessible from the same programming language used to implement the DNN. Terefore, we are optimistic that by publicly releasing the Luandri API we will stimulate novel explorations from IR researchers already familiar with Torch. 3 QUERYING INDRI FROM LUA Indri is an open-source search engine available with the Lemur toolkit.5 Indri consists of two primary components \u2013 an application that builds an index from a raw document collection and another application that can perform searches using this index. Te Indri index builder can deal with several di\ufb00erent document formats for indexing. Tis includes TREC (text and Web), HTML, XML, PDF, and plain text among many others. Searching using Indri involves specifying one or more indices and querying them by either interactively calling the API or by running an application in batch-mode. Te Indri query language supports a rich set of operators for specifying phrasal matching conditions, synonymy relationships, document \ufb01ltering criteria, and other complex constraints. Te full query language grammar is available online for reference.6 Invoking a search on an Indri index using the Luandri API is very similar to how one may use the native C++ Indri API. Code snippet 1 shows a minimal example of a typical Indri-based search using the Luandri API. We observe that the search is performed by invoking very few lines of Lua code. Te example also demonstrates the use of Indri structuredqueries. A search is performed using a structured query that constraints the matching to either of the two ordered phrases \u2013 \u201cneural networks\u201d or \u201cdeep learning\u201d. Te query directs Indri to treat both phrases as synonyms. In addition, a numeric \ufb01lter is speci\ufb01ed to limit matches to only documents whose value corresponding to the year \ufb01eld is greater than 2009. Tis example shows searching on the full document index. However, Luandri also allows users to specify a list of document identi\ufb01ers in the request object to limit the search to only those set 5htp://www.lemurproject.org/indri/ 6htps://www.lemurproject.org/lemur/IndriQeryLanguage.php \fLuandri: a Clean Lua Interface to the Indri Search Engine Under review for SIGIR\u201917, August 2017, Tokyo, Japan of documents. A \ufb01xed list of stop words can also be speci\ufb01ed for retrieval using the Luandri API. Te full Luandri implementation is available on GitHub7 under the MIT license. We direct interested readers to the source code for exact API speci\ufb01cations. 4 UNDER THE HOOD Te implementation of Lua as a programming language puts a strong emphasis on extensibility [12]. Lua is an extension language because any Lua code can be relatively easily embedded as libraries into code writen in other languages. It is also an extensible language because of its ability to call functions writen in other languages, such as C. Te implementation of the Luandri API bene\ufb01ts from the later property of the language. 
Lua comes with a fast Just In Time (JIT) compiler called LuaJIT.8 LuaJIT exposes a foreign-function interface9 (FFI) that makes it easy to call external C functions and manipulate C data structures from Lua. Te Luandri API is writen using the LuaJIT FFI library. Luandri API wraps Indri\u2019s query environment data types and methods by extern C functions. Ten using the LuaJIT\u2019s FFI library these C methods are exposed to any code writen in Lua. Luandri automatically handles any conversions necessary between Lua tables and Indri\u2019s C++ objects, and vice versa. Te \u201cLuandri.cpp\u201d and \u201cluandri.lua\u201d \ufb01les contain all the wrapper logic on the C++ and the Lua side of our API code, respectively. Te current Luandri API exposes only some of the data structures and methods from Indri\u2019s query environment. In future, we hope to expose more of Indri\u2019s retrieval functionalities prioritizing based on the need of the broader research community. 5" + }, + { + "url": "http://arxiv.org/abs/1610.08136v1", + "title": "Learning to Match Using Local and Distributed Representations of Text for Web Search", + "abstract": "Models such as latent semantic analysis and those based on neural embeddings\nlearn distributed representations of text, and match the query against the\ndocument in the latent semantic space. In traditional information retrieval\nmodels, on the other hand, terms have discrete or local representations, and\nthe relevance of a document is determined by the exact matches of query terms\nin the body text. We hypothesize that matching with distributed representations\ncomplements matching with traditional local representations, and that a\ncombination of the two is favorable. We propose a novel document ranking model\ncomposed of two separate deep neural networks, one that matches the query and\nthe document using a local representation, and another that matches the query\nand the document using learned distributed representations. The two networks\nare jointly trained as part of a single neural network. We show that this\ncombination or `duet' performs significantly better than either neural network\nindividually on a Web page ranking task, and also significantly outperforms\ntraditional baselines and other recently proposed models based on neural\nnetworks.", + "authors": "Bhaskar Mitra, Fernando Diaz, Nick Craswell", + "published": "2016-10-26", + "updated": "2016-10-26", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "main_content": "INTRODUCTION Neural text embedding models have recently gained signi\ufb01cant popularity for both natural language processing (NLP) and information retrieval (IR) tasks. In IR, a signi\ufb01cant number of these works have focused on word embeddings [6, 8, 10, 11, 27, 28, 35, 42] and modelling short-text similarities [15, 16, 29, 36\u201338]. In traditional Web search, the query consists of only few terms but the body text of the documents typically has at least few hundred sentences. In the absence of click information, such as for newly-published or infrequently-visited documents, the body text can be a useful signal to determine the relevance of the document for the query. Therefore, extending existing neural text representation learning approaches to long body text for document ranking is an important challenge in IR. 
However, as was noted during a recent workshop [4], in spite of the recent surge in interests towards applying deep neural network (DNN) models for retrieval, their success on ad-hoc retrieval tasks has been rather limited. Some recent papers [30, 35] report worse performance of neural embedding models when compared to traditional term-based approaches, such as BM25 [34]. Traditional IR approaches consider terms as discrete entities. The relevance of the document to the query is estimated based on, amongst other factors, the number of matches of query terms in the document, the parts of the document in which the matches occur, and the proximity between the matches. In contrast, latent semantic analysis (LSA) [5], probabilistic latent semantic analysis (PLSA) [14] and latent Dirichlet allocation (LDA) [2, 40] learn low-dimensional \u2217The author is a part-time PhD student at UCL. The President of the United States of America (POTUS) is the elected head of state and head of government of the United States. The president leads the executive branch of the federal government and is the commander in chief of the United States Armed Forces. Barack Hussein Obama II (born August 4, 1961) is an American politician who is the 44th and current President of the United States. He is the \ufb01rst African American to hold the of\ufb01ce and the \ufb01rst president born outside the continental United States. (a) Local model The President of the United States of America (POTUS) is the elected head of state and head of government of the United States. The president leads the executive branch of the federal government and is the commander in chief of the United States Armed Forces. Barack Hussein Obama II (born August 4, 1961) is an American politician who is the 44th and current President of the United States. He is the \ufb01rst African American to hold the of\ufb01ce and the \ufb01rst president born outside the continental United States. (b) Distributed model Figure 1: Visualizing the drop in each model\u2019s retrieval score by individually removing each of the passage terms for the query \u201cunited states president\u201d. Darker green signi\ufb01es bigger drop. The local model uses only exact term matches. The distributed model uses matches based on a learned representation. vector representations of terms, and match the query against the document in the latent semantic space. Retrieval models can therefore be classi\ufb01ed based on what representations of text they employ at the point of matching the query against the document. At the point of match, if each term is represented by a unique identi\ufb01ers (local representation [13]) then the query-document relevance is a function of the pattern of occurrences of the exact query terms in the document. However, if the query and the document text is \ufb01rst projected into a continuous latent space, then it is their distributed representations that are compared. Along these lines, Guo et al. [12] classify recent DNN models for short-text matching as either interaction-focused [12, 15, 22, 31] or representation-focused [15, 16, 36\u201338]. They claim that IR tasks are different from NLP tasks, and that it is more important to focus on exact matching for the former and on learning text embeddings for the latter. Mitra et al. 
[27], on the other hand, claim that models that compare the query and the document in the latent semantic space capture a different sense of relevance than models that focus on exact term matches, and therefore the combination of the two is more favorable. Our work is motivated by the latter intuition that it is important to match the query and the document using both local and distributed representations of text. We propose arXiv:1610.08136v1 [cs.IR] 26 Oct 2016 \fa novel ranking model comprised of two separate DNNs that model query-document relevance using local and distributed representations, respectively. The two DNNs, referred to henceforth as the local model and the distributed model, are jointly trained as part of a single neural network, that we name as a duet architecture because the two networks co-operate to achieve a common goal. Figure 1 demonstrates how each subnetwork models the same document given a \ufb01xed query. While the local model captures properties like exact match position and proximity, the distributed model detects property synonyms (e.g. \u2018Obama\u2019), related terms (e.g. \u2018federal\u2019), and even wellformedness of content (e.g. \u2018the\u2019, \u2018of\u2019)1. In this paper, we show that the duet of the two DNNs not only outperforms the individual local and distributed models, but also demonstrate large improvements over traditional baselines and other recently proposed models based on DNNs on the document ranking task. Unlike many other work, our model signi\ufb01cantly outperforms classic IR approaches by using a DNN to learn text representation. DNNs are known for requiring signi\ufb01cant training data, and most of the state-of-the-art performances achieved by these deep models are in areas where large scale corpora are available for training [19, 20]. Some of the lack of positive results from neural models in the area of ad-hoc retrieval is likely due to the scarce public availability of large quantity of training data necessary to learn effective representations of text. In Section 6, we will present some analysis on the effect of training data on the performance of these DNN models. In particular, we found that\u2013unsurprisingly\u2013the performance of the distributed model improves drastically in the presence of more data. Unlike some previous work [16, 37, 38] that train on clickthrough data with randomly sampled documents as negative examples, we train our model on human-judged labels. Our candidate set for every query consists of documents that were retrieved by the commercial search engine Bing, and then labelled by crowdsourced judges. We found that training with the documents that were rated irrelevant by the human judges as the negative examples is more effective than randomly sampling them from the document corpus. To summarize, the key contributions of this work are: 1. We propose a novel duet architecture for a model that jointly learns two deep neural networks focused on matching using local and distributed representations of text, respectively. 2. We demonstrate that this architecture out-performs state-ofthe-art neural and traditional non-neural baselines. 3. We demonstrate that training with documents judged as irrelevant as the negative examples is more effective than randomly sampling them from the corpus. 2. DESIDERATA OF DOCUMENT RANKING Before describing our ranking model, we \ufb01rst present three properties found across most effective retrieval systems. 
We will then operationalize these in our architecture in Section 3. First, exact term matches between the query and the document are fundamental to all information retrieval models [7]. Traditional IR models such as BM25 [34] are based on counts of exact matches of query terms in the document text. They can be employed with minimal (or no) need for training data, sometimes directly on new tasks or corpora. Exact matching can be particularly important when the query terms are new or rare. For example, if new documents appear on the Web with the television model number \u2018SC32MN17\u2019 1While surprising, this last property is important for detecting quality web content [43]. Query: big deal derby carpet \u2713 \u2717 \u2717 \u2713 \u2717 \u2717 Query: rosario trainer 1 1000 Document terms rosario trainer rosario trainer rosario trainer Big Deal Derby carpet Big Deal Derby carpet Big Deal Derby carpet Figure 2: Visualizing patterns of query term matches in documents. Query terms are laid out along the vertical axis, and the document terms along the horizontal axis. The short vertical lines correspond to exact matches between pairs of query and document terms. For both of the above queries, the \ufb01rst document was rated relevant by a human judge and the following two as irrelevant. then BM25 can immediately retrieve these pages containing precisely that model number without adjusting any parameters of the ranking model. A good ranking model needs to take advantage of exact matches to perform reliably on fresh and rare queries. Second, match positions of query terms in the document not only re\ufb02ect where potentially relevant parts of the document are localized (e.g. title, \ufb01rst paragraph, closing paragraph) but also how clustered individual query term matches are with each other. Figure 2 shows the position of matches on two different queries and a sample of relevant and non-relevant documents. In the \ufb01rst query, we see that query term matches in the relevant document are much more clustered than in the non-relevant documents. We observe this behavior in the second query but also notice that the clustered matches are localized near the beginning of the relevant document. Match proximity serves as a foundation for effective methods such as sequential dependence models [23]. Finally, inexact term matches between the query and the document refer to techniques for addressing the vocabulary mismatch problem. The main disadvantage of term matching is that related terms are ignored, so when ranking for the query \u2018Australia\u2019 then only the term frequency of \u2018Australia\u2019 is considered, even though counting terms like \u2018Sydney\u2019 and \u2018koala\u2019 can be good positive evidence. Robertson [33] pointed out that under the probabilistic model of IR there is, in fact, no good justi\ufb01cation for ignoring the non-matching terms in the document. Furthermore, Mitra et al. [27] demonstrated that a distributed representation based retrieval model that considered all document terms is able to better distinguish between a passage that is truly relevant to the query \u201cCambridge\u201d from a passage on a different topic (e.g., giraffes) with arti\ufb01cially injected occurrences of the term \u201cCambridge\u201d. Any IR model that considers the distribution of nonmatching terms is therefore likely to bene\ufb01t from this additional evidence, and be able to tell \u201cCambridge\u201d apart from \u201can African even-toed ungulate mammal\u201d. 
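For reference, the exact-matching behaviour appealed to in the first desideratum can be sketched with a generic Okapi BM25 scorer in Python. This is a textbook formulation, not the specific configuration used in any of the systems discussed here, and k1, b, and the toy collection statistics are assumptions.

import math
from collections import Counter

def bm25_score(query_terms, doc_terms, doc_freqs, num_docs, avg_doc_len, k1=1.2, b=0.75):
    # Generic Okapi BM25 score of a document for a query.
    # doc_freqs maps a term to the number of documents containing it;
    # num_docs and avg_doc_len describe the collection.
    tf = Counter(doc_terms)
    doc_len = len(doc_terms)
    score = 0.0
    for term in query_terms:
        if term not in tf:
            continue  # only exact matches of query terms contribute
        df = doc_freqs.get(term, 0)
        idf = math.log(1.0 + (num_docs - df + 0.5) / (df + 0.5))
        norm_tf = (tf[term] * (k1 + 1)) / (tf[term] + k1 * (1 - b + b * doc_len / avg_doc_len))
        score += idf * norm_tf
    return score

# Toy usage: the rare model number "sc32mn17" can be matched with no training at all.
doc = "new televisions in stock include model sc32mn17 and others".split()
print(round(bm25_score("sc32mn17 televisions".split(), doc,
                       doc_freqs={"sc32mn17": 1, "televisions": 120},
                       num_docs=1000, avg_doc_len=9.0), 3))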
In practice, the most effective techniques leverage combinations of these techniques. Dependence models combine exact matching with proximity [23]. LDA-based document models combine exact matching with inexact matching [40]. Query hypergraphs capture all three [1]. Our method also combines these techniques but, unlike prior work, jointly learns all free parameters of the different \fcomponents. 3. THE DUET ARCHITECTURE Figure 3 provides a detailed schematic view of the duet architecture. The distributed model projects the query and the document text into an embedding space before matching, while the local model operates over an interaction matrix comparing every query term to every document term. The \ufb01nal score under the duet setup is the sum of scores from the local and the distributed networks, f(Q, D) = f\u2113(Q, D) + fd(Q, D) (1) where both the query and the document are considered as ordered list of terms, Q = [q1, . . . , qnq] and D = [d1, . . . , dnd]. Each query term q and document term d is a m \u00d7 1 vector where m is the input representation of the text (e.g. the number of terms in the vocabulary for the local model). We \ufb01x the length of the inputs across all queries and documents such that we consider only the \ufb01rst 10 terms in the query and the \ufb01rst 1000 in the document. If the either is shorter than the target dimension, the input vectors are padded with zeros. The truncation of the document body text to the \ufb01rst 1000 terms is performed only for our model variants. For all the neural and non-neural baseline models we consider the full body text. 3.1 Local Model The local model estimates document relevance based on patterns of exact matches of query terms in the document. To this end, each term is represented by its one-hot encoding in a m\u2113-dimensional space, where m\u2113is the size of the vocabulary. The model then generates the nd \u00d7 nq binary matrix X = DTQ, capturing every exact match (and position) of query terms in the document. This interaction matrix, without the zero-padding, is analogous to the visual representation of term matches in Figure 2, and therefore captures both exact term matches and match positions. It is also similar to the indicator matching matrix proposed previously by Pang et al. [31]. While interaction matrix X perfectly captures every query term match in the document, it does not retain information about the actual terms themselves. Therefore, the local model cannot learn term-speci\ufb01c properties from the training corpus, nor model interactions between dissimilar terms. The interaction matrix X is \ufb01rst passed through a convolutional layer with c \ufb01lters, a kernel size of nd \u00d7 1, and a stride of 1. The output Zi corresponding to the ith convolutional window over X is a function of the match between the qi term against the whole document, Zi = tanh \u0010 XT i W \u0011 (2) where Xi is row i of X, tanh is performed elementwise, and the nd \u00d7 c matrix W contains the learnable parameters of the convolutional layer. The output Z of the convolutional layer is a matrix of dimension c \u00d7 nq. We use a \ufb01lter size (c) of 300 for all the evaluations reported in this paper. The output of the convolutional layer is then passed through two fully-connected layers, a dropout layer, and a \ufb01nal fully-connected layer that produces a single real-valued output. All the nodes in the local model uses the hyperbolic tangent function for non-linearity. 
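A minimal sketch of the local sub-network described above, in plain numpy rather than the framework used in the paper: it builds the binary interaction matrix X from term identifiers, applies the n_d x 1 convolution of Equation 2 once per query position, and reduces the result to a single score. The randomly initialised weights, the simplified fully-connected stack, and the omission of the dropout layer are assumptions made for brevity.

import numpy as np

rng = np.random.default_rng(2)
NQ, ND, C = 10, 1000, 300   # query length, document length, number of filters

def interaction_matrix(query_ids, doc_ids):
    # Binary matrix X with X[j, i] = 1 iff document term j equals query term i (0 is padding).
    X = np.zeros((ND, NQ))
    for i, q in enumerate(query_ids[:NQ]):
        for j, d in enumerate(doc_ids[:ND]):
            X[j, i] = float(q == d and q != 0)
    return X

# Randomly initialised stand-ins for the learnable parameters.
W_conv = rng.normal(0.0, 0.01, size=(ND, C))      # the n_d x 1 convolution of Equation 2
W_fc1 = rng.normal(0.0, 0.01, size=(C * NQ, 300))
W_fc2 = rng.normal(0.0, 0.01, size=(300, 1))

def local_score(query_ids, doc_ids):
    X = interaction_matrix(query_ids, doc_ids)
    Z = np.tanh(X.T @ W_conv)              # one c-dimensional output per query position
    h = np.tanh(Z.reshape(-1) @ W_fc1)     # stand-in for the fully-connected layers
    return float(np.tanh(h @ W_fc2)[0])

query = [3, 17, 42] + [0] * 7                          # zero-padded term ids
doc = list(rng.integers(1, 50, size=200)) + [0] * 800  # zero-padded document
print(local_score(query, doc))

In the full duet setup this local score is simply added to the distributed model's score, as in Equation 1.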
Figure 3: The duet architecture composed of the local model (left) and the distributed model (right). The local sub-network takes an interaction matrix of query and document terms as input, whereas the distributed sub-network learns embeddings of the query and document text before matching.
3.2 Distributed Model The distributed model learns dense lower-dimensional vector representations of the query and the document text, and then computes the positional similarity between them in the learnt embedding space. Instead of a one-hot encoding of terms, as in the local model, we use a character n-graph based representation of each term in the query and document. Our n-graph based input encoding is motivated by the trigraph encoding proposed by Huang et al. [16]. For each term, we count all of the n-graphs present for 1 ≤ n ≤ G. We then use this n-graph frequency vector of length md to represent the term. Instead of directly computing the interaction between the md × nq matrix Q and the md × nd matrix D, we first learn a series of nonlinear transformations of the character-based input. For both the query and the document, the first step is convolution. The md × 3 convolution window has a filter size of 300. It projects 3 consecutive terms to a 300 dimensional vector, then takes a stride by 1 position, and projects the next 3 terms, and so on. For the query, the convolution step generates a tensor of dimensions 300 × 8; for the document, it generates one of dimensions 300 × 998. Following this, we conduct a max-pooling step. For the query the pooling kernel dimensions are 1 × 8; for the document they are 1 × 100. As a result, we get one 300 × 1 matrix Q̃ for the query and a 300 × 899 matrix D̃ for the document. The document matrix D̃ can be interpreted as 899 separate embeddings, each corresponding to a different equal-sized span of text within the document. Our choice of a window-based max-pooling strategy, instead of the global max-pooling employed by CDSSM [38], is motivated by the fact that the window-based approach allows the model to distinguish between matches in different parts of the document. As posited in Section 2, a model that is aware of match positions may be more suitable when dealing with long documents, especially those containing a mixture of many topics. The output of the max-pooling layer for the query is then passed through a fully-connected layer. For the document, the 300 × 899 dimensional matrix output is operated on by another convolutional layer with a filter size of 300, kernel dimensions of 300 × 1, and a stride of 1. The combination of these convolutional and max-pooling layers enables the distributed model to learn suitable representations of text for effective inexact matching.
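The character n-graph featurization described above can be sketched as follows. The '#' boundary markers and the tiny vocabulary of frequent n-graphs are illustrative assumptions; in the actual model the vocabulary is the top 2,000 most popular n-graphs up to order G selected from the training corpus.

from collections import Counter

def char_ngraphs(term, max_n=5):
    # All character n-graphs of a term for 1 <= n <= max_n, with '#' as a boundary marker.
    padded = "#" + term + "#"
    grams = []
    for n in range(1, max_n + 1):
        grams += [padded[i:i + n] for i in range(len(padded) - n + 1)]
    return grams

def featurize(term, vocab):
    # n-graph frequency vector of length len(vocab) for a single term.
    counts = Counter(g for g in char_ngraphs(term) if g in vocab)
    return [counts[g] for g in vocab]

# Toy vocabulary standing in for the top 2,000 most frequent n-graphs.
vocab = ["#", "a", "e", "n", "r", "s", "t", "er", "ne", "al", "ral", "eur", "#ne", "al#"]
print(featurize("neural", vocab))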
In order to perform the matching, we conduct the element-wise or Hadamard product between the embedded document matrix and the extended or broadcasted query embedding, \u02dc X = ( \u02dc Q . . . \u02dc Q | {z } 899 times ) \u25e6\u02dc D (3) After this, we pass the matrix through fully connected layers, and a dropout layer until we arrive at a single score. Similar to the local model, we use hyperbolic tangent function here for non-linearity. 3.3 Optimization Each training sample consists of a query Q, a relevant document D\u2217and a set of irrelevant documents N = {D0, . . . , DN}. We use a softmax function to compute the posterior probability of the positive document given a query based on the score. p(D\u2217|Q) = exp(f(Q, D\u2217)) P D\u2208N exp(f(Q, D)) (4) and we maximize the log likelihood log p(D\u2217|Q) using stochastic gradient descent. 4. MATERIALS AND METHODS We conducted three experiments to test: (1) the effectiveness of our duet model compared to the local and distributed models separately, and (2) the effectiveness of our duet model compared to existing baselines for content-based web ranking, (3) the effectiveness of training with judged negative documents compared to random negative documents. In this section, we detail our experiment setup and baseline implementations. 4.1 Data The training dataset consisted of 199,753 instances in the format described in Section 4.2. The queries in the training dataset were randomly sampled from Bing\u2019s search logs from a period between January, 2012 and September, 2014. Human judges rated the documents on a \ufb01ve-point scale (perfect, excellent, good, fair and bad). The document body text was retrieved from Bing\u2019s Web document index. We used proprietary parsers for extracting the body text from the raw HTML content. All the query and the document text were normalized by down-casing and removing all non-alphanumeric characters. Table 1: Statistics of the three test sets randomly sampled from Bing\u2019s search logs. The candidate documents are generated by querying Bing and then rated using human judges. queries documents documents query training 199,753 998,765 5 weighted 7,741 171,302 24.9 unweighted 6,808 71,722 10.6 We considered two different test sets, both sampled from Bing search logs. The weighted set consisted of queries sampled according their frequency in the search logs. As a result, frequent queries were well-represented in this dataset. Queries were sampled between October, 2014 and December, 2014. The unweighted set consisted of queries sampled uniformly from the entire population of unique queries. The queries in this samples removed the bias toward popular queries found in the weighted set. The unweighted queries were sampled between January, 2015 and June, 2015. Because all of our datasets were derived from sampling real query logs and because queries will naturally repeat, there was some overlap in queries between the training and testing sets. Speci\ufb01cally, 14% of the testing queries in the weighted set occurred in the training set, whereas only 0.04% of the testing queries in the unweighted set occurred in the training set. We present both results for those who may be in environments with repeated queries (as is common in production search engines) and for those who may be more interested in cold start situations or tail queries. Table 1 summarizes statistics for the two test sets. 
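To make Equations 3 and 4 above concrete, the sketch below broadcasts the single query embedding across the 899 document-span embeddings, takes the element-wise (Hadamard) product, collapses the result to a relevance score with a stand-in output layer, and evaluates the softmax-based log-likelihood of the relevant document against sampled negatives. The random weights and the single output layer replace the fully-connected and dropout layers of the actual model, so this is an illustration of the computation rather than the implementation.

import numpy as np

rng = np.random.default_rng(3)
EMB, SPANS = 300, 899                               # embedding size, number of document spans
W_out = rng.normal(0.0, 0.01, size=(EMB * SPANS,))  # stand-in for the final fully-connected layers

def distributed_score(q_emb, d_emb):
    # Equation 3: broadcast the query embedding over the span embeddings,
    # take the Hadamard product, and reduce to a single relevance score.
    x = q_emb[:, None] * d_emb                      # (300, 899), element-wise product
    return float(np.tanh(x.reshape(-1) @ W_out))

def log_likelihood(q_emb, pos_d_emb, neg_d_embs):
    # Equation 4: softmax over the positive and the sampled negative documents.
    scores = np.array([distributed_score(q_emb, pos_d_emb)] +
                      [distributed_score(q_emb, d) for d in neg_d_embs])
    return float(scores[0] - np.log(np.sum(np.exp(scores))))  # log p(D* | Q)

q = rng.normal(size=EMB)
d_pos = rng.normal(size=(EMB, SPANS))
d_negs = [rng.normal(size=(EMB, SPANS)) for _ in range(4)]
print(log_likelihood(q, d_pos, d_negs))             # to be maximised with SGD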
4.2 Training Besides the architecture (Figure 3), our model has the following free parameters: the maximum order of the character-based representation for the distributed model (G), the number of negative documents to sample at training time (N), the dropout rate, and the learning rate. We used a maximum order of \ufb01ve for our character n-graphs in the distributed model. Instead of using the full 62,193,780-dimensional vector, we only considered top 2,000 most popular n-graphs, resulting 36 unigraphs (a-z and 0-9), 689 bigraphs, 1149 trigraphs, 118 4-graphs, and eight 5-graphs. When training our model (Section 3.3), we sampled four negative documents for every one relevant document. More precisely, for each query we generated a maximum of one training sample of each form, (1) One excellent document with four fair documents (2) One excellent document with four bad documents (3) One good document with four bad documents. Pilot experiments showed that treating documents judged as fair or bad as the negative examples resulted in signi\ufb01cantly better performance, than when the model was trained with randomly sampled negatives. For training, we discarded all documents rated as perfect because a large portion of them fall under the navigational intent, which can be better satis\ufb01ed by historical click based ranking signals. The dropout rate and the learning rate were set to 0.20 and 0.01, respectively, based on a validation set. We implemented our model using CNTK [41] and trained the model with stochastic gradient descent based optimization (with automatic differentiation) on a single GPU. It was necessary to use a small minibatch size of 8 to \ufb01t the whole data in GPU memory.2 2We will publicly release a CNTK implementation of our model by \f4.3 Baselines Our baselines capture the individual properties we outlined in Section 2. Exact term matching is effectively performed by many classic information retrieval models. We used the Okapi BM25 [34] and query likelihood (QL) [32] models as representative of this class of model. We used Indri3 for indexing and retrieval. Match positions are handled by substantially fewer models. Metzler\u2019s dependence model (DM) [23] provides an inference network approach to modeling term proximity. We used the Indri implementation for our experiments. Inexact term matching received both historic and modern treatments in the literature. Deerwester et al. originally presented latent semantic analysis (LSA) [5] as a method for addressing vocabulary mismatch by projecting words and documents into a lowerdimension latent space. The dual embedding space model (DESM) [27, 28] computes a document relevance score by comparing every term in the document with every query term using pre-trained word embeddings. We used the same pre-trained word embeddings dataset that the authors made publicly available online for download4. These embeddings, for approximately 2.8M words, were previously trained on a corpus of Bing queries. In particular, we use the DESMIN-OUT model, which was reported to have the best performance on the retrieval task, as a baseline in this paper. Both the deep structured semantic model (DSSM) [16] and its convolutional variant CDSSM [38] consider only the document title for matching with the query. 
While some papers have reported negative performances for title-based DSSM and CDSSM on the ad hoc document retrieval tasks [12, 30], we included document-based variants appropriately retrained on the same set of positive query and document pairs as our model. As with the original implementation we choose the irrelevant documents for training by randomly sampling from the document corpus. For the CDSSM model, we concatenated the trigraph hash vectors of the \ufb01rst T terms of the body text followed by a vector that is a sum of the trigraph hash vectors for the remaining terms. The choice of T was constrained by memory requirements, and we pick 499 for our experiments. The DRMM model [12] uses a DNN to perform term matching, with few hundred parameters, over histogram-based features. The histogram features, computed using exact term matching and pretrained word embeddings based cosine similarities, ignoring the actual position of matches. We implemented the DRMMLCH\u00d7IDF variant of the model on CNTK [41] using word embeddings trained on a corpus of 341,787,174 distinct sentences randomly sampled from Bing\u2019s Web index, with a corresponding vocabulary of 5,108,278 words. Every training sample for our model was turned into four corresponding training samples for DRMM, comprised of the query, the positive document, and each one of the negative documents. This guaranteed that both models observed the exact same pairs of positive and negative documents during training. We adopted the same loss function as proposed by Guo et al. 4.4 Evaluation All evaluation and empirical analysis used the normalized discounted cumulative gain (NDCG) metric computed at positions one and ten [18]. All performance metrics were averaged over queries for each run. Whenever testing for signi\ufb01cant differences in performance, we used the paired t-test with a Bonferroni correction. the time of publication of this paper. 3http://www.lemurproject.org/indri/ 4https://www.microsoft.com/en-us/download/details.aspx?id= 52597 Table 2: Performance on test data. All duet runs signi\ufb01cantly outperformed our local and distributed model (p < 0.05). All duet runs also outperformed non-neural and neural baselines. The difference between the duet model and the best performing baseline per dataset and position (italics) is statistically signi\ufb01cant (p < 0.05). The best NDCG performance on each dataset and position is highlighted in bold. NDCG@1 NDCG@10 Non-neural baselines LSA 22.4 44.2 BM25 24.2 45.5 DM 24.7 46.2 QL 24.6 46.3 Neural baselines DRMM 24.3 45.2 DSSM 25.8 48.2 CDSSM 27.3 48.2 DESM 25.4 48.3 Our models Local model 24.6 45.1 Distributed model 28.6 50.5 Duet model 32.2 53.0 (a) weighted NDCG@1 NDCG@10 Non-neural baselines LSA 31.9 62.7 BM25 34.9 63.3 DM 35.0 63.4 QL 34.9 63.4 Neural baselines DRMM 35.6 65.1 DSSM 34.3 64.4 CDSSM 34.3 64.0 DESM 35.0 64.7 Our models Local model 35.0 64.4 Distributed model 35.2 64.9 Duet model 37.8 66.4 (b) unweighted 5. RESULTS Table 2 reports NDCG based evaluation results on two test datasets for our model and all the baseline models. Our main observation is that the duet model performs signi\ufb01cantly better than the individual local and distributed models. This supports our underlying hypothesis that matching in a latent semantic space can complement exact term matches in a document ranking task, and hence a combination of the two is more appropriate. 
Note that the NDCG numbers for the local and the distributed models correspond to the case where these DNNs are trained individually, whereas for the duet the two DNNs are trained together as part of a single neural network.
Figure 4: On both test sets ((a) weighted set, (b) unweighted set), the duet model demonstrates significantly better performance (p < 0.05) when trained with judged irrelevant documents as the negative examples instead of randomly sampling them from the document corpus. The distributed model also shows a statistically significant gain (p < 0.05) on the weighted set, and a gain that is not statistically significant on the unweighted set.
Among the baseline models, including both traditional and neural network based models, CDSSM and DESM achieve the highest NDCG at positions one and ten, respectively, on the weighted test set. On the unweighted test set, DRMM is our best baseline model at both rank positions. The duet model demonstrates significant improvements over all these baseline models on both test sets and at both NDCG positions. We also tested our independent local and distributed models against their conceptually closest baselines. Because our local model captures both matching and proximity, we compared its performance to the dependence model (DM). While the performance in terms of NDCG@1 is statistically indistinguishable, both of the NDCG@10 differences are statistically significant (p < 0.05). We compared our distributed model to the best neural model for each test set and metric, and found no statistically significant difference except for NDCG@10 on the weighted set. We were also interested in testing our hypothesis that training with labeled negative documents is superior to training with randomly sampled documents presumed to be negative. We conducted an experiment training with negative documents following each of the two protocols; Figure 4 shows the results. We found that, across all of our models, using judged non-relevant documents was more effective than randomly sampling documents from the corpus and treating them as negative examples. 6. DISCUSSION Our results demonstrated that the joint optimization of the local and distributed models provides substantial improvement over all baselines. Although the independent models were competitive with existing baselines, the combination provided a significant boost. We also confirmed that judged negative documents should be used when available. We speculate that training with topically similar (but non-relevant) documents allows the model to better discriminate between the confusable documents provided by an earlier retrieval stage. This sort of staged ranking, first proposed by Cambazoglu et al. [3], is now a common web search engine architecture. In Section 4.3 we described our baseline models according to which of the properties of effective retrieval systems outlined in Section 2 they incorporate. It is reasonable to expect that models with certain properties are better suited to deal with certain segments of queries.
Figure 5: Performance of different models by (a) query length and (b) the rarity of the rarest query term in the training data. For the rare-term analysis, we place all query terms into one of five categories based on their occurrence counts in the training data and then categorize each query in the test dataset by the category of its rarest term, with a separate category for queries containing at least one term that never occurs in the training data.
For example, the relevant Web page for the
query "what channel are the seahawks on today" may contain the name of the actual channel (e.g., "ESPN" or "FOX") and the actual date of the game, instead of the terms "channel" or "today". A retrieval model that only counts repetitions of query terms is likely to retrieve less relevant documents for this query, compared to a model that considers "ESPN" and "FOX" to be relevant document terms. In contrast, the query "pekarovic land company", which may be considered a tail navigational intent, is likely to be better served by a retrieval model that simply retrieves documents containing many matches for the term "pekarovic". A representation learning model is unlikely to have a good representation for this rare term, and therefore may be less equipped to retrieve the correct documents. These anecdotal examples agree with the results in Table 2, which show that on the weighted test set all the neural models whose main focus is on learning distributed representations of text (the duet model, the distributed model, DESM, DSSM, and CDSSM) perform better than the models that only look at patterns of term matches (the local model and DRMM). We believe that this is because the DNNs are able to learn better representations for more popular queries, and perform particularly well on this segment. Figure 5 provides further evidence towards this hypothesis by demonstrating that the distributed model has a larger NDCG gap with the local model for queries containing more popular terms, and when the number of terms in the query is small. The duet model, however, performs better than both the local and the distributed models across all these segments. In order to better understand the relationship of our models to existing baselines, we compared per-query performance amongst all models. We conjecture that similar models should perform similarly for the same queries. We represented each retrieval model as a vector where each position contains the performance of the model on a different query. We randomly sampled two thousand queries from our weighted test set and represented all ranking models as vectors of their NDCG values on these two thousand queries. We visualized the similarity between models by projecting the set of performance vectors using principal component analysis. The two-dimensional projection of this analysis is presented in Figure 6. The figure largely confirms our intuitions about the properties of retrieval models. Models that use only a local representation of terms are closer together in the projection, and further away from models that learn distributed representations of text.
Figure 6: Principal component analysis of models based on retrieval performance across testing queries. Models using exact term matches (△), proximity (◦), and inexact matches (▽) are presented; our models are presented as black squares.
Interestingly, the plot
does not distinguish between whether the underlying model is based on a neural network based or not, and only focuses on the retrieval properties of the model. Another interesting distinction between deep neural models and traditional approaches is the effect of the training data size on the performance of the model. BM25 has very few parameters and can be applied to new corpus or task with almost no training. On the other hand, DNNs like ours demonstrate signi\ufb01cant improvements when trained with larger datasets. Figure 7 shows that the effect of training data size particularly pronounced for the duet and the distributed models that learns representations of text. The trends in these plots indicate that training on even larger datasets may result in further improvements in model performance over what is reported in this paper. We believe this should be a promising direction for future work. A last consideration when comparing these models is runtime ef\ufb01ciency. Web search engines receive tens of thousands of queries per second. Running a deep neural model on raw body text at that scale is a hard problem. The local sub-network of our model operates on the term interaction matrix that should be reasonable to generate using an inverted index. For the distributed model, it is important to note that the 300 \u00d7 899 dimensional matrix representation of the document, that is used to compute the Hadamard product with the query, can be pre-computed and stored as part of the document cache. At runtime, only the Hadamard product and the subsequent part of the network needs to be executed. Such caching strategies, if employed effectively, can mitigate large part of the runtime cost of running a DNN based document ranking model at scale. 7. RELATED WORK Representations of data can be local or distributed. In a local representation a single unit represents an entity, for example there is a particular memory cell that represents the concept of a grandmother. That cell should be active if and only if the concept of a grandmother is present. By contrast, in a distributed representation, the concept of grandmother would be represented by a pattern of active cells. Hinton et al. [13] provides an overview contrasting distributed and local representations, listing their good and bad points. In a distributed representation, an activation pattern that has some errors or other differences from past data can still be mapped to the entity in question and to related entities, using a similarity function. A local representation lacks this robustness to noise and ability to generalize, but is better at precisely storing a large set of data. This paper considers local and distributed representations of queries and documents for use in Web page ranking. Our measure of ranking quality is NDCG [17], which rewards a ranker for returning documents with higher gain nearer to the top, where gain is determined according to labels from human relevance assessors. We describe different ranking methods in terms of their representations and how this should help them achieve good NDCG. Exact term matching models such as BM25 [34] and query likelihood [32] tend to rank a document higher if it has a greater number of query term matches, while also potentially employing a variety of smoothing, weighting and normalization approaches. Such exact matching is done with a local representation of terms. 
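As a concrete reference for the exact term matching discussed above, here is a minimal sketch of a BM25-style score. It uses the standard k1 and b parameters; the whitespace tokenizer, the IDF variant, and the corpus statistics passed in are illustrative placeholders rather than the Indri/Okapi configuration used in these experiments.

import math
from collections import Counter

def bm25_score(query, doc, doc_freq, num_docs, avg_doc_len, k1=1.2, b=0.75):
    # Score one document for a query with the classic BM25 term-matching formula.
    doc_terms = doc.lower().split()          # placeholder tokenizer
    tf = Counter(doc_terms)
    score = 0.0
    for term in query.lower().split():
        df = doc_freq.get(term, 0)           # document frequency of the term in the corpus
        if df == 0 or tf[term] == 0:
            continue
        idf = math.log((num_docs - df + 0.5) / (df + 0.5) + 1.0)
        norm = tf[term] * (k1 + 1) / (tf[term] + k1 * (1 - b + b * len(doc_terms) / avg_doc_len))
        score += idf * norm
    return score

Documents are then ranked by this score; only counts of exact query-term matches enter the computation, which is the "local representation" property discussed above.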
Exact match systems do not depend on a large training set, since they do not need to learn a distributed representation of queries and documents. They are useful in cases where the relevant documents contain exactly the query terms entered by the user, including very rare or new vocabulary, since new terms can be incorporated with no adjustments to the underlying model. They can also be extended to reward matches of query phrases and proximity [23]. To deal with the vocabulary mismatch problem that arises with local representations, it is possible to do document ranking using a distributed representation of terms. Mikolov et al. [24] developed the popular word2vec embedding approach that has been used in a number of retrieval studies. Zheng and Callan [42] use term embeddings as evidence for term weighting, learning regression models to optimize weighting in a language modeling and a BM25 retrieval model. Ganguly et al. [8] used term embeddings for smoothing in the language modeling approach of information retrieval. Nalisnick et al. [28] used dual embeddings, one for document terms and one for query terms, then ranked according to the all-pairs similarity between vectors. Diaz et al. [6] used term embeddings to generate query expansion candidates in the language modeling retrieval framework, also \ufb01nding better performance when training a specialized term embedding. Other papers incorporating word embeddings include [10, 11, 35]. Pang et al. [31] propose the use of matching matrices to represent the similarity of short texts, then apply a convolutional neural network inspired by those in computer vision. They populate the matching matrix using both local and distributed term representations. In the local representation, an exact match is used to generate binary indicators of whether the ith term of one text and jth term of the other are the same, as in our local model. In the distributed representation, a pre-trained term embedding is used instead, populating the match matrix with cosine or inner product similarities. The method works for some problems with short text, but not for document ranking [30]. However, by using the match matrix to generate summary statistics it is possible to make the method work well [12], which is our DRMM baseline. These term embeddings are a learned representation of language, but in most cases they are not learned on query-document relevance labels. More often they are trained based on a a corpus, where a term\u2019s representation is learned from its surrounding terms or other document context. The alternative, learning a representation based on NDCG labels, is in keeping with recent progress in deep learning. Deep models have multiple layers that learn distributed representations with multiple levels of abstraction. This kind of representation learning, along with other factors such as the availability of large labeled data sets, has yielded performance improvements on a variety of tasks such as speech recognition, visual object recognition and object detection [20]. This paper learns a text representation end-to-end based on querydocument ranking labels. 
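The match-matrix construction described above can be sketched as follows. The embedding lookup and tokenization are placeholders, and this is an illustration of the general idea rather than the implementation used in the cited papers.

import numpy as np

def match_matrices(query_terms, doc_terms, embeddings):
    # Build an exact-match (local) matrix and a cosine-similarity (distributed) matrix.
    # embeddings: dict mapping a term to a 1-D numpy vector (assumed pre-trained).
    exact = np.zeros((len(query_terms), len(doc_terms)), dtype=np.float32)
    cosine = np.zeros_like(exact)
    for i, q in enumerate(query_terms):
        for j, d in enumerate(doc_terms):
            exact[i, j] = 1.0 if q == d else 0.0
            if q in embeddings and d in embeddings:
                qv, dv = embeddings[q], embeddings[d]
                cosine[i, j] = float(qv @ dv / (np.linalg.norm(qv) * np.linalg.norm(dv) + 1e-9))
    return exact, cosine

A downstream model can consume either matrix directly (as in convolutional matching models) or summary statistics of it (as in the DRMM-style histograms mentioned above).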
This has not been done often in related \f42 44 46 48 50 52 Number of training samples per epoch NDCG@10 27 28 29 210 211 212 213 214 215 216 217 Same # of total samples Same # of epochs QL baseline (a) Local model 42 44 46 48 50 52 Number of training samples per epoch NDCG@10 27 28 29 210 211 212 213 214 215 216 217 Same # of total samples Same # of epochs QL baseline (b) Distributed model 42 44 46 48 50 52 Number of training samples per epoch NDCG@10 27 28 29 210 211 212 213 214 215 216 217 Same # of total samples Same # of epochs QL baseline (c) Duet model Figure 7: We study the performance of our model variants when trained with different size datasets. For every, dataset size we train two models \u2013 one for exactly one epoch and another one with multiple epochs such that the total number of training samples seen by the model during training is 131,072. work with document body text, but we can point to related papers that use short text such as title, for document ranking or related tasks. Huang et al. [16] learn a distributed representation of query and title, for document ranking. The input representation is character trigraphs, the training procedure asks the model to rank clicked titles over randomly chosen titles, and the test metric is NDCG with human labels. Shen et al. [37] developed a convolutional version of the model. These are our DSSM and CDSSM baselines. Other convolutional models that match short texts using distributed representations include [15, 36], also showing good performance on short text ranking tasks. Outside of document ranking, learning text representations for the target task has been explored in the context of other IR scenarios, including query classi\ufb01cation [21], query auto-completion [26], next query prediction [25, 39], and entity extraction [9]. 8." + }, + { + "url": "http://arxiv.org/abs/1602.01137v1", + "title": "A Dual Embedding Space Model for Document Ranking", + "abstract": "A fundamental goal of search engines is to identify, given a query, documents\nthat have relevant text. This is intrinsically difficult because the query and\nthe document may use different vocabulary, or the document may contain query\nwords without being relevant. We investigate neural word embeddings as a source\nof evidence in document ranking. We train a word2vec embedding model on a large\nunlabelled query corpus, but in contrast to how the model is commonly used, we\nretain both the input and the output projections, allowing us to leverage both\nthe embedding spaces to derive richer distributional relationships. During\nranking we map the query words into the input space and the document words into\nthe output space, and compute a query-document relevance score by aggregating\nthe cosine similarities across all the query-document word pairs.\n We postulate that the proposed Dual Embedding Space Model (DESM) captures\nevidence on whether a document is about a query term in addition to what is\nmodelled by traditional term-frequency based approaches. Our experiments show\nthat the DESM can re-rank top documents returned by a commercial Web search\nengine, like Bing, better than a term-matching based signal like TF-IDF.\nHowever, when ranking a larger set of candidate documents, we find the\nembeddings-based approach is prone to false positives, retrieving documents\nthat are only loosely related to the query. 
We demonstrate that this problem\ncan be solved effectively by ranking based on a linear mixture of the DESM and\nthe word counting features.", + "authors": "Bhaskar Mitra, Eric Nalisnick, Nick Craswell, Rich Caruana", + "published": "2016-02-02", + "updated": "2016-02-02", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "main_content": "Introduction to information retrieval, volume 1. Cambridge university press Cambridge, 2008. [27] I. Matveeva, C. Burges, T. Burkard, A. Laucius, and L. Wong. High accuracy retrieval with multiple nested ranker. pages 437\u2013444. ACM, 2006. [28] T. Mikolov, K. Chen, G. Corrado, and J. Dean. Ef\ufb01cient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781, 2013. [29] T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado, and J. Dean. Distributed representations of words and phrases and their compositionality. In Proc. NIPS, pages 3111\u20133119, 2013. [30] B. Mitra. Exploring session context using distributed representations of queries and reformulations. In Proc. SIGIR, pages 3\u201312. ACM, 2015. [31] B. Mitra and N. Craswell. Query auto-completion for rare pre\ufb01xes. In Proc. CIKM. ACM, 2015. [32] E. Nalisnick, B. Mitra, N. Craswell, and R. Caruana. Improving document ranking with dual word embeddings. In Proc. WWW. International World Wide Web Conferences Steering Committee, to appear, 2016. [33] J. Pennington, R. Socher, and C. D. Manning. Glove: Global vectors for word representation. Proc. EMNLP, 12: 1532\u20131543, 2014. [34] S. Robertson. Understanding inverse document frequency: on theoretical arguments for idf. Journal of documentation, 60 (5):503\u2013520, 2004. [35] S. Robertson and H. Zaragoza. The probabilistic relevance framework: BM25 and beyond. Now Publishers Inc, 2009. [36] S. E. Robertson and S. Walker. Some simple effective approximations to the 2-poisson model for probabilistic weighted retrieval. pages 232\u2013241. Springer-Verlag New York, Inc., 1994. [37] X. Rong. word2vec parameter learning explained. arXiv preprint arXiv:1411.2738, 2014. [38] R. Salakhutdinov and G. Hinton. Semantic hashing. International Journal of Approximate Reasoning, 50(7): 969\u2013978, 2009. [39] G. Salton, A. Wong, and C.-S. Yang. A vector space model for automatic indexing. Communications of the ACM, 18(11): 613\u2013620, 1975. [40] T. Schnabel, I. Labutov, D. Mimno, and T. Joachims. Evaluation methods for unsupervised word embeddings. In Proc. EMNLP, 2015. [41] Y. Shen, X. He, J. Gao, L. Deng, and G. Mesnil. Learning semantic representations using convolutional neural networks for web search. In Proc. WWW, pages 373\u2013374, 2014. [42] T. Shi and Z. Liu. Linking glove with word2vec. arXiv preprint arXiv:1411.5595, 2014. [43] A. Singhal, C. Buckley, and M. Mitra. Pivoted document length normalization. In Proceedings of the 19th annual international ACM SIGIR conference on Research and development in information retrieval, pages 21\u201329. ACM, 1996. [44] D. Tang, F. Wei, N. Yang, M. Zhou, T. Liu, and B. Qin. Learning sentiment-speci\ufb01c word embedding for twitter sentiment classi\ufb01cation. In Proc. ACL, volume 1, pages 1555\u20131565, 2014. [45] L. Vilnis and A. McCallum. Word representations via gaussian embedding. arXiv preprint arXiv:1412.6623, 2014. [46] I. Vuli\u00b4 c and M.-F. Moens. Monolingual and cross-lingual information retrieval models based on (bilingual) word embeddings. In Proc. SIGIR, pages 363\u2013372. ACM, 2015. [47] X. Wei and W. B. Croft. Lda-based document models for ad-hoc retrieval. 
In Proc. SIGIR, pages 178\u2013185. ACM, 2006. [48] B. J. Wilson and A. M. J. Schakel. Controlled experiments for word embeddings. arXiv preprint arXiv:1510.02675, 2015. [49] X. Yan, J. Guo, S. Liu, X. Cheng, and Y. Wang. Learning topics in short texts by non-negative matrix factorization on term correlation matrix. In Proceedings of the SIAM International Conference on Data Mining, 2013. [50] G. Zheng and J. Callan. Learning to reweight terms with distributed representations. In Proc. SIGIR, pages 575\u2013584. ACM, 2015. [51] J. Zobel and A. Moffat. Exploring the similarity space. In ACM SIGIR Forum, volume 32, pages 18\u201334. ACM, 1998. [52] W. Y. Zou, R. Socher, D. M. Cer, and C. D. Manning. Bilingual word embeddings for phrase-based machine translation. In EMNLP, pages 1393\u20131398, 2013." + } + ], + "Emine Yilmaz": [ + { + "url": "http://arxiv.org/abs/2004.13486v1", + "title": "On the Reliability of Test Collections for Evaluating Systems of Different Types", + "abstract": "As deep learning based models are increasingly being used for information\nretrieval (IR), a major challenge is to ensure the availability of test\ncollections for measuring their quality. Test collections are generated based\non pooling results of various retrieval systems, but until recently this did\nnot include deep learning systems. This raises a major challenge for reusable\nevaluation: Since deep learning based models use external resources (e.g. word\nembeddings) and advanced representations as opposed to traditional methods that\nare mainly based on lexical similarity, they may return different types of\nrelevant document that were not identified in the original pooling. If so, test\ncollections constructed using traditional methods are likely to lead to biased\nand unfair evaluation results for deep learning (neural) systems. This paper\nuses simulated pooling to test the fairness and reusability of test\ncollections, showing that pooling based on traditional systems only can lead to\nbiased evaluation of deep learning systems.", + "authors": "Emine Yilmaz, Nick Craswell, Bhaskar Mitra, Daniel Campos", + "published": "2020-04-28", + "updated": "2020-04-28", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.CL", + "cs.LG" + ], + "main_content": "INTRODUCTION In recent years, deep neural models achieved state-of-the-art performance on a variety of tasks, and this has happened in a variety of fields ranging from computer vision to information retrieval (IR). It took relatively longer to observe such advances in core IR problems such as ranking [7], especially if we exclude results based on proprietary data\u2014e.g., [8]. Two possible explanations for this delay are related to training data and test data: 1) The lack of large-scale training datasets with tens or hundreds of thousands of queries, since large data would seem to be a requirement based on the experience in other fields, and 2) The lack of test collections to evaluate the quality of neural models in a fair and reliable manner. The TREC 2019 Deep Learning Track [6] addressed these problems by releasing large-scale training data, as well as by developing reliable and reusable test collections for evaluating the quality of various algorithms (ranging from traditional retrieval models such as BM25 to various neural models). The results of the track showed that when sufficient training data is available, most neural models tend to outperform the traditional retrieval models. 
Findings of the track were based on test collections that were created using depth-10 pools of both neural and traditional models. Inclusion of neural runs in pooling is highly unusual, since most test collections were created in the years before neural models had been developed, and even those developed more recently (e.g., [1]) did not have neural models trained on the large labeled datasets that were introduced in TREC 2019. Even though reusability of test collections for evaluating the quality of unseen systems has been widely studied in literature [4, 5, 9, 10, 13, 13], no previous work has analysed the reusability of test collections when they are created solely using systems of a particular type (e.g. traditional systems based on BM25, language modelling, etc.) towards evaluating the quality of systems that are of different type (e.g. neural systems based on deep learning models), which is the main question we aim to answer in this paper. Our approach is to simulate the earlier test collections, where pooling was with one type of model, to see whether this creates a bias against the other type of model, both when comparing within type and across types. Our results demonstrate that evaluation results obtained using test collections that are created solely using pools of traditional systems are less reliable in terms of evaluating the quality of neural systems. Our findings suggest that such test collections should be arXiv:2004.13486v1 [cs.IR] 28 Apr 2020 \fSIGIR \u201920, July 25\u201330, 2020, Xi\u2019an, China Emine Yilmaz, Nick Craswell, Bhaskar Mitra, and Daniel Campos used wit caution when evaluating the quality of neural systems as they may lead to incorrect conclusions regarding how the quality of a neural model compares with a traditional model, as well as how the neural model compares with another baseline neural model. 2 RELATED WORK A significant amount of research has been devoted to analysing the fairness and reusability of test collections for retrieval evaluation, where fairness refers to collection being unbiased in its evaluation to different runs that contributed to the construction of the pool and reusability refers to the fairness of the test collection towards evaluating the quality of the runs that did not contribute to the construction of the test collection [13]. Zobel et al. [14] argued that test collections constructed using depth-k pooling [11] tend to be reasonably reusable and tend to be fair towards evaluating the quality of new systems. Various methods have been proposed in order to generate fair and reusable test collections with limited relevance labels [4, 5, 9, 10, 13]. Previous work has shown that when test collections are constructed using pools that are too small compared to the document collection size, the resulting pools could exhibit some bias (in particular, bias towards systems that retrieve documents that contain topic title words) [3]. While most of this previous work analysed the reusability of test collections in terms of their fairness towards evaluating new systems that did not contribute to the pool, none of the previous work analysed the reusability of such collections when they are constructed solely using systems that are of particular type (e.g. traditional systems) but are used to evaluate the quality of systems that are of a different type (e.g. neural systems). 
The TREC 2017 Common Core Track showed some evidence of neural runs\u2014e.g., [12]\u2014being more likely to uniquely retrieve a relevant document [13] in comparison to traditional runs. If future neural runs, during reuse of the test collection, also had this property of finding previously unseen relevant results, then the evaluation of those new runs would be unfair, since no new judging is done during reuse. Although this indicates a potential problem, no previous work systematically analysed the reusability of test collections generated using traditional models towards evaluating the quality of such neural models. 3 EXPERIMENTAL ANALYSIS We analysed the quality of test collections constructed using depth pooling from traditional vs. neural systems in terms of the number of relevant document identified, as well as in terms of the reusability of these pools based on the evaluation results obtained for systems of different types (neural vs. traditional systems). For this purpose, we use the data from The TREC Deep Learning Track [6], details of which are described below. 3.1 Task and datasets The TREC Deep Learning Track has two tasks: Document retrieval and passage retrieval. Both tasks have large training sets based on human relevance assessments, derived from MS MARCO [2]. The test collection used in the track, which was generated using the depth-10 pools of the participating systems, contains 43 queries. Judgments were done on a four-point scale: Perfectly relevant, highly relevant, relevant and irrelevant. The track reported both NDCG@10 and MRR metrics, with NDCG@10 being the primary metric used in ranking the systems. In total 10 groups with a total of 38 runs participated in the document retrieval task and 11 groups with a total of 37 runs participated in the passage retrieval task. For the document retrieval task, out of the 38 runs, 27 of them were based on neural models (models based on deep learning methods or use such models (e.g. BERT) as features)) and 11 of them were based on traditional methods (models that are based on traditional, non-neural methods such as BM25). For the passage retrieval task, 26 runs were based on neural models and 11 of them were based on traditional methods. Top 10 performing systems in both document ranking and passage ranking tasks were based on neural models. More details about the results of the Deep Learning Track can be found at the overview paper of the track [6]. 3.2 Experimental Results 3.2.1 Number of Relevant Documents Found. Since the reusability of a test collection highly depends on the number of relevant documents identified, we first analysed the number of relevant documents identified if pools were to be constructed solely using (i) traditional runs vs. (ii) neural runs. For this purpose, we divided the runs submitted to the Deep Learning Track into two categories: Traditional systems and neural systems. The runs were assigned to the two categories based on the original categorization used by the track, as described in Section 3.1. We then analysed the number of documents identified when test collections are constructed using depth-k pooling by pooling top k results from traditional systems vs. neural systems, for various cutoff values k. Figure 1 shows the result of this experiment for the document retrieval task (left) and the passage retrieval task (right plot). 
The x axis in the figures shows the cutoff value k used to construct the depth-k pools, and the y axis shows the number of relevant documents identified using pools constructed via traditional (grey line) vs. neural (black line) models. It can be seen that for both tasks neural models tend to find more relevant documents at early cutoff levels. For the document retrieval task, neural runs seem to be overtaken by the traditional runs as one goes deeper in the ranking, whereas for the passage retrieval task neural runs consistently find more relevant results at all cutoffs. Given that most IR metrics tend to be top heavy, these results raise concerns about the reliability of evaluation results when evaluating neural models with pools generated from traditional methods, a commonly faced scenario since most existing test collections were generated solely using traditional models. 3.2.2 Test Collection Reusability. We then analysed the reusability of test collections generated via pooling the top-k results of systems of a particular type for evaluating systems that are of a different type, particularly focusing on traditional vs. neural system types. In particular, we are interested in the question as to whether pools generated using traditional systems can be reliably used to evaluate the quality of neural models, and vice versa.
Figure 1: Cumulative count of relevant results at each rank cutoff k for (a) the document retrieval task and (b) the passage retrieval task. In document retrieval, neural runs find more relevant results at early cutoffs, but are then overtaken by traditional runs at later cutoffs. In passage retrieval, neural runs find more relevant results at all cutoffs.
Table 1: Average Kendall's tau correlations between the actual metric values and metric values computed using 10 randomly generated traditional (top row) vs. neural (bottom row) pools, for document retrieval runs.
                      MRR                          NDCG@10
Test system:     Trad      Neural    All      Trad      Neural    All
Trad Pool        0.436     -0.12     -0.19    0.772     0.68      0.676
Neural Pool      0.769     0.635     0.842    0.774     0.836     0.852
In order to evaluate the reusability of test collections generated using traditional pools for evaluating the quality of neural models, we randomly split the traditional runs submitted to the TREC Deep Learning Track into two sets. We used the first set of systems to construct the test collection using depth-10 pooling (which we refer to as the traditional pool), and we used the second set of systems together with the neural models as test systems, with which we analyse the reusability of the pools generated. Depth-10 pools were used in the pooling process since the original test collection for the Deep Learning Track was generated using depth-10 pooling. In TREC, most groups tend to submit multiple runs, and most of these runs tend to be different variants of the same system, which was also the case for the Deep Learning Track (as described in Section 3.1). In order to avoid having a system in the test set that is very similar to a system used in constructing the pools, if one run from one group is randomly selected to be included in the pool, all the remaining runs from that group are also included in the pool.
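A minimal sketch of the pool-construction step described above is given below. The run, qrel, and group structures are simplified placeholders, and Kendall's tau between system orderings is computed with scipy here purely as a convenience; none of this is the authors' released code.

import random
from scipy.stats import kendalltau

def depth_k_pool(runs, k=10):
    # Union of the top-k documents of each run, per query (depth-k pooling).
    pool = {}
    for run in runs:                      # run: {qid: [docid, ...] in ranked order}
        for qid, ranking in run.items():
            pool.setdefault(qid, set()).update(ranking[:k])
    return pool

def restrict_qrels(qrels, pool):
    # Keep only judgments for documents that made it into the pool; the rest are treated as unjudged.
    return {qid: {d: r for d, r in docs.items() if d in pool.get(qid, set())}
            for qid, docs in qrels.items()}

def split_by_group(group_to_runs, seed=0):
    # Randomly assign half of the groups (and all of their runs) to the pooling side.
    groups = list(group_to_runs)
    random.Random(seed).shuffle(groups)
    half = len(groups) // 2
    pooling = [r for g in groups[:half] for r in group_to_runs[g]]
    held_out = [r for g in groups[half:] for r in group_to_runs[g]]
    return pooling, held_out

def ordering_agreement(actual_scores, estimated_scores):
    # Kendall's tau between two system orderings (metric scores keyed by system name).
    systems = sorted(actual_scores)
    tau, _ = kendalltau([actual_scores[s] for s in systems],
                        [estimated_scores[s] for s in systems])
    return tau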
We then used this test collection to evaluate the quality of the test systems (the neural systems, as well as the traditional systems that did not contribute to the pool). This way we can evaluate the reusability of the test collection constructed with traditional systems in terms of its fairness towards evaluating (i) the performance of neural systems within themselves, (ii) the performance of other traditional systems that did not contribute to the pool within themselves, and (iii) the relative performance of neural vs. traditional systems. Since the test collection for the Deep Learning Track was generated using depth-10 pooling, we also use depth-10 pools of all the submitted systems to obtain our gold standard evaluation results (which we refer to as the actual metric values).
Table 2: Average Kendall's tau correlations between the actual metric values and metric values computed using 10 randomly generated traditional (top row) vs. neural (bottom row) pools, for passage retrieval runs.
                      MRR                          NDCG@10
Test system:     Trad      Neural    All      Trad      Neural    All
Trad Pool        0.63      0.004     0.0      0.789     0.574     0.612
Neural Pool      0.7       0.81      0.875    0.89      0.874     0.881
We then compare the actual metric values with the metric values computed using the traditional pool (which we refer to as the estimated metric values), and compute the Kendall's tau correlation between the actual and estimated metric values when (i) only traditional methods are used as the test systems, (ii) only neural models are used as the test systems, and (iii) systems of both types are used as the test systems. Since NDCG@10 and MRR were two of the primary metrics used in the context of the Deep Learning Track, we also focus on these metrics as the primary evaluation metrics. Since the quality of the pools constructed could be highly affected by the type of runs that are randomly selected for constructing them, we repeat this process 10 times to construct 10 random pools based on different random splits of the traditional runs, and compute the average Kendall's tau correlation over all 10 repetitions. In order to evaluate the reusability of pools generated using neural models, we repeated the same procedure by randomly selecting half of the neural runs for constructing the pools (which we refer to as neural pools), which are then used to evaluate the quality of the neural models that did not contribute to the pool, together with the traditional models, in the same way as above. Table 1 and Table 2 show the average Kendall's tau values obtained over the 10 randomly constructed pools when pools are constructed using half of the traditional systems (upper row) and half of the neural systems (bottom row), for the document retrieval and passage retrieval tasks, respectively. The columns in the tables show the different types of test systems used in evaluation. It can be seen that for both the document retrieval and passage retrieval tasks, traditional pools result in poor evaluation results for the neural systems. In fact, traditional pools seem to be worse than neural pools even for evaluating the quality of traditional systems that did not contribute to the pool! Furthermore, in most cases the Kendall's tau correlation when all the test systems are considered is lower than the Kendall's tau scores for traditional systems and neural systems alone. This suggests that such pools perform very poorly when pairwise comparisons between traditional vs. neural systems are considered.
Hence, when a neural model is compared with a traditional model using a test collection generated via traditional pools, one might incorrectly infer that the neural model is performing worse than the traditional model. Note these findings could be partially related to the number and type of runs included in the pools. In our experiments, we were limited by the number and type of runs submitted to the Deep Learning Track. If more runs with more variety were used to create the pools, the resulting pools have the potential to result in more reliable evaluations of neural systems. However, the fact that \fSIGIR \u201920, July 25\u201330, 2020, Xi\u2019an, China Emine Yilmaz, Nick Craswell, Bhaskar Mitra, and Daniel Campos (a) MRR Traditional Depth-10 Pool (b) MRR Neural Depth-10 Pool (c) NDCG@10 Traditional Depth-10 Pool (d) NDCG@10 Neural Depth-10 Pool Figure 2: MRR (top) and NDCG@10 (bottom) values for document retrieval task, when pools are generated using (left) traditional systems vs. (right) neural systems. the same exact pools Table 1 Table 2 could lead to very different evaluation results in terms of their reliability when evaluating the quality of traditional vs. neural techniques is highly concerning. Out results suggest that existing test collections generated using traditional systems should be used with caution when evaluating the quality of neural models as the evaluation results obtained are likely to be unreliable and one might incorrectly infer that the quality of the neural run is worse than a baseline traditional or neural run. Figure 2 and Figure 3 show how such evaluations look like in detail for a randomly picked pool for the document retrieval and passage retrieval tasks, respectively. The x axis in the plots show the actual metric values when all the systems are used to generate the pools and the y axis shows the estimated metric values computed when half of the traditional (left plots) or neural systems (right plots) are used to generate the pools. The plots also contain line y = x for comparison purposes. The titles in the plots show the Kendall\u2019s tau correlation between the actual and the estimated metric values when all systems are considered in the test set. The plots also show the Kendall\u2019s tau correlation values within the neural models, as well as within the traditional models in the test set. It can be seen that pools generated using the traditional pools are particularly unreliable in evaluating neural runs and may have a tendency to underestimate the quality of the neural runs, whereas pools generated via the neural runs tend to be more reliable for evaluating the quality of both traditional and neural systems. (a) MRR Traditional Depth-10 Pool (b) MRR Neural Depth-10 Pool (c) NDCG@10 Traditional Depth-10 Pool (d) NDCG@10 Neural Depth-10 Pool Figure 3: MRR (top) and NDCG@10 (bottom) values for passage retrieval task, when pools are generated using (left) traditional systems vs. (right) neural systems. 4" + } + ], + "Daniel Campos": [ + { + "url": "http://arxiv.org/abs/2311.07861v2", + "title": "Overview of the TREC 2023 Product Product Search Track", + "abstract": "This is the first year of the TREC Product search track. The focus this year\nwas the creation of a reusable collection and evaluation of the impact of the\nuse of metadata and multi-modal data on retrieval accuracy. This year we\nleverage the new product search corpus, which includes contextual metadata. 
Our\nanalysis shows that in the product search domain, traditional retrieval systems\nare highly effective and commonly outperform general-purpose pretrained\nembedding models. Our analysis also evaluates the impact of using simplified\nand metadata-enhanced collections, finding no clear trend in the impact of the\nexpanded collection. We also see some surprising outcomes; despite their\nwidespread adoption and competitive performance on other tasks, we find\nsingle-stage dense retrieval runs can commonly be noncompetitive or generate\nlow-quality results both in the zero-shot and fine-tuned domain.", + "authors": "Daniel Campos, Surya Kallumadi, Corby Rosset, Cheng Xiang Zhai, Alessandro Magnani", + "published": "2023-11-14", + "updated": "2023-11-15", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.AI" + ], + "main_content": "Introduction At TREC 2023, we hosted the first TREC Product Search Track, looking to create a reusable general benchmark for evaluating the performance of retrieval methods in the product search domain. We focus on providing a benchmark similar in scale and format to NQ Kwiatkowski et al. [2019], or the Deep Learning Track Craswell et al. [2021] but focused on product search. In providing a simple-to-use dataset, we believe broad experimentation using popular retrieval libraries Lin et al. [2021] Gao et al. [2022] can lead to broad improvements in retrieval performance. In this first year of the track, we created a novel collection based on the ESCI Product Re-ranking dataset Reddy et al. [2022], sampled novel queries, created enriched metadata in the form of additional text and images along with seeded evaluation results with a broad range of baseline runs to aid in collection reusability and to allow iteration and experimentation on the use of additional context. Unlike previous product search corpora, the Product Search Track is multi-modal and has a large enough scale to explore the usage of neural retrieval methods. We observe somewhat surprising results using this scaled dataset and a wide variety of baseline runs. Single-stage retrieval models that leverage vector representations do not consistently outperform traditional retrieval methods such as BM25. Moreover, in the zero-shot setting, we find that larger vectorbased models do not always beat their more minor variants, which is at odds with other evaluation corpora such as MTEB Muennighoff et al. [2023]. Finally, while additional metadata can improve retrieval performance at a macro level, extra information cannot guarantee performance. In evaluating per-query performance, we find that vector-based systems lose performance with the other metadata. Please see the participant papers for more insights about what we learned this year. 2 Task description The product search track has one task: product ranking. Within this task, various enriched datasets are opened to participants to allow them to enrich the collection as they see fit. Participants were allowed to submit up to three arXiv:2311.07861v2 [cs.IR] 15 Nov 2023 \fFigure 1: Prompt used in our synthetic query generation on randomly selected product. The sampled product is included in the placeholders {product}. official runs. When submitting each run, participants indicated what external data, pretrained models, and other resources were used, as well as information on what style of the model was used. 
In the ranking task, given a query, the participants were expected to retrieve a ranked list of products from the full collection based on the estimated likelihood that the product would meet the user\u2019s need. Participants could submit up to 100 products per query for this end-to-end ranking task. We first selected a subset of 200 queries for judging in the pooling and judging process. Then NIST started judging these queries, throwing out queries without high disagreement or deemed un-judgable. If at least 50% of the judged products are relevant or there is no relevant product, the query is deemed un-judgable. This led to a judged test set of 186 queries, which we compare the quality of runs. The track received 62 submissions to the passage ranking task, 39 of which were baseline runs that the track coordinators submitted. Judgments were collected for each query product pair on a four-point scale: [3] Perfectly relevant: The product is exactly what the user wants. [2] Highly relevant: The product could match the user query, but it may be a substitute for the original query intent. It may have a slightly different style, brand, or type, but a user would be satisfied if they received this product. [1] Related: The product seems related to the query but not the item the user seeks. Products in this category could complement the user\u2019s intended product. [0] Irrelevant: The product has nothing to do with the query. For binary metrics, we map judgment levels 3,2 to relevant and 1,0 to irrelevant. The collection is based on the ESCI Shopping queries dataset Reddy et al. [2022]. While this dataset is focused on improving product search, it lacks a clear end-to-end retrieval benchmark. Instead, the dataset includes a re-ranking task in which the top 40 results retrieved from the Amazon product corpus must be re-ranked for improved relevance. While this re-ranking task is quite important to the end-to-end performance of a product search engine, it does not allow for ample understanding of what impacts the performance of end-to-end retrieval in the product domain. Since there is no source of publicly available shopping queries, nor does the ESCI dataset have a publicly accessible test dataset, we created a new set of 998 evaluation queries leveraging GPT-4 and some heuristic-based sampling. For query generation, we leveraged GPT-4 along with Prompt, which built on the work of InPars Bonifacio et al. [2022] Jeronymo et al. [2023], and we created 500 queries using the prompt shown in table 1. 2 \fFigure 2: Each product in the collection contains basic information such as a title and product description along with contextual metadata, which includes attributes such as reviews, dimensions, etc. Figure 3: Some examples of product images. Some items have multiple images, while others have none. To avoid cases where this approach fails and to study how models perform with more typical product search queries with high keyword overlap, we generate queries by selecting sub-spans of product titles or descriptions. In generating queries with GPT-4, we aimed to create a reliable way of generating new and interesting queries for the collection, as we do not have a method to sample novel queries reliably. 3 Datasets This year, we leverage an enriched and filtered product search dataset based on the ESCI dataset Reddy et al. [2022]. We will first describe the dataset and its generation before we describe how we adapted it to best suit the track. 
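As a brief aside to the task description above, the binary collapse of the four-point judgment scale (levels 3 and 2 relevant, levels 1 and 0 irrelevant) is simple enough to state in code. The data structures below are illustrative; only the threshold follows the mapping given above.

def binarize_qrels(graded_qrels, min_relevant=2):
    # Collapse the 4-point scale: grades 3 and 2 become relevant (1), grades 1 and 0 irrelevant (0).
    # graded_qrels: {query_id: {product_id: grade}} with grades in {0, 1, 2, 3}.
    return {qid: {pid: int(grade >= min_relevant) for pid, grade in docs.items()}
            for qid, docs in graded_qrels.items()}

# Example with one hypothetical judged query.
graded = {"q1": {"p_exact": 3, "p_substitute": 2, "p_related": 1, "p_off_topic": 0}}
print(binarize_qrels(graded))   # {'q1': {'p_exact': 1, 'p_substitute': 1, 'p_related': 0, 'p_off_topic': 0}}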
Shopping Queries Dataset: A Large-Scale ESCI Benchmark for Improving Product Search The benchmark for improving product search, or ESCI, is a large-scale benchmarking dataset that focuses on subsets of product search use cases and provides frameworks from which improvement can be studied. Unlike other product retrieval datasets, the ESCI corpus contains English, Japanese, and Spanish queries. The dataset centers around three tasks important to the world of product search. It can be used to improve customer experience: Query-Product Ranking, Multi-class Product Classification, and Product Substitute Identification. For Query-Product Ranking: given a user-specified query and the top 40 products retrieved by a commercial product search engine, this task aims to rank the products to have more relevant products ranked above non-relevant products. For Multi-class Product Classification: given a query and a result of products, classify products into the following matches: Exact, substitute, complement, and irrelevant. For Product Substitute Identification: given a product and a 3 \flist of potential substitutes, identify which could be substituted. Within the three tasks, there are two variants of product collections, with the product ranking task using the smaller collection and the other tasks using a larger task. Given our focus on retrieval, we leverage the former. Within each task, there is a large training data set that contains query product pairs that have been annotated as exact match (E), substitute (S), complement (C), and irrelevant (I). The data contains the following fields: example id, query, query id, product id, product locale, ESCI label, small version, large version, split, product title, product description, product bullet point, product brand, product color, and source. The smaller ranking dataset has 48,300 unique queries and 1,118,011 relevance judgments. The data sets are stratified into train, dev, and test, of which only the labels for the train and dev have been released publicly. On average, each query has 20 judgments for English and 28 for other languages. Item Instances Notes Collection 1,661,907 90+% of products have at least 1 image Train Queries 30,734 Train + Dev Train QREL 392,119.00 N/A 2023 Test Queries 926 N/A 2023 Test Queries (Judged) 182 N/A Table 1: High-level statistics on the size of the collection and queries of the TREC Product Search 2023 Collection Product Search Track Corpus While the full ESCI dataset is multilingual and features over 3 million items, we narrowed our focus to English only. We attempt to enrich the dataset with additional Metadata and images for these English products as we believe this can be very important for product search. The ESCI dataset is focused on text information that ignores any behavioral, categorical, visual, or numerical features that can be used for ranking. Product metadata enrichment improves product representations by including additional helpful information such as reviews, attributes such as size and color, and categorical ordering from extracted Metadata from Amazon\u2019s online catalog 1. Figure 2 shows an example product with its additional Metadata. We crawled and removed images for each product using the ASIN from this enrichment. Product images contain one to ten thumbnail-size images for a given product, which were shuffled from Amazon and joined with the ESCI dataset. 
Since these images are extracted from product thumbnails, each image is only 64x64, which allows the entire collection to be relatively small. Some product image examples can be found in figure 3. Numerical details on the collection can be found in table 1 4 Results and analysis Submitted runs A total of 4 groups participated in the TREC 2023 Product Search Track, including baseline runs submitted by the track coordinators. Across all groups, we received 62 run submissions, including 49 baseline runs. Table 2 and figure 4 summarizes the submission statistics for this year\u2019s track. This set of runs led to 182 evaluated queries, which we believe will likely make this a highly reusable collection apt for future experimentation. This year, we had fewer participating groups than we hoped for compared to similar tracks (Deep learning had 15 groups in 2019, 25 in 2020, and 19 in 2021). We believe this might indicate the broader saturation of the IR community by large-scale datasets focused on single-stage retrieval via neural language models. Overall results Table 3 presents a standard set of relevance quality metrics for product search ranking runs. Reported metrics include Normalized Discounted Cumulative Gain (NDCG) [J\u00e4rvelin and Kek\u00e4l\u00e4inen, 2002] at depth 10 and 100, Recall at depth 10 and 100, and the Infrared Average Precision (InfAP) [Yilmaz and Aslam, 2006]. Results represent the mean score across the 182 queries that NIST assessed, and scores are computed using TRECEVAL. 1Metadata was extracted from https://github.com/shuttie/esci-s/ Table 2: TREC 2023 Product Search Track run submission statistics. All Groups Coordinator Baselines Number of groups 4 1 Number of total runs 62 39 4 \fFigure 4: Relative system ordering based on mean NDC@10 None of these results leverage the existing development portion of ESCI or the unreleased eval set. In subsequent discussions, we employ NDCG@10 as our primary evaluation metric to analyze the ranking quality produced by different methods. To analyze how different approaches perform in the high recall domain, we employ recall at 100 (R@100), which compares how often the positive set is present in the top 100 candidates even if they are not often ranked highly. In the product search, users often add sorting and filtering forms via price, size, color, etc. When a user removes portions of the ranked candidate set, the recall of a larger filter set becomes highly important. Looking at the results in table 3, we see clear gains from hybrid retrieval systems that leverage multiple retrievers to improve performance (f_splade_bm25,cfdaclip_MR_A). We further see that in this domain, there is a consistent and effective performance for traditional retrieval methods such as BM25, which is one of the top-performing systems despite a lack of collection optimization. When we evaluate specific queries as shown in table 4, we find that there are some queries, such as Elite (Elite) Volano/drivo/Kura For Body 329770001 or small measuring rice bin Asvel or Dinosaur Pee Pee Teepee Wee, where one or a few retrieval systems have high NDCG@10 scores. In contrast, most systems have scores of 0. Each of these queries is looking for specific items, and surprisingly, the systems that excel at spear-fishing each product are inconsistent across questions. Performance on long vs. 
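The reported numbers are trec_eval-style means over the judged queries; a minimal sketch of reproducing such per-run averages with pytrec_eval (a Python wrapper around trec_eval, used here as an assumed convenience rather than the track's exact tooling) is shown below. The qrels and run dictionaries are placeholders for the NIST judgments and a submitted run.

import pytrec_eval

def mean_metrics(qrels, run, measures=("ndcg_cut_10", "ndcg_cut_100", "recall_10", "recall_100")):
    # Average trec_eval-style metrics over the judged queries of a single run.
    # qrels: {query_id: {doc_id: graded_relevance}}; run: {query_id: {doc_id: score}}.
    evaluator = pytrec_eval.RelevanceEvaluator(qrels, {"ndcg_cut", "recall"})
    per_query = evaluator.evaluate(run)
    return {m: sum(q[m] for q in per_query.values()) / len(per_query) for m in measures}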
short queries To better understand the variance between variations in query length, we stratified the queries based on query length to analyze whether system ordering depends on query length. We study this because shorter queries, such as Google Wi-Fi System Mesh, tend to be more broad in the world of product search. In contrast, longer queries, such as 21x21 beige sun shade sail patio UV protection outdoor backyard, focus on finding specific products. We stratify queries by setting queries with 7 or more words that are too long and everything else assorted. This leaves 182 total queries, 81 short queries, and 101 long queries. When we use these stratified sets of queries, we find high Kendall tau with 0.93442 (7.13e-27) and 0.9640 (1.70e-28) for short and long, respectively. This agreement is surprisingly high as we see a large variation in NDCG as shown in table 11 where, for example, with BM25, the NDCG@10 from long to short queries is > 10% relative move. Metadata vs. Simple Collection To understand the impact of enriching the collection with textual metadata in our baselines, we provided runs that use the simple collection and enhanced metadata. Tables 5 and 6 provide detailed 5 \fTable 3: Summary of results for all runs. Run Group InfAP NDCG @10 NDCG @100 R@10 R@100 f_splade_bm25 F 0.6068 0.7505 0.7244 0.4919 0.8015 f_splade_clip_bm25 F 0.5731 0.7327 0.7143 0.4739 0.8001 cfdaclip_MR_A JBNU 0.5910 0.7257 0.7019 0.4766 0.7857 cfdaclip_ER_B JBNU 0.5905 0.7256 0.7010 0.4766 0.7862 cfdaclip_ER_A JBNU 0.5902 0.7252 0.7008 0.4765 0.7840 JBNU-C JBNU 0.5885 0.7251 0.7074 0.4700 0.7870 cfdaclip_MR_B JBNU 0.5903 0.7251 0.7010 0.4765 0.7859 metadata-enhanced-gte-small-zero-shot Baselines 0.4955 0.6647 0.6500 0.4363 0.7416 simple-gte-small-zero-shot Baselines 0.4818 0.6612 0.6492 0.4375 0.7372 JBNU-2 JBNU 0.4792 0.6583 0.6208 0.4359 0.7272 BM25-pyserini-simple-collection Baselines 0.4769 0.6540 0.6148 0.4287 0.7241 JBNU-1 JBNU 0.4828 0.6531 0.6185 0.4092 0.7272 BM25-pyserini-metadata-collection Baselines 0.4729 0.6408 0.6160 0.4254 0.7272 f_gpt_rerank F 0.4673 0.6225 0.6599 0.3765 0.8001 JBNU-A JBNU 0.4500 0.5989 0.5607 0.3772 0.6636 r_gpt3d5_turbo r 0.4174 0.5950 0.5889 0.3806 0.7272 metadata-enhanced-all-mpnet-base-v2-zero-shot Baselines 0.4144 0.5937 0.5541 0.3862 0.6512 simple-all-mpnet-base-v2-zero-shot Baselines 0.4000 0.5895 0.5508 0.3806 0.6348 JBNU-B JBNU 0.4349 0.5763 0.5380 0.3580 0.6339 metadata-enhanced-all-MiniLM-L12-v2-zero-shot Baselines 0.3844 0.5660 0.5309 0.3821 0.6558 metadata-enhanced-all-MiniLM-L6-v2-zero-shot Baselines 0.3688 0.5328 0.5164 0.3654 0.6415 simple-all-MiniLM-L12-v2-zero-shot Baselines 0.3483 0.5288 0.5161 0.3502 0.6365 metadata-enhanced-trec-product-search-gte-small Baselines 0.3520 0.5168 0.5101 0.3443 0.5859 metadata-enhanced-trec-product-search-e5-small-v2 Baselines 0.3488 0.5119 0.5082 0.3481 0.6096 metadata-enhanced-trec-product-search-gte-base Baselines 0.3423 0.5009 0.5004 0.3400 0.5895 simple-e5-large-zero-shot Baselines 0.3339 0.4998 0.4490 0.3428 0.5537 simple-all-MiniLM-L6-v2-zero-shot Baselines 0.3261 0.4952 0.4924 0.3334 0.6099 simple-trec-product-search-gte-small Baselines 0.3194 0.4901 0.4902 0.3080 0.5692 simple-trec-product-search-gte-base Baselines 0.3067 0.4777 0.4813 0.3123 0.5676 simple-trec-product-search-all-miniLM-L12-v2 Baselines 0.3060 0.4763 0.4589 0.3100 0.5351 metadata-enhanced-trec-product-search-bge-small-en Baselines 0.3193 0.4721 0.4708 0.3012 0.5565 metadata-trec-product-search-all-miniLM-L12-v2 Baselines 0.3129 0.4681 0.4603 
0.3081 0.5581 metadata-trec-product-search-all-miniLM-L6-v2 Baselines 0.3144 0.4673 0.4675 0.3181 0.5528 simple-trec-product-search-all-miniLM-L6-v2 Baselines 0.3008 0.4591 0.4599 0.2931 0.5429 metadata-enhanced-gte-large-zero-shot Baselines 0.2503 0.4501 0.4103 0.2698 0.4978 simple-trec-product-search-bge-small-en Baselines 0.2726 0.4379 0.4328 0.2741 0.5080 search-dpr-bert-base Baselines 0.2648 0.4272 0.4333 0.2796 0.5068 metadata-enhanced-trec-product-search-e5-base-v2 Baselines 0.2703 0.4242 0.4118 0.2793 0.5148 metadata-enhanced-trec-product-search-bge-base-en Baselines 0.2709 0.4237 0.4165 0.2938 0.4955 metadata-enhanced-trec-product-search-dpr-bert Baselines 0.2636 0.4165 0.4276 0.2774 0.5208 simple-trec-product-search-all-mpnet-base-v2 Baselines 0.2377 0.4090 0.4013 0.2507 0.4747 metadata-trec-product-search-all-mpnet-base-v2 Baselines 0.2611 0.4089 0.4118 0.2643 0.5006 simple-trec-product-search-bge-base-en Baselines 0.2448 0.4064 0.4027 0.2628 0.4728 simple-gte-large-zero-shot Baselines 0.2146 0.3930 0.3654 0.2319 0.4294 simple-bge-small-zero-shot Baselines 0.1898 0.3680 0.3475 0.2059 0.4188 metadata-enhanced-bge-base-en-zero-shot Baselines 0.1919 0.3396 0.3290 0.2211 0.4301 simple-bge-base-zero-shot Baselines 0.1178 0.2948 0.2458 0.1479 0.2664 simple-gte-base-zero-shot Baselines 0.0522 0.1493 0.1131 0.0581 0.0965 simple-bge-large-zero-shot Baselines 0.0498 0.1486 0.1056 0.0537 0.0787 simple-e5-base-zero-shot Baselines 0.0439 0.1168 0.0938 0.0527 0.0894 metadata-enhanced-e5-base-v2-zero-shot Baselines 0.0276 0.0936 0.0861 0.0375 0.1021 metadata-enhanced-gte-base-zero-shot Baselines 0.0285 0.0604 0.0614 0.0333 0.0940 simple-bert-base-uncased-zero-shot Baselines 0.0074 0.0352 0.0294 0.0100 0.0374 metadata-enhanced-bge-large-en-zero-shot Baselines 0.0101 0.0323 0.0287 0.0108 0.0314 simple-contriever-base-zero-shot Baselines 0.0049 0.0159 0.0159 0.0062 0.0372 metadata-enhanced-e5-small-v2-zero-shot Baselines 0.0102 0.0142 0.0116 0.0090 0.0126 simple-e5-small-zero-shot Baselines 0.0071 0.0113 0.0098 0.0089 0.0130 metadata-enhanced-contriever-base-msmarco Baselines 0.0022 0.0081 0.0108 0.0026 0.0332 metadata-enhanced-trec-product-search-bge-large-en Baselines 0.0000 0.0021 0.0008 0.0000 0.0000 simple-trec-product-search-gte-large Baselines 0.0000 0.0015 0.0007 0.0001 0.0001 metadata-enhanced-trec-product-search-e5-large-v2 Baselines 0.0000 0.0015 0.0007 0.0000 0.0000 metadata-enhanced-trec-product-search-gte-large Baselines 0.0000 0.0011 0.0006 0.0000 0.0000 results on the impact of using metadata vs simple data across some baseline runs. Based on this data, we note that when focused on NDCG@10, the introduction of metadata is somewhat of a mixed message. Some models see benefit from the additional information, and others see losses. Despite this variability, our impact tends to be relatively small, as good retrievers models know an effect of less than 5%. When focused on recall, the message changes, as most models see an improvement in recall by using metadata. 
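The run-level and per-query numbers discussed above (NDCG@10, NDCG@100, R@100, and the per-query gaps in table 4) can be recomputed from TREC-format runs and qrels. The official scores were produced with trec_eval; the sketch below shows an equivalent computation with pytrec_eval, comparing a metadata-enhanced run against a simple-collection run per query. The file names are purely illustrative and are not part of the track's released tooling.

```python
# Illustrative sketch: score two runs with pytrec_eval and inspect per-query
# NDCG@10 gaps, mirroring the kind of analysis in tables 4-6. File names are assumed.
import pytrec_eval


def load_qrels(path):
    """Read TREC qrels lines: qid iteration docid relevance."""
    qrels = {}
    with open(path) as f:
        for line in f:
            qid, _, docid, rel = line.split()
            qrels.setdefault(qid, {})[docid] = int(rel)
    return qrels


def load_run(path):
    """Read a TREC run file: qid Q0 docid rank score tag."""
    run = {}
    with open(path) as f:
        for line in f:
            qid, _, docid, _, score, _ = line.split()
            run.setdefault(qid, {})[docid] = float(score)
    return run


qrels = load_qrels("product-search-2023.qrels")              # assumed file name
evaluator = pytrec_eval.RelevanceEvaluator(qrels, {"ndcg_cut", "recall"})

simple = evaluator.evaluate(load_run("bm25-simple.run"))      # assumed run names
metadata = evaluator.evaluate(load_run("bm25-metadata.run"))

for qid in sorted(simple, key=lambda q: simple[q]["ndcg_cut_10"]):
    gap = metadata[qid]["ndcg_cut_10"] - simple[qid]["ndcg_cut_10"]
    print(f"{qid}\tNDCG@10 gap (metadata - simple): {gap:+.4f}\t"
          f"R@100 (simple): {simple[qid]['recall_100']:.4f}")
```

Averaging the per-query dictionaries over the 182 assessed queries reproduces run-level scores of the kind shown in table 3.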
6 \fQuery Max-Mean Gap Systems Score 0 Elite (Elite) Volano/drivo/Kura For Body 329770001 0.9781 96.77% small measuring rice bin Asvel 0.8037 70.97% Switch protective film Japanese glass blue light reduction water repellent anti-fingerprint 0.7968 77.42% Elegant satin floral lace ribbon lingerie set 0.7264 59.68% onlypuff Pocket Shirts for Women Casual 0.6750 27.42% Ekouaer Long Nightgown,Women\u2019s Loungewear Short Sleeve 0.6590 33.87% Lugz Women\u2019s Empire Hi Wvt Fashion Boot 0.6559 24.19% Dinosaur Pee Pee Teepee Wee 0.6553 41.94% 10th birthday decorations for girl 0.6548 19.35% juDanzy kids knee high tube socks with grips 0.6461 58.06% Matching Delivery Robe and Swaddle Blanket 0.6429 41.94% UCGOU Bubble Mailers 7.25x12 Inch Teal 25 Pack 0.6277 24.19% Cicy Bell Women\u2019s Sunflower 0.6211 62.90% Canomo Lamp Light Kit Make a 0.6209 19.35% 5L matte black rectangular trash can with soft close lid and anti-bag slip liner for bathroom or kitchen 0.6206 50.00% Women\u2019s UPF 50+ cotton linen bucket sun hat beige small 0.6163 24.19% Small breed wet dog food Hill\u2019s Science Diet Chicken & Barley Recipe 0.6038 40.32% DKB Evian Jetted Whirlpool 0.6031 27.42% OtterBox Symmetry Disney Princess Mulan iPhone Xs iPhone X case 0.6002 45.16% Stars in the Desert book 0.5958 58.06% DC Collectibles Batman Arkham Origins 0.5930 35.48% fall sunflower pumpkin placemats set of 6 cotton linen washable table mats 0.5853 27.42% 300 piece jigsaw puzzle Kitchen Memories by Steve Crisp 0.5821 30.65% Marvel Avengers Endgame Gauntlet T-Shirt 0.5816 20.97% Pahajim Women Fashion Purses Handbags Shoulder Tote Bags Top Handle Satchel 0.5811 46.77% Mai Puru Endo Mai\u2019s First Photo Collection 0.5806 66.13% girls princess dress up costume headband accessories 0.5764 19.35% Nanatang Badflower Logo Men\u2019s Long Sleeve Sweatshirt\u2019s 0.5756 14.52% HAPY SHOP 80 Pcs Silver Alligator Hair 0.5705 45.16% VERYKE L-Shaped sectional sofa chenille fabric golden legs living room 0.5689 27.42% ZINUS Owen Wood Platform 0.5667 29.03% Xperia 10 II Blue Light Cut Glass Film Asahi Japanese Ultra Thin Anti-Bubble Anti-Fingerprint 0.5630 67.74% Acer V6 V196LB 19\" HD 0.5619 50.00% Table 4: Per query gap between Mean NDCG@10 and Max NDCG and the % of retrieval systems with a NDCG@10 of zero. Run Zero Shot NDCG @10 (metadata) NDCG @10 (simple) Impact BM25 Y 0.6408 0.6540 -0.0133 all-MiniLM-L12-v2 Y 0.5660 0.5288 0.0371 all-MiniLM-L6-v2 Y 0.5328 0.4952 0.0376 all-mpnet-base-v2 Y 0.5937 0.5895 0.0043 bge-base-en Y 0.3396 0.2948 0.0448 bge-large-en Y 0.0323 0.1486 -0.1163 contriever-base Y 0.0081 0.0159 -0.0079 e5-base-v2 Y 0.0936 0.1168 -0.0232 e5-small-v2 Y 0.0142 0.0113 0.0029 gte-base Y 0.0604 0.1493 -0.0889 gte-large Y 0.4501 0.3930 0.0571 gte-small Y 0.6647 0.6612 0.0035 bge-base-en N 0.4237 0.4064 0.0174 bge-small-en N 0.4721 0.4379 0.0342 Bert-base N 0.4165 0.4272 -0.0107 gte-base N 0.5009 0.4777 0.0233 gte-large N 0.0011 0.0015 -0.0005 gte-small N 0.5168 0.4901 0.0267 all-miniLM-L12-v2 N 0.4681 0.4763 -0.0082 all-miniLM-L6-v2 N 0.4673 0.4591 0.0082 all-mpnet-base-v2 N 0.4089 0.4090 -0.0001 Table 5: NDCG@10 performance of retrieval methods using the simple collection and metadata enhanced collection. However, like the impact on the top ten, recall improvements are minor, with a few exceptions. Across both metrics, we do not see any impact trend related to fine-tuned or zero-shot models. Finetune vs. 
Zero-Shot As part of our baselines, we evaluated a set of naive baselines where we finetune in a single form and compare the impact of fine-tuning across runs. We leverage the Tevatron library and follow the training 7 \fRun Zero Shot R@100 (Metadata) R@100 (Simple) Impact BM25 Y 0.7272 0.7241 0.0032 all-MiniLM-L12-v2 Y 0.6558 0.6365 0.0193 all-MiniLM-L6-v2 Y 0.6415 0.6099 0.0316 all-mpnet-base-v2 Y 0.6512 0.6348 0.0163 bge-base-en Y 0.4301 0.2664 0.1637 bge-large-en Y 0.0314 0.0787 -0.0473 contriever-base Y 0.0332 0.0372 -0.0040 e5-base-v2 Y 0.1021 0.0894 0.0127 e5-small-v2 Y 0.0126 0.0130 -0.0005 gte-base Y 0.0940 0.0965 -0.0026 gte-large Y 0.4978 0.4294 0.0684 gte-small Y 0.7416 0.7372 0.0044 bge-base-en N 0.4955 0.4728 0.0227 bge-small-en N 0.5565 0.5080 0.0485 Bert-base N 0.5208 0.5068 0.0140 gte-base N 0.5895 0.5676 0.0219 gte-large N 0.0000 0.0001 -0.0001 gte-small N 0.5859 0.5692 0.0167 all-miniLM-L12-v2 N 0.5581 0.5351 0.0230 all-miniLM-L6-v2 N 0.5528 0.5429 0.0099 all-mpnet-base-v2 N 0.5006 0.4747 0.0259 Table 6: Recall@100 performance of retrieval methods using the simple collection and metadata enhanced collection. procedure from the NQ implementation 2. We train each model for 40 epochs using 4 A100 with a batch size of 128, cross-device negatives, and learning rates of 1e-5,2e-5,3e-5,5e-5, and 1e-4, selecting the model that had the lowest validation loss at the end. These runs are not meant to be highly optimized fine-tuning runs but general explorations on the impact of fine-tuning. As shown in tables 9 and 10, we see a consistent trend that the larger models suffer from fine-tuning (indicating the fine-tuning recipe was incorrect), but smaller models see large gains, in some cases going from completely unusable to highly competitive. 5" + }, + { + "url": "http://arxiv.org/abs/2304.03401v2", + "title": "Noise-Robust Dense Retrieval via Contrastive Alignment Post Training", + "abstract": "The success of contextual word representations and advances in neural\ninformation retrieval have made dense vector-based retrieval a standard\napproach for passage and document ranking. While effective and efficient,\ndual-encoders are brittle to variations in query distributions and noisy\nqueries. Data augmentation can make models more robust but introduces overhead\nto training set generation and requires retraining and index regeneration. We\npresent Contrastive Alignment POst Training (CAPOT), a highly efficient\nfinetuning method that improves model robustness without requiring index\nregeneration, the training set optimization, or alteration. CAPOT enables\nrobust retrieval by freezing the document encoder while the query encoder\nlearns to align noisy queries with their unaltered root. We evaluate CAPOT\nnoisy variants of MSMARCO, Natural Questions, and Trivia QA passage retrieval,\nfinding CAPOT has a similar impact as data augmentation with none of its\noverhead.", + "authors": "Daniel Campos, ChengXiang Zhai, Alessandro Magnani", + "published": "2023-04-06", + "updated": "2023-04-10", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.AI", + "cs.CL" + ], + "main_content": "Introduction Contextual language representations derived from Large Language Models (LLM) have led to impressive improvements in performance across many tasks in natural language processing, such as sentiment analysis, topic detection, and question answering. 
In information retrieval, LLM-based cross-encoders and bi-encoder models have led to major improvements in relevance on benchmarking datasets like MSMARCO (Nguyen et al., 2016) and Natural Questions (Kwiatkowski et al., 2019) and have been adopted as common backbones for many industrial deployments in search. Unlike traditional term-based search, contextual representations excel at semantic search, which can improve relevance as it matches intent instead of keywords. Figure 1: To learn to align the representations of queries and their counterparts with noise, a contrastive loss is used. Since the document encoder and its relative relation to the query encoder are frozen, an anchoring vector keeps the aligned encoder from drifting from its original learned representation. While neural methods excel on well-formulated academic benchmarks, performance falters when faced with domain shifts or queries with misspellings or typos. On recent benchmarks like BEIR (Thakur et al., 2021), cross-encoders are more robust to shifts in domain than bi-encoders. When looking at queries with some noise like typos and misspellings, the same holds (Zhuang and Zuccon, 2021). Despite their superior performance on noisy queries and domain shifts, cross-encoder inference requirements make them too expensive to use at scale. To avoid the inference inefficiency of cross-encoders, bi-encoders have emerged as a popular method for retrieval, particularly for candidate generation in multi-stage ranking systems. Bi-encoders leverage query and document models trained to match their representations in a latent space. Since document representations are query independent, they only need to be generated once, and the inference load is limited to a single run of the query encoder. Research on bi-encoders has been driven by the availability of large training datasets such as MSMARCO (Campos et al., 2016), Natural Questions (NQ) (Kwiatkowski et al., 2019), and Trivia QA (TQA) (Joshi et al., 2017). These datasets have allowed deep explorations of how to improve the training procedure (Qu et al., 2021), decrease index size (Yamada et al., 2021), and improve model efficiency (Khattab and Zaharia, 2020). Despite this tremendous success, these neural models are brittle to subtle search domain shifts and minor query formulation variations (Wu et al., 2021). While there has been plenty of work showing that neural methods are not robust to typos (Wu et al., 2021) (Penha et al., 2021) (Sidiropoulos et al., 2022) (Sidiropoulos and Kanoulas, 2022) (Zhuang and Zuccon, 2021), all approaches which improve performance either require a new optimized general model such as CharBERT (Zhuang and Zuccon, 2022) or require retraining with data augmentation (Zhuang and Zuccon, 2021). While effective, both approaches introduce a sizable overhead in dataset generation and augmentation or language model pre-training. Moreover, despite the effectiveness of these two techniques, further study is required to understand the interplay between data augmentation and curriculum learning (Mao et al., 2022) and topic-aware sampling (Hofstätter et al., 2021). Seeking to improve the performance of the query encoder on noisy queries as efficiently as possible, we introduce Contrastive Alignment POst Training (CAPOT).
To avoid complicated dual encoder training regimes, CAPOT assumes that the document encoder and index are immutable and learn an improved query representation without altering existing relations to the index. As shown in \ufb01gure 1, CAPOT uses a traditional contrastive loss (Schroff et al., 2015) where queries with noise (positive samples) should be closer to the anchor (query without noise) than unrelated queries. Unlike a traditional contrastive loss, CAPOT introduces a notion of an anchoring loss between the unaltered model and the aligned model. As the model learns to group noisy queries with their unaltered roots, we also constrain its ability to alter the representation aligned with the unaltered document encoder. The main contributions of our work are as follows: \u2022 We introduce CAPOT, a highly ef\ufb01cient \ufb01netuning method for improving performance on noisy queries without retraining a model or index regeneration. \u2022 We demonstrate that CAPOT is incredibly effective at making the encoder robust, particularly with typos. Using CAPOT approximates the impact of data augmentation without the associated computational overhead. \u2022 We demonstrate that CAPOT is robust enough to prove functional with completely unsupervised data. Using the ORCAS dataset, CAPOT can improve performance without access to the query training distribution. 2 Related Work Bi-Encoders, commonly called dual-encoders or dense retrievers, decompose ranking by leveraging the inner product of query and document representations to produce a relevance score for query document pairs. Since their document representations are query invariant, they can be precomputed and loaded into an Approximate Nearest Neighbor (ANN) such as FAISS (Johnson et al., 2019). The k closest documents can be found for each query with minimal latency at run time. Since bi-encoders leverage LLM such as BERT (Devlin et al., 2019), they are often limited to ranking short passages of text and are commonly referred to as Dense Passage Retrievers (DPR) (Karpukhin et al., 2020). Driven by their ef\ufb01ciency in deployment and relevance performance, DPRbased models have rapidly become the building blocks for systems doing product search (Magnani et al., 2022), open domain question answering (Karpukhin et al., 2020) and customer support (Mesquita et al., 2022). Recent work has heavily focused on improving the relevance of DPR models by improving the negative sampling using methods like ANCE (Xiong et al., 2021) and in-batch negatives (Lin et al., 2021). While effective DPR models are brittle to shifts in the domain, minor variations can cause a complete collapse in relevance. Li et al. \u20192022 introduced methods for improving such performance by having a single query encoder leverage multiple document encoders to transfer between domains (Li et al., 2022). While effective, such a method carries a high computational load as multiple indexes must be maintained and updated. 2 \fNoising Function Alteration Type Original Alteration Determiner Syntactic who sang waiting for a girl like you who sang waiting a a girl like you Synonym Semantic Which was the \ufb01rst European country to abolish capital punishment? Which was the \ufb01rst European country to abolish majuscule punishment? Lemmatize Syntactic who plays young dr mallard on ncis who play young dr mallard on ncis Stemming Syntactic who recorded the song still the one? who record the song still the one? 
Random Character Swap (RCS) Surface big little lies season 2 how many episodes big litt e lies season 2 how many episodes Keyboard Character Swap (KCS) Surface when did veterans day start being called veterans day when djid veterans day start being called veterans day Character Delete (CD) Surface when did big air snowboarding become an olympic sport when did big air snowboarding become an olympic sort Reorder Word (RW) Surface who is the main character in green eggs and ham who is the main character and green eggs in ham Back-Translation (BT) Semantic what is project charter in project management What is a project charter in project management Paraphrasing Semantic turkey and china time difference Time difference between Turkey and China in the middle of the night, depending on the time difference. Table 1: Example of the forms of query noise that we leverage to evaluate how robust bi-encoders are to noise. Data Augmentation (DA) is a popular approach for improving how well models perform on new or noisy data. In data augmentation, training is extended by augmenting the training data with modi\ufb01cations or perturbations which match the desired model behavior. DA is extremely common in computer vision where training data is commonly rotated, blurred, cropped, or zoomed-in/out (Miko\u0142ajczyk and Grochowski, 2018) (Zhong et al., 2020). DA has become increasingly more popular in NLP and has been used to improve model performance (Jiao et al., 2020), simulate large-scale training data when it is not available (Li et al., 2020), and mitigate bias (Lu et al., 2020) in existing datasets. A detailed survey on DA approaches for NLP has been complied by Feng et al. 21\u2019 (Feng et al., 2021). Contrastive Learning builds on the notion of a contrastive loss (Chopra et al., 2005), which seeks to create clusters in the embedding space such that examples with a shared class are far from other classes but close to each other. Much like learning that queries with noise have a shared intent, Schroff et al. 15\u2019 leverage contrastive learning to recognize faces despite different angles and perspectives (Schroff et al., 2015) by using a triplet loss. This approach is a natural \ufb01t for the world of search as relevance is at its core clustering relevant items close together and far from irrelevant items. Recently, contrastive learning has become a method for learning relevance at the corpora scale (Xiong et al., 2021) and improving DPR on noisy queries, (Sidiropoulos and Kanoulas, 2022) (Chen et al., 2022) 3 Query Encoders Meet Noise 3.1 Generating Noisy Queries While previous work has studied the impact of minor variations to queries, such as typos and misspellings, query noise is much more diverse. Seeking to expand this understanding, we explore the impact of query alterations that evaluate surface, syntactic, and semantic alterations. To apply noise to a query, we either edit a query to introduce a speci\ufb01c type of noise or rewrite the query to simulate similarly worded intents. Each query that is altered has a notion of its anchor, either a character, word or a group of words, which is selected where noise is applied. To achieve this for each query, a character or word index is randomly selected. Then, noise is applied to the left, right, or at the noising index (replacing the existing index) with equal probability. Example alterations are in table 1. 
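A minimal sketch of the character-level (surface) noising used here — random character swap (RCS), keyboard-neighbor swap (KCS), and character deletion (CD) — is given below. The keyboard-adjacency map is abbreviated, and the released noisy query sets may differ in sampling details.

```python
# Illustrative sketch of surface-level query noising (RCS, KCS, CD).
# The QWERTY adjacency table is intentionally partial; sampling details in the
# released datasets may differ.
import random
import string

KEYBOARD_NEIGHBORS = {  # partial QWERTY adjacency map (assumption)
    "a": "qwsz", "s": "awedxz", "d": "serfcx", "e": "wsdr",
    "i": "ujko", "o": "iklp", "n": "bhjm", "t": "rfgy",
}


def random_char_swap(query: str) -> str:
    """Replace one character with a random lowercase letter (RCS)."""
    i = random.randrange(len(query))
    return query[:i] + random.choice(string.ascii_lowercase) + query[i + 1:]


def keyboard_char_swap(query: str) -> str:
    """Replace one character with a keyboard neighbor (KCS)."""
    candidates = [i for i, c in enumerate(query) if c.lower() in KEYBOARD_NEIGHBORS]
    if not candidates:
        return query
    i = random.choice(candidates)
    return query[:i] + random.choice(KEYBOARD_NEIGHBORS[query[i].lower()]) + query[i + 1:]


def char_delete(query: str) -> str:
    """Delete one character (CD)."""
    i = random.randrange(len(query))
    return query[:i] + query[i + 1:]


print(keyboard_char_swap("when did veterans day start being called veterans day"))
```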
To study the impact of surface-level alterations, we introduce noise in queries by simulating misspellings and typos by swapping, eliminating, or shuf\ufb02ing characters in a query. To understand how models respond to typos or character omissions, we delete a character (DC), inject a random character (RCS), or simulate a keyboard-based typo by injecting a character close to its neighbor on a keyboard (KCS). We swap the indexed word with another word in the query to understand how systems may work when faced with natural shifts in keyword queries. To study syntactic alterations, we introduce noise that alters the syntax of the query introducing lemmas, stems, synonyms, and determiners using tools from the NLTK toolkit (Bird et al., 2009). Synonyms are introduced using NLTK\u2019s interface with WordNet (Fellbaum, 2000), and 3 \fexact synonyms for a single word are introduced. Determiners, af\ufb01xes that occur with nouns and commonly are not discriminative for search, are introduced similarly to typos to the left or right of noun phrases. Lemma\u2019s return words to their canonical root while stemming reduced word in\ufb02ection using the Porter-stemmer. We select up to \ufb01ve words per query to attempt stemming/lemmatization, but many queries do not have any words which can be stemmed or lemmatized versions and, as a result, are un-noised. Exploring semantically similar queries, we leverage paraphrasing, back-translation, and synonyms. To paraphrase, we rewrite queries using a T5 (Raffel et al., 2020) sequence-to-sequence model, which has been \ufb01ne-tuned on the PAWS (Zhang et al., 2019a) dataset. For back-translation, we use OpenNMT\u2019s (Klein et al., 2017) to translate queries from English to another language and then back to English after exploring performance using German, French, Italian, and Spanish to \ufb01nd the German to have the best quality and use only those. It is worth noting that these semantic noising methods are the most likely to alter the true query intent, as seen by the \u2019hallucinations\u2019 in table 1 paraphrase alteration. Using the aforementioned noising approaches, we noise the queries on the MSMARCO (Campos et al., 2016) 1 2, Natural Questions 3 4 (Kwiatkowski et al., 2019), and the Trivia QA (Joshi et al., 2017) 5 Passage Ranking datasets. 3.2 Baseline Performance In production workloads, bi-encoders are most commonly used for early retrieval, where the sets they produce are then reranked using a cross-encoder. Given cross-encoder are more robust to typos (Sidiropoulos and Kanoulas, 2022), our work focuses exclusively on evaluating 1https://huggingface.co/datasets/spacemanidol/msmarcopassage-query-variation 2https://huggingface.co/datasets/spacemanidol/rewritenoisy-queries 3https://huggingface.co/datasets/spacemanidol/wikipedianq-query-variation 4https://huggingface.co/datasets/spacemanidol/nqnoising 5https://huggingface.co/datasets/spacemanidol/wikipediatrivia-query-variation 20 100 200 50 60 70 80 90 95 Recall Set Size Recall Accuracy Bi-encoder Recall Accuracy by Recall Set size NQ NQ w/noise MSMARCO MSMARCO w/noise TriviaQA TriviaQA w/noise Figure 2: Bi-encoder recall accuracy on noisy and non-noisy queries with variations of recall set size and datasets. the impact of noise on the retrieval accuracy of bi-encoders. 
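Concretely, the retrieval accuracy at k used in this evaluation is a hit rate over an approximate-nearest-neighbor index. The sketch below is a simplified stand-in for the actual Tevatron evaluation pipeline: it assumes precomputed passage and query embeddings (clean and noised, in the same order) and a single gold passage row per query; the file names and shapes are illustrative.

```python
# Illustrative sketch: retrieval accuracy@k (hit rate) for clean vs. noised queries
# over a FAISS index of precomputed passage embeddings. The .npy inputs are assumed
# to be produced elsewhere (e.g., by a Tevatron-trained bi-encoder).
import faiss
import numpy as np


def accuracy_at_k(index, query_embs, gold_passage_rows, k):
    """Fraction of queries whose gold passage row id appears in the top-k results."""
    _, retrieved = index.search(query_embs.astype(np.float32), k)
    hits = [gold in set(row) for gold, row in zip(gold_passage_rows, retrieved)]
    return float(np.mean(hits))


passage_embs = np.load("passages.npy")        # (num_passages, 768), assumed file
clean_q = np.load("queries_clean.npy")        # (num_queries, 768), assumed file
noisy_q = np.load("queries_noisy.npy")        # same query order, assumed file
gold = np.load("gold_passage_row_ids.npy")    # gold passage row index per query

index = faiss.IndexFlatIP(passage_embs.shape[1])   # inner-product search, as in DPR
index.add(passage_embs.astype(np.float32))

for k in (20, 100, 200):
    print(k, accuracy_at_k(index, clean_q, gold, k), accuracy_at_k(index, noisy_q, gold, k))
```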
To do so, we train a series of task-specific bi-encoders leveraging the open-source, bi-encoder-focused library Tevatron (Gao et al., 2022) with task-specific training parameters found in table 4 on the widely used and studied MSMARCO (Campos et al., 2016), Natural Questions (NQ) (Kwiatkowski et al., 2019) and TriviaQA (Joshi et al., 2017) passage retrieval datasets. For contextual representations, each encoder uses a pre-trained BERT (Devlin et al., 2019) model for its initialization, and we use separate models for the query and document encoders. Representations are taken from the unaltered 768-dimension vectors based on the last hidden representation of the CLS token. For each dataset, we train each model using 5 different random seeds with fixed optimal hyperparameters, generate seed- and task-specific indexes, and evaluate the retrieval impact of queries with noise and their unaltered roots. We evaluate the impact on retrieval by measuring retrieval accuracy at k with k = 20, 100, and 200. As shown in figure 2 on the impact of averaged noise, our experimental results align with prior research. There is a wide variation in impact, as the long, trivia-inspired queries of Trivia QA see minor losses in accuracy compared to the real-world web search queries of MSMARCO, up to 12% the impact. Besides the impact of query type, we also notice the large role recall set size plays in the relative degradation in retrieval accuracy. Across tasks and datasets, increasing the recall set from 20 to 200 decreases the impact on accuracy by about 50%. Focusing on the impact of individual types of noise shown in A.1, we see that queries with surface alterations, such as typos, see the largest loss. Despite featuring real-world search engine queries with noise, on MSMARCO there is nearly a 30% loss in retrieval accuracy for queries with typos, dropping from 71% to 41%. On all datasets, queries with character-level alterations see, on average, a 50% higher loss in accuracy than other alterations. This large gap can be attributed to the vocabulary construction method of BERT and BERT-like models, where a minor alteration to a single character can produce large variations in tokenization. In the absence of model optimization, data augmentation, or post-training optimization, the clearest way to make dense retrieval robust to noise is to expand the recall set and allow cross-encoders to re-rank the expanded results. 4 Incorporating Noise By Aligning Representations
$$L_c(x, x^+, x^-) = \sum_{x \in X} \max\left(0, \lVert f(x) - f(x^+) \rVert_2^2 - \lVert f(x) - f(x^-) \rVert_2^2 + \epsilon\right) \quad (1)$$
$$L_a(x) = \sum_{x \in X} \max\left(0, \lVert f(x) - f_a(x) \rVert_2^2 + \epsilon_a\right) \quad (2)$$
$$L_r(x^+, x^-) = \sum_{x \in X} \max\left(0, -y \cdot \left(f(x^+) - f_a(x)\right) + \epsilon_r\right) \quad (3)$$
$$L_{CAPOT}(x, x^+, x^-) = \sum_{x \in X} \tau_c L_c + \tau_a L_a + \tau_r L_r \quad (4)$$
4.1 Motivation A robust query encoder seeks to represent queries with a shared intent in a common latent space such that minor variations in the formulation of the intent lead to similar document rankings. Prior work has shown that data augmentation and typo-optimized models increase model robustness, but it is not without cost. Data augmentation requires changes to existing training methodologies and complete regeneration of the passage index.
Given that the generation of the passage index can take longer than it does to train the model (Karpukhin et al., 2020) regenerating a new index and retraining a model every time a novel form of noise is discovered is not tractable. Optimized pre-trained models can provide effective modeling solutions. However, given the rapid iteration pace of pretrained language models, making typo-aware variants for each new advance is hard to scale. Motivated to improve performance without altering the underlying pretrained model or the biencoder training regime, we introduce CAPOT, a new methodology for increasing model robustness which is computationally inexpensive and independent of training. CAPOT works well because it can focus on improving the query encoder and leverages the short nature of queries to scale to large batch sizes. 4.2 CAPOT Contrastive Alligment Post Training (CAPOT) is an expansion on traditional contrastive learning focused on making dual encoders robust to noise. The goal of CAPOT is to allow representations of noisy queries to be close to their original on the traditional triplet contrastive loss (Schroff et al., 2015) in 1, where f is a query encoder, x is the original query, x+ is a query where noise has been introduced, and x\u2212is a negative query selected at random 6. We modify 1 to scale the role of positive and negative samples using term speci\ufb01c in\u03c4positive and \u03c4negative parameters. While 1 allows the query-encoder to represent queries and noisy queries in a similar latent space, it has the unwanted side effect of query representation drifting related to the learned notion of relevance. Without controlling for this drift, a complete collapse in ranking accuracy came at the expense of effective representation of noisy samples. To avoid this, we introduce an anchoring term,2, that minimizes the drift between learning a notion of relevance and shared embeddings for queries with noise where f is the noise-robust query-encoder, and fa is a copy of the unaltered 6We explore the usage of hard negatives mined from nearby query representation but did not \ufb01nd any measurable impact 5 \ffrozen query-encoder, optimized for an existing document encoder and document index. Seeking to improve performance further, we add a ranking loss as shown in 3 between the anchored model fa and f where the model learns that f(x+) always ranks higher than fa(x). While this loss component is not crucial, we can leverage this to improve model performance slightly. 1,2 and 3 are combined to form the CAPOT, 4. 4.3 Experimental Approach To qualify the effectiveness, we explore how alignment can improve performance on noisy queries before and after bi-encoder training and compare them to data augmentation. We then compare the performance of the aligned models with unaltered baselines and models trained with DA. Except for models aligned with CAPOT, each experiment requires a complete training run and index generation, which can be quite slow. Each experiment is performed across 5 seeds, and we use the same evaluation metrics previously discussed and report the mean performance over \ufb01ve seeds. To quantify the ability of post-training alignment, we take the converged baseline models and apply CAPOT to align the model on the training portion of the query set. Once aligned, a model is retrieved on the unaligned, \ufb01xed document index generated during our baseline experimentation. 
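A PyTorch sketch of the combined objective in equations (1)-(4) is given below: a triplet term pulls the noisy query toward its clean root, an anchoring term ties the trainable query encoder to a frozen copy of itself, and a margin ranking term encourages the aligned representation of the noisy query to score above the frozen one. This is a minimal illustration, not the released CAPOT implementation: the encoders are assumed to be HuggingFace BERT-style models whose CLS vector is the query representation, PyTorch's triplet loss uses L2 rather than squared-L2 distance, and the reading of the ranking term is one plausible interpretation of equation (3), which the paper notes is not crucial.

```python
# Illustrative sketch of the CAPOT objective (Eqs. 1-4). Encoders are assumed to be
# HuggingFace BERT-style models; clean/noisy/negative are tokenized query batches.
# The ranking-term reading and the default tau weights are assumptions.
import torch
import torch.nn.functional as F


def capot_loss(query_encoder, frozen_encoder, clean, noisy, negative,
               tau_c=1.0, tau_a=1.0, tau_r=0.1, margin=1.0):
    f_clean = query_encoder(**clean).last_hidden_state[:, 0]   # CLS vectors
    f_noisy = query_encoder(**noisy).last_hidden_state[:, 0]
    f_neg = query_encoder(**negative).last_hidden_state[:, 0]
    with torch.no_grad():  # frozen copy of the converged encoder, no gradient
        fa_clean = frozen_encoder(**clean).last_hidden_state[:, 0]
        fa_noisy = frozen_encoder(**noisy).last_hidden_state[:, 0]

    # (1) triplet term: noisy query closer to its clean root than to a random query.
    l_c = F.triplet_margin_loss(anchor=f_clean, positive=f_noisy, negative=f_neg,
                                margin=margin)
    # (2) anchoring term: do not drift from the representation tied to the frozen index.
    l_a = (f_clean - fa_clean).pow(2).sum(dim=-1).mean()
    # (3) ranking term (one plausible reading): the aligned encoder should place the
    #     noisy query closer to the clean root than the frozen encoder did.
    s_new = F.cosine_similarity(f_noisy, fa_clean)
    s_old = F.cosine_similarity(fa_noisy, fa_clean)
    l_r = F.margin_ranking_loss(s_new, s_old, torch.ones_like(s_new), margin=0.0)
    # (4) weighted combination.
    return tau_c * l_c + tau_a * l_a + tau_r * l_r
```

Because only the query encoder receives gradients and queries are short, this objective can be trained with very large batches without touching the document index.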
Since queries are short and batch sizes can be scaled easily, it\u2019s important to note how fast this is. A single 2080ti NVIDIA GPU using CAPOT takes under 60 minutes on the NQ dataset. To explore if alignment can happen before training, we leverage the ORCAS (Craswell et al., 2020) dataset to generate a corpus of 10 million queries. Using these queries, we create positive and negative noisy samples using the same noising approach discussed in 3.1 making a dataset of 100 million queries called Noisy-ORCAS 7. Using these 100m queries, we align the representation of queries and their noisy counterparts using a BERT-base model optimized for masked-language modeling. Given the scale of this dataset, We train for a single epoch on the Noisy-ORCAS corpus using the \u03c4positive = 1.0,\u03c4negative = 0.1,and \u03c4anchor = 1.0 on 4 V100 GPUs with a batch size of 2048. Then, 7https://huggingface.co/datasets/spacemanidol/CAPOTqueries we leverage this optimized model to initialize our unaltered bi-encoder model\u2019s training procedure. Then, this model is trained on our datasets and evaluated similarly to the baseline. We refer to models trained this way as(PT), and each usage of PT requires retraining and index regeneration. 4.4 Experimental Results 20 100 200 \u221212 \u221210 \u22128 \u22126 \u22124 \u22122 Recall Set Size Loss in Accuracy Relative Degradation in Retrieval vs. Recall Set size Baseline PT DA CAPOT Figure 3: Average Relative loss in bi-encoder recall accuracy on NQ by recall set size depth on the baseline, Pretrained Alignment (PT), Data Augmentation (DA), and Contrastive Alignment Post Training (CAPOT) on noisy queries. As shown in \ufb01gures 3 and 4, using CAPOT can improve performance on queries with noise, particularly typos. Moreover, the impact of CAPOT is similar to DA without a training set alteration or index regeneration. CAPOT approach takes advantage of training on only the query encoder and \ufb01xing the document encoder. Since queries tend to be short, CAPOT uses a max sequence length of 28 tokens to minimize memory usage, allowing scaling to batch sizes of 2048 on GPUs with 16 GBs. This large batch size means training is rapid and effective. A complete alignment run on the NQ dataset takes one hour on a single V100 gpu. At the same time, data augmentation requires 26 hours for training and an additional day for index generation (50 hours overall). Looking at summary metrics in table 2 and 3, we can see that the use of pre-training alignment 6 \fDataset Regular DA PT CAPOT Depth 20 100 200 20 100 200 20 100 200 20 100 200 NQ -10.28% -5.07% -4.91% -4.95% -2.67% -2.76% -12.89% -7.05% -5.39% -5.95% -3.46% -2.94% TriviaQA -4.90% -2.98% -2.24% -7.20% -4.44% -3.57% -11.89% -6.83% -5.34% -3.37% -1.68% -1.17% MSMARCO -20.92% -33.91% -30.46% -43.98% -28.89% -16.73% -46.28% -36.41% -28.69% -22.76% -16.73% -14.48% Table 2: Relative degradation in retrieval accuracy at 20,100,200 on NQ,TriviaQA, and MSMARCO. 
Retrieval accuracy and relative loss across types of noise for unaltered (Regular), Data Augmentation (DA),Pre Training Alignment (PT), and Post Training Contrastive Alignment (CAPOT) Dataset Regular DA PT CAPOT Depth 20 100 200 20 100 200 20 100 200 20 100 200 NQ -14.96% -8.33% -7.27% -5.17% -2.39% -2.55% -15.74% -7.99% -6.57% -5.28% -2.86% -2.37% TriviaQA -8.43% -4.56% -3.39% -7.71% -4.44% -3.64% -14.64% -8.28% -5.47% -3.39% -1.42% -0.87% MSMARCO -41.68% -33.91% -30.46% -43.98% -33.70% -28.69% -55.58% -45.47% -40.95% -24.58% -18.40% -15.95% Table 3: Relative degradation in retrieval accuracy at 20,100,200 on NQ,TriviaQA, and MSMARCO. Retrieval accuracy and relative loss across types of character alteration noise (typos) for unaltered (Regular), Data Augmentation (DA),Pre Training Alignment (PT), and Post Training Contrastive Alignment (CAPOT) 20 100 200 \u221216 \u221214 \u221212 \u221210 \u22128 \u22126 \u22124 \u22122 Recall Set Size Loss in Accuracy Relative Degradation in Retrieval vs. Recall Set size Baseline PT DA CAPOT Figure 4: Average Relative loss in bi-encoder recall accuracy on NQ by recall set size depth on the baseline, Pretrained Alignment (PT), Data Augmentation (DA), and Contrastive Alignment Post Training (CAPOT) on character-based noisy queries (typos). is never optimal and always under-performs un unaltered network. We believe this indicates the importance of introducing noise after training. If introduced prior, the noise will likely be forgotten, and it will hamper the ability to learn a proper, relevant representation. 5 Expanding Contrastive Alignment Seeking to explore the impact of variations in alignment query distribution\u2019s role, we explore how well CAPOT works with an alignment dataset that differs from the evaluation. To do so, we explore the impact of using the previously discussed NoisyORCAS dataset to align noisy queries for TriviaQA. Given the differences in dataset size, we train for the same number of optimization steps with the Noisy-Orcas data as we do with the regular data. As shown in \ufb01gure 5, using an unrelated dataset, 20 100 200 \u22125 \u22124 \u22123 \u22122 \u22121 Recall Set Size Loss in Accuracy Relative Degradation in Retrieval vs. Recall Set size Unaltered CAPOT CAPOT-ORCAS Figure 5: Average Relative loss in bi-encoder recall accuracy on NQ by recall set size depth on of unaltered,Contrastive Alignment Post Training (CAPOT) and Contrastive Alignment Post Training (CAPOT) ORCAS on TriviaQA. ORCAS, provides a close approximation to using the true query distribution, but it does not always outperform the un-altered baseline, indicated by the performance at 200. We believe this is expected 7 \fas the true query distribution is a factor in how the query vector manifold is optimized. 6 Discussion CAPOT and typos When queries have typos, CAPOT is a computationally ef\ufb01cient method of improving performance as the relative gap between unaltered and aligned is greatest on alterations like character deletion, keyboard character replacement, and random character replacement. We attribute this impact to the relative importance of our alignment dataset\u2019s character level alterations. Three out of the 10 methods focus on learning alignments based around minor character shifts, and as a result, the performance optimizes there to the detriment of other forms of noise. 
CAPOT is much less effective in improving the relevance of minor syntactic shifts such as lemmatization or stemming leading to marginal improvements over the unaltered baselines. We attribute this to the already smaller gap on syntactically altered queries, which on datasets such as TriviaQA have less than 2% impact. CAPOT and Retrieval Set Depth demonstrates that CAPOT, like DA, sees the highest impact when the recall set size is small. On the NQ the gap between CAPOT and the baseline at 20 is nearly 10% which narrows to 3% at 200. Limitations of contrastive alignment While effective, contrastive alignment has a non-negligible impact on the retrieval accuracy of unaltered queries. As shown in table 18 on non-noisy queries the use of data augmentation incurs no loss in accuracy yet CAPOT incurs 2.5%. This is a fundamental issue because the alignment of embeddings causes minor variations in representations that have actual implications on retrieval accuracy. We believe that the use of larger datasets could such as the web search logs used by the Generic Intent Representation of query vectors (Zhang et al., 2019b) could improve this. Poly Encoding using alignment-based optimization we believe leads to novel retrieval methods which allow for \ufb01xed index, constrained optimizations tailored to speci\ufb01c types of noise or de\ufb01ciencies in retrieval. Novel forms of noise-optimized encoders can be deployed in parallel without additional index generation. Given the prevalence of bi-encoders as candidate set generation tools, CAPOT, unlike Data Augmentation, can generate many targeted query encoder variants which share a document representation. As shown in \ufb01gure 6, instead of seeking a single query encoder that learns all surface and semantic forms of query representations, alignment approaches can be used to create many encoders which are tuned to various goals. Figure 6: Proposed poly-encoder architecture using noise-targeted query encoders optimized with CAPOT 7" + }, + { + "url": "http://arxiv.org/abs/2304.02721v3", + "title": "To Asymmetry and Beyond: Structured Pruning of Sequence to Sequence Models for Improved Inference Efficiency", + "abstract": "Sequence-to-sequence language models can be used to produce abstractive\nsummaries which are coherent, relevant, and concise. Still, model sizes can\nmake deployment in latency-sensitive or web-scale implementations difficult.\nThis paper studies the relationship between model size, structured pruning,\ninference efficiency, and summarization accuracy on widely used summarization\ndatasets. We show that model accuracy is tied to the encoder size while\ninference efficiency is connected to the decoder. Using asymmetric pruning can\nlead to nearly 3x improvement in inference latency with ~1 point loss in\nRouge-2. Moreover, we find both the average degradation and the role of\nasymmetry to be consistent across model sizes and variations in datasets.", + "authors": "Daniel Campos, ChengXiang Zhai", + "published": "2023-04-05", + "updated": "2023-06-12", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "main_content": "Introduction The application of sequence-to-sequence language models has become an important tool for natural language processing tasks such as machine translation (Sutskever et al., 2014), audio transcription (Radford et al., 2022), and abstractive summarization (Raffel et al., 2020). 
Sequence-to-sequence models effectively turn each of these aforementioned tasks into two-step problems, extraction and generation, and heavily condition the generation on the input. Besides ensuring on-topic responses, sequence-to-sequence models have the added benefit of being able to map inputs to targets with varying lengths and modalities in ways encoder-only or decoder-only systems cannot. Corresponding author: dcampos3@illinois.edu 1https://github.com/spacemanidol/Efficient-Web-ScaleAbsractive-Summarization 2https://huggingface.co/spacemanidol Figure 1: Impact of Asymmetrical Pruning on inference speedups and ROUGE-2 degradation on Query Independent Web Summarization. Inference time is the mean inference time for a batch size of 1 on an A10 GPU over seven iterations. When used for abstractive summarization, sequence-to-sequence modeling has two steps: extraction using the encoder and generation using the decoder, which usually involves repeated execution until an end-of-sequence token is emitted. Since the encoder runs once on the input (Sutskever et al., 2014), its cost of execution is proportional to the batch size. The cost of decoder execution can be highly variable based on the generation length (Tay et al., 2021). Despite the broad study of sequence-to-sequence models (Raffel et al., 2020) and how they compress (Li et al., 2022), an understanding of the role of model symmetry as applied to inference efficiency and model accuracy is lacking. Recent advances in scaling language models have led to a wide study of scaling laws as applied to language model performance (Kaplan et al., 2020), training data size (Hoffmann et al., 2022), machine translation (Henighan et al., 2020), and even reinforcement learning (Neumann and Gros, 2022). We build on this work and study the impact of scaling on abstractive summarization and what role model asymmetry has in it. This asymmetry can manifest in various ways, such as the number of layers and hidden units in the encoder and decoder and the type of attention mechanisms used. In this paper, we explore the role of asymmetry in the number of layers in encoder-decoder language modeling for summarization and its impact on the performance of these models. As shown in Figure 1, the symmetry of pruning drives the impact on accuracy and inference speedups for sequence-to-sequence models. The following research questions drive our work: • What scaling laws can be observed in abstractive summarization? • What impact does encoder-decoder asymmetry have on abstractive summarization accuracy? • What impact does encoder-decoder asymmetry have on abstractive summarization inference efficiency? • What impact does scale have on asymmetry's effect on accuracy and inference efficiency in encoder-decoder models for abstractive summarization? It is in answering these questions that we deliver the following contributions: • We present the first robust study of scaling laws applied to the compression of sequence-to-sequence models. • We demonstrate that the asymmetric inference cost of sequence-to-sequence models leads to asymmetric pruning for optimal inference-efficient compression. • We empirically demonstrate on a wide variety of benchmarks how asymmetric compression can lead to a 2.7x inference speedup with no loss in accuracy on the XSUM dataset.
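The asymmetry between a single encoder pass and token-by-token decoder execution can be observed directly with a short timing sketch; the model choice, input, and timing method below are illustrative and are not the paper's benchmarking harness.

```python
# Illustrative timing sketch: the encoder runs once per input, while the decoder
# runs once per generated token, so generation length dominates latency.
import time
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tok = AutoTokenizer.from_pretrained("google/flan-t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-small").eval()

text = "summarize: " + "The quick brown fox jumps over the lazy dog. " * 40
inputs = tok(text, return_tensors="pt", truncation=True, max_length=1024)

with torch.no_grad():
    start = time.perf_counter()
    model.get_encoder()(**inputs)                  # single encoder pass
    enc_time = time.perf_counter() - start

    start = time.perf_counter()
    model.generate(**inputs, max_new_tokens=128)   # one decoder pass per token
    gen_time = time.perf_counter() - start

print(f"encoder pass: {enc_time:.3f}s, full generation: {gen_time:.3f}s")
```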
2 Related Work Transformer Based Language Models such as BERT (Devlin et al., 2019) and T5 (Raffel et al., 2020) provide contextual language representations built on the Transformer architecture (Vaswani et al., 2017) which can be specialized and adapted for specific tasks and domains (Lee et al., 2020). Using these models, it becomes relatively easy to excel at a broad range of natural language processing tasks such as question answering, text classification, and sentiment analysis. Scaling Laws has become an increasingly important area of study as models\u2019 size and training data grows. Performance of the transformer-based language model improves with the relation to model size (Radford, 2018) and that larger models outperform smaller models (Brown et al., 2020) on most NLP tasks. Increasing the training corpus size can lead to large improvements in performance, and model sizes can have a optimal training data size (Hoffmann et al., 2022). Li et al. (2020) (Li et al., 2020) explore the relationship between model size and training efficiency finding larger models train faster and are more robust to pruning and quantization (Na et al., 2022). Asymmetrical in sequence-to-sequence models broadly refers to non-uniformity between encoder and decoder model shape or attributes. Training and inference procedures should match as closely as possible (Ranzato et al., 2015) (Mihaylova and Martins, 2019) as improvements in training loss during optimization result in improvements in model performance during Inference. While this may lead to the best model performance, it ignores the variable inference cost of sequence-tosequence models. During Inference, latency is dominated by the asymmetric execution of the language model. The auto-encoding encoder executes once over the entire input sequence, while the auto-regressive decoder executes iteratively until an end-of-sequence token is produced. Kasai et al. demonstrated how the sequence-tosequence language model performance for ma2 \fTable 1: Information about the architecture and attributes of the FLAN-T5 models Model Size(MBs) Parameters Encoder Layers Parameters Encoder Decoder Layers Parameters decoder Ratio End:Dec Hidden Size Flan-t5-small 3 146 60511616 8 35332800 8 41628352 0.849 512 Flan-t5-base 4 472 222903552 12 109628544 12 137949312 0.795 768 Flan-t5-large 5 1500 750251008 24 341231104 24 441918976 0.772 1024 chine translation is dominated by the encoder depth (Kasai et al., 2020). Tay et al. 2021 extend this work by finding a DeepNarrow which shows that for broad language modeling, it is possible to have 50% fewer parameters and a 40% faster inference with no loss in accuracy (Tay et al., 2021). Efficient Inference for language modeling is a growing area of study that broadly focuses on reducing the inference cost without losses in accuracy. Unstructured Pruning has been broadly studied (Han et al., 2015) (Sanh et al., 2020) (Kurti\u00b4 c et al., 2022) (Zafrir et al., 2021) (Campos et al., 2022) but realizing speedups can be difficult. Structured Pruning removes fundamental structural components in a language model such as individual attention heads (Voita et al., 2019) or entire model layers such as transformer encoders (Sanh et al., 2019). Rosenfeld et al. 2020 demonstrate that unstructured pruning impacts follow scaling laws (Rosenfeld et al., 2020) where larger models can be pruned with greater ease. 
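The encoder/decoder parameter split reported in Table 1 can be approximated by counting the parameters of each stack of the public FLAN-T5 checkpoints; exact figures may differ slightly depending on how the shared embedding table is attributed to each stack.

```python
# Illustrative sketch: approximate the encoder/decoder parameter split of Table 1.
# Note: the input embedding is shared between the stacks, so both counts include it.
from transformers import T5ForConditionalGeneration

for name in ("google/flan-t5-small", "google/flan-t5-base", "google/flan-t5-large"):
    model = T5ForConditionalGeneration.from_pretrained(name)
    enc = sum(p.numel() for p in model.encoder.parameters())
    dec = sum(p.numel() for p in model.decoder.parameters())
    print(f"{name}: encoder={enc:,} decoder={dec:,} enc/dec ratio={enc / dec:.3f}")
```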
Compressing Sequence-to-sequence is a growing area of study where approaches from regular, efficient Inference has shown some transfer ability. Shleifer et al. show that it is possible to gain 1.93x speedup on a BART summarization model by applying structural pruning (Shleifer and Rush, 2020) but find compression approaches differ in their success depending on the dataset. Leveraging semi-structured pruning, Lagunas et al. can gain a 1.19 speedup (Lagunas et al., 2021) for minor losses in accuracy. While they find that the encoder is easier to prune than the decoder, they do not use this evidence of asymmetry to speed up performance further. Li et al. investigate how to enable quantization, finding that without specialized distillation during quantization, performance collapses (Li et al., 2022). Leveraging that generation occurs iteratively, and some tokens are easier to generate than other CALM (Schuster et al., 2022) apply early exiting to improve inference speed by 1.4x. While existing work has found interest in asymmetry, it has not been studied directly, nor has relationships in model scale been explored. While there are other approaches such as knowledge distillation (Hinton et al., 2015) (Sanh et al., 2019) (Jiao et al., 2020), quantization (Zafrir et al., 2019), early exiting (Xin et al., 2020) and token pruning (Kim et al., 2021) these are not the focus on our work as understanding the impact of many variables together limits the depth of our exploration. We leave further study of the interplay between summarization and quantization, unstructured pruning, structured pruning, and knowledge distillation for future work. 3 Scale and Abstractive Summarization 3.1 Background Sequence-to-sequence language models such as BART (Lewis et al., 2021), T5 (Raffel et al., 2020), and PEGASUS (Zhang et al., 2020) combine transformer encoders and decoders to produce models which can adapt to novel tasks and reach top performance on tasks ranging from information retrieval (Nogueira et al., 2020) to summarization (Raffel et al., 2020). We focus on the instruction-tuned FLAN-T5 models (Wei et al., 2021) as their performance is competitive and they feature wide variations in model size ranging from 60 million to 11 billion parameters and given the cost of training the larger variants, focus on the small, base, and large variants. Details on model size and architecture can be found in table 1. Abstractive summarization is a method of sequence compression where a source document D is transformed into a target document dsum, which is shorter but faithful to the input. Datasets of use are a combination of public and academic benchmarks and a proprietary web search dataset. The CNN/DailyMail (CNNDM) (See et al., 2017) and XSUM (Narayan et al., 2018) datasets are based on the summarization of English new language models. The Query Independent 3 \f0 100 200 300 400 500 600 700 800 0 10 20 30 40 Model Parameters (millions) % Gain in Rouge-2(vs. Small) Summarization Accuracy vs. Model Size QIWS CNN/DM (See et al., 2017) XSUM (Narayan et al., 2018) Figure 2: Model Size vs. Gain to summarization accuracy as measured by the relative Gain in rouge-2 vs. the small model. Web Summary (QIWS) is a proprietary corpus of abstractive summaries of web pages that are used to create informative contextual snippets for search engine users. It is important to note the differences in compression factor in each dataset as each impact how decoder-driven inference latency is. 
Further information on the makeup of each dataset can be found in table 11. Metrics For each dataset, we evaluate model performance by measuring the ROUGE-1 (R-1), ROUGE-2 (R-2), ROUGE-L (R-L), RougeSumL (RSL) 6 (Lin, 2004), and Generation Length (GenL) on the test portion of the dataset. To aid the reproducibility and extension of our work, we experiment using HuggingFace\u2019s Transformers 7, release our training and pruning scripts 8 and model variants for datasets that are publicly available datasets 9. 3.2 Scaling Laws for Abstract Summarization To study the role of scale in abstractive summarization, we train small, base, and large models of the three datasets mentioned above. We do not study 6Rouge-L is sentence level vs. RougeSum-L is summary level 7https://github.com/huggingface/transformers 8https://github.com/spacemanidol/Efficient-Web-ScaleAbsractive-Summarization 9https://huggingface.co/spacemanidol the XL (3B) and XXL (11B) as they are expensive and slow to train. For all of our experiments, we train on various hardware but fix the batch size to 64 using gradient accumulation and leverage the hyperparameters in 12. While further hyperparameter optimization and instruction tuning would likely lead to further gains in accuracy, our work is not focused on absolute Gains but on the relative relation of scale. As shown in 2, 13, 14, and 15, there is a substantial role between scale and performance, but there is a substantial variation across datasets. Datasets with short candidate summaries, such as XSUM, see nearly three times the impact compared to the long summaries of QIWS and CNNDM. During qualitative evaluations, the role of scale can easily be observed as smaller models generate more short keyword summaries while introducing scale makes responses more natural. 3.3 Inference Benchmark To evaluate the impact of asymmetry on inference, we run experiments on the throughput of each model. Using an A10 GPU and the models from our QIWS datasets, we evaluate performance with a max sequence length of 1024, a max summary of 256, and batch sizes 1, 8, and 16 using native inference in PyTorch. We report the mean and standard deviation of timings on seven runs. In comparing the impact of scale on R-2 vs. the effects on latency across batch sizes in 2, 4, 3 it becomes clear that larger models are more expensive to execute significantly as batch sizes increase. This is because of potential differences in output length within a batch as the batch completes when all sequences have produced an EOS token. To alleviate this issue bottleneck, improved streaming methods for improved batching have been proposed (Yang et al., 2020) but can be challenging to manage. 4 To Asymmetry and Beyond While prior work has studied how to improve inference and tangentially explored the asymmetry between the encoder and decoder, we study that explicitly and across model scales. We focus our studies on structural pruning as inference gains are easy to realize, and this approach is highly 4 \fTable 2: Impact of scale on inference throughput for abstractive summarization models trained on the XSUM dataset. Latency is measured in MS/batch and the impact is the impact to latency vs. the small model Model R-2 Gain BS 1 Latency Impact BS 8 Latency Impact BS 16 Latency Impact small 17.55 0.00% 138 1 230 1 330 1 base 19.77 12.63% 199 1.44 550 2.39 931 2.82 large 21.15 20.51% 445 3.22 1480 6.43 2700 8.18 Table 3: Impact of scale on inference throughput for abstractive summarization models trained on the QIWS dataset. 
Latency is measured in MS/batch and the impact is the impact to latency vs. the small model Model R-2 Gain BS 1 Latency Impact BS 8 Latency Impact BS 16 Latency Impact small 29.03 0 524 1 653 1 729 1 base 34.19 17.77% 746 1.42 1060 1.62 1310 1.80 large 37.37 28.72% 1,430 2.73 2240 3.43 3320 4.55 Table 4: Impact of scale on inference throughput for abstractive summarization models trained on the CNNDM dataset.Latency is measured in MS/batch and the impact is the impact to latency vs. the small model Model R-2 Gain BS 1 Latency Impact BS 8 Latency Impact BS 16 Latency Impact small 11.09 0 171 1.00 252 1.00 344 1.00 base 15.69 41.50% 255 1.49 550 2.18 845 2.46 large 16.34 47.41% 525 3.07 1370 5.44 2300 6.69 Table 5: Relation between scale and asymmetry on model performance on the QIWS dataset. As shown by the results in bold pruning only the decoder leads to less degradation than just the encoder or both, across all scales. Small Base Large lenc ldec R-2 R R-2 R R-2 R 6 6 29.03 100.00% 34.19 100.00% 37.37 100.00% 6 5 28.90 99.55% 34.00 99.44% 37.59 100.59% 6 4 28.56 98.40% 34.50 100.91% 36.56 97.84% 6 3 27.94 96.24% 33.70 98.58% 35.74 95.64% 6 2 24.85 85.61% 31.93 93.38% 35.13 94.01% 6 1 15.41 53.08% 28.05 82.03% 33.69 90.15% 5 6 27.92 96.17% 33.57 98.18% 36.39 97.38% 4 6 27.75 95.60% 33.06 96.69% 35.90 96.07% 3 6 25.20 86.82% 32.23 94.28% 34.22 91.58% 2 6 23.67 81.55% 27.47 80.35% 33.42 89.43% 1 6 18.23 62.79% 25.57 74.78% 30.31 81.11% 5 5 26.82 92.38% 32.88 96.18% 36.32 97.20% 4 4 26.62 91.72% 32.81 95.96% 35.98 96.29% 3 3 23.12 79.64% 28.70 83.95% 33.00 88.31% 2 2 19.14 65.92% 26.53 77.60% 30.78 82.38% 1 1 6.09 20.99% 19.64 57.43% 22.77 60.94% compatible with other methods like quantization and unstructured pruning. We do not study how asymmetry is impacted by unstructured pruning or quantization as these methods are difficult to combine optimized libraries like FasterTransformers10. Following Shleifer et al., we adopt the \"Shink and then fine\" (STF) tune approach for compression. First, a model is trained until convergence on a fine-tuning summarization task. Then, entire layers are removed from the encoder, decoder, or both, and the model is further fine-tuned until it has re10https://github.com/NVIDIA/FasterTransformer converged. We do not study the use of knowledge distillation to avoid the additional training overhead without guaranteed improvements following Shleifer et al.\u2019s results. Each model we study has a uniform number of encoder and decoder layers, so we prune only the encoders, decoders, and a symmetric combination of the two combinations. We used our three scales of uncompressed models (small, base, large), and we pruned the model in multiples of 1 on the encoder, the decoder, and both. After pruning, models are fine-tuned again and evaluated. This means that for each dataset, we have 16 variants for each model size leading to 48 models per dataset and 144 models overall. Given the wide number of models and the cost of multiple seeds or model-specific optimization, we train each model once and do not optimize the parameters for each model. While this leads to a worse-than-ideal performance, our goal is not to hyper-optimize models but explore where there is high sensitivity. To save space, we use the shorthand lenc and ldec to refer to the number portion of transformer encoder and decoder layers (out of 6), and R refers to the percentage performance recall vs. uncompressed baseline. Detailed results have been moved to the A.3 to save space. 
5 \fTable 6: Relation between scale and asymmetry on model performance on the CNNDM dataset. As shown by the results in bold as the model size grows the impact of pruning becomes more muted Small Base Large lenc ldec R-2 R R-2 R R-2 R 6 6 17.55 100.00% 19.77 100.00% 21.15 100.00% 6 5 17.68 100.74% 19.92 100.76% 21.30 100.69% 6 4 17.27 98.36% 19.85 100.42% 21.32 100.81% 6 3 16.40 93.43% 18.85 95.37% 21.08 99.66% 6 2 15.35 87.42% 18.68 94.51% 20.67 97.73% 6 1 11.33 64.57% 16.48 83.38% 19.49 92.12% 5 6 17.69 100.81% 19.92 100.76% 21.13 99.88% 4 6 17.35 98.84% 19.67 99.50% 20.83 98.47% 3 6 16.80 95.70% 18.85 95.37% 20.53 97.06% 2 6 15.54 88.51% 18.22 92.14% 19.74 93.33% 1 6 13.31 75.83% 17.06 86.27% 18.68 88.31% 5 5 17.07 97.23% 19.72 99.74% 21.23 100.34% 4 4 16.20 92.28% 19.17 96.99% 20.90 98.81% 3 3 14.91 84.95% 17.46 88.29% 20.13 95.16% 2 2 11.97 68.17% 15.87 80.26% 18.47 87.30% 1 1 6.05 34.45% 12.23 61.88% 15.51 73.32% Table 7: Scale and Pruning on XSUM dataset Small Base Large lenc ldec R-2 R R-2 R R-2 R 6 6 11.09 100.00% 15.69 100.00% 16.34 100.00% 6 5 11.61 104.74% 15.27 97.35% 19.80 121.16% 6 4 11.43 103.12% 14.91 95.03% 19.30 118.09% 6 3 11.24 101.36% 15.40 98.17% 18.92 115.77% 6 2 10.53 94.98% 15.19 96.82% 17.96 109.93% 6 1 6.03 54.42% 13.73 87.53% 16.47 100.76% 5 6 11.18 100.82% 15.92 101.47% 19.43 118.88% 4 6 10.61 95.68% 14.10 89.91% 18.33 112.16% 3 6 10.11 91.16% 13.84 88.21% 16.90 103.39% 2 6 8.59 77.52% 12.10 77.12% 14.97 91.61% 1 6 7.70 69.43% 10.27 65.47% 12.52 76.63% 5 5 10.73 96.76% 15.72 100.22% 19.18 117.38% 4 4 10.19 91.96% 14.30 91.15% 17.56 107.43% 3 3 9.50 85.69% 12.44 79.32% 15.89 97.21% 2 2 7.31 65.91% 10.67 68.05% 12.15 74.34% 1 1 4.00 36.09% 7.74 49.35% 8.96 54.86% 4.1 Scale and Pruning Looking at abridged results in 5, 6, and 7, there is a clear scaling law as smaller models see much larger drops in performance when compressed to the same degree. For example, on the QIWS dataset, compression to 1 6 of the layers on the encoder and decoder cause an 80% drop in R-2 on a small model but only 40% on the larger model. This scale comparison is 65% to 26% on CNNDM and 64% to 45% on XSUM datasets. Similar scaling results hold with encoder or decoder pruning, where compressing large models lead to a 5x lower loss in performance than small models. As the model\u2019s size grows, the impact of decoder vs. encoder-only pruning becomes more muted. On the CNNDM dataset, the gap between the decoder only and encoder only pruned to 1 6 is 10% with the FLAN-T5 small but only 4% with the large variant. When comparing asymmetric and symmetric, the gap is even further pronounced where the small gap is 30% while the large is 20%. 0 1 2 3 4 5 10 15 20 Portion of Model Pruned (Out of Six) Rouge-2 Role of scale and compression on CNNDM smallencoder smalldecoder smallboth 0 1 2 3 4 5 10 15 20 Portion of Model Pruned (Out of Six) Rouge-2 Role of scale and compression on CNNDM baseencoder basedecoder baseboth 0 1 2 3 4 5 10 15 20 Portion of Model Pruned (Out of Six) Rouge-2 Role of scale and compression on CNNDM largeencoder largedecoder largeboth Figure 3: Relationship between model compression, model size, and summarization accuracy measured by rouge-2 vs. Number Layers. smallencoder refers to a FLAN-T5 small which has only pruned the encoder, smalldecoder for only the decoder, and smallboth for the encoder and decoder 6 \fAs shown in Figure 3, the impact of compression becomes more muted as the model size grows. 
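The R-2 columns in these tables (and the other ROUGE variants listed in Section 3.1) can be computed with any standard ROUGE implementation; a minimal sketch using the Hugging Face evaluate package is shown below, where rougeLsum corresponds to the summary-level RSL metric and the generation-length value is only a rough whitespace-token proxy.

import evaluate  # pip install evaluate rouge_score

rouge = evaluate.load("rouge")
predictions = ["the model generates a short summary of the article"]    # placeholder model outputs
references = ["a short summary of the article written by an editor"]   # placeholder gold summaries
scores = rouge.compute(predictions=predictions, references=references, use_stemmer=True)
print(scores["rouge1"], scores["rouge2"], scores["rougeL"], scores["rougeLsum"])

# Generation length (GenL), approximated here by a whitespace token count:
gen_l = sum(len(p.split()) for p in predictions) / len(predictions)
print(gen_l)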
In other words, larger models are more compressible and amenable to asymmetry in this compression. The impact of asymmetry is easiest to understand as it is not surprising that complete pruning of a model leads to higher losses than partial pruning across datasets and model sizes. While this finding is not immediately surprising, evaluating the inference costs becomes important. 4.2 Inference Benchmarks We evaluate the impact of asymmetry in a similar method to our scale experiments. Using an A10 GPU, we evaluate performance for summarization on a portion of each model\u2019s respective evaluation datasets with a max sequence length of 1024, a max summary length of 256, and batch sizes 1, 8, and 16. We choose these batch sizes to represent streaming workloads (batch size 1), real-time results for the top results from a search query (batch size 8i), and max throughput given the A10\u2019s memory budget (batch size 16) QIWS CNN/DailyMail XSUM lenc ldec Impact Speedup Impact Speedup Impact Speedup 6 3 -4.36% 1.80 -0.34% 1.65 15.77% 1.64 6 2 -5.99% 2.44 -2.27% 2.03 9.93% 2.07 6 1 -9.85% 3.83 -7.88% 2.70 0.76% 2.71 3 6 -8.42% 1.04 -2.94% 1.14 3.39% 1.16 2 6 -10.57% 1.04 -6.67% 1.19 -8.39% 1.21 1 6 -18.89% 1.06 -11.69% 1.27 -23.37% 1.30 3 3 -11.69% 1.91 -4.84% 1.94 -2.79% 2.06 2 2 -17.62% 2.20 -12.70% 2.78 -25.66% 2.83 1 1 -39.06% 2.44 -26.68% 4.96 -45.14% 4.84 Table 8: Relationship between accuracy and speedup of encoder only, the decoder only, encoder and decoder pruning on FLAN-T5 Large models on CNN/DM, XSUM, and QIWS. Speedup is measured by comparing the improvements in latency for batch size one vs. the uncompressed baseline. The impact is the relative loss of Rouge-2 of compressed models vs. the uncompressed baseline. Looking at the focused set of results for large models across datasets in table 8 on the impact of R-2 vs. inference speedup, we can see a clear relationship between asymmetry and inference efficiency. While detailed inference results can be found in the appendix A.4 on this focused set of results, we can see that pruning only the encoder leads to no more than 30% improvement in inference efficiency at a sizable loss in accuracy. Pruning the model symmetrically leads to realizable inference improvements of up to 5x at the expense of summarization accuracy. Alternatively, when only the decoder is pruned, it is possible to see most of the inference speedups seen during constant pruning with a substantially lower impact on accuracy. On the CNN/DM dataset, constant pruning leads to 8% better inference but losses nearly four times the performance of nonuniform compression. Small Base Large lenc ldec Impact Speedup Impact Speedup Impact Speedup 6 3 -3.76% 1.79 -1.42% 1.76 -4.36% 1.80 6 2 -14.39% 2.69 -6.62% 2.13 -5.99% 2.44 6 1 -46.92% 3.97 -17.97% 3.69 -9.85% 3.83 3 6 -13.18% 1.02 -5.72% 1.04 -8.42% 1.04 2 6 -18.45% 1.02 -19.65% 1.05 -10.57% 1.04 1 6 -37.21% 1.03 -25.22% 1.06 -18.89% 1.06 3 3 -20.36% 1.40 -16.05% 1.86 -11.69% 1.91 2 2 -34.08% 1.30 -22.40% 2.48 -17.62% 2.20 1 1 -79.01% 3.91 -42.57% 3.95 -39.06% 2.44 Table 9: Relationship between accuracy and speedup of encoder only, decoder only, encoder and decoder pruning on FLAN-T5 models on QIWS concerning model size. Speedup is measured by comparing the improvements in latency for batch size one vs. the uncompressed baseline. The impact is the relative loss of Rouge-2 of compressed models vs. the uncompressed baseline. 
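For clarity, the Impact and Speedup columns in Tables 8 through 10 are simple ratios against the uncompressed six-encoder, six-decoder baseline; the small helper below makes the definitions explicit (the numbers in the example call are illustrative, not taken from a specific table row).

def impact_and_speedup(r2_pruned, r2_full, latency_pruned_ms, latency_full_ms):
    """Impact = relative ROUGE-2 change vs. the uncompressed model; Speedup = latency ratio."""
    impact = (r2_pruned - r2_full) / r2_full        # negative values are accuracy losses
    speedup = latency_full_ms / latency_pruned_ms   # values above 1 mean the pruned model is faster
    return impact, speedup

# Illustrative values only (ROUGE-2 points and ms/batch at batch size 1):
print(impact_and_speedup(20.1, 21.2, 270.0, 525.0))  # roughly (-0.05, 1.9): a 5% R-2 loss for a 1.9x speedup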
lenc ldec Impact Speedup (BS1) Speedup (BS8) Speedup (BS16) 6 3 -0.34% 1.65 1.18 1.15 6 2 -2.27% 2.03 1.25 1.22 6 1 -7.88% 2.70 1.36 1.29 3 6 -2.94% 1.14 1.48 1.54 2 6 -6.67% 1.19 1.68 1.89 1 6 -11.69% 1.27 2.21 2.43 3 3 -4.84% 1.94 1.96 1.97 2 2 -12.70% 2.78 2.88 2.92 1 1 -26.68% 4.96 5.54 5.64 Table 10: Relationship between accuracy and speedup of encoder only, decoder only, encoder and decoder pruning on FLAN-T5 large models on CNN with variation in inference batch size. Speedup is measured by comparing the improvements in latency vs. the uncompressed baseline at various batch sizes. The impact is the relative loss of Rouge-2 of compressed models vs. the uncompressed baseline. 5 Discussion 5.1 Scale, Inference, and Pruning As shown in table 9, the gains found by pruning are extremely consistent independently with scaling. 7 \f0 1 2 3 4 5 60 65 70 75 80 Portion of Model Pruned (Out of Six) Generation Length (tokens) Genl vs. Model Pruning on CNNDM smallencoder smalldecoder smallboth baseencoder basedecoder baseboth largeencoder largedecoder largeboth Figure 4: Role of scale and compression on generation length Pruning only the encoder leads to a 4-6% improvement in latency, and pruning just the decoder leads to 400%, as does uniform compression. This is expected as structural pruning removes a constant portion of the network, which leads to consistent latency gains irrespective of model scale. 5.2 Scale, Pruning and Generated length Despite expecting a significant trend in the role of scale and pruning in a generation, we do not see any noticeable trends. As shown in figures 6 and 4, there is no discernible trend of the Role of scale and pruning in generation length. There is a minor jump in generation length from FLAN-T5 small to FLAN-T5 base across all datasets but no such jump from FLAN-T5 base to FLAN-T5 large. We believe this is because the smaller models are less fluent and need more tokens to ensure accurate coverage. As models scale, this is no longer needed, and the models converge to a uniform summary length. 5.3 Asymmetry with large batches Despite the allures of asymmetrical pruning, it is not without fault. As shown in table 10 and Figure 5, the improvements in inference efficiency are heavily influenced by the batch size. When the batch size is minimal, the difference in the type of non-uniformity has a significant impact 1 8 16 1 2 3 4 5 Batch Size Speedup Impact of batch size on inference speedups 6enc \u22123dec 6enc \u22122dec 6enc \u22121dec 3enc \u22126dec 2enc \u22126dec 1enc \u22126dec 3enc \u22123dec 2enc \u22122dec 1enc \u22121dec Figure 5: Relationship between inference batch size and realized inference speedup with uniform and no uniform pruning of FLAN-T5 large on CNNDM 0 100 200 300 400 500 600 700 800 20 30 40 50 60 70 Model Parameters (millions) Generation Length (tokens) Genl vs. Model Size QIWS CNN/DM XSUM Figure 6: Role of scale on generation length 8 \fon inference efficiency. As batches scale, the speedup from encoder only or decoder only becomes much closer and becomes minor when compared to uniform methods. This indicates why further work on improving generative inference methods is highly relevant, as this problem impacts other efficiency-driven processes like CALM (Schuster et al., 2022). 
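For reference, the kind of native PyTorch timing loop described in Sections 3.3 and 4.2 might look like the sketch below; the checkpoint, inputs, and run count are placeholders, and note that generation for a batch finishes only once every sequence has produced EOS or hit the token limit, which is the batching effect discussed above.

import time
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

device = "cuda" if torch.cuda.is_available() else "cpu"
name = "google/flan-t5-base"  # placeholder summarization checkpoint
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForSeq2SeqLM.from_pretrained(name).to(device).eval()

def time_generation(texts, batch_size, runs=7, max_new_tokens=256):
    batch = tok(texts[:batch_size], padding=True, truncation=True,
                max_length=1024, return_tensors="pt").to(device)
    timings = []
    for _ in range(runs):
        if device == "cuda":
            torch.cuda.synchronize()
        start = time.perf_counter()
        with torch.no_grad():
            # The batch only completes once all sequences have emitted EOS or reached the limit.
            model.generate(**batch, max_new_tokens=max_new_tokens)
        if device == "cuda":
            torch.cuda.synchronize()
        timings.append((time.perf_counter() - start) * 1000)  # ms per batch
    t = torch.tensor(timings)
    return t.mean().item(), t.std().item()

docs = ["..."] * 16  # placeholder evaluation documents
for bs in (1, 8, 16):
    mean_ms, std_ms = time_generation(docs, bs)
    print(f"batch_size={bs}: {mean_ms:.0f} +/- {std_ms:.0f} ms/batch")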
6" + }, + { + "url": "http://arxiv.org/abs/2304.00114v1", + "title": "Dense Sparse Retrieval: Using Sparse Language Models for Inference Efficient Dense Retrieval", + "abstract": "Vector-based retrieval systems have become a common staple for academic and\nindustrial search applications because they provide a simple and scalable way\nof extending the search to leverage contextual representations for documents\nand queries. As these vector-based systems rely on contextual language models,\ntheir usage commonly requires GPUs, which can be expensive and difficult to\nmanage. Given recent advances in introducing sparsity into language models for\nimproved inference efficiency, in this paper, we study how sparse language\nmodels can be used for dense retrieval to improve inference efficiency. Using\nthe popular retrieval library Tevatron and the MSMARCO, NQ, and TriviaQA\ndatasets, we find that sparse language models can be used as direct\nreplacements with little to no drop in accuracy and up to 4.3x improved\ninference speeds", + "authors": "Daniel Campos, ChengXiang Zhai", + "published": "2023-03-31", + "updated": "2023-03-31", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.AI", + "cs.CL" + ], + "main_content": "Introduction The use of language models for information retrieval has been a well-established \ufb01eld of research for several decades. However, traditional language models rely on dense representations of language, which may result in computational challenges when dealing with large datasets or resource-intensive tasks. To address these challenges, recent research has explored the use of unstructured sparsity in language models, \ufb01nding it an effective way to improve inference ef\ufb01ciency when paired with sparsity-aware inference frameworks. While pruning models can require specialized knowledge and frameworks, using already pruned models, which can be sparse-transferred to new tasks, is attractive as it amortizes compression costs. It allows non-experts to improve inference performance. By using a pruned (sparse) language model, practitioners can realize inference speedups without any additional effort. This paper explores the use of sparse language models for vector-based retrieval. We provide a simple case study demonstrating how the sparse language model, oBERT (Kurti\u00b4 c et al., 2022), can directly replace the commonly used BERT-base (Devlin et al., 2019) model. In our experiments on the MSMARCO (Campos et al., 2016), Natural Questions (Kwiatkowski et al., 2019), and TriviaQA (Joshi et al., 2017) passage retrieval datasets, we \ufb01nd sparse language models can serve as drop-in replacements for existing models with minor optimization and they deliver up to 4.3x more queries per second (QPS) with little to no loss in accuracy. The contributions of this work are as follows \u2022 We demonstrate the use of unstructured sparsity in vector-based retrieval combined with a sparsity aware inference engine to provide up to 4.3x QPS with little to no loss in accuracy. 2 Related Work Vector-based retrievers, often called biencoders,dual-encoders, or dense retrievers, are retrieval models which leverage an implicit ranking signal from the inner product of query and document representations of contextual language models. 
They have become incredibly popular as they can be highly accurate (Pouran Ben Veyseh et al., 2021) (Karpukhin et al., 2020) and when paired with an Approximate Nearest Neighbor (ANN) such as FAISS (Johnson et al., 2019), allow for incredibly ef\ufb01cient retrieval. Unstructured Sparsity is a compression approach in which some portion of a model\u2019s weights or groups of weights receive a mask and are effectively removed from the network. Recent work has shown it possible to introduce sparsity into small (Kurti\u00b4 c et al., 2022), (Sanh et al., 2020), (Zafrir et al., 2021) and large language models (Frantar and Alistarh, 2023) with little to no loss in accuracy. Using a sparsity-aware inference arXiv:2304.00114v1 [cs.IR] 31 Mar 2023 \fengine or modern Ampere generation GPUs1, it is possible to gain 3-5x speedups in inference throughput. While existing research has studied novel tasks where sparsity can perform badly (Liu et al., 2023), transferring to novel domains (Campos et al., 2022), to the best of our knowledge, there has been no study in leveraging model sparsity in information retrieval. 3 Leveraging Sparsity For Dense Retrieval 3.1 Dense Retrieval Background Vector-based retrieval leverages a representation model to create representations for the query and the document in a shared latent space. This can be achieved by using a single model, called a tied bi-encoder, or with independent query and document models, called a untied bi-encoder. Using these models, representations are made for queries and documents, and a notion of relevance is learned by training to minimize the distance of positive query-document pairs as shown in equation 1 where x is a query vector and y is a document vector, and \u00b7 denotes the dot product of the vectors. L = 1 \u2212x \u00b7 y |x||y| (1) After models have been trained, an index from the document corpus is created by encoding every document into a vector, and these vectors are loaded into an ANN index. When a query is issued, it encodes into the shared latent space using the query encoder, and the closest documents are retrieved. Since the document index is generated once (or any time an index is refreshed) and of\ufb02ine, it typically leverages batch processing and large accelerators. Conversely, the query encoder must run each time a user query is issued with small batches and commonly on many small query processors, which commonly lack GPU acceleration. 3.2 Sparse Transfer Learning While it is possible to introduce sparsity in a model using an open-source library like SparseML2, this 1https://developer.nvidia.com/blog/acceleratinginference-with-sparsity-using-ampere-and-tensorrt/ 2https://github.com/neuralmagic/sparseml Parameter Possible Values Training Length 3,40 Epochs Initial learning rate 1e-5, 5e-5, 7e-5, 9e-5 Learning rate schedule Linear Batch size 8,128, Negative Passages 1,8 Table 1: Hyperparmaters used to train bi-encoder models for retrieval can require non-trivial tuning and expertise. Instead, we use already sparsi\ufb01ed language models and apply them to new domains. Speci\ufb01cally, we leverage the oBERT 12-layer encoder model (Kurti\u00b4 c et al., 2022) and \ufb01x the sparsity pro\ufb01le in the new task. The oBERT model has been sparsi\ufb01ed during pre-training, and it has a variant with 90% unstructured sparsity and 80% block sparsity (Eldar and B\u00f6lcskei, 2008). Using these already sparse models, we explore their usage as drop-in replacements for the uncompressed BERT base. 
3.3 Experimental Design We evaluate the effectiveness of the oBERT models by evaluating how well they can serve as drop-in replacements for BERT-base on various retrieval datasets. We alternate the used model without any major optimization and compare retrieval performance. All of our experiments leverage the open source retrieval library the Tevatron (Gao et al., 2022) 3 library, which makes use of hugginface\u2019s transformers (Wolf et al., 2020). Datasets We use a wide variety of standard dense retrieval benchmarks, including MSMARCO V1.1 4 (Campos et al., 2016), NQ 5 (Kwiatkowski et al., 2019), and TriviaQA 6 (Joshi et al., 2017) passage retrieval datasets. For each dataset, we train models to converge with tied and untied bi-encoders, generate full indexes, and evaluate performance by measuring recall accuracy with retrieval depths of 20,100, and 200. Computational Experiments are all performed on 16 GB V100 GPUS using 1 V100 for MSMARCO and 4 for each other experiment. We use the training hyperparameters found in 1, \ufb01nding the sparse language models use higher learning 3https://github.com/texttron/tevatron 4https://huggingface.co/datasets/Tevatron/msmarcopassage 5https://huggingface.co/datasets/Tevatron/wikipedia-nq 6https://huggingface.co/datasets/Tevatron/wikipediatrivia \fTable 2: Retrieval Accuracy@100 on NQ, MSMARCO, and TriviaQA with respect to inference throughput (queries per second) and relative speedup Model MSMARCO NQ TriviaQA QPS Speedup BERT-Base (Pytorch) 69.80% 86.34% 85.33% 47.278 1.00 BERT-Base (DeepSparse) 69.80% 86.34% 85.33% 80.92 1.71 oBERT 90\\% 70.04% 85.84% 84.41% 202.67 4.28 oBERT 80\\% (block) 69.63% 85.62% 84.81% 141.78 3.00 rates consistent with Kurtic et-al. (Kurti\u00b4 c et al., 2022) \ufb01ndings. 3.4 Inference Benchmarks We benchmark inference speeds of query encoding to evaluate the impact of using sparse language models. We benchmark using an Intel Xeon Gold 6238R Processor using native Pytorch inference and leverage the sparsity-aware inference library DeepSparse7. For each variant model, we evaluate the performance on encoding 6500 queries with a batch size of one and a max context length of 32.We repeat each run \ufb01ve times to ensure consistency and report the mean. Detailed results are in the B. 4 Experimental Results In \ufb01gure 1, we plot the retrieval performance of retrieval accuracy at 100 relatives to the dense BERT-base baseline \ufb01nding it is possible to improve inference performance by nearly 4.3x with losing under% loss in accuracy. Looking at the more detailed results in 2, we \ufb01nd that simply by changing the inference engine from PyTorch to DeepSparse can lead to a 1.7x speedup. Looking at the variation in models, we \ufb01nd that the 90% sparse model outperforms the 80% block sparse model regarding inference ef\ufb01ciency and retrieval accuracy. This \ufb01nding is similar to Kurtic et al., given that block-sparse models see larger inference improvements with quantization, and introducing sparsity in blocks reduces network expressivity. We explored introduced quantization both during training using QAT and post-training, but in both cases, this caused a near-complete collapse of retrieval accuracy. Looking at the more detailed results found in tables 5, 3, and 4, we can see that the use of sparse language models works well with both tied and untied bi-encoders. 
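For illustration, the index-then-search workflow described in Section 3.1 (documents encoded offline into an index, each query encoded online and matched to its nearest neighbors) might be sketched with FAISS as follows; the vectors are random stand-ins for encoder outputs, and a flat inner-product index replaces a true ANN structure for brevity.

import numpy as np
import faiss

dim = 768                                                  # embedding size of a BERT-base style encoder
doc_vecs = np.random.rand(10_000, dim).astype("float32")   # stand-in for document-encoder outputs
query_vecs = np.random.rand(5, dim).astype("float32")      # stand-in for query-encoder outputs
# If cosine similarity is desired, L2-normalize both sets of vectors before indexing.

index = faiss.IndexFlatIP(dim)              # exact inner-product search; swap in HNSW/IVF for true ANN
index.add(doc_vecs)                         # offline, batch step: build the index once
scores, doc_ids = index.search(query_vecs, 100)  # online step: retrieve the top-100 documents per query
print(doc_ids.shape)                        # (5, 100)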
Still, performance improves when the bi-encoder is tied, which we attribute to the query and document encoder sharing the less expressive 7https://neuralmagic.com/deepsparse/ sparse language model. Additionally, the impact of using sparse language models is more pronounced with smaller recall sets. When the recall set is 20, losses range from 1.3% to 3%; however, when the recall set is expanded to 100 items, losses range from minor improvements to under 2%. 100 150 200 100 99.75 99.5 99.25 99 98.75 Queries Per Second Retrieval Accuracy @100 Speedup vs. Retrieval Performance MSMARCO NQ TriviaQA Figure 1: Measuring the impact on recall accuracy at 100 vs. inference throughput on the MSMARCO, NQ, and TriviaQA retrieval datasets 5" + }, + { + "url": "http://arxiv.org/abs/2304.01016v3", + "title": "Quick Dense Retrievers Consume KALE: Post Training Kullback Leibler Alignment of Embeddings for Asymmetrical dual encoders", + "abstract": "In this paper, we consider the problem of improving the inference latency of\nlanguage model-based dense retrieval systems by introducing structural\ncompression and model size asymmetry between the context and query encoders.\nFirst, we investigate the impact of pre and post-training compression on the\nMSMARCO, Natural Questions, TriviaQA, SQUAD, and SCIFACT, finding that\nasymmetry in the dual encoders in dense retrieval can lead to improved\ninference efficiency. Knowing this, we introduce Kullback Leibler Alignment of\nEmbeddings (KALE), an efficient and accurate method for increasing the\ninference efficiency of dense retrieval methods by pruning and aligning the\nquery encoder after training. Specifically, KALE extends traditional Knowledge\nDistillation after bi-encoder training, allowing for effective query encoder\ncompression without full retraining or index generation. Using KALE and\nasymmetric training, we can generate models which exceed the performance of\nDistilBERT despite having 3x faster inference.", + "authors": "Daniel Campos, Alessandro Magnani, ChengXiang Zhai", + "published": "2023-03-31", + "updated": "2023-06-01", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.IR" + ], + "main_content": "Introduction A bi-encoder-based retrieval, often called dense retrieval, is a retrieval function that leverages the vector representation of queries and documents as a proxy for relevance. Using two encoders, one for the query and one for the document, the input data is mapped into a common latent space where closeness becomes a proxy for relevance. Dense retrievers have become increasingly popular due to their ability to capture the semantic relationships between query and document terms. However, bi-encoder-based models can also be computationally expensive, particularly when dealing \u2217Corresponding author: dcampos3@illinois.edu 1 2 3 4 5 0 \u22120.25 \u22120.5 \u22120.75 \u22121 \u22121.25 \u22121.5 \u22121.75 \u22122 Speedup Impact to Recall Accuracy Retrieval Accuracy vs. Speedup BERTBASE DistilBERT BERTBASE-KALE-6layer (ours) BERTBASE-KALE-3layer (ours) BERTBASE-KALE-2layer (ours) Figure 1: Using KALE and asymmetric training on the lead to when measuring QPS vs. Recall at 100 on the NQ dataset. Using Asymmetry and KALE, it is possible to 3x QPS with nearly no loss in accuracy and 4.5x with under 2% loss in accuracy. We calculate QPS as the mean number of queries per second with a batch size of 1 and a max sequence length of 32 on a T4 GPU. 
Impact on retrieval accuracy is measured by the relative drop in retrieval accuracy at 100 with large datasets. As a result, there has been a growing interest in methods for compressing these models to reduce their computational complexity without sacrificing performance. While the use of smaller models (Wang et al., 2020) has provided a path to improving model performance, compression cannot be adjusted to suit varying latency needs. In other words, a model must match latency requirements before it can be experimented with. Additionally, since bi-encoders require a complete index generation to evaluate performance iteratively compressing models and arXiv:2304.01016v3 [cs.CL] 1 Jun 2023 \fretraining them can be very expensive. Seeing the bottleneck caused by trying to train compressed models for retrieval we explore approaches to compress models after training. By doing so it becomes cheaper to evaluate the impact of compression of retrieval and generate variants of many sizes. In this paper, we explore the role of asymmetry in the size of query and document encoders that leverage language models. Through experiments on several benchmarks, we demonstrate that our approach can significantly reduce the number of parameters in the bi-encoder model without sacrificing performance. As shown in figure 1, the combination of asymmetric bi-encoders and post-training KALE allows for 3x more QPS than an uncompressed bi-encoder with less than 1% loss in accuracy and nearly 5x with less than 2%. Building on the favorable implications of asymmetry for efficient inference, we introduce a compression mechanism called Kullback-Leibler Allingment of Embeddings (KALE). KALE uses an alignment of representations to compress models without requiring any form of retraining or index regeneration. To ground our approaches, we evaluate the effectiveness of KALE and asymmetry on several benchmark datasets and compare the results to existing efficient inference approaches. The following research questions drive our work: \u2022 Is the performance of dense retrieval methods more driven by the query or document encoder size? \u2022 Is it possible to compress query encoders without retraining and index regeneration? \u2022 How can dense retrieval asymmetry and posttraining alignment be leveraged to improve query encoder latency? It is in answering these questions that we deliver the following contributions: \u2022 We present the first robust studies on the role of document-query encoder symmetry, demonstrating that the size of the document encoder dominates performance. \u2022 We introduce and demonstrate the effectiveness of KALE, a post-training compression and alignment approach demonstrating its effectiveness and \u2022 We empirically demonstrate on various benchmarks how Asymmetric Compression can lead to 4.5 better QPS with 1% loss in recall accuracy at 100. 2 Related Work Transformer Based Language Models such as BERT (Devlin et al., 2019) provide contextual language representations built on the Transformer architecture (Vaswani et al., 2017) which can be specialized and adapted for specific tasks and domains (Lee et al., 2020). Using contextual word representations, it becomes relatively easy to excel at a broad range of natural language processing tasks such as Question Answering, Text Classification, and sentiment analysis. 
Bi-Encoders, commonly called dual-encoders or dense retrievers, decompose ranking by leveraging the inner product of query and document representations to produce a relevance score for query document pairs. While not as accurate at cross-encoders (Reimers and Gurevych, 2019), they are more efficient for inference and easier to deploy. Bi-encoder document representations are query invariant, allowing them to be pre-computed and loaded into an Approximate Nearest Neighbor (ANN) such as FAISS (Johnson et al., 2019). At runtime, a query is an encoder into a latent space, and the k documents are retrieved using a nearest neighbor algorithm such as HNSW (Malkov and Yashunin, 2016). Since the entire document index has already been created the retrieval latency is limited to a single call of the query encoder. Bi-encoders commonly leverage LLM such as BERT (Devlin et al., 2019) to retrieve short passages of text leading to the task descriptor of Dense Passage Retrievers (DPR) (Karpukhin et al., 2020). Driven by their efficiency in deployment and relevance performance, DPR-based models have rapidly become the building blocks for systems doing product search (Magnani et al., 2022), open domain question answering (Karpukhin et al., 2020) and customer support (Mesquita et al., 2022). Efficient Inference study methods and models which decrease the model execution cost while 2 \fminimizing the losses to model performance. Knowledge Distillation (Hinton et al., 2015) is a training method where a model, called the student, learns to emulate a teacher model, which is commonly larger or better performing than the student. Unstructured pruning removes individual weights or groups of weights in a model by applying a mask or setting the weight values to 0. When paired with a sparsity-aware inference engine, it is possible to gain 3-5x speedups in inference throughput with little to no loss in accuracy (Kurti\u00b4 c et al., 2022). Structured pruning removes fundamental structural components in a language model, such as individual attention heads (Voita et al., 2019) or entire model layers (Sanh et al., 2019). Removing entire model layers is one of the most pervasive approaches, as latency gains are easy to realize, and pruning is straightforward. While their training regimes may differ, models like DistilBERT (Sanh et al., 2019) and TinyBERT (Jiao et al., 2020), and MiniLM (Wang et al., 2020) leverage structural pruning as ways of generation 2-10x speedups. Methods like quantization (Pouransari and Tuzel, 2020) (Zafrir et al., 2019), early exiting (Xin et al., 2020) or token pruning (Kim et al., 2021) have been effective in other NLP tasks. Still, our work primarily focuses on structured pruning and its relationship with asymmetry. We leave studying the impacts of asymmetry on these compression methods to future work. Asymmetrical deep learning broadly refers to any non-uniformity in shape or attribute of models. Traditional modeling approaches favor uniformity as it is preferable for optimization algorithms (Mihaylova and Martins, 2019), and using models for inference should match training as closely as possible (Ranzato et al., 2015) as improvements in training loss during optimization result in improvements in model performance during inference. However, this does not account for cost or latency asymmetries during usage. Kasai et al. demonstrated how the sequence-to-sequence encoder depth dominates language model performance for machine translation (Kasai et al., 2020). Tay et al. 
2021 extend this work by finding a DeepNarrow which shows that for broad language modeling, it is possible to have 50% fewer parameters and a 40% faster inference with no loss in accuracy. Embedding Distillation Concurrent to our work on bi-encoder compression, Kim et al. 2023 study how distillation in embeddings leads to general compression of bi-encoders and cross-encoders (Kim et al., 2023). Our work differs from theirs as we focus on the role of asymmetry between query and document encoders and how to leverage it for improved inference efficiency. 3 Method The use of representation models for retrieval begins with a document space d and a query space q where each of which is generated by some model m. Models do not need to share the same initialization, shape, or size, but their representation vectors must share size without some projection. These two models learn a notion of relevance by training to minimize the distance of positive query-document pairs as shown in equation 1 where x is a query vector and y is a document vector, and \u00b7 denotes the dot product of the vectors. L = 1 \u2212x \u00b7 y |x||y| (1) The query and document encoder models are commonly initialized with a pre-trained language model such as BERT. Then, using pairs of labels for positive relevance scores for queries and documents, the models are trained to minimize the distance between queries and their relevant documents (Karpukhin et al., 2020) While it is common practice to initialize the query encoder and document encoder with identical language models, this ignores the cost asymmetry of the usage patterns. The document encoder is usually only used once during a large-scale batch generation of the index. Index generation happens in a latency-insensitive environment and can easily leverage many GPUs and large batch sizes to improve efficiency. The query encoder runs every time a user issues a query, which can be irregular and sporadically. The query encoder responds to each user query independently. Thus, query encoders often use a batch size of 1 and commonly leverage small inference-optimized hardware like the T4 GPU or small CPUs. 3 \f1 2 3 6 9 12 0 \u221210 \u221220 \u221230 Query Encoder Layers Impact to Retrieval Accuracy Encoder layers Vs. Impact on Retrieval Accuracy 12 document layers 9 document layers 6 document layers 3 document layers 2 document layers 1 document layer Figure 2: Measuring the impact on recall at 20 on the NQ retrieval dataset by varying the number of transformer layers for the query encoder and document encoder Since the document encoder does not run very often, any improvement in latency produces a single fixed gain utterly dependent on the corpus size and index refresh cycle. The query encoder\u2019s userfacing nature means latency improvements occur whenever a user queries. 3.1 Role of model symmetry with Bi-encoders Since the query encoder runs many times online and the document encoder runs once, offline, we question: Is there some form of asymmetry between the query encoder and the document encoder that can be exploited? Do the two encoders need to be compressed symmetrically? To answer this question, we explore the impact on the performance of pruning the query and document encoders on the NQ passage retrieval dataset (Kwiatkowski et al., 2019). Using a BERT-base uncased model with 12 transformer encoder layers, we generate structurally pruned models with 9,6,3,2 and 1 layer. 
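Equation 1 above amounts to a cosine-distance objective over positive query-document pairs; a minimal PyTorch sketch is given below (shapes are illustrative, and practical DPR-style training typically also uses in-batch negatives, which the equation omits).

import torch
import torch.nn.functional as F

def relevance_loss(query_vecs: torch.Tensor, doc_vecs: torch.Tensor) -> torch.Tensor:
    """Equation 1: L = 1 - (x . y) / (|x||y|), averaged over a batch of positive pairs."""
    return (1.0 - F.cosine_similarity(query_vecs, doc_vecs, dim=-1)).mean()

q = torch.randn(32, 768, requires_grad=True)  # query-encoder outputs for a batch of positive pairs
d = torch.randn(32, 768, requires_grad=True)  # document-encoder outputs for the matching documents
loss = relevance_loss(q, d)
loss.backward()                               # gradients flow into both encoders during training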
We also further pre-train the three and six-layer models using knowledge distillation, represented as 6KD and 3KD, from a 12layer model on the Wikipedia-book corpus similar to distilBERT (Sanh et al., 2019). Then, using each of these seven models, we train dense retrieval models on the NQ passage retrieval dataset with variations of query and document models resulting in 72 variants. With each of these models, we generate a full index and evaluate retrieval performance on the development portion of the dataset. We do not tune any parameters to avoid overfitting and to explore asymmetry without overoptimizing. Each model\u2019s retrieval accuracy is evaluated with retrieval sets of depth 20, 100, and 200. We compare the impact of varying the encoders to the uncompressed baseline and a distilBERT model (denoted by 6db). Looking at the impact of symmetric compression Table 1: Impact of Structural pruning before fine-tuning on Retrieval Accuracy on NQ passage retrieval dataset Layers enc Top 20 Impact Top 100 Impact Top 200 Impact 12 79.86% 0.00% 85.84% 0.00% 88.42% 0.00% 6db 73.88% -7.49% 84.74% -1.29% 87.26% -1.31% 9 73.41% -8.08% 83.68% -2.51% 86.51% -2.16% 6KD 75.04% -6.04% 85.15% -0.80% 87.45% -1.10% 6 71.69% -10.23% 83.30% -2.96% 86.04% -2.69% 3KD 73.32% -8.19% 83.43% -2.80% 86.20% -2.51% 3 66.93% -16.20% 80.61% -6.09% 84.49% -4.45% 2 66.87% -16.27% 80.42% -6.32% 83.85% -5.17% 1 54.96% -31.18% 71.88% -16.26% 76.73% -13.22% as shown in table 1, we see that the impact of compression is more pronounced with a small recall set as retrieval accuracy impact at 20 is 3x that of at 200. As shown in table 1 we observe major accuracy gains by fine-tuning the pruned model with a 4% gap between 6 and 6KD and a 8% gap between 3 and 3KD with a 4% gap for recall at 20 on the NQ dataset. Looking at the impact of asymmetry of the depth 4 \fof encoders as shown in table 2 and figure 2 we find there is the size of the query and document encoders cause similar impacts on retrieval accuracy. A retriever with 3 layers in the query encoder and 12 in the document encoder loses 11.9% of its retrieval accuracy and 12.55% when the sizes of the document encoder and query encoders are flipped. These asymmetric retrievers perform better than the symmetric 3-layer models, which lose 16.2% which highlights the ability to improve retrieval performance by having non-uniform compression. It is worth noting that having a larger document encoder is preferable to a larger query encoder which supports the notion that the document encoder is more important than the query encoder (Li and Lin, 2021).// Similar results can be seen with the introduction of fine-tuned three and 6-layer models as shown in table 6. Unsurprisingly, KD-optimized language models outperform non-distilled models, and any asymmetrical variant that leverages a distilled model outperforms the un-distilled variant. Without further optimization, a model with a distilled 3-layer query encoder and a 12-layer document encoder will outperform a model with symmetrical 6-layer models despite being 2x faster. 3.2 Inference Benchmarks To evaluate the impact of structural pruning, we benchmark inference speeds of query encoding while varying the number of transformer layers. We perform benchmarking using an Intel Xeon Gold 6238R Processor and a T4 Nvidia GPU. For each model, we evaluate the performance on encoding 6500 queries with a batch size of one and a max context length of 32. 
For CPU inference, we evaluate the performance of models using the ONNX library 1, and for GPU inference, we evaluate native Pytorch inference. We repeat each run five times to ensure consistency and report the mean. Summary statistics can be found in table 3 and full results, including percentile, standard deviation, and confidence intervals, can be found in the appendix .5. Table 2: Impact of Structural pruning before fine-tuning on Retrieval Accuracy on NQ passage retrieval dataset layersq layersd Top 20 Impact Top 100 Impact Top 200 Impact 12 12 79.86% 0.00% 85.84% 0.00% 88.42% 0.00% 9 12 74.27% -7.00% 84.40% -1.67% 86.95% -1.66% 6 12 73.63% -7.80% 84.27% -1.83% 86.79% -1.85% 3 12 69.83% -12.55% 82.58% -3.80% 85.35% -3.48% 2 12 69.67% -12.76% 82.19% -4.25% 84.68% -4.23% 1 12 59.00% -26.12% 75.37% -12.19% 81.00% -8.39% 12 9 74.21% -7.07% 84.40% -1.67% 87.06% -1.53% 9 9 73.41% -8.08% 83.68% -2.51% 86.51% -2.16% 6 9 71.63% -10.30% 83.05% -3.25% 85.98% -2.76% 3 9 67.89% -14.98% 80.94% -5.71% 84.79% -4.10% 2 9 67.15% -15.92% 80.53% -6.19% 83.66% -5.39% 1 9 56.04% -29.83% 73.35% -14.55% 78.12% -11.65% 12 6 72.22% -9.57% 83.41% -2.83% 85.84% -2.91% 9 6 71.61% -10.33% 83.30% -2.96% 85.93% -2.82% 6 6 71.69% -10.23% 83.30% -2.96% 86.04% -2.69% 3 6 66.93% -16.20% 80.28% -6.48% 83.96% -5.04% 2 6 66.12% -17.20% 80.33% -6.42% 83.49% -5.58% 1 6 59.53% -25.46% 75.37% -12.19% 79.83% -9.71% 12 3 70.36% -11.90% 81.72% -4.80% 84.60% -4.32% 9 3 68.67% -14.01% 80.47% -6.25% 84.46% -4.48% 6 3 67.92% -14.95% 80.06% -6.74% 83.85% -5.17% 3 3 66.93% -16.20% 80.61% -6.09% 84.49% -4.45% 2 3 63.30% -20.74% 78.37% -8.71% 83.02% -6.11% 1 3 59.53% -25.46% 75.68% -11.84% 80.08% -9.43% 12 2 69.56% -12.90% 81.33% -5.25% 84.49% -4.45% 9 2 67.92% -14.95% 80.75% -5.93% 84.32% -4.64% 6 2 67.53% -15.43% 80.33% -6.42% 83.82% -5.20% 3 2 66.90% -16.23% 80.36% -6.38% 84.24% -4.73% 2 2 66.87% -16.27% 80.42% -6.32% 83.85% -5.17% 1 2 60.06% -24.80% 75.29% -12.29% 79.75% -9.80% 12 1 57.40% -28.13% 73.24% -14.68% 78.56% -11.15% 9 1 57.51% -27.99% 73.24% -14.68% 77.87% -11.94% 6 1 57.26% -28.30% 73.52% -14.35% 78.34% -11.40% 3 1 57.04% -28.58% 73.93% -13.87% 78.39% -11.34% 2 1 56.57% -29.17% 73.71% -14.13% 77.98% -11.81% 1 1 54.96% -31.18% 71.88% -16.26% 76.73% -13.22% layers size compressed size method QPS Speedup 12 418 387 GPU 105.852 1.00 9 337 212 GPU 139.494 1.32 6 256 236 GPU 172.338 1.63 3 175 161 GPU 299.45 2.83 2 148 136 GPU 441.422 4.17 1 121 111 GPU 660.64 6.24 12 418 387 CPU 47.278 1.00 9 337 212 CPU 63.24 1.34 6 256 236 CPU 90.386 1.91 3 175 161 CPU 166.012 3.51 2 148 136 CPU 229.666 4.86 1 121 111 CPU 378.534 8.01 Table 3: Variation in model throughput according to the serving method and the number of transformer layers. Structural pruning can lead to a 6 and 8-layer performance increase on GPU and CPU and pruning a model to 3 layers allows a CPU to offer better inference performance than the GPU. Table 4: Impact of structural pruning with and without KALE on Accuracy at 100 across various datasets. 
Layers KALE NQ TriviaQA MSMARCO SCIFACT SQUAD 12 N/A 85.84% 85.84% 88.77% 90.70% 77.16% 9 N 79.97% 79.97% 82.01% 71.07% 71.38% 9 Y 84.90% 84.90% 86.16% 84.87% 73.54% 6 N 68.20% 68.20% 72.68% 22.98% 59.97% 6 Y 83.68% 83.68% 84.68% 85.13% 69.87% 3 N 43.88% 43.88% 11.39% 40.80% 34.42% 3 Y 81.14% 81.14% 82.11% 82.57% 64.37% 2 N 46.90% 46.90% 31.46% 42.66% 37.01% 2 Y 81.94% 81.94% 81.96% 82.57% 63.72% 1 N 12.22% 12.22% 0.00% 3.17% 11.66% 1 Y 71.33% 71.33% 54.36% 66.83% 51.39% 5 \f4 KL Alignment of Embeddings While training asymmetric models can improve latency, it requires novel training regimes and experimentation, and existing workloads need to regenerate their entire index to take advantage of any inference speedups. Generation of the passage index can take longer than model training (Karpukhin et al., 2020), which makes regenerating a new index and retraining a model to meet changing latency requirements an inefficient experimentation pathway. Moreover, coupling asymmetry into training makes generating query encoder variants more difficult, as each encoder requires its own index and document encoder. Motivated by this bottleneck, we introduce Kullback-Leibler Allingment of Embeddings (KALE), a simple method of improving bi-encoder latency by aligning the embeddings of compressed models. KALE is applied after model training and leverages large batch sizes to make compression computationally inexpensive and independent of training. A single V100 GPU KALE can produce a compressed query encoder in less than 5 minutes. First, a bi-encoder model trains with separate query and document encoders. When training is complete, the document encoder, edocument, is frozen, and using the query encoder, eq, a structurally pruned copy, eq\u2032, is made. Then, using a sample of queries, the eq\u2032 model is fine-tuned to minimize the KL divergence of their query representations as shown in equation 2. While the KL divergence is a measure of differences in probability distributions it has been applied successfully for representation alignment (Kim et al., 2023). To leverage it, we treat each of the representation vectors as a probability over a set of logits. DKL(eq\u2032 \u2225eq) = X x\u2208X eq\u2032(x) log \u0012eq\u2032(x) eq(x) \u0013 . (2) We explored the use of various distance functions such as cosine similarity, Manhattan distance, and the KL divergence but found little sensitivity in any metric besides KL divergence. We believe this is due to us freezing the document representations, 1https://onnx.ai/ and as a result, cosine distance allows the query embeddings to drift more than probability distribution matching methods. To explore this further, we experiment with tuning the temperature for the KL divergence and add a loss scaling factor but find a temperature of one and a scaling factor of ten to be most optimal. Additionally, we explored using a contrastive loss with random negative and hard negatives mined from the trained encoder but found no positive impact for either method. We leave further exploration of training objective improvement for future work. 4.1 Experimental Results We evaluate the effectiveness of KALE by taking uncompressed BERTBASE models and pruning them with and without KALE on a variety of wellestablished passage retrieval benchmarks. First, models are trained, and indexes are generated using un-optimized BERTBASE models. Next, the document encoders are frozen, and the query encoders are structurally pruned to have 9,6,3,2 or 1 transformer layer. 
Finally, query encoders are aligned using KALE, and we compare the performance of compressed models by comparing the impact on retrieval accuracy at 20,100, and 200. To aid reproducibility, each model is trained using the Tevatron (Gao et al., 2022) 2 library, which makes use of hugginface\u2019s transformers to provide a simple interface for exploring neural ranking models. Our experiments focus on the plain BERTBASE-uncased 12-layer transformer model. While never more capable models exist, the unaltered BERT model is widely used in production workloads, which our experiments seek to emulate. Our work aims not to produce the highest possible retrieval accuracy for a dense encoder. Instead, our goal is to find the role of asymmetry in biencoder models. As a result, we leverage the wellestablished parameters in all of our experiments without using an advanced methodology like contrastive or curriculum learning. There are fewer parameters for using KALE, and we deliberately do not optimize on anything but the loss between eq and eq\u2032. In general, higher degrees of pruning require longer training with smaller batches. 2https://github.com/texttron/tevatron 6 \f1 2 3 6 9 12 \u221210 \u221220 \u221230 \u221240 \u221250 \u221260 \u221270 \u221280 \u221290 Query Encoder Layers Impact to Retrieval Accuracy Query Encoder layers Vs. Impact on Retrieval Accuracy on NQ BERTBASE @20 BERTBASE with KALE @20 BERTBASE @100 BERTBASE with KALE @100 BERTBASE @200 BERTBASE with KALE @200 1 2 3 6 9 12 \u221210 \u221220 \u221230 \u221240 \u221250 \u221260 \u221270 \u221280 \u221290 Query Encoder Layers Impact to Retrieval Accuracy Query Encoder layers Vs. Impact to Retrieval Accuracy on TriviaQA BERTBASE @20 BERTBASE with KALE @20 BERTBASE @100 BERTBASE with KALE @100 BERTBASE @200 BERTBASE with KALE @200 1 2 3 6 9 12 \u221210 \u221220 \u221230 \u221240 \u221250 \u221260 \u221270 \u221280 \u221290 Query Encoder Layers Impact to Retrieval Accuracy Query Encoder layers Vs. Impact to Retrieval Accuracy on MSMARCO BERTBASE @20 BERTBASE with KALE @20 BERTBASE @100 BERTBASE with KALE @100 BERTBASE @200 BERTBASE with KALE @200 1 2 3 6 9 12 \u221210 \u221220 \u221230 \u221240 \u221250 \u221260 \u221270 \u221280 \u221290 Query Encoder Layers Impact to Retrieval Accuracy Query Encoder layers Vs. Impact on Retrieval Accuracy on SQUAD BERTBASE @20 BERTBASE with KALE @20 BERTBASE @100 BERTBASE with KALE @100 BERTBASE @200 BERTBASE with KALE @200 1 2 3 6 9 12 \u221210 \u221220 \u221230 \u221240 \u221250 \u221260 \u221270 \u221280 \u221290 Query Encoder Layers Impact to Retrieval Accuracy Query Encoder layers Vs. Impact on Retrieval Accuracy on SCIFACT BERTBASE @20 BERTBASE with KALE @20 BERTBASE @100 BERTBASE with KALE @100 BERTBASE @200 BERTBASE with KALE @200 Figure 3: Impact of structural pruning with and without KALE on the NQ, MSMARCO, TriviaQA, SciFACT, and SQuAD Passage Retrieval dataset with the recall set sizes of 20,100, and 200. Across datasets, we see a consistent trend where KALE is effective but most effective when the network is heavily pruned and recall set sizes are small. 
When the model is pruned to 2 or 1 layer with a recall set size of 20, the difference between using KALE or not can be up to 10 times the loss in recall accuracy 7 \fDatasets We use a wide variety of standard dense retrieval benchmarks, including MSMARCO V1.1 3 (Campos et al., 2016), NQ Passage Ranking 4 (Kwiatkowski et al., 2019), SciFact Passage Ranking 5 (Wadden et al., 2020), TriviaQA passage Ranking 6 (Joshi et al., 2017), and SQUAD Passage Ranking 7 (Rajpurkar et al., 2016). For each dataset, we evaluate performance by measuring the recall accuracy with retrieval depths of 20,100, and 200. Additionally, for the MSMARCO dataset, we also report MRR@10; for Scifact, we also report NDCG @10 and RR@10. Computational Experiments Our experimentation on fine-tuning our compressed models uses a 16 GB V100 GPU. Experiments in bi-encoder model training leverage 1 V100 for the MSMARCO and 4 for each other experiment. Due to the vast number of models and datasets we train on, each experiment happens with the same fixed seed. 4.2 Evaluating KALE We compare the performance of using KALE for post-training compression in figure 3 across the five datasets and see a fairly consistent trend. When the recall set is small and the query encoders are pruned to a high degree, the impact of KALE is most visible, often driving over 50 improvements in retrieval accuracy. Additionally, using KALE allows the models to have a steady and gradual drop in recall accuracy relative to speedup instead of the sharp drop shown by the regular usage of structural pruning. Without KALE, post-training compression causes a 20-50% loss in retrieval accuracy. With the use of KALE, these losses are cut to 1-10%. In practice, this allows using one or 2-layer encoder models running with CPU-based inference with minor impacts on accuracy. We also notice a surprising performance improvement between 3 and 2-layer query encoders with and without KALE. We believe this shows the phenomena studied elsewhere: the first and last layers 3https://huggingface.co/datasets/Tevatron/msmarcopassage 4https://huggingface.co/datasets/Tevatron/wikipedia-nq 5https://huggingface.co/datasets/Tevatron/scifact 6https://huggingface.co/datasets/Tevatron/wikipediatrivia 7https://huggingface.co/datasets/Tevatron/wikipediasquad Model Layers KALE MSMARCO NQ TriviaQA SQUAD SCIFACTS BERTBASE 12 N 88.77% 85.84% 85.03% 77.16% 90.70% BERTBASE 6 Y 84.68% 83.68% 83.01% 69.87% 85.13% 6kd \u22126kd 6 N 88.19% 85.15% 84.96% 71.94% 91.23% 6db \u22126db 6 N 88.35% 84.74% 84.83% 71.69% 89.37% 6kd \u22123kd 6 N 86.50% 85.37% 84.04% 70.89% 89.20% BERTBASE 3 Y 82.11% 81.14% 81.67% 64.37% 82.57% 3kd \u22123kd 3 N 86.13% 83.66% 84.11% 71.98% 89.40% 3kd \u22126kd 3 N 84.79% 85.76% 83.91% 67.85% 88.63% 6kd \u22123kd 3 Y 82.95% 83.43% 82.33% 63.77% 90.37% 6kd \u22126kd 3 Y 86.75% 80.78% 83.48% 64.14% 91.70% BERTBASE 2 Y 81.96% 81.94% 81.23% 67.00% 82.57% 3kd \u22123kd 2 Y 84.23% 82.71% 83.02% 67.02% 91.33% 3kd \u22126kd 2 Y 85.57% 84.27% 82.90% 62.75% 88.37% 6kd \u22123kd 2 Y 83.24% 83.02% 82.13% 62.52% 89.93% 6kd \u22126kd 2 Y 85.77% 80.39% 83.32% 52.74% 91.93% BERTBASE 1 Y 48.05% 71.33% 75.40% 51.39% 66.83% 3kd \u22123kd 1 Y 66.69% 77.17% 80.82% 55.62% 76.03% 3kd \u22126kd 1 Y 72.13% 79.81% 80.23% 52.26% 78.67% 6kd \u22123kd 1 Y 71.26% 76.57% 78.65% 50.88% 77.07% 6kd \u22126kd 1 Y 70.70% 74.71% 80.31% 52.74% 77.89% Table 5: Impact of model asymmetry and use of KALE for structural pruning on the Retrieval at 100 accuracies across various datasets. 
Layers refer to the number of transformer encoder layers in the query encoder. do most of the work (Oh et al., 2022). 4.3 Aiding Asymmetry with KALE Seeking to optimize compression further, we combine KALE with asymmetrical finetuning and evaluate the results similarly to our earlier experiments. Results on the impact of KALE and asymmetry on the five datasets on the recall accuracy at 100 can be found in table 5 where 3kd \u22126kd denotes a three-layer query encoder and six-layer document encoder, 3kd \u22123kd denotes dual three layer encoders. Full results and metrics for each task can be found in the appendix section .4. First, it is immediately observable that posttraining compression via KALE performs worse than models natively designed for that size. We believe this is due to the convergence of the KALE models to have some distance from the uncompressed model because of dropout. We experimented with not using dropout in KALE, but model performance quickly suffered. Looking at the best retrieval accuracy vs. the model speedups shown in figure 4, we can see a substantial variation in the impact of compression across datasets. In tasks like SCIfacts, it is possible to get over 4x speedup while improving accuracy, while on tasks like SQuAD, even minor speedups lead to major losses in accuracy. We believe this variation is driven by the relative difficulty of each dataset, where easier tasks are more compressible than harder tasks. We believe these variations in results highlight the utility of post-training compression methods like KALE. Given the task variability in the impact of 8 \f100 200 300 400 500 600 60 70 80 90 Queries Per Second Retrieval Accuracy Inference Speed (GPU) Vs.Retrieval Accuracy @100 MSMARCO NQ TriviaQA SQUAD SCIfacts Figure 4: The impact on retrieval accuracy of the best combinations of asymmetrical training and KALE across the NQ, MSMARCO, TriviaQA, SQUAD, and SCIfacts retrieval datasets compression, iteration speed and cost are essential to effectively tuning model inference speed and accuracy. 5 Limitations While our work makes a broad study on how to improve model efficiency our scope is limited. Our work is limited to the usage of BERT-base and it is not clear how our compression approaches scale to more varied architectures like the sequence-tosequence models used by DocT5 (Lee et al., 2022) or more optimized models like RoBERTa (Liu et al., 2019) or compressed models like MiniLM (Wang et al., 2020). 6" + }, + { + "url": "http://arxiv.org/abs/2303.17612v3", + "title": "oBERTa: Improving Sparse Transfer Learning via improved initialization, distillation, and pruning regimes", + "abstract": "In this paper, we introduce the range of oBERTa language models, an\neasy-to-use set of language models which allows Natural Language Processing\n(NLP) practitioners to obtain between 3.8 and 24.3 times faster models without\nexpertise in model compression. Specifically, oBERTa extends existing work on\npruning, knowledge distillation, and quantization and leverages frozen\nembeddings improves distillation and model initialization to deliver higher\naccuracy on a broad range of transfer tasks. In generating oBERTa, we explore\nhow the highly optimized RoBERTa differs from the BERT for pruning during\npre-training and finetuning. We find it less amenable to compression during\nfine-tuning. 
We explore the use of oBERTa on seven representative NLP tasks and\nfind that the improved compression techniques allow a pruned oBERTa model to\nmatch the performance of BERTbase and exceed the performance of Prune OFA Large\non the SQUAD V1.1 Question Answering dataset, despite being 8x and 2x,\nrespectively faster in inference. We release our code, training regimes, and\nassociated model for broad usage to encourage usage and experimentation", + "authors": "Daniel Campos, Alexandre Marques, Mark Kurtz, ChengXiang Zhai", + "published": "2023-03-30", + "updated": "2023-06-06", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "main_content": "Introduction The massive improvement in contextual word representations driven by the usage of the Transformer architecture (Vaswani et al., 2017) has led to the wide-scale deployment of language models. These models are customized for various use cases and tasks like question answering, sentiment analysis, information retrieval, and document classification and deployed into general domains and specialized domains such as financial, medical, and legal. While these models are effective, they commonly 1https://github.com/neuralmagic/sparseml/ 2https://sparsezoo.neuralmagic.com/ 82 83 84 85 86 87 88 89 90 91 92 0 1 3 5 7 9 11 13 15 F1 Score on SQUAD v1.1 Inference Speedup Accuracy (F1) vs. Speedup on SQUAD v1.1 BERTbase PruneBert oBERTlarge PruneOFA oBERTa (Ours) Figure 1: Performance of Sparse Language Models on the SQUAD V1.1 (Rajpurkar et al., 2016a) compared to an uncompressed BERTbase (Devlin et al., 2019) with relation to realized inference improvements with regards to mean latency with a batch size of 1. contain hundreds of millions of parameters, which can lead to slow inference times without using specialized hardware accelerations like graphics processing units (GPU) or Tensor Processing Units (TPU). Without hardware acceleration, the inference on CPUs can be slow and impractical for real-world deployments. Approaches such as knowledge distillation (KD) (Hinton et al., 2015), quantization (Zafrir et al., 2019), and pruning (Kurtic et al., 2022) have been leveraged to improve model efficiency and, when paired with specialized inference engines3, it is possible to accelerate inference times on CPUs and GPUs significantly. While there has been substantial effort to create effective methods for com3https://github.com/neuralmagic/deepsparse arXiv:2303.17612v3 [cs.CL] 6 Jun 2023 \fpression (Jiao et al., 2020; Sun et al., 2020) and improved model performance (Liu et al., 2019), general users of language models have been slower to adopt these methods. Years after its release, the original BERTbase uncased (Devlin et al., 2019) is still the most popular language model 4, followed by the slightly compressed DistilBERT (Sanh et al., 2019a) for latency-sensitive deployments. To enable broad adoption, regular users must be able to leverage more efficient language models without additional compression steps or tuning. We present a case study on how to compress a language model for efficient CPU inference leveraging KD, structured pruning, unstructured sparsity, and quantization such that the compressed models can be applied to a broad range of natural language processing (NLP) tasks without expertise in compression of language models. As part of this study, we release a set of efficient language models optimized to deliver the greatest improvement in inference while minimizing losses in accuracy. 
We then show how these models can be used for sparse transfer learning (Iofinova et al., 2021; Zafrir et al., 2021) such that most compression happens during the pre-training stage. The pre-trained sparse models can be transferred to various NLP tasks, preserving sparsity without extensive optimization. Using these sparse transfer models and the DeepSparse inference engine, we show these sparse models can be fine-tuned to produce task-specific sparse models with minimal accuracy loss and result in greatly improved inference speeds with minimal accuracy loss. As shown in Figure 1, oBERTa provides stateof-the-art performance for sparse language models on the SQUAD v1.1 Question Answering dataset. oBERTa variants exceed the performance of BERTbase despite being eight times faster, exceed the performance of Prune OFAlarge and oBERTlarge while being two to five times faster. In this paper, we focus on the following research questions: \u2022 RQ1: Is RoBERTa more sensitive to unstructured pruning than BERT? \u2022 RQ2: What is the impact of using a larger teacher for KD during the pruning of language 4Based on monthly downloads on the huggingface model hub in march 2023 models? \u2022 RQ3: Can frozen embeddings improve the accuracy of pruned language models? As part of our experimentation, we release the associated models and the training regimes to aid reproducibility and encourage efficient inference models. In summary, our contributions are as follows: \u2022 We provide a thorough case study on how to compress a less studied language model5, RoBERTa (Liu et al., 2019), and evaluate performance on a set of seven NLP tasks finding that it is possible to effectively compress a language model without using its original pretraining dataset. \u2022 We demonstrate the impact of varying the size of teachers in KD, freezing embeddings, and variations in learning rates when applied to sparse language models. \u2022 We demonstrate that our compressed models can be leveraged to deliver accuracy of over 91% on the popular SQUAD v1.1 (Rajpurkar et al., 2016a) Question Answering Task with nearly three times faster inference than the previous state-of-the-art uses of unstructured sparsity. 2 Background and Related work While many methods to improve model efficiency exist, the same goal generally underpins them: given an original model \u03b8 with an accuracy of acc(\u03b8) and an inference cost of c(\u03b8) minimize the inference cost. While the methods used for compression can be highly optimized and specialized, they can commonly be used together to deliver massive improvements in inference speeds with minimal losses in accuracy. Transformer Based Language Models such as BERT (Devlin et al., 2019) and T5 (Raffel et al., 2020) provide contextual language representations built on the Transformer architecture (Vaswani et al., 2017) which can be specialized and adapted for specific tasks and domains (Lee et al., 2020). 5While the RoBERTa model was downloaded over 10m times in May 2023 on the huggingface hub it has not a model of focus for compression research. 2 \fUsing these models, it becomes relatively easy to excel at a broad range of natural language processing tasks such as Question Answering, Text Classification, and sentiment analysis. Unstructured Pruning is a compression approach that removes individual weights or groups of weights in a model by applying a mask or setting the weight values to 0. 
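As a point of reference for how such pretrained encoders are adapted to a downstream task, a minimal fine-tuning step with the transformers library might look as follows. The checkpoint, toy batch, and hyperparameters are placeholders rather than the paper's setup.

```python
# Generic sketch of task-specific fine-tuning of a pretrained encoder.
# The checkpoint, toy batch, and hyperparameters are placeholders.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=2)
optimizer = torch.optim.AdamW(model.parameters(), lr=1.5e-4)

texts = ["a toy positive example", "a toy negative example"]
labels = torch.tensor([1, 0])
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

model.train()
outputs = model(**batch, labels=labels)  # cross-entropy loss computed internally
outputs.loss.backward()
optimizer.step()
optimizer.zero_grad()
print(f"loss: {outputs.loss.item():.4f}")
```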
This compression approach has been broadly studied in computer vision (Han et al., 2015), and many methods can remove 70% or more of model weights with little to no loss in accuracy. Models pruned can be 20x smaller in terms of pure model size and, when paired with a sparsity-aware inference engine such as DeepSparse (Magic, 2023), provide 3-5x speedups in inference throughput. Focused on language models, recent work has shown that it is possible to prune models during fine-tuning (Sanh et al., 2020) (Kurti\u00b4 c et al., 2022) or during pre-training (Zafrir et al., 2021) and transfer to novel domains (Campos et al., 2022) and datasets. Structured Pruning is a compression approach that removes fundamental structural components in a language model such as individual attention heads (Voita et al., 2019) or entire model layers such as transformer encoders (Sanh et al., 2019b). Structural pruning has become one of the most popular methods for inference optimization as it is easy to estimate the speedups and implement. Freezing Embeddings, as introduced by Devlin et al. (Devlin et al., 2019), involves training the embedding layer of a language model and then toggling the ability to continue to optimize, or not, the values of in the embeddings as training continues. Knowledge Distillation (Hinton et al., 2015) is a training method where a model is not explicitly a compression method but a training method where a model, called the student learns to emulate a teacher model which is commonly larger or better performing. The loss extracted from the original training data in KD is augmented or replaced by KL divergence between the student and teacher model. KD leverages the hardness parameter h to control the mixture of regular and distillation loss (with a higher distillation favoring the KL divergence loss) and a temperature parameter t to control the softness of the distribution. As applied to language models, the approach has been used to improve the performance of structurally pruned language models resulting in models like DistilBERT (Sanh et al., 2019b) and TinyBERT (Jiao et al., 2020). Quantization reduces the precision for the model weights and activations to lower the computational requirements of model execution. While researchers have explored reducing representation to binary representations (Pouransari and Tuzel, 2020), current hardware limits inference speedups to 8 or 4-bit representations. Quantization can be applied after the model is trained in a one-shot fashion, but this can lead to large losses in accuracy because of rounding errors. To avoid this pitfall, quantization is applied as quantization-aware training (QAT), where the forward pass of the model is simulated with lower precision. In contrast, the backward pass happens in full precision. By using QAT models, learn to be robust to rounding errors and can result in quantization having little to no loss in accuracy. In language models, research has produced quantized language models such as Q8BERT (Zafrir et al., 2019) and is commonly used in conjunction with structured and unstructured pruning (Zafrir et al., 2021) as a way of introducing compounding compression. Additional approaches such as early exiting (Xin et al., 2020) or token pruning (Kim et al., 2021) have also improved inference efficiency. Still, the inference improvements can be very dataset dependent and, as a result, out of our experimentation frame. For a broader survey on compression approaches, we recommend Treviso et al. 
recent work (Treviso et al., 2022) 3 Improving Sparse Transfer Learning While quantization and pruning have been well studied as applied to language models, work has studied the compression BERTbase or BERTlarge. Despite existing research, we find that a clear case study that explores how best to create a family of compressed models is lacking, and this work seeks to remedy that. As part of our research, we compare the impact of varying pruning methods, pruning stage, teachers for KD, and freezing portions 3 \fof the model as applied to the RoBERTa language model. While performing task-specific compression allows NLP practitioners to broadly adopt improvements in inference efficiency, having access to preoptimized models is key. We produce a family of 8 general purpose language models, collectively called oBERTa, which progressively get smaller and faster with minimal losses in accuracy. The oBERTa models leverage a combination of structured and unstructured pruning to provide a set of compressed models which can meet a wide set of latency needs. This compression approach has not been extensively documented nor discussed. Our approach to producing the oBERTA models builds on prior explorations of the combination of compression methods (Kurti\u00b4 c et al., 2022) and addresses compression approaches in a staged manner as shown in Figure 2. First, we create three structural variants starting with a RoBERTabase model. The base uses 12 transformer layers, the medium uses 6, and the small uses 3. Following prior work, we select interleaved layers for the 6-layer model and the first, middle, and last layers for the 3-layer model. Then, each of these 3 models is further pre-trained using masked language modeling on the Wikipedia-Bookcorpus text dataset, leveraging KD from a RoBERTalarge teacher. After that, each model is pruned using gradual magnitude pruning (GMP) to a desired sparsity level (90% and 95%) during additional pre-training based on masked language modeling, similar to Zafir et al. (Zafrir et al., 2021). Further background on the RoBERTA model and why we did not prune using the WebText corpus can be found in the appendix. After pre-training, the sparsity profile is fixed, and models are fine-tuned and quantized on their target task with a small set of variable hyperparameters. Experimentation on the impact of larger teachers, frozen embeddings, and variations in pruning algorithms are discussed in subsequent portions of this work. 3.1 Downstream Compression We explore the impact of introducing unstructured sparsity during task-specific fine-tuning. We repeat each experiment with three different seeds and report the average F1 and Exact Match (EM) metrics in tables 2 and 3. Following a basic hyperparameter sweep, our baseline RoBERTabase model achieves a performance of 83.95 EM and 91.13 F1 in the broadly used question-answering benchmark SQUAD V1.1 (Rajpurkar et al., 2016a). We also perform unstructured pruning varying the sparsity 50-95% and the pruning method: GMP and Optimal BERT Surgeon (OBS) (Kurti\u00b4 c et al., 2022). We prune each model for eight epochs, followed by an additional two epochs to allow the network to stabilize and re-converge. Knowledge distillation is used during training with the dense baseline model as a teacher, hardness set to 1.0 and temperature set to 5.0. Further hyperparameters are in the appendix A.7. Table 1 shows the impact of sparsity on BERTbase, as reported by previous work. 
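The distillation setup used during pruning above (hardness 1.0, temperature 5.0) can be written as a weighted mix of the hard-label cross-entropy and a temperature-scaled KL term against the teacher. The sketch below is one common formulation of that loss and may differ in detail from the authors' implementation; with hardness set to 1.0, only the distillation term contributes.

```python
# Sketch of a KD loss with "hardness" h and temperature T, mixing hard-label
# cross-entropy with KL divergence to a teacher. One common formulation;
# details may differ from the paper's implementation.
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, hardness=1.0, temperature=5.0):
    ce = F.cross_entropy(student_logits, labels)
    kl = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)  # standard scaling to keep gradient magnitudes comparable
    return (1.0 - hardness) * ce + hardness * kl

student_logits = torch.randn(4, 3, requires_grad=True)
teacher_logits = torch.randn(4, 3)
labels = torch.randint(0, 3, (4,))
print(kd_loss(student_logits, teacher_logits, labels).item())
```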
Comparing these results with tables 2 and 3, we conclude that RoBERTa is more sensitive to pruning than BERT, although RoBERTabase pruned with OBS remains substantially more accurate than BERTbase for the same level of sparsity. Table 2 shows that pruning RoBERTAbase to 90% with OBS results in a relative drop in F1 of 1.59%, which is three times the relative drop reported for BERTbase with the same pruning algorithm. Moreover, table 3 shows that RoBERTAbase becomes very sensitive to pruning with GMP for sparsities above 85%, with the relative drop in F1 increasing almost threefold between 85% and 90% sparsity. We conjecture that RoBERTa is more sensitive to pruning than BERT because the latter is relatively under-trained (Liu et al., 2019), making the more optimized RoBERTa more sensitive to the loss in expressivity caused by pruning. Model Sparsity F1 Impact BERTbase (Devlin et al., 2019) 0 88.50 N/A BERTlarge (Devlin et al., 2019) 0 90.9 N/A RoBERTabase (Liu et al., 2019) 0 91.13 N/A RoBERTAlarge (Liu et al., 2019) 0 94.60 N/A PruneBertbase (Sanh et al., 2020) 90 84.90 -4.07 % PruneOFAlarge (Zafrir et al., 2021) 90 87.25 -1.41 % oBERTlarge (Kurti\u00b4 c et al., 2022) 90 87.98 -0.58% GMP\u22c6large (Kurtic and Alistarh, 2022) 90 86.7 -2.03% Table 1: Performance of existing dense and sparse language models on the SQUAD v1.1 Question Answering Dataset 3.2 Upstream Compression Based on our fine-tuning experiments, achieving a high degree of sparsity on the RoBERTA model 4 \fSparsity (%) EM Impact F1 Impact 50 84.80 1.01% 91.49 0.40% 60 84.64 0.82% 91.33 0.22% 70 84.42 0.56% 91.13 0.00% 80 84.64 0.82% 91.33 0.22% 85 82.89 -1.26% 90.12 -1.11% 90 82.48 -1.75% 89.68 -1.59% 95 79.01 -5.89% 87.05 -4.47% Table 2: Impact of Sparsity introduced by OBS on the F1 and EM scores of pruned RoBERTa models on the SQUAD V1.1 Dataset Sparsity (%) EM Impact F1 Impact 50 84.90 1.13% 91.46 0.36% 60 84.27 0.38% 90.91 -0.24% 70 83.37 -0.69% 90.30 -0.91% 80 81.64 -2.76% 88.86 -2.49% 85 81.64 -2.76% 88.86 -2.49% 90 76.51 -8.86% 84.90 -6.83% 95 69.39 -17.34% 79.35 -12.93% Table 3: Impact of Sparsity introduced by GMP on the F1 and EM scores of pruned RoBERTa models on the SQUAD V1.1 Dataset leads to improvements in performance, but there are greater than expected losses in accuracy. Additionally, such compression is task-specific and non-amortizable, so we explore how best to generate general pruned RoBERTa models. While we eventually apply the winning set of training combinations to all of our variants of oBERTa, we first seek to answer the following questions: Does GMP or OBS perform better during pretraining pruning? Does Freezing the Embeddings during pretraining pruning further improve performance? Does the use of larger teachers further improve performance? We prune various models while varying individual variables during pretraining to evaluate these questions. We experiment by pruning an oBERTabase (12 layers) model to 90% and 95% sparsity on all non-embedding layers. All pretraining pruning happens using the Wikipedia-BookCorpus dataset, where we train for five epochs using a learning rate of 5e-5 and a batch size of 256 using 4 A100 GPUS. To evaluate the impact of these models, we evaluate performance on the previously used SQUAD v1.1 question-answering dataset, where we train with a fixed training regime of 10 epochs with a learning rate of 1.5e-4 based on the work of Kurtic et al. We train without KD for each finetuning run with an unpruned RoBERTabase or an unpruned RoBERTalarge. 
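Gradual magnitude pruning, as used in these experiments, can be sketched as periodically zeroing the smallest-magnitude weights of each layer until the target sparsity is reached. The schedule and per-layer granularity below are illustrative only, not the exact SparseML recipe used in the paper.

```python
# Illustrative gradual magnitude pruning (GMP): at each pruning step, zero the
# smallest-magnitude weights of every Linear layer up to the current sparsity.
# Schedule and granularity are illustrative, not the exact recipe.
import torch

def prune_step(model, current_sparsity):
    for module in model.modules():
        if isinstance(module, torch.nn.Linear):
            w = module.weight.data
            k = int(current_sparsity * w.numel())
            if k == 0:
                continue
            threshold = w.abs().flatten().kthvalue(k).values
            w[w.abs() <= threshold] = 0.0

model = torch.nn.Sequential(torch.nn.Linear(64, 64), torch.nn.ReLU(), torch.nn.Linear(64, 8))
target, steps = 0.90, 10
for step in range(1, steps + 1):
    # ... training / distillation updates would happen here ...
    prune_step(model, target * step / steps)  # cubic ramps are common; linear here for brevity

zeros = sum((m.weight == 0).sum().item() for m in model.modules() if isinstance(m, torch.nn.Linear))
total = sum(m.weight.numel() for m in model.modules() if isinstance(m, torch.nn.Linear))
print(f"final sparsity: {zeros / total:.2%}")
```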
Details for the hyperparameters used to train all teacher models can be found in the appendix A.5. Comparing the use of OBS vs. GMP as shown GMP OBS Model F1 Impact EM Impact F1 Impact EM Impact RoBERTAbase 92.18 0.00% 85.59 0.00% 92.18 0.00% 85.59 0.00% oBERTa 90% No KD 88.34 -4.17% 80.19 -6.31% 87.72 -4.83% 79.35 -7.29% oBERTa 90% RoBERTAbase KD 88.75 -3.72% 81.35 -4.95% 88.60 -3.88% 81.37 -4.93% oBERTa 90% RoBERTAlarge KD 89.65 -2.75% 83.12 -2.88% 89.63 -2.76% 82.94 -3.09% oBERTa 95% No KD 86.58 -6.07% 78.81 -7.92% 84.90 -7.90% 76.82 -10.25% oBERTa 95% RoBERTAbase KD 86.99 -5.63% 79.41 -7.22% 86.14 -6.55% 78.63 -8.13% oBERTa 95% RoBERTAlarge KD 87.60 -4.97% 80.44 -6.01% 86.14 -6.55% 79.84 -6.72% Table 4: Impact on F1 of SQUAD V1.1 of using OBS vs. GMP as the pruning method during pretraining. Impact measures the relative loss in performance vs. the unpruned RoBERTabase baseline. in table 4, we can see that GMP consistently outperforms OBS. This is the opposite of what is seen when pruning downstream or, in prior work, pruning BERT. Without access to the original training corpus OBS is likely unable to leverage the loss aware saliency importance as well as it can when it has the original dataset. Evaluating the impact of variations in the hardness Hardness 0.5 Hardness 1.0 Model F1 Impact EM Impact F1 Impact EM Impact RoBERTAbase 92.18 0.00% 85.59 0.00% 92.18 0.00% 85.59 0.00% oBERTa 90% No KD 88.21 -4.31% 80.19 -6.31% 88.34 -4.17% 80.19 -6.31% oBERTa 90% Base KD 89.19 -3.25% 81.74 -4.50% 88.75 -3.72% 81.35 -4.95% oBERTa 90% Large KD 90.14 -2.21% 83.51 -2.43% 89.65 -2.75% 83.12 -2.88% oBERTa-95 No KD 85.82 -6.90% 77.77 -9.14% 86.58 -6.07% 78.81 -7.92% oBERTa-95 Base KD 86.98 -5.64% 79.23 -7.43% 86.99 -5.63% 79.41 -7.22% oBERTa-95 Large KD 87.66 -4.91% 80.40 -6.07% 87.60 -4.97% 80.44 -6.01% Table 5: Impact on F1 of SQUAD V1.1 by hardness in KD during pretraining pruning. Impact measures the relative loss in performance vs. the unpruned RoBERTabase baseline. of KD as shown in table 5, there is a bit more of a muted set of conclusions. The 95% sparse models perform better with a hardness of 1.0, while the 90% models do better with a hardness of 0.5. Given that our goal is to preserve most of the RoBERTa model without actually using its large dataset, we set our hardness to 1.0 as it keeps the model from explicitly learning the new dataset. When we evaluate the impact of freezing embeddings during pre-training, as shown in table 6, we find strong evidence that using frozen embeddings consistently leads to worse performance and, as a result, does not freeze embeddings during our model pruning. Looking at the impact of varying the size of the teacher for pretraining KD as shown in table 7, we unsurprisingly find clear evidence 5 \fFrozen Embeddings Trained Embeddings Model F1 Impact EM Impact F1 Impact EM Impact RoBERTabase 92.18 0.00% 85.59 0.00% 92.18 0.00% 85.59 0.00% oBERTabase 90% no KD 87.71 -4.85% 79.62 -6.98% 88.21 -4.31% 80.19 -6.31% oBERTabase 90% RoBERTabase KD 89.7 -2.69% 81.74 -4.50% 89.19 -3.24% 83.07 -2.94% oBERTabase 90% RoBERTalarge KD 89.59 -2.81% 82.98 -3.05% 90.14 -2.21% 83.51 -2.43% Table 6: Impact on F1 of SQUAD V1.1 concerning the use of frozen embeddings or not during pretraining pruning. Impact measures the relative loss in performance vs. the unpruned RoBERTabase baseline. that using a larger teacher during pretraining pruning leads to improvements in performance. 
Using these experiments, we generate the recipe, Base Upstream Teacher Large Upstream Teacher Model F1 Impact EM Impact F1 Impact EM Impact RoBERTAbase 92.18 0.00% 85.59 0.00% 92.18 0.00% 85.59 0.00% oBERTa 90% no KD 88.34 -4.17% 80.59 -5.84% 88.1 -4.43% 80.06 -6.46% oBERTa 90% Base KD 88.75 -3.72% 81.35 -4.95% 89.22 -3.21% 82.02 -4.17% oBERTa 90% Large KD 89.65 -2.74% 83.12 -2.89% 89.98 -2.39% 83.14 -2.86% Table 7: Impact on F1 of SQUAD V1.1 with respect variation is the size of the teacher in KD during pretraining pruning. Impact measures the relative loss in performance vs. the unpruned RoBERTabase baseline. which we then use to create the many variants of oBERTa. We evaluate their performance in Table 17 where it is important to note that these results are accuracy, loss, and perplexity relative to the RoBERTa-large teacher, not the true dataset. The compression recipe, as shown in Figure 2 is as follows: 1. Starting with a pre-trained language model, removing some portion of transformer layers in an interleaved fashion. 2. Using Knowledge Distillation from a large uncompressed model, pre-train the pruned model with a hardness of 1.0 and without freezing embeddings. 3. Using Knowledge Distillation from a large uncompressed model, prune during further pretraining using GMP where sparsity levels are enforced at the parameter level. The resulting model is the sparse-transfer-student. 4. Train an uncompressed large language model on the desired NLP task\u2019s dataset. This is the sparse-transfer teacher. 5. Using the sparse-transfer teacher fine-tune the sparse-transfer-student with knowledge distillation to convergence. Experiment with the use of frozen embeddings and various sizes of sparse-transfer teachers. 6. Using the fine-tuned sparse-transfer student and teacher, train with quantization-aware training. If embeddings were frozen during initial fine-tuning they should be unfrozen here. 4 Experimental Results Based on the aforementioned experiments, we generate 8 variants of oBERTa, each with a different size and sparsity profile; details can be found in table 18. Within this table, we report the impact on the model size as measured by the raw and compressed size of the ONNX 6 model file. Embeddings are unpruned and each layer is pruned to the target sparsity profile independent of the rest of the model. As a result, the overall sparsity profile may vary as modules in the network may not be able to reach exactly 90% or 95% sparsity. Using these inference-optimized models, we evaluate their sparse transfer performance by finetuning these models on their target task using a fixed training regime and minor hyperparameter exploration. For each task, we train them for 10 epochs or 20 (10 of which are Quantization Aware Training), with the longer schedule being reserved for models which are being quantized. We evaluate performance on a benchmark of diverse NLP tasks ranging from question answering, sentiment analysis, document classification, token classification, and text classification. For question answering, we leverage the SQuAD v1.1 (Rajpurkar et al., 2016a) and SQuAD V2.0 (Rajpurkar et al., 2018) datasets. We leverage the SST2 (Socher et al., 2013) dataset for sentiment analysis. For text classification, we use the Quora Duplicate Query Detection (QQP) (SambitSekhar, 2017) and the MNLI (Williams et al., 2018) datasets. We leverage the IMDB (Maas et al., 2011) dataset for document classification and CONLL2003 (Tjong Kim Sang and De Meulder, 2003) for token classification. 
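Step 5 of the recipe, fine-tuning the sparse-transfer student on the tasks listed above while the sparsity profile stays fixed, amounts to recording the zero mask once and reapplying it after every optimizer update. The sketch below is a minimal illustration under that assumption, with a toy linear model standing in for the pruned checkpoint.

```python
# Minimal sketch: fine-tune a pruned model while keeping its sparsity profile
# fixed by capturing the zero mask once and reapplying it after each update.
# A toy model stands in for a pruned sparse-transfer checkpoint.
import torch

def capture_masks(model):
    return {
        name: (module.weight.data != 0).float()
        for name, module in model.named_modules()
        if isinstance(module, torch.nn.Linear)
    }

def reapply_masks(model, masks):
    for name, module in model.named_modules():
        if name in masks:
            module.weight.data.mul_(masks[name])

classifier = torch.nn.Sequential(torch.nn.Linear(16, 4))
classifier[0].weight.data[classifier[0].weight.data.abs() < 0.1] = 0.0  # stand-in for pruning
masks = capture_masks(classifier)
optimizer = torch.optim.AdamW(classifier.parameters(), lr=1e-4)

x, y = torch.randn(8, 16), torch.randint(0, 4, (8,))
loss = torch.nn.functional.cross_entropy(classifier(x), y)
loss.backward()
optimizer.step()
reapply_masks(classifier, masks)  # pruned weights stay at zero through fine-tuning
optimizer.zero_grad()
print(f"sparsity preserved: {(classifier[0].weight == 0).float().mean().item():.2%}")
```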
Looking at performance on question answering as shown in table 8 and 9. Moving to text classification on QQP and MNLI as shown in tables 11 and 10 Shifting focus to document classification 6https://onnx.ai/ 6 \fSparse Transfer Sparse Transfer With Quantization model F1 Recovery EM F1 Recovery EM oBERTabase 92.15 100.00% 85.78 93.18 101.11% 87.29 oBERTabase 90% 90.95 98.69% 84.42 89.46 97.08% 82.61 oBERTabase 95% 89.84 97.49% 83.08 89.23 96.83% 81.12 oBERTaMEDIUM 90.37 98.06% 83.84 83.77 90.91% 90.37 oBERTaMEDIUM 90% 89.26 96.86% 82.18 88.65 96.20% 81.88 oBERTaSMALL 84.87 92.09% 76.55 84.82 92.05% 76.77 oBERTaSMALL 90% 84.66 91.87% 76.18 82.18 92.18% 74.21 Table 8: Sparse Transfer performance of the oBERTA family on the SQUAD V1.1 dataset. The sparse transfer was performed over 10 epochs and sparse transfer with quantization over 20. Recovery is based on the relative performance of the unpruned oBERTabase. Sparse Transfer Sparse Transfer With Quantization model F1 Recovery EM F1 Recovery EM oBERTabase 82.77 100.00% 79.56 85.298 103.06% 82.347 oBERTabase 90% 81.33 98.26% 78.27 81.43 98.38% 78.92 oBERTabase 95% 77.98 94.22% 74.67 78.09 94.35% 74.82 oBERTaMEDIUM 77.51 93.65% 74.25 78.137 94.41% 75.179 oBERTaMEDIUM 90% 76.64 92.60% 73.34 76.24 92.11% 73.51 oBERTaSMALL 71.54 86.44% 67.93 71.591 86.50% 68.087 oBERTaSMALL 90% 70.79 85.53% 67.31 69.35 87.79% 65.21 Table 9: Sparse Transfer performance of the oBERTA family on the SQUAD V2.0 dataset. The sparse transfer was performed over 10 epochs, and sparse transfer with quantization over 20. Recovery is based on the relative performance of the unpruned oBERTabase. Sparse Transfer Sparse Transfer With Quantization model Accuracy Recovery Accuracy(MM) Accuracy Recovery Accuracy(MM) oBERTabase 87.88% 100.00% 87.57% 88.06% 100.20% 88.01% oBERTabase 90% 85.17% 96.91% 84.73% 85.09% 96.83% 84.76% oBERTabase 95% 84.32% 95.95% 84.08% 83.73% 95.28% 83.83% oBERTaMEDIUM 85.29% 97.05% 85.17% 83.62% 95.15% 83.74% oBERTaMEDIUM 90% 81.61% 92.87% 81.32% 82.37% 93.73% 81.79% oBERTaSMALL 80.80% 91.95% 81.55% 81.10% 92.29% 81.51% oBERTaSMALL 90% 79.23% 90.15% 79.24% 79.14% 90.06% 79.42% Table 10: Sparse Transfer performance of the oBERTA family on the MNLI dataset. Sparse transfer was performed over 10 epochs and sparse transfer with quantization over 20. Recovery is based on the relative performance of the unpruned oBERTabase. Sparse Transfer Sparse Transfer With Quantization model Accuracy Recovery F1 Combined Accuracy Recovery F1 Combined oBERTabase 91.52% 100.00% 90.09% 88.66% 89.86% 98.18% 88.12% 86.73% oBERTabase 90% 91.01% 99.44% 89.47% 87.92% 91.21% 99.66% 89.68% 88.16% oBERTabase 95% 90.85% 99.26% 89.21% 87.58% 90.72% 99.12% 89.08% 0.87% oBERTaMEDIUM 91.35% 99.81% 89.90% 88.44% 91.33% 99.79% 89.80% 88.28% oBERTaMEDIUM 90% 90.48% 98.86% 88.85% 87.21% 90.60% 99.00% 89.01% 87.42% oBERTaSMALL 90.72% 99.13% 89.21% 87.71% 89.74 98.06% 87.99 86.25 oBERTaSMALL 90% 89.74% 98.06% 87.99% 86.25% 89.73 98.04% 87.98 86.08 Table 11: Sparse Transfer performance of the oBERTA family on the QQP dataset. The sparse transfer was performed over ten epochs, and sparse transfer with quantization over 20. Recovery is based on the relative performance of the unpruned oBERTabase. 
as shown in table 12 and sentiment analysis in 13 Finally, looking at performance on token classification as shown in table 14 4.1 Inference Benchmark To evaluate the performance of our inferenceoptimized models, we benchmark performance usSparse Transfer Sparse Transfer With Quantization model Accuracy Recovery Accuracy Recovery oBERTabase 95.24% 100.00% 95.44% 100.21% oBERTabase 90% 93.64% 98.32% 93.28 97.94% oBERTabase 95% 93.48% 98.15% 92.80 97.23% oBERTaMEDIUM 93.36% 98.03% 94.08 98.78% oBERTaMEDIUM 90% 92.24% 96.85% 92.08 96.69% oBERTaSMALL 93.04% 97.69% 92.52 97.15% oBERTaSMALL 90% 91.60% 96.18% 91.28 95.84% Table 12: Sparse Transfer performance of the oBERTA family on the IMDB dataset. The sparse transfer was performed over ten epochs, and sparse transfer with quantization over 20. Recovery is based on the relative performance of the unpruned oBERTabase. Sparse Transfer Sparse Transfer With Quantization model Accuracy Recovery Accuracy Recovery oBERTabase 94.60 100.00% 92.66 97.95% oBERTabase 90% 92.78 98.08% 92.546 97.83% oBERTabase 95% 91.51 96.74% 91.399 96.62% oBERTaMEDIUM 92.89 98.19% 91.06 96.26% oBERTaMEDIUM 90% 88.76 93.83% 89.91 95.04% oBERTaSMALL 90.48 95.64% 91.28 96.49% oBERTaSMALL 90% 89.34 94.44% 88.65 93.71% Table 13: Sparse Transfer performance of the oBERTA family on the SST-2 dataset. The sparse transfer was performed over ten epochs, and sparse transfer with quantization over 20. Recovery is based on the relative performance of the unpruned oBERTabase. Sparse Transfer Sparse Transfer With Quantization model Accuracy Recovery F1 Accuracy Recovery F1 oBERTabase 99.26% 100.00% 95.51% 99.30% 100.05% 95.98% oBERTabase 90% 99.11% 99.85% 94.98% 99.05% 99.79% 94.51% oBERTabase 95% 98.89% 99.63% 93.32% 98.75% 99.48% 92.61% oBERTaMEDIUM 99.04% 99.77% 94.39% 99.18% 99.92% 95.15% oBERTaMEDIUM 90% 98.79% 99.53% 93.31% 98.73% 99.46% 92.70% oBERTaSMALL 99.01% 99.75% 94.00% 98.98% 99.72% 94.13% oBERTaSMALL 90% 98.47% 99.20% 91.13% 98.25% 98.98% 89.79% Table 14: Sparse Transfer performance of the oBERTA family on the CONLL-2003 dataset. The sparse transfer was performed over ten epochs, and sparse transfer with quantization over 20. Recovery is based on the relative performance of the unpruned oBERTabase. ing the popular DeepSparse library version 1.3.2 7 and an Intel Xeon Gold 6238R Processor. Performance is measured using models that have been sparse-transferred to the SQuAD v1.1 dataset and exported to a standard ONNX model format. Benchmarks are run on 4 and 24 cores and a sequence length of 384 with batch sizes of 1, 16, and 64. For each model, the benchmark is run for 60 seconds with a warm-up period of 10 seconds, and we report the throughput (items per second) and the mean, median, and standard deviation per item latency. 
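The benchmarking protocol described above (fixed sequence length, a warm-up period, then a timed window reporting throughput and per-item latency) can be reproduced with any inference backend. The sketch below uses ONNX Runtime purely for illustration rather than the DeepSparse engine used in the paper; the model path and input names are placeholders.

```python
# Simple latency/throughput harness mirroring the protocol above: warm up,
# then run for a fixed window and report items/second and per-item latency.
# Uses ONNX Runtime for illustration (the paper uses DeepSparse); the model
# path and input names are placeholders.
import time
import numpy as np
import onnxruntime as ort

BATCH, SEQ_LEN, WARMUP_S, RUN_S = 1, 384, 10, 60
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
feed = {
    "input_ids": np.random.randint(0, 30000, (BATCH, SEQ_LEN), dtype=np.int64),
    "attention_mask": np.ones((BATCH, SEQ_LEN), dtype=np.int64),
}

def run_for(seconds):
    latencies, end = [], time.time() + seconds
    while time.time() < end:
        start = time.perf_counter()
        session.run(None, feed)
        latencies.append(time.perf_counter() - start)
    return latencies

run_for(WARMUP_S)                      # warm-up period
latencies = np.array(run_for(RUN_S))   # timed window
print(f"throughput: {BATCH * len(latencies) / RUN_S:.1f} items/s")
print(f"latency mean/median/std (ms): "
      f"{1e3 * latencies.mean():.2f} / {1e3 * np.median(latencies):.2f} / {1e3 * latencies.std():.2f}")
```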
We present a set of summary statistics of relative speedup across batch sizes and infer7pip install deepsparse==1.3.2 7 \f24 Cores 4 Cores Model BS 1 BS 16 BS 64 BS 1 BS 16 BS 64 BERTbase 1.00 1.00 1.00 1.00 1.00 1.00 oBERTabase 1.00 1.00 1.00 1.00 1.00 1.00 oBERTabase Quantized 3.10 4.29 4.46 4.09 4.31 4.32 oBERTabase 90% 3.29 3.80 3.80 3.60 3.34 3.40 oBERTabase 90% Quantized 4.12 7.05 7.37 7.67 7.59 7.40 oBERTabase 95% 8.72 4.56 4.65 4.12 3.85 4.37 oBERTabase 95% Quantized 4.73 8.22 8.56 9.41 9.06 8.68 oBERTaMEDIUM 1.96 1.99 1.99 1.96 1.99 2.02 oBERTaMEDIUM Quantized 6.20 8.04 8.44 8.43 8.33 8.45 oBERTaMEDIUM 90% 6.35 7.41 6.84 7.83 6.56 6.72 oBERTaMEDIUM 90% Quantized 8.94 12.86 13.65 14.99 14.81 14.95 oBERTaSMALL 3.89 3.96 3.99 3.95 3.97 4.03 oBERTaSMALL Quantized 12.47 14.12 14.08 15.50 15.48 15.70 oBERTaSMALL 90% 12.22 14.40 14.67 14.05 14.19 14.13 oBERTaSMALL 90% Quantized 16.21 21.35 23.96 29.77 27.14 27.58 Table 15: Latency reduction of the oBERTa family concerning the unpruned oBERTabase as measured on 24 and 4 cores. Speedup is measured relative to the latency reduction in MS/batch, and BS refers to batch size. ence server configurations as shown in table 15. Full inference performance results can be found in the appendix. In analyzing performance, we can see that the introduction of quantization to a dense model delivers roughly a 4x speedup while quantization on sparse models is closer to 2x. With the introduction of sparsity, 90% leads to slightly under 4x speedup, while 95% leads to slightly over 4x. The impact of structural pruning is roughly equivalent to the size of the as a 6-layer model is two times faster than a 12-layer, and a 3-layer model is four times faster. Combing compression forms is only partially additive, as a small (3-layer) 90% quantized model performance is 24x vs the expected 32x (4x from structural pruning, 2x quantization, 4x unstructured pruning. Looking at the variation in a speedup by batch size and the number of cores, we can see that allocating more cores leads to a smaller gap in inference speedup, especially with small batches. From this, we extract that compression is significant when performing streaming inference (batch size 1) on smaller CPUs. Next, we go ahead and benchmark the oBERTa model performance against existing sparse-transfer models such as oBERT and PruneOFA using the models that have been published 8 in Neural Magic\u2019s Sparse-Zoo 9. We run these models using four cores and a batch size of 1 and compare their speedup (or slowdown) relative to their per8Since the PruneBERT model is not available in the zoo, we extrapolate numbers using the performance of our oBERTabase pruned 90% as both models feature 12 transformer encoders and 90% sparsity. 9https://sparsezoo.neuralmagic.com/ formance on the SQUAD v1.1 question-answering benchmark. Results can be found in table 16 and full results in 45. Looking at the improvements in accuracy and inference throughput, we find the oBERTa models are 1.3 to 4 times better than models with approximately the same accuracy. Looking at the competitive results, we find Vs. BERTbase Vs. 
BERTlarge Model F1 Recovery Speedup Recovery Speedup oBERTabase 90% 91.00 102.77% 3.57 100.44% 20.21 oBERTlarge 95% Quantized 90.21 101.87% 3.41 99.57% 19.31 prunedOFAlarge 90% Quantized 89.96 101.59% 2.38 99.29% 13.47 oBERTabase 90% Quantized 89.46 101.03% 7.62 98.74% 43.07 oBERTaMEDIUM 90% 89.26 98.99% 7.78 96.75% 43.99 obertbase 90% Quantized 88.00 99.38% 6.96 97.13% 39.37 oBERTaSMALL 90% 84.66 90.97% 13.95 88.91% 78.91 pruneBERT 90% 84.90 95.88% 3.57 93.71% 73.82 Table 16: Speedups of the oBERTa-family compared to existing published sparse models compared to the performance of BERTbase and BERT-large. Speedup measures the reduction in latency of MS/batch. oBERTabase 90% exceeds the accuracy of oBERTlarge 95% quantized despite being faster, oBERTabase 90% quantized performs at the level of pruneOFAlarge 90% Quantized despite being 3x faster, oBERTaMEDIUM 90% can outperform oBERTbase 90% Quantized despite being 30% faster, and oBERTaSMALL 90% performs on par with pruneBERT 90% despite being nearly four times faster. that the oBERTa-* models can deliver significant gains in performance (F1) relative to speedups. The oBERTabasePruned 90% Quantized model achieves an undertaking that nearly matches pruneOFA-large 90% Quantized while delivering nearly 13x faster inference. Similarly, the oBERTASMALL 90% model provides similar accuracy to PruneBERT despite being over four times faster. 5 Discussion Sparse Models require higher learning rates as shown in the tables in A.8 sparse language models can be used as general-purpose contextual language models but require the use of a much higher learning rate. When using structurally pruned models like the 6-layer oBERTaMEDIUM and the 3-layer oBERTaSMALL, the optimal learning rate does not vary much within the same task despite the model size. With the introduction of sparsity, the learning rate needs to scale, usually by a factor of five or ten. We find this counterintuitive as the sparse models have fewer parameters to tune, so we would expect 8 \fthem to prefer a much lower learning rate. We attribute this to the loss of expressivity in the network driven by its sparsity. Since the network has fewer degrees of freedom to optimize the points which can be optimized move much more than those that cannot. Larger models compress better as shown by the gap between the sparse and dense models and the gap between models and their quantized counterparts. While 12-layer models can receive 90 or 95 % sparsity and quantization with little to no loss in accuracy, the three and 6-layer models see a much bigger dip. This aligns with Li et al. 2020 (Li et al., 2020) in which they demonstrate that larger models are more robust to pruning and quantization. Empirically, this makes sense as the smaller models have fewer degrees of freedom, and other portions of the network cannot counteract the reduction in expressivity caused by pruning and quantization. Bigger Teachers are not always better as shown in the table in A.9 the introduction of larger teachers does not always lead to improvements in accuracy. The impact is highly task and model dependent as some datasets like MNLI or QQP see little impact in using larger teachers, yet datasets like SQUAD or SQUAD v2.0 see large impacts, which are even more pronounced when the student model is smaller. Frozen embeddings can help, but not always. As shown by A.10 the impact of freezing the embeddings is highly task-specific and inconsistent across tasks or models. 
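Mechanically, freezing or later unfreezing the embedding layer discussed here is a one-line toggle in PyTorch. The sketch below assumes the Hugging Face RoBERTa layout, where the embedding module lives at model.roberta.embeddings; it is an illustration of the mechanism, not the paper's training script.

```python
# Sketch: toggle embedding training for a RoBERTa-style classifier.
# Assumes the Hugging Face layout where embeddings live at model.roberta.embeddings.
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=2)

def set_embeddings_trainable(model, trainable: bool):
    for param in model.roberta.embeddings.parameters():
        param.requires_grad = trainable

set_embeddings_trainable(model, False)  # frozen during initial sparse-transfer fine-tuning
# ... fine-tune ...
set_embeddings_trainable(model, True)   # unfrozen again for quantization-aware training
```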
In question answering, freezing leads to 1-2 point movement for unpruned models and 5-7 points for pruned models. In other tasks like QQP and MNLI, the impact of frozen embeddings tends to be minor or none. 6 Limitations While our approach is effective at compressing models, it is not the most efficient. In order to discover the most optimal compression approaches and evaluate their performance performed hundreds of experiments. As a result, scaling our approach to every novel language understanding language model is not tractable. Another limitation of our work is we did not track the complete compute utilization of our entire experimentation process but we can provide some estimates. Experiments in pruning during fine-tuning leveraged a single V100 16 GB GPU and took approximately 14 hours per experiment. The pre-training of structurally pruned models with knowledge distillation required 4 A100 40GB GPUs for approximately 72 hours. Pruning during pre-training with Knowledge distillation required approximately 100 hours on the same setup. Task-specific fine-tuning happened on a single V100 16GB GPU and depending on the size of the task was anywhere from a few minutes to 20 hours. Based on all of our experiments we estimate 400 V100 hours of pruning during fine-tuning, roughly 16,000 A100 hours10 for pretraining, and assuming an average of 10 V100 hours per sparse transfer run, a total of 4000 V100 hours for sparse-transfer and sparse-transfer with quantization. 7" + }, + { + "url": "http://arxiv.org/abs/2211.15927v1", + "title": "Compressing Cross-Lingual Multi-Task Models at Qualtrics", + "abstract": "Experience management is an emerging business area where organizations focus\non understanding the feedback of customers and employees in order to improve\ntheir end-to-end experiences. This results in a unique set of machine learning\nproblems to help understand how people feel, discover issues they care about,\nand find which actions need to be taken on data that are different in content\nand distribution from traditional NLP domains. In this paper, we present a case\nstudy of building text analysis applications that perform multiple\nclassification tasks efficiently in 12 languages in the nascent business area\nof experience management. In order to scale up modern ML methods on experience\ndata, we leverage cross lingual and multi-task modeling techniques to\nconsolidate our models into a single deployment to avoid overhead. We also make\nuse of model compression and model distillation to reduce overall inference\nlatency and hardware cost to the level acceptable for business needs while\nmaintaining model prediction quality. Our findings show that multi-task\nmodeling improves task performance for a subset of experience management tasks\nin both XLM-R and mBert architectures. Among the compressed architectures we\nexplored, we found that MiniLM achieved the best compression/performance\ntradeoff. Our case study demonstrates a speedup of up to 15.61x with 2.60%\naverage task degradation (or 3.29x speedup with 1.71% degradation) and\nestimated savings of 44% over using the original full-size model. 
These results\ndemonstrate a successful scaling up of text classification for the challenging\nnew area of ML for experience management.", + "authors": "Daniel Campos, Daniel Perry, Samir Joshi, Yashmeet Gambhir, Wei Du, Zhengzheng Xing, Aaron Colak", + "published": "2022-11-29", + "updated": "2022-11-29", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG", + "I.2.7" + ], + "main_content": "Introduction Experience management enables businesses and organizations to effectively adapt to actionable feedback from their customers and employees. Understanding and managing a customer or employee experience requires analyzing a combination of survey questions, social media, and other sources of experience data together in order to derive insight. Deriving insight requires effectively predicting the feelings, emotions, as well as the requested actions from feedback in a scalable way. To accurately predict these insights we leverage state-of-the-art pretrained language models. \fCompressing Cross-Lingual Multi-Task Models at Qualtrics A PREPRINT However, many of the best pretrained models require signi\ufb01cant resources to deploy at scale. For example, the best performing models for tasks such as semantic similarity of sentences Wang et al. [2018] can have hundreds of billions of parameters Smith et al. [2022]. Even more pedestrian (in terms of size) models such as BERT-base Devlin et al. [2019] can still be relatively expensive and latency-prone for a typical business use case, especially without specialized hardware or accelerators. One way to both achieve high prediction accuracy and scale up is to leverage model compression. While there is substantial literature on model compression, it can be dif\ufb01cult to sort through all the methods and evaluate them on a case by case basis. Our contribution is a speci\ufb01c case study evaluation of model compression methods for Qualtrics models on experience management data and its unique challenges. In this work, we are particularly interested in building ef\ufb01cient text classi\ufb01ers, an important problem in the experience management domain. Indeed, unstructured data constitutes more than 80% of the experience management data. As such, analyzing text data across dimensions such as sentiment, emotion, actionability, effort, intent, topic, urgency, and toxicity is one of the most the foundational challenges in this emerging space. We share details about what worked and did not, which can bene\ufb01t the industry at large as others begin to adopt model compression in their organizations for their use cases. This is particularly timely in our current market as many companies in emerging business areas are looking to reduce costs and model compression is an effective way to reduce ML inference costs, both \ufb01nancially and environmentally. 1.1 Motivating Constraints: Engineering Overhead, Cost, Latency Our goal in pursuing this compression and consolidation work was to reduce overall model hosting costs while preserving model quality. Two areas we are focused on are reducing burden for engineering support of hosting our new models in production, as well as the direct cost and latency of the models themselves. Since we deploy and support our models in a microservices framework, each model typically lives behind a speci\ufb01c model endpoint or service so each model has a static cost for the base capacity, and variable cost for the elastic capacity. 
If we use single-task monolingual models, this results in needing to support in production a speci\ufb01c service per task language pair. Similarly, for single-task NLP models, the encoder, which can account for 90+% of the computation for classi\ufb01cation models, must run for each task, regardless of how similar it is between tasks. In contrast, a multi-task cross-lingual model consolidates this repetitive computation and removes the instance hosting overhead for additional languages. For this reason, we focused on the ability to support multiple tasks per model, as well as a cross-lingual model. In addition, by developing smaller models, we hope to achieve reduced latency for runtime needs while also reducing costs by providing \ufb02exibility to deploy on less costly hardware. 1.2 The Tension Between Model Consolidation and Compression There is an interesting tension that arises as we both combine multiple models into a single multi-task cross-lingual model and also reduce the size and capacity of that model. While prior work has also looked at these different facets of model consolidation and compression in isolation Wang et al. [2020a,b], Mukherjee et al. [2021], Jiao et al. [2021], Sanh et al. [2019], Jiao et al. [2020], de Wynter and Perry [2020], Yang et al. [2019], in this work we investigate how these approaches work together to consolidate and compress a model, and how that impacts model performance on the target tasks. We are unable to analyze this tension for all NLP tasks in general, but here we present evidence towards understanding the tradeoffs for speci\ufb01c cases, relevant to work at our company. These results can inform future theoretical work as well as more practical application at other organizations. 2 Cross-Lingual Multi-Task (XLMT) Model Compression Methods As described above, we are motivated to both consolidate task and language support into a single cross-lingual multitask (XLMT) model and at the same time pursue a compressed version of that model to reduce capacity and make the model faster and less expensive to run. 2.1 Cross-lingual Modeling There has been a strong movement towards multi-lingual and cross-lingual models. One of the \ufb01rst multi-lingual BERT models was \u201cmulti-lingual BERT\u201d (mBert), from Devlin et al. [2019], which extended \u201cmonolingual BERT\u201d by training across a dataset with multiple languages represented. Cross-lingual modeling (XLM), presented in 2 \fCompressing Cross-Lingual Multi-Task Models at Qualtrics A PREPRINT Conneau and Lample [2019], further improved over multi-lingual modeling by introducing additional cross-lingual pretraining tasks, and XLM-Roberta (XLM-R) Conneau et al. [2019] developed a strong cross-lingual model using techniques from Roberta Liu et al. [2019] and showed better performance beyond previous multi-lingual and crosslingual models. In this work we show results using both the mBert and XLM-R pretrained models on which we build our task-speci\ufb01c classi\ufb01ers. In the original paper Conneau et al. [2019] the authors showed a decrease in model performance as more and more languages were introduced. We explore the effect of training on monolingual vs cross-lingual settings, and how it impacts our combined model performance. 2.2 Multi-task Learning for NLP Multi-task learning (MTL) can not only merge tasks into a single model but also improve task performance by sharing common layers. For instance, Lin et al. 
[2018] proposed an architecture that shares the same character embedding layer showing effective results for low-resource settings. Other types of MTL include hierarchical architectures, such as Vijayaraghavan et al. [2017] where separate tasks are learned and then combined using a \ufb01nal attenuation layer and He et al. [2019] where the \ufb01rst task output feeds into a second task in sequence. In this work we explore how combining multiple tasks into a single cross-lingual model impacts performance on each of those tasks individually. Our approach leverages a common base model with multiple task heads. The multi-task multiclass classi\ufb01cation loss function we use consists of a simple sum of cross-entropy losses, LMT = 1 N T X t=1 N t X i=1 \" \u2212 X c\u2208Ct \u0000\u2113t i,c log pt i,c \u0001 # , (1) where N = PT t=1 N t is the total number of data points from T tasks and N t is the number of data points for the t-th task. Ct is the number of classes for task t. \u2113t i,c is either 0 or 1, indicating whether class label c is the correct classi\ufb01cation of the i-th data point from the t-th task, and pt i,c are the corresponding predicted probabilities. 2.3 Model Compression 2.3.1 Knowledge Distillation Knowledge distillation (KD) popularized by Hinton et al. [2015] and aims to create smaller models which approximate the performance of the larger models by teaching the smaller model (student model) to emulate the larger model (teacher model). The original approach used the \ufb01nal layer logit-based knowledge distillation, where the concept is to minimize the distance (i.e., KL divergence loss function) of logit output (\ufb01nal layer) between teacher and student models. Later work, including many applications in NLP, introduced variations on this idea, including Sanh et al. [2019] which applied a combined loss, including masked language modeling loss, cosine distance loss, and KL divergence loss to reduce BERT model size. More generally, we can also align the intermediate features between the teacher and the student models rather than just the \ufb01nal layer, such as Jiao et al. [2020] which used many intermediate layers for distillation. MiniLM was introduced in Wang et al. [2020a] using self-attention distribution transfer and self-attention value-relation transfer to achieve competitive performance in both monolingual and multilingual models. In this work, we have primarily investigated distilling using the task speci\ufb01c logits produced by the \ufb01nal layer. Exploring additional intermediate representation distillation is left to future work to potentially improve performance in the smallest models we tested. Focusing on the last layer results in the following modi\ufb01ed loss: LMT-KD = 1 N T X t=1 N t X i=1 \" \u2212 X c\u2208Ct \u2113t i,c log pt i,c ! +\u03b1F 2 X c\u2208Ct \u02c6 qt i,c log \u02c6 qt i,c \u02c6 pt i,c !# , (2) where qt i,c is the teacher model prediction of the i-th data point from the t-th task, \u02c6 qt i,c = exp(qt i,c/F ) P j exp(qt j,c/F ) is the temperature modi\ufb01ed teacher prediction, \u02c6 pt i,c = exp(pt i,c/F ) P j exp(pt j,c/F ) is the temperature modi\ufb01ed student prediction, F is the 3 \fCompressing Cross-Lingual Multi-Task Models at Qualtrics A PREPRINT temperature parameter Hinton et al. [2015], and \u03b1 is the teacher coef\ufb01cient term controlling the relative impact of distillation to the label loss. 2.3.2 Structural Pruning In LeCun et al. 
[1989] the author introduced a notion that neural networks can be compressed by removing entire sections without major impact to accuracy. Structural pruning compress networks by removing entire structural components like attention heads, neurons, and even transformer layers, and leverage KD to limit model degradation. While most previous work in compression has focused on monolingual models, there is also a growing body of work around multilingual and cross-lingual model compression Jiao et al. [2021], Mukherjee and Awadallah [2020], Mukherjee et al. [2021], Wang et al. [2020a,b]. We focus on two speci\ufb01c compressed architectures, MiniLM Wang et al. [2020b] and XtremeDistil Mukherjee et al. [2021] and compare them in our use case. Ultimately we found MiniLM to be the most effective at learning our speci\ufb01c set of tasks. 2.3.3 Quantization Quantization enables reduction in model size and memory footprint while also potentially increasing inference speed. Here we consider integer quantization, in which the precision is reduced from 32-bit \ufb02oating point to 8-bit integer. Quantization can be done during training, known as quantization aware training (QAT), to minimize degradation, or after training, known as post training quantization (PTQ), to compress an already trained model. Zafrir et al. [2019] shows that by leveraging QAT, their \"Q8Bert\" quantized model was able to match the performance of the base BERT model on various NLP tasks. In this work we explore combining quantization via QAT with structural pruning to further reduce the model size while maintaining good model performance. 3 Experimental Results Our core set of results are developed around a multi-task cross-lingual model developed internally at Qualtrics to help develop understanding around customer feedback. The model handles three separate but related multiclass classi\ufb01cation tasks on text input, we refer to these tasks throughout this paper as Task-1, Task-2, and Task-3. They refer to three text classi\ufb01cation tasks our group actively uses and develops, with similarities to models such as sentiment or toxicity prediction He et al. [2019]. Each of these task is implemented as a sequence classi\ufb01cation task where the input is direct customer feedback. Task-1 is a multi class classi\ufb01cation with 6 labels, Task-2 is a multi-class classi\ufb01cation with 4 labels, and Task-3 is a multi-class, multi-label sequence classi\ufb01cation with 9 classes and each class has independent binary labels. In our experiments our focus is on exploring the relationship between knowledge distillation, multi-task modeling, quantization and multilingualism. We do not seek to provide a complete understanding of how each axis impacts the outcomes but instead seek to \ufb01nd the optimal way to optimize the performance of pruned and quantized model by exploring the impact of multi-lingual \ufb01ne tuning, variations in knowledge distillation, and task speci\ufb01c teachers. 3.1 Dataset and Compressed Architecture Selection Our dataset consists of a collection of internal customer experience data across multiple industries. The data has been fully anonymized and aggregated, and is used with permission. This process protects customer data privacy and ensures data from any speci\ufb01c industry or company is not over-represented or identi\ufb01able. The resulting text dataset consists of 257k text documents across 16 languages labeled for Task-1, and 127k text documents across 12 languages labeled for Task-2 and Task-3. 
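The multi-task objective in Eq. (1) and its distillation variant in Eq. (2) reduce to summing, over each task's batch, a cross-entropy term on the labels plus a temperature-scaled KL term against the teacher, normalized by the total number of data points. The sketch below is a direct but simplified reading of those equations for the multi-class tasks (Task-3's multi-label head would use a BCE term instead); it is not the production training code.

```python
# Sketch of the multi-task distillation loss of Eq. (2): per-task cross-entropy
# on the labels plus a temperature-scaled KL term against the teacher, averaged
# over all data points from all tasks. Simplified reading, not production code.
import torch
import torch.nn.functional as F

def multi_task_kd_loss(task_batches, alpha=1.0, temperature=2.0):
    """task_batches: list of (student_logits, teacher_logits, labels), one per task."""
    total_loss, total_n = 0.0, 0
    for student_logits, teacher_logits, labels in task_batches:
        ce = F.cross_entropy(student_logits, labels, reduction="sum")
        kl = F.kl_div(
            F.log_softmax(student_logits / temperature, dim=-1),
            F.softmax(teacher_logits / temperature, dim=-1),
            reduction="sum",
        )
        total_loss = total_loss + ce + alpha * (temperature ** 2) * kl
        total_n += labels.size(0)
    return total_loss / total_n

batches = [(torch.randn(8, 6), torch.randn(8, 6), torch.randint(0, 6, (8,))),   # Task-1: 6 labels
           (torch.randn(8, 4), torch.randn(8, 4), torch.randint(0, 4, (8,)))]   # Task-2: 4 labels
print(multi_task_kd_loss(batches).item())
```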
A description of the task types, number of labels, and label can be seen in Table 1. This experimental data is similar in nature to the production datasets used in our production modeling system. For modeling we primarily use PyTorch Paszke et al. [2019] and the Transformers library Wolf et al. [2019]. For model quantization, we made use of the SparseML library Kurtz et al. [2020], Singh and Alistarh [2020]. Instead of developing our own target architecture, we leverage existing cross-lingual models from the literature as a \ufb01rst approach to model compression. After a review of the literature, we settled on experimentation around two crosslingual models, XtremeDistil Mukherjee and Awadallah [2020], Mukherjee et al. [2021] and MiniLM Wang et al. [2020a], Wang et al. [2020b]. We summarize the characteristics of the architectures evaluated in Table 2, where the smallest model considered was 6 layers and 22M parameters. 4 \fCompressing Cross-Lingual Multi-Task Models at Qualtrics A PREPRINT Task-1 Task-2 Task-3 (Multi-class) (Multi-class) (Multi-label) # Samples # Samples # Samples T1-L1 93738 T2-L1 83545 T3-L1 33463 T1-L2 70218 T2-L2 36018 T3-L2 21562 T1-L3 38786 T2-L3 4317 T3-L3 22556 T1-L4 26359 T2-L4 3198 T3-L4 1090 T1-L5 18792 T3-L5 7525 T1-L6 9837 T3-L6 44485 T3-L7 11341 T3-L8 2518 T3-L9 1951 Table 1: Breakdown of task label distribution. Task labels are listed as T#-L#, where T1-L1 represents Label 1 for Task-1. Name #Layer #Param Size XLM-R(XLM-R Base) 12 85M 1.12GB XtremeDistil 6 22M 91MB MiniLM-L12(mMiniLMv2) 12 41M 236MB MiniLM-L6(mMiniLMv2) 6 22M 215MB Table 2: Description of model architectures evaluated. #Params refers to the number of transformer parameters. To further narrow to a single cross-lingual model, we performed an experiment using a subset of our datasets that covered 11 languages and evaluated how well the models perform in two settings: with a distillation teacher and without a teacher. The subset contained 57k responses labeled for Task-1 and 20k labeled for Task-2 and Task-3. Method Task-1 Task-2 Task-3 XLM-R 83.32 80.81 39.41 Xtremedistil (no teacher) 67.69 69.24 31.23 Xtremedistil (with teacher) 67.82 70.99 28.93 MiniLM-L12 (no teacher) 80.79 77.55 35.99 MiniLM-L12 (with teacher) 81.43 78.44 36.54 Table 3: Results on each task for each model architecture, reported in Macro-F1. All models were trained for 2 epochs and reported results are the per-task macro F1 scores. This experiment, as shown in Table 3, indicated that MiniLM (and its variants) would be easier to train and perform model distillation in our setting. Due to the above results, for compressed models we targeted the MiniLM-L12 architecture. Our de\ufb01nition of performing better, worse, or the same was based on the 95th percentile con\ufb01dence interval of a random sample of 5 models trained from different random seeds. If we observe differences greater than these intervals we consider them signi\ufb01cant; otherwise we consider the result to be the same. 3.2 Cross-Lingual Model Results Our goal in developing a cross-lingual model is to reduce the overhead of hosting multiple monolingual models. However, the single cross-lingual model should perform at least on par with the monolingual model. To test this assumption, we trained a single cross-lingual model and tested it across all languages. We then trained 12 separate monolingual models, starting from the publicly available XLMR-base pretrained weights (to avoid confounding factors from alternative monolingual base models). 
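Evaluating a single cross-lingual model against per-language slices, as in this benchmark, only requires grouping the evaluation set by language before scoring. The sketch below uses scikit-learn's macro-F1 with a toy data structure standing in for the real evaluation sets.

```python
# Sketch: score one model's predictions per language slice with macro-F1,
# mirroring the cross-lingual vs. monolingual comparison. The records are
# toy stand-ins for the real evaluation sets.
from collections import defaultdict
from sklearn.metrics import f1_score

records = [  # (language, gold label, predicted label)
    ("en", 1, 1), ("en", 0, 1), ("fr", 2, 2), ("fr", 1, 1),
    ("de", 0, 0), ("de", 2, 1), ("ja", 1, 1), ("ja", 0, 0),
]

by_lang = defaultdict(lambda: ([], []))
for lang, gold, pred in records:
    by_lang[lang][0].append(gold)
    by_lang[lang][1].append(pred)

for lang, (gold, pred) in sorted(by_lang.items()):
    print(f"{lang}: macro-F1 = {f1_score(gold, pred, average='macro'):.3f}")
```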
We then evaluated these monolingual models against the same crosslingual evaluation dataset as a benchmark. A summary of results is shown in Table 4, where we report results for fr, en, de, and ja languages. We also evaluated 8 other languages and observed the same overall relative results. The best monolingual Task-1 result overall was 73.39 (en) and worst was 14.52 (pl), the corresponding cross-lingual reaching 79.12 (pl). The best cross-lingual result was 91.65 (en) and the worst was 71.05 (ko) with the corresponding monolingual result dipping to 48.84 (ko). We observe in every language we examined the cross-lingual model does better than the monolingual model, strongly supporting a move to cross-lingual modeling for our tasks. 5 \fCompressing Cross-Lingual Multi-Task Models at Qualtrics A PREPRINT Train Lang Eval Lang Task-1 Task-2 Task-3 all fr 87.69 82.43 36.16 fr 68.91 74.04 26.24 all en 91.65 79.67 40.47 en 73.39 77.2 33.69 all de 86.07 77.74 34.71 de 68.71 70.65 23.52 all ja 80.3 70.71 32.2 ja 56.22 64.21 15.9 Table 4: Cross-lingual model comparison with monolingual models, evaluated on the same target language. Across all languages and tasks we evaluated, we observed the cross-lingual models to outperform monolingual models. 3.3 Cross-Lingual Multi-Task (XLMT) Model Results We are also interested in combining multiple tasks into a single model to reduce the engineering overhead of hosting our model. To evaluate whether our model maintained similar performance to single-task models, we evaluated the combined XLMT model in comparison to the single-task models for both XLM-R and mBert pretrained models. The experimental results in Table 5 show that the XLMT model performed similarly to, if not better than, the single-task model on Task-1 and Task-2 prediction. For Task-3 we observed some signi\ufb01cant degradation in the task performance. To further con\ufb01rm these results, we performed a similar analysis using another multilingual model, mBert. Using mBert, we again observed some modest gains for the \ufb01rst two tasks and then signi\ufb01cant degradation for the third task. These results indicate our current multi-task architecture does bene\ufb01t two of the three tasks. However, for \ufb01nal deployment it will be important to consider moving our third task into a separate model or develop alternative multitask architectures to reduce the performance gap. Model Train Method Eval Task F1 XLM-R multi-task Task-1 82.23 Task-2 76.03 Task-3 38.32 single-task Task-1 81.12 Task-2 74.67 Task-3 51.27 mBert multi-task Task-1 78.88 Task-2 75.27 Task-3 35.88 single-task Task-1 78.63 Task-2 74.31 Task-3 51.12 Table 5: Task-speci\ufb01c results for cross-lingual single-task models and multi-task models. Macro-F1 results are reported on the full evaluation set, consisting of all languages (16 for Task-1, 12 for Task-2/3). 3.4 Compressed XLMT Model Results In developing the XLMT model, engineering overhead was reduced from 16 \u00d7 1 + 12 \u00d7 2 = 40 individual models to a single cross-lingual multi-task models, or two based on the outcomes of Task-3 above. However, given the size of the XLM-Roberta model, the hosting costs associated with serving inference, speci\ufb01cally given the need for GPU instances to generate predictions at low latency, remained high. To reduce this base cost and reduce the latency of this model, we focused on compressing the model itself. 
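Structured layer pruning of the kind evaluated next, dropping entire transformer layers before distillation, can be illustrated by keeping every other encoder layer of a 12-layer XLM-R model. The interleaved layer selection below is an illustrative choice, and the attribute path follows the Hugging Face implementation of XLM-R rather than any internal tooling.

```python
# Illustrative structured layer pruning: build a 6-layer encoder from the
# 12-layer XLM-R model by keeping every other transformer layer.
import torch
from transformers import AutoModel

model = AutoModel.from_pretrained("xlm-roberta-base")     # 12 encoder layers
full_params = sum(p.numel() for p in model.parameters())

keep = [1, 3, 5, 7, 9, 11]                                # interleaved layers to keep
model.encoder.layer = torch.nn.ModuleList([model.encoder.layer[i] for i in keep])
model.config.num_hidden_layers = len(keep)

pruned_params = sum(p.numel() for p in model.parameters())
print(f"{full_params/1e6:.0f}M -> {pruned_params/1e6:.0f}M parameters after layer pruning")
```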
As mentioned earlier this compressing of the model, reducing its overall capacity, is in tension with the goals of maintaining performance of the combined XLMT model. Our results in Table 6 show that simply performing structured layer pruning on the model resulted in some degradation of task performance. For Task-1 with MiniLM-12 architecture, the larger of the smaller architectures considered, we see about 1.6% relative degradation. MiniLM-6 shows 3.9% degradation, while XtremeDistil shows over 20% degradation. This same pattern holds for the Task-2, and for Task-3 we see even less degradation for MiniLM-12. These results strongly favor MiniLM-12 and MiniLM-6 for compressing our speci\ufb01c use case. 6 \fCompressing Cross-Lingual Multi-Task Models at Qualtrics A PREPRINT Model Task-1 Task-2 Task-3 XLM-R 82.23 76.80 35.90 MiniLM-L12 80.85 75.86 35.09 MiniLM-L6 78.97 72.42 35.34 XtremeDistil 61.83 61.59 24.00 Table 6: Results comparing the original MiniLM and XtremeDistil models with the full-size XLM-R model across Task-1, Task-2, and Task-3 macro-F1 scores. 3.5 Distilled XLMT Model Results To address the degradation resulting from structured layer pruning we incorporated some model distillation using the \ufb01nal layers of the full-size and compressed models. We explored using a single multi-task teacher and task-speci\ufb01c teachers, as well as using a single cross-lingual teacher and language-speci\ufb01c teachers. However we ultimately use cross-lingual task-speci\ufb01c teachers because the performance of Task-3 as a single task model outperformed the multitask model, as shown in Table 5 and cross-lingual models consistently out-performed language-speci\ufb01c models as shown in Table 4. To provide additional model compression after enabling distillation, we trained the model with QAT in order to further reduce model complexity. To evaluate model speedup, each model was run for sequences of length 128 with batch size 32 on 1 Nvidia T4 GPU leveraging TensorRT 2. Speedup was measured relative to the baseline model, XLM-R (fp32). Model Speedup Task-1 Task-2 Task-3 XLM-R (fp32) x1 82.23 76.80 35.90 XLM-R (int8 quantized) x3.64 81.09 73.60 35.80 MiniLM-L12 (fp32) x3.29 79.42 75.1 35.36 MiniLM-L12 (int8 quantized) x8.11 79.29 73.69 35.71 MiniLM-L6 (int8 quantized) x15.61 79.05 73.90 35.84 MiniLM-L12-mBert (int8 quantized) x8.11 79.05 73.48 35.57 Table 7: Results on model distillation and quantization aware training. Task-1, Task-2, and Task-3 results are reported in macro-F1 scores. XLM-R models were used as the teacher in all results, except for MiniLM-L12-mBert which used mBert teachers. The two best models after distillation were the quantized MiniLM-L6 model with 2.60% average relative task degradation and non-quantized MiniLM-L12 with 2.37% average relative task degradation. We found that quantized MiniLML6 was able to improve more from distillation than MiniLM-L12. While we are still investigating full cause our current hypothesis is that the smaller model provides some regularization against over\ufb01tting versus the larger model. In terms of speedup our quantized MiniLM-L6 model provided the most speedup at 15.61x speedup over the baseline. In our \ufb01nal assessment we found that using task-speci\ufb01c model distillation on the MiniLM-L6 model with quantization provided a strong result in model size while maintaining model performance, as shown in Table 7. 
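The speedups in Table 7 were measured at sequence length 128 and batch size 32 with TensorRT on a T4; the sketch below is a plain PyTorch stand-in for the same relative-latency measurement, with warm-up and CUDA synchronization, and the student checkpoint path is a hypothetical placeholder.

```python
import time
import torch
from transformers import AutoModelForSequenceClassification

@torch.no_grad()
def mean_latency(model, batch_size=32, seq_len=128, steps=50, device="cuda"):
    model = model.eval().to(device)
    ids = torch.randint(0, model.config.vocab_size, (batch_size, seq_len), device=device)
    mask = torch.ones_like(ids)
    for _ in range(5):                                   # warm-up iterations
        model(input_ids=ids, attention_mask=mask)
    torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(steps):
        model(input_ids=ids, attention_mask=mask)
    torch.cuda.synchronize()
    return (time.perf_counter() - start) / steps

baseline = AutoModelForSequenceClassification.from_pretrained(
    "xlm-roberta-base", num_labels=6)                    # fp32 reference
student = AutoModelForSequenceClassification.from_pretrained(
    "path/to/distilled-minilm-student", num_labels=6)    # hypothetical compressed checkpoint
print(f"relative speedup: x{mean_latency(baseline) / mean_latency(student):.2f}")
```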
However, considering the best model overall, the MiniLM-L12 in Table 6 provided the least overall degradation (1.71%) with a modest speedup of 3.29x. 4 Business Impact An implementation of these compression techniques and choices was developed in our production training system. At Qualtrics, this system has generated significant business impact across customers and new features; financial, environmental, and operational cost; and system flexibility and robustness. 4.1 Feature impact Given the speedup the compressed and multi-task models provide, we observe significant increases in throughput across use cases, enabling us to serve more customers and enabling new features for customers that were previously too compute intensive. As an example, this speedup enables ML pipelines that run 3-4 models serially in the same time window as a single model. Additionally, the flexibility of cross-lingual models enables us to serve more customers in more languages, without large training sets or comprehensive language specialization. 2 https://developer.nvidia.com/tensorrt 4.2 Financial impact Conservatively, we estimate approximately 44% savings in hardware cost from compressing the multi-task cross-lingual model, in comparison to an uncompressed system at similar latency and throughput. In addition, by combining multiple tasks that are invoked under similar load into a single model, we achieve a fraction of the total inference cost. This savings is driven by several factors: reducing base instance capacity, reducing the amount of dynamic scaling, and allowing for deployment of lower-cost hardware. We note that the savings is limited by the persistent cost of the base capacity, even when reduced, which creates a floor for cost savings even with models that are multiple times faster. Currently, we deploy our models on a public cloud at a cost of approximately $80K/month/model. The compression technique used in this paper reduces cost by 44%, resulting in $35K savings/month for a single model compared to the current model in production for the same tasks. As we push our compression framework to the other NLP systems developed by this group, it can result in potential yearly cost savings in the single-digit millions of dollars. Under current macro-economic conditions, when there is an industry-wide need to reduce financial costs, these savings can strengthen the fundamentals of a typical SaaS company like Qualtrics. 4.3 Ethical impact We are pleased that our efforts to compress models will support environmental sustainability. By reducing the power and resources needed to run the same inference, we anticipate a meaningful reduction in environmental footprint, though we are unable to quantify it concretely at this time. 4.4 Operational impact & robustness While MiniLM-L6 provided better speedup, business needs required the lower degradation provided by MiniLM-L12 for the first set of models. By leveraging the compressed XLMT model, we enable additional flexibility in production deployment scenarios, including different instance types (CPU, GPU, custom accelerators) that enable us to serve high-throughput inference while balancing cost. This was not previously viable with larger models, which required GPUs in order to serve high-throughput inference.
By enabling this flexibility, we also improve system robustness, as the system can tolerate instance unavailability or downtime for any single instance type. Additionally, moving to a single multi-task model reduces the overall workload for our engineering teams, removing per-task deployments and deployment pipelines and lowering the barriers to new tasks and new language support. Specifically, multi-task and cross-lingual modeling reduces the number of models for these tasks from 36 potential models (12 languages, 3 tasks) to a single model, reducing operational cost from 6-7 on-call/operations engineers to 1. Compression further reduces this cost by lowering latency and increasing throughput, cutting the operational effort of mitigating latency spikes and scaling issues.
They commonly have hundreds of millions or billions of parameters requiring large specialized computer clusters to run inference at scale. Several approaches have been successfully used to improve the performance of these LLMs, such as approximating attention (Peng et al., 2021), removing portions of the models (Sridhar and Sarah, 2020), and reducing the precision of activation and weight values. Recent work (Zafrir et al., 2021) (Kurti\u00b4 c et al., 2022) has shown that the application of unstructured and semistructured (block) pruning mechanisms on LLMs can signi\ufb01cantly compress models with little to no loss in accuracy. While these approaches are successful and applicable during model general domain pretraining and task-speci\ufb01c \ufb01ne-tuning, prior work has not studied how pruned models transfer to new domains nor the impact of pretraining stage pruning on transfer accuracy. Given that most applications would require the transfer of the general domain LLMs to a speci\ufb01c application domain, it is important to study the generality and robustness of the pruned LLMs when applied to multiple tasks in an application domain. While existing pruning research has found it possible to prune models heavily without loss in accuracy, most arXiv:2205.12452v3 [cs.CL] 5 Apr 2023 \fapproaches have focused on the compression of individual tasks or textual domains. These specialized models match or exceed the accuracy of the dense model but commonly require vast amounts of hyperparameter turning and task-speci\ufb01c optimization to achieve this result. Compressed models like DistillBERT (Sanh et al., 2019) and TinyBERT (Jiao et al., 2020) are some of the most popular LLMs because they provide compression without any additional know-how or optimization. For pruned models to become a common architecture for NLP tasks, they must be as robust and easy to use as their uncompressed counterparts. This paper explores this potential and proposes Sparse*BERT, a new pruned LLM that can adapt effectively to new domains without extensive \ufb01ne-tuning or task-speci\ufb01c pruning. Our work studies generalizable pruned LLMs by evaluating how well they can transfer to previously unstudied tasks in the biomedical domain. Speci\ufb01cally, we study these questions by focusing on transferring pruned and unpruned LLMs to the biomedical domain and evaluating the accuracy of said models on downstream tasks like Entity Extraction(EE), Relation Extraction(RE), and Question Answering(QA). Our experiments demonstrate that pruned LLMs generalize well and are robust to domain transfer and variation in target task, dataset size, and dataset dif\ufb01culty. In summary, our contributions are as follows: \u2022 We reinforce Zafrir et al.\u2019s \ufb01ndings that LLMs pruned on the general domain data can transfer to new domains without extensive hyperparameter tuning and extend their work, demonstrating these pruned models can be transferred to new pre-training domains without additional parameter optimization. \u2022 We introduce a sparse model, Sparse*BERT, and its domain adaptation adapted for the Medical/Bio NLP domain called SparseBioBERT. This model matches the accuracy of the BioBERT with 10% of the active parameters. 2 Background and Related Work Large Language Models such as language representation and generation models commonly use multiple layers of transformer encoders or decoders. 
Each transformer layer usually contains some form of multi-head attention (MHA) and fully-connected feed-forward networks (FFN). The MHA is made up of multiple selfattention heads (Vaswani et al., 2017), each of which has 3 sub-components: queries (Q), keys (K), and values (V). Equation 1 shows the expression used to compute the attention of each head, where d is the dimensionality of K. The output of the attention heads is concatenated and fed into the FFN. Attention(Q, K, V ) = softmax \u0012QK \u221a d \u0013 V (1) Attention, while simple, has proven to be incredibly robust as it allows models to scale to hundreds of layers, hundreds of attention heads (Brown et al., 2020), and seemingly most modalities (Chen et al., 2021) (Arnab et al., 2021). Despite its generalization ability, attentionbased models are also brittle as removing less than 0.0001% of parameters can cause complete model collapse (Kovaleva et al., 2021a). Unstructured pruning compresses a model by removing individual weights from a network by setting them to zero. Pruning methods commonly remove weights based on their saliency to the network and, to avoid model collapse, usually do so gradually while \ufb01netuning the remainder of the weights. Since it is dif\ufb01cult to quantify the true saliency of weight concerning a network, zeroth, \ufb01rst, and second-order estimation methods exist to approximate saliency. Zero-order methods use weight magnitudes as a proxy, i.e.., remove the smallest weights without evaluating the impact of their removal on model accuracy. These approaches are prevalent for Convolution Neural Networks (Han et al., 2015) and have recently been successful for LLM (Gordon et al., 2020) (Chen et al., 2020) (Zafrir et al., 2021). First-order methods like Movement Pruning (Sanh et al., 2020) use a gradient-based approximation to remove the weights moving toward zero. Second-order methods like OBS (Singh and Alistarh, 2020) estimate the impact of individual weight removal via approximations of second-order derivatives and use it as a proxy for saliency. Using unstructured and semi-structured pruning has proved to be a convenient way of compressing LLM for ef\ufb01cient inference and decreased model size. For example, a BERT-Base-Uncased model which has had 90% of its parameters pruned runs inference \u223c4.5 times faster and is 2.75 times smaller with no drop in accuracy (Kurti\u00b4 c et al., 2022). If this compression leverages additional methods like quantization and layer dropping, inference speed can improve by 28.75 times, and model size can drop by \u223c19 times. While many successful compression approaches exist, Transformer models are fragile (Kovaleva et al., 2021b), as minor perturbations can lead to model collapse. Moreover, there is a lack of understanding of the generality and robustness of compressed LLMs when transferring them to different tasks in an application domain, a gap our work attempts to \ufb01ll. 2 \f3 Sparse*BERT: General Sparse Models Can Adapt to New Domains We can formulate the Sparse*BERT model as \u03b8\u2217, which can approximate the accuracy of the dense model \u03b8 and does not suffer from model collapse when transferred to new domains. The architecture of Sparse*BERT matches BERT, but a portion of its weights are pruned and masked to avoid future updates. To ensure that our sparse model can approximate the accuracy of the dense model, we leverage Knowledge Distillation between the outputs of the dense and sparse networks. Following the success of Zafrir et al. 
(Zafrir et al., 2021), we leverage Gradual Magnitude Pruning (GMP) as shown in algorithm 1. Pruning models using gradual magnitude pruning have been shown to be quite effective and easy to use. At its core, the goal of GMP is to remove weights slowly enough that the network can learn to recover after each pruning step. To ensure that we can maximize the inference speedups, we prune each set of components in the network independently. The structure in the model graph groups these components, so the individual feed-forward layers, the queries, or keys, but not individual self-attention heads. Algorithm 1 Uniform Gradual Magnitude Pruning Input: \u03b80 a pretrained neural network, \u03b8t a dense pretrained neural network (distillation teacher), D a training dataset, N number of pruning steps, \u03c3 weights to prune at each pruning step Output: a pruned neural network for x in 1 to N do \u03b8\u2217\u2190\u03b80 for component in \u03b8\u2217do w \u2190sort(component) \u03b8\u2217\u2190prune(\u03b8\u2217, w, \u03c3) end for \u03b8\u2217\u2190train(\u03b8\u2217, \u03b8t, D) end for return \u03b8\u2217 4 Experiments To assess how well-pruned models can transfer to new domains and determine the optimal stage of pruning (pretraining, domain transfer, or \ufb01ne-tuning), we evaluate the accuracy of 10 Biomedical biomedical datasets. Our experiments \ufb01x the training parameters for all the transfer tasks and vary the stage used for pruning (no pruning, pretraining, domain transfer, \ufb01ne-tuning) and domain-speci\ufb01c pretraining. 4.1 Datasets In all of our experiments, we focus on English-only biomedical datasets/corpora. Each dataset we use is unmodi\ufb01ed from its described use in the literature, and its associated and prescribed metrics are used. 4.2 Finetuning Datasets Pretraining Datasets. To understand how the stage of pruning impacts model accuracy, we train models both pruned and dense models on the Medline/PubMed corpus and the combination of English Wikipedia (Foundation, 2021) and The Book Corpus (Zhu et al., 2015) datasets. The combination of Wikipedia and Book Corpus datasets creates a common domain language dataset featuring 3.3 billion words, which have become the backbone for general domain masked language modeling experimentation. The MEDLINE/PubMed Corpus is a publicly available1 text corpus made up of journal abstracts and documents of biomedical literature from around the world. The United States National Institute of Health updates the corpus daily and has been the primary resource used to train Biomedical LLMs like BioBERT (Lee et al., 2020) and PubMedBERT (Gu et al., 2022). For our experiments, we extracted our corpus on January 2022, \ufb01ltered and prepared the dataset for masked language modeling using the BioElectra\u2019s (Kanakarajan et al., 2021) scripts2. This formatted PubMed corpus has 34,246,348 abstracts and 4.5 billion words (Kanakarajan et al., 2021). Finetuning Datasets. We \ufb01netune pretrained models on 10 established Biomedical NLP datasets, encompassing 3 separate task types: Entity Recognition (ER), Relation Extraction (RE), and Question answering (QA). For ER, we use the BioCreative II Gene Mention Recognition (BC2GM), (Smith et al., 2008), BC5CDR Drug/Chemical (BC5-Chem), BC5CDR Disease (BC5-Disease) (Li et al., 2016), JNLPBA (Collier and Kim, 2004), and NCBI Disease (Dogan et al., 2014) datasets. 
For RE we use ChemProt (Taboureau et al., 2011), Drug-Disease Interaction (DDI) (Herrero-Zazo et al., 2013), and Gene-Disease Associations (GAD) (Becker et al., 2004) datasets. We leverage BioASQ task 7B (Baker et al., 2016) and PubMedQA (Jin et al., 2019) for QA. In addition, we analyze the impact of the \ufb01netuning dataset\u2019s size on the optimal pruning stage using the non-biomedical QA SQUAD (Rajpurkar et al., 2016) dataset. Details on the dataset size, evaluation metric, and domain can be found in Table 1. 4.3 Models and Experimental Setup Our experiments focus on the popular BERT-baseuncased language model (Devlin et al., 2019), an LLM composed of 12 transformer encoder layers with 110M 1https://www.nlm.nih.gov/databases/download/pubmed_medline.html 2https://github.com/kamalkraj/BioNLP-Corpus 3 \fDataset Domain Task Type Training Size Testing Size Validation Size Evaluation Metric BC5-Chem Medical Entity Recognition 5203 5347 5385 F1 BC5-disease Medical Entity Recognition. 4182 4244 4424 F1 NCBI-disease Medical Entity Recognition 5134 787 960 F1 BC2GM Medical Entity Recognition 15197 3061 6325 F1 JNLPBA Medical Entity Recognition 46750 4551 8662 F1 ChemProt Medical Relation 18035 11268 15745 F1 DDI Medical Relation 25296 2496 5716 F1 GAD Medical Relation 4261 535 534 F1 PubMedQA Medical Question Answering 450 50 500 Accuracy BioASQ Medical Question Answering 670 75 140 Accuracy SQUAD General Question Answering 87599 10570 N/A F1 Table 1: To understand how generalizable sparse models are, we evaluate a wide set of tasks that vary in dif\ufb01culty, size, and desired output parameters. We compare the performance of our sparse language models to the dense, task-tuned BioBERT model, which is also a 12 transformer layer model which features cased (BioBERT-Base-Cased) and uncased (BioBERT-Base-Uncased)variants. Following previous work, we do not prune the embedding layers of the network or any task-speci\ufb01c heads and focus on the \u223c85 million parameters found in the encoder layers. To ensure that our experiments are reproducible 3 we use the popular open-source libraries SparseML 4 and Transformers 5. Model Pretraining Pretraining is when the model is trained on an unsupervised NLP dataset using a masked language modeling (MLM) approach (Devlin et al., 2019). Pretrained models are \ufb01ne-tuned on labeled task-speci\ufb01c datasets to optimize for task-speci\ufb01c accuracy. Our experiments use existing dense pretrained models for BERT-base-uncased (Devlin et al., 2019) and PubMedBERT (Gu et al., 2022) and prune them using gradual magnitude pruning based on the corresponding dataset and MLM approach. During model pretraining, we train for three epochs on 4 A100 GPUS using a batch size of 256 and a sequence length of 512, and, following early experiments and \ufb01ndings from Kurtic et al. and Za\ufb01r et al., we cycle the learning rate during pretraining and found cycling twice per epoch from 5e-4 to 0 to be most effective. First, we take the existing dense models and run this training setup for 3 epochs to ensure model convergence; then, taking these converged models, we retrain and apply gradual magnitude pruning over the \ufb01rst two epochs. During pruning, we start from an initial sparsity of 30% and gradually prune to a \ufb01nal sparsity of 90%, pruning 100 times an epoch. 
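Algorithm 1, together with the 30% to 90% ramp just described, reduces to a short PyTorch sketch: at each pruning step, zero out the smallest-magnitude weights of every encoder matrix and keep a mask so they stay zero. This is a minimal illustration of gradual magnitude pruning under a linear-ramp assumption, not the SparseML implementation we actually use; the parameter-name prefix is likewise illustrative.

```python
import torch

def target_sparsity(step, total_steps=200, start=0.30, end=0.90):
    # Sparsity ramp over the 2 pruning epochs (100 pruning steps per epoch).
    # A linear ramp is an assumption; only the endpoints and step count are fixed above.
    frac = min(step / max(total_steps - 1, 1), 1.0)
    return start + frac * (end - start)

def make_masks(model, prefix="bert.encoder"):
    # One boolean mask per prunable encoder matrix; embeddings and task heads
    # stay dense, matching the setup described above.
    return {name: torch.ones_like(p, dtype=torch.bool)
            for name, p in model.named_parameters()
            if name.startswith(prefix) and p.dim() >= 2}

@torch.no_grad()
def prune_step(model, masks, sparsity):
    # Prune each component independently by weight magnitude, then update its mask.
    for name, param in model.named_parameters():
        if name not in masks:
            continue
        k = int(sparsity * param.numel())
        if k == 0:
            continue
        threshold = param.abs().flatten().kthvalue(k).values
        masks[name] &= param.abs() > threshold
        param.mul_(masks[name].to(param.dtype))

@torch.no_grad()
def reapply_masks(model, masks):
    # Call after every optimizer step so pruned weights never grow back.
    for name, param in model.named_parameters():
        if name in masks:
            param.mul_(masks[name].to(param.dtype))
```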
After model pruning, we continue to train for one additional epoch to ensure that the sparse 3we will be releasing our code and models to allow reproducibility and extensibility 4https://github.com/neuralmagic/sparseml 5https://github.com/huggingface/transformers model is converged. Based on early experiments, we \ufb01nd knowledge distillation bene\ufb01cial. For all of our experiments in pretraining, we leverage well-trained dense teachers using a hardness of 0.5 and a temperature of 2. When pruning weights, their values are \ufb01xed to 0 and are masked to avoid future updates. This means that our experiments effectively evaluate the discovery of the most optimal sub-architecture. Model Finetuning Finetuning is the stage in which the model is trained on a supervised NLP dataset using a task-speci\ufb01c training regime. In this stage, one or many classi\ufb01cation heads have connected the model, and these classi\ufb01cation heads and the pretrained model are trained for optimal performance on an individual task. We \ufb01x the training procedure across \ufb01ne-tuning tasks to isolate the effects of task-speci\ufb01c hyperparameter tuning and pruning stages. Speci\ufb01cally, we train each model for ten epochs on a V100 GPU using a batch size of 16, a learning rate that linearly decays from 5e-5 and replicates using \ufb01ve random seeds for larger tasks and ten random seeds for smaller tasks. We use the same setup for \ufb01ne-tuning already pruned models and when applying gradual magnitude pruning during \ufb01ne-tuning (pruning on the \ufb01ne-tuning stage). For pruned models, we preserve the sparsity patterns. When pruning models during \ufb01ne-tuning, we \ufb01ne-tune the dense model for two epochs, prune over the preceding six epochs, and stabilize the pruned network for two epochs. In our early experiments and matching prior \ufb01ndings (Zafrir et al., 2021), we \ufb01nd that when pruning models on transfer tasks, accuracy is best when the learning rate cycles. Cycling only occurs when pruning during \ufb01ne-tuning, and the learning rate cycles at epochs 2 (start of pruning) and 8 (end of pruning). Unlike previous work, we do not \ufb01nd a signi\ufb01cant effect in accuracy improvement by leveraging knowledge distillation on the \ufb01ne-tuning task. As a result, we do not use knowledge distillation during \ufb01ne-tuning. 4 \fModel Pruning Stage EE RE QA Overall BERT-Base-Uncased None Fine-tuning Pretraining 84.44 76.12 80.84 86.86 85.36 85.54 61.97 65.39 68.44 77.76 75.28 78.27 BioBERT-Base-Uncased None Fine-tuning Pretraining (Medical) Pretraining (General) 85.96 63.53 82.50 86.36 87.56 75.14 86.60 88.57 68.33 54.00 65.71 66.33 80.62 66.34 78.27 80.42 Table 2: Overall results on the impact of task and dataset of model pruning. Models trained for the general domain and pruned on the general domain can transfer at equal or better accuracy. Question Answering is the notable outlier as its small dataset size bene\ufb01ts from the sparse models as their pruned architecture prevents over\ufb01tting on small datasets. 
Model Pruning Stage BC5-Disease BC5-chem NCBI-disease BC2GM JNLPBA ChemProt DDI GAD PubMedQA Accuracy BioASQ Accuracy Training dataset size N/A 4182 5203 5134 15197 46750 18035 25296 4261 450 670 BERT-Base-Uncased None Fine tuning Pretraining 80.60 69.87 75.35 91.23 81.72 87.83 85.66 75.57 81.75 81.97 74.27 77.12 81.56 79.57 81.24 88.19 85.41 86.13 94.35 92.72 92.73 78.05 74.88 77.77 47.46 52.67 50.00 76.65 78.11 82.89 BioBERT-Base-Uncased None Fine tuning Pretraining(General) Pretraining(Medical) 83.195 66.34 83.32 80.56 93.63 66.60 93.81 92.17 83.46 52.69 84.15 78.84 86.05 59.71 87.04 81.11 84.10 62.15 81.84 81.81 90.66 80.28 90.71 88.02 95.01 87.60 95.02 93.99 77.02 57.55 79.90 77.77 54.00 54.00 51.50 49.14 82.67 82.67 81.17 82.28 Table 3: Performance on Complete set of tasks. Except for question-answering tasks and NCBI-Disease, the SparseBioBERT outperforms all other models, including BioBERT, indicating that sparse architectures can be transferred to new domains and use cases without additional optimization. 4.4 Experimental Results When we evaluate results on an average of task-speci\ufb01c scores as shown in Table 2, we can see that the SparseBioBERT model performs on par with its unpruned counter-part and outperforms it on relation extraction and entity extraction. Results on individual tasks can be found in table 3 and further consistently show how SparseBioBERT approximated BioBERT. When pruned models do not transfer to the biomedical domain, they can perform much worse than the unpruned models, as shown by the sizable gap between the pruned and dense BERT-base-uncased models. This result, coupled with the performance of SparseBioBERT, makes us believe that pruned models can adapt to new domains like unpruned models but require additional domain-speci\ufb01c pretraining to ensure performance. We believe these results prove that models pruned during general domain data can remove large portions of the model without affecting the ability to transfer to new domains or tasks. Unlike pruning on speci\ufb01c data domains and tasks, general domain data pruning can preserve accuracy invariant to task and domain. Unexpectedly, when evaluating biomedical QA, we improve accuracy by pruning, but only on a general domain language model pruned downstream or the general domain Sparse*BERT. We attribute this to the regularizing effect that pruning can have, and it likely helps in over\ufb01tting small datasets, PubMedQA and BioASQ. Tasks. Finding those models pruned outperform all others, and we believe the regularization provided by pruning can prevent over\ufb01tting on these small datasets. Our results also indicate that it is optimal to prune with general domain data and transfer it to new tasks and domains for optimal performance. Regardless of their domain expertise, BERT and BioBERT both see huge losses in accuracy when pruned on the downstream tasks, and these same losses are not found in the model pruned during pretraining. Surprisingly, the model pruned on the general domain data outperforms when pruned on biomedical domain-speci\ufb01c language modeling. This gap is nearly 4 points on entity extraction and 2 points overall, almost more signi\ufb01cant than the gap between the BERT and BioBERT. When we evaluate the impact of pruning on individual tasks pruning in \ufb01ne-tuning stage) as shown in Table 3, we can see that pruning is quite sensitive to the dataset task. 
Looking at the large datasets like JNLPBA in Table 3, there is nearly no distinction in pruning during pretraining or \ufb01ne-tuning. On the other hand, small datasets like NCBI and GAD see a large accuracy loss from models pruned during \ufb01ne-tuning. 5 Impact of Training Data Size Noting that there is a signi\ufb01cant variation in dataset size in the biomedical NLP tasks, we leveraged a dataset well studied in pruning, SQUAD (Rajpurkar et al., 2016), and performed variations to the training data size. Starting with the initial training dataset size, 88,000 items, we decreased the size to 75%,50%,25%,10%,5%,2%,1% and evaluated the im5 \fFigure 2: Impact of varying the training data size with pruned and dense models showcases how pretraining pruning has similar accuracy to the dense models. pact to performance. We compared the dense BERT, Sparse*BERT, and pruning BERT during \ufb01ne-tuning. The sparse models each have 90% unstructured sparsity on the encoder portion of the LLM. Each experiment was performed with \ufb01ve random seeds, using a batch size of 12, and trained for 30 epochs with a learning rate decaying linearly from 5e-5 to 0. Each model\u2019s training uses knowledge distillation from a dense teacher with the hardness set to 0.8 and the temperature to 2.0. For the model pruned during \ufb01netuning, we cycle the learning rate at the beginning of pruning(2nd epoch) and the end of pruning(20th epoch). We evaluate model performance on the F1 score on the unaltered dev portion of the SQUAD dataset to avoid losses in evaluation sensitivity. Figure 2 and table 4 demonstrate that models pruned during \ufb01netuning are not robust to variations in data size. Model performance decays slowly from 85 to 80 until the training data is decreased by 75%, but it quickly becomes nearly unusable when it becomes smaller than that. The same cannot be said about the dense or the Sparse*BERT model, as they see virtually identical losses in quality from 6 Limitations Our approach is limited in the computational time required to generate a general sparse LLM and the diversity of types of LLMs that we explore. Regarding computational expense, training a sparse model requires negligible additional resources, which is tractable for models with a hundred million parameters and a few billion tokens, not for billion parameter models commonly discussed. Our current explorations have been limited to monolingual LLMs trained in English. It is unclear how well sparse architectures will perform in a multi-lingual setting, and we expect degradation in language quality to be anything but equal across all languages. 7" + }, + { + "url": "http://arxiv.org/abs/2109.04202v1", + "title": "IMG2SMI: Translating Molecular Structure Images to Simplified Molecular-input Line-entry System", + "abstract": "Like many scientific fields, new chemistry literature has grown at a\nstaggering pace, with thousands of papers released every month. A large portion\nof chemistry literature focuses on new molecules and reactions between\nmolecules. Most vital information is conveyed through 2-D images of molecules,\nrepresenting the underlying molecules or reactions described. In order to\nensure reproducible and machine-readable molecule representations, text-based\nmolecule descriptors like SMILES and SELFIES were created. These text-based\nmolecule representations provide molecule generation but are unfortunately\nrarely present in published literature. 
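Returning to the distillation setup used in the data-size study above (hardness 0.8 and temperature 2.0 here, hardness 0.5 and temperature 2 during pretraining), the loss is typically a hardness-weighted mix of the supervised objective and a temperature-softened KL term between teacher and student logits. The sketch below is a common formulation written for a classification head and reflects our reading of the setup rather than the exact training code; for SQuAD-style QA the same mixing would be applied to the start and end logits.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      hardness=0.8, temperature=2.0):
    # Supervised loss on the gold labels.
    task_loss = F.cross_entropy(student_logits, labels)
    # Temperature-softened teacher/student KL, scaled by T^2 as is standard.
    kd_loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean") * temperature ** 2
    # Hardness weights the distillation term against the supervised term.
    return hardness * kd_loss + (1.0 - hardness) * task_loss
```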
In the absence of molecule descriptors,\nthe generation of molecule descriptors from the 2-D images present in the\nliterature is necessary to understand chemistry literature at scale. Successful\nmethods such as Optical Structure Recognition Application (OSRA), and\nChemSchematicResolver are able to extract the locations of molecules structures\nin chemistry papers and infer molecular descriptions and reactions. While\neffective, existing systems expect chemists to correct outputs, making them\nunsuitable for unsupervised large-scale data mining. Leveraging the task\nformulation of image captioning introduced by DECIMER, we introduce IMG2SMI, a\nmodel which leverages Deep Residual Networks for image feature extraction and\nan encoder-decoder Transformer layers for molecule description generation.\nUnlike previous Neural Network-based systems, IMG2SMI builds around the task of\nmolecule description generation, which enables IMG2SMI to outperform OSRA-based\nsystems by 163% in molecule similarity prediction as measured by the molecular\nMACCS Fingerprint Tanimoto Similarity. Additionally, to facilitate further\nresearch on this task, we release a new molecule prediction dataset. including\n81 million molecules for molecule description generation", + "authors": "Daniel Campos, Heng Ji", + "published": "2021-09-03", + "updated": "2021-09-03", + "primary_cat": "q-bio.QM", + "cats": [ + "q-bio.QM", + "cs.CV", + "cs.LG", + "eess.IV" + ], + "main_content": "Introduction Like many other scienti\ufb01c research \ufb01elds, chemistry has experienced an explosion of research over the last few decades. This ever-growing corpus provides ample ground for research on information extraction and text mining from vast data. Large chemistry datasets such as the patent dataset [28] have been used to create models for molecule generation [4], reaction yield prediction [43], and property prediction [32]. While these approaches have been successful, most of them have not leveraged any information learned from the corpus of chemistry papers. While some molecules have easily recognizable names like hydrogen-peroxide (H2O2), most do not. 1The programs, data and resources will be made publicly available for research purpose under a MIT License. Preprint. Under review. arXiv:2109.04202v1 [q-bio.QM] 3 Sep 2021 \fFigure 1: IMG2SMI builds on popular image caption strategies leveraging a RESNET-101 based features extraction and a transformer encoder and decoder to predict a molecule description given an image. To represent possible molecules chemists have developed a machine-readable character representation called Simpli\ufb01ed molecular-input Line-entry System (SMILES) [49]. SMILES is a method of describing the structure of a molecule using ASCII strings to generate two or three-dimensional molecular representations at scale. For example, the SMILES string \"c1ccccc1\" represents the molecule benzene. In recent years, deep learning researchers have focused on using large corpora of SMILES [10] to learn a general representation of molecules leading to successful methods in molecule property prediction [39] and molecular reaction yield [43]. Despite their robust and deterministic molecule representation, SMILES are seldom found in chemistry literature as they are distinctly non-human-readable. Instead, chemistry literature focused on two-dimensional images of molecules and their reactions created by drawing programs like ChemSketch and ChemDoodle. 
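To make the text-to-structure mapping concrete, the short RDKit sketch below parses the benzene SMILES mentioned above and renders the kind of fixed-size 2-D depiction found in papers; the output size and file name are arbitrary choices for illustration.

```python
from rdkit import Chem
from rdkit.Chem import Draw

smiles = "c1ccccc1"                   # benzene
mol = Chem.MolFromSmiles(smiles)      # returns None if the string is not a valid molecule
assert mol is not None

img = Draw.MolToImage(mol, size=(256, 256))   # 2-D depiction as a PIL image
img.save("benzene.png")
```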
While some papers include formal molecule names or descriptions, most do not. Therefore we cannot rely only on the information embedded in the PDF. For chemists, the lack of machine-readable and reproducible molecule representations is a time sink. When chemists seek to look for related reactions or molecules in a paper, they use molecule drawing programs to recreate the molecules and then search using these representations. This approach is tedious, error-prone, time-consuming, and challenging to scale, which drives the need for an automated, high throughput molecular prediction system. Without a way to decode images, the literature is essentially full of recipes written in a language computers cannot understand. This problem is exceptionally impactful on molecule search, where reliance on manually labeled molecules and references in the text has become the standard operating methodology. Without methods of accurately extracting molecular information from images, the information conveyed in images is essentially ignored. Moreover, without accurate molecule extraction developing any method that tries to mine large corpora is likely challenging. As a result, any improvement in the quality of molecule prediction will provide ampli\ufb01ed gains in downstream tasks. While the notion of extracting molecules from literature is by no means a novel task, most methods rely on handcrafted rules and built on the intuitions of chemists [14]. There are essentially two main tasks to extract visual information from chemistry literature: segmentation and molecule prediction. Segmentation systems are focused on segmenting parts of the chemistry documents and inferring what the pixel coordinates of a molecule or reaction may be. Molecule prediction systems focus on extracting the output of segmentation systems and predicting the most likely molecule for each given molecule segment found in the target document. In segmentation, a system processes a chemistry paper and produces a list of X and Y pixel coordinates representing locations where molecules and reactions are present in a paper. Description generation systems leverage the segmented images to predict a molecule description in the form of a SMILES string. As we \ufb01nd segmentation software to be very effective, our work focuses on molecule prediction. Despite no large dataset of annotations, molecule prediction is a perfect candidate for data-hungry solutions because arti\ufb01cial datasets are relatively easy to create. Software tools such as RDKIT [1] can use SMILES strings to create images of molecules varying style, shape, and size (pixels) at scale. While RDKIT leverages the deterministic natures of SMILES strings to create identical molecule structures each time, it can vary stylistic components such as rotation, bond highlighting, and molecule numbering to produce multiple representations of the same molecule. Independently of chemistry, in the last few years computer science researchers have been able to use large data regimes and neural network architecture like transformers [47] and convolutional networks [15] [23], for effective image captioning [29] [48] [46] [2] [53] [19] [52]. Image captioning has been 2 \fable to leverage \ufb01ndings in computer vision and natural language processing to build systems which can provide descriptive, accurate, captions at an incredibly fast pace. Xu et. 
Al\u2019s [52] architecture of using a computer vision focused Convolutions Neural Network for feature extraction and using those features with a task-speci\ufb01c decoder has proven to be scalable and tune-able. Recently, some of these approaches have begun to be applied in chemistry [38], and while these approaches have provided strong starting points, their application to chemistry is neither tuned for chemistry nor leveraging the most advanced architectures in feature extraction nor description generation. The lack of task-speci\ufb01c architecture modi\ufb01cations causes accuracy to suffer and cannot outperform traditional handcrafted systems like OSRA. This paper introduces IMG2SMI, a molecule prediction approach designed for high throughput, accurate prediction of molecules. IMG2SMI is an image captioning approach that relies on a RESNET-101 [18] backbone for feature extraction and then a transformer [47] encoder-decoder architecture for caption generation. Figure 1 represents the broad IMG2SMI system architecture. While methods like DECIMER [38] have been applied to molecule description generation, no previous work has modi\ufb01ed system architecture, nor explored a non character lever representation. Besides our model and our code, we also release a novel molecule description generation dataset called MOLCAP (Molecule Caption). MOCAP is a collection of 81 million molecules used to produce molecular images for description generation. While IMG2SMI trains with only 1 million molecules, MOLCAP is large enough and comprehensive enough for any data-hungry approach. Besides our dataset and model, in our work, we provide a quantitative and qualitative evaluation for tokenization of SMILES string using SMILES, SELFIES [22], and Byte-Pair Encoding (BPE) [44]. Our evaluation suggests that SELFIES outperforms current uses of sub-word and BPE methods that have become popular because of their success in natural language processing. Finally, as part of our work, we provide a comprehensive evaluation of evaluation methods for Molecule Description Generation \ufb01nding traditional language processing metrics like ROUGE [24], and Levenshtein distance combined with molecular \ufb01ngerprinting provide a robust analysis method. In summary, the main contributions of this paper are as follows: 1. We introduce the IMG2SMI molecule prediction model. Unlike previous Neural Networkbased methods, IMG2SMI is optimized for chemistry and provides a stable, accurate, and high throughput method of generating molecule descriptions for molecular images. 2. We introduce a novel molecule description generation dataset named MOLCAP. The dataset features 81 million molecules and is large enough to enable most large data methods. 3. We provide a thorough study of methods of processing SMILES strings for image captioning, \ufb01nding SELFIES to be the most adept for image captioning. 2 Related Work 2.1 Chemistry Literature Processing Chemistry Literature processing is an established image processing task. Filippov et al. [14] create OSRA, an open-source and easy-to-use system, to understand chemistry literature and predict molecule descriptions in the form of SMILES. While effective, the OSRA system is designed for humans in the loop generation as predicted molecule descriptors require 30 seconds or less editing. This system can be useful to extract individual images of molecules for individual experimentation, it does not scale to high throughput processing. 
Systems like ChemReader [36], ChemGrapher[34], and ChemSemanticResolver [6] introduce improvements to OSRA by improving recognition of Bonds and R-Groups but they are still built on the traditional Optical Character Recognition structure of OSRA. To the best of our knowledge, DECIMER [38] is the \ufb01rst system to leverage advances in computer vision and natural language processing and apply them to molecule description generation. DECIMER [38] uses unaltered Show, Attend and Tell architecture [52] with a Inception V3 [45] based Convolutional feature extractor and a recurrent neural network-based character model for description generation. Rajan et al. prove the viability of neural-network-based methodologies applicability to chemistry literature which we leverage in our research. For further readings, please reference Rajan et Al.\u2019s [37] comprehensive review on optical, chemical structure recognition tools. 3 \f2.1.1 Molecular Representation The most widely used method for representing molecules in a machine-readable format is Simpli\ufb01ed Molecular Input Line Entry System (SMILES) [49]. SMILES is a non-unique ASCII character-based representation used to generate molecular structure as a two-dimensional graph which is akin to how chemists would draw the molecule. SMILES are machine-readable, and most smiles strings are under 20 characters. Smiles strings are depth-\ufb01rst traversal of molecules graphs, and as a result, there are multiple ways to represent the same molecule. For example, C(O)C, CC(O), CCO, OCC all represent the molecular structure of ethanol. Canonicalization converts all equivalent SMILES into one representation [50] to simplify multiple representations for the same molecule. Additionally, not all SMILES strings are valid molecules as parenthesis must have pairs, and certain atoms do not bond. DeepSMILES [33] only uses closing parenthesis and introduces a single symbol for ringclosing, improving the number of possible molecule representations which map to simple molecules. SELFIES (SELF-referencIng Embedded Strings) [22] is a formal Chomsky type-2 grammar with two self-referencing, recursive functions to ensure the generation of syntactically and semantically valid molecules. By \ufb02ipping a single bit in a valid molecule, 100% of SELFIES, 38.2% of DeepSMILES, and 18.1% of SMILES molecules are still valid. As shown in \ufb01gure 2 each molecule has associated SMILES, SELFIES, and DeepSELFIES representations which vary in length and format. As long as the string corresponds to a valid molecule, there is a deterministic, valid translation between the three molecule representations. Figure 2: A Molecule depicted as a two-dimensional graph where nodes are speci\ufb01c atoms and edges are their bonds. Each molecule has one valid SELFIES representation, one valid DeepSMILES representation, and potentially many valid SMILES representations. 2.2 Image Captioning Image Captioning is at the intersection of computer vision and natural language processing. While captioning systems and hybrid approaches are not a recent concept with the introduction of the COCO (Common Objects in Context) [25] allowed data-intensive neural-network-based captioning approaches to thrive. Xu et al. [52] provide a successful framework leveraging an Image Classi\ufb01cation based encoder which matched with LSTM based decoder. This Encoder-Decoder formulation has become very successful as it allows for straightforward encoder or decoder modi\ufb01cation to understand optimal architecture. 
More recently, advances in Transformer\u2019s [47] have been applied as decoder mechanisms such as with DETR [8] or with the visual encoder as with CrossViT [9]. 2.3 Neural Networks Applied Chemistry The availability of large datasets like the task-speci\ufb01c datasets such as MoleculeNet [51], and GuacaMol [7], and public molecule and reactions like the PubChem and ChEMBL databases has led to the broad application of neural network-based methods to chemistry. In molecule generation, methods like Generative Adversarial Networks [16] and Variational Auto Encoders [21] have encouraging results [7] [4]. In molecule property prediction, Graph Convectional Networks [26], and Message Passing Neural Network [30] have been used to predict quantum, physical, biophysics, physiological properties of molecules. Other systems like ChemBERTa [10] have been built on the availability of SMILES strings to create general molecule neural networks. To learn more about neural methods applied to chemistry, we suggest Elton et al.\u2019s study on Deep Learning for molecular design [13]. 4 \fDataset Name Size Original Purpose Average SMI Length Randomized CHEMBL 1562045 Molecule Generation 47.89 CHEMBL 462301 Molecule Generation 54.43 PubChem 77028926 General Chemistry 44.36 GDB13 2000000 Molecule Generation 22.28 Decorator Scaffolds 177024 Molecule Generation 39.88 MOLCAP 81230291 Molecule Description Generation 44.12 Table 1: Summary statistics of datasets used to created MOLCAP, their properties and average molecule length 3 Method Molecule description generation is an application of image captioning to the chemistry domain. Our model takes in an image of a molecule, X, and generates a molecular representation of the molecule Y. The molecule image is a N pixels by M pixels color or image generated by a molecule drawing program such as Chemdoodle or RDKIT. A SMILES string represents a molecule as a two-dimensional graph using a string of 1 to C characters. Where c is the length of the caption and V is the length of the vocabulary. y = {y1, y2..., yc}, yi \u2208RV (1) Building on Successful approaches in other domains, we describe the task as having separate encoding and decoding stages, each with its objective. As an image of a molecule is a two-dimensional representation, an encoder can act as a feature extractor which, given an image, produces a dense representation in a feature space. Then, leveraging the molecule representation, a decoder can produce a caption based on the extracted embedding. 3.1 Dataset Creation Due to lacking a large annotated dataset for experimentation, we have created MOLCAP. MOLCAP consists of 81 million SMILES strings mapping to 81 million molecules. To generate MOLCAP we combined existing available datasets from existing chemical databases [20] and various experiments in molecule generation [17] [4] [13] [3]. MOLCAP is unique as its molecules are much more complex than those which previous systems have been trained to recognize molecules under 40 characters in length. Each of the datasets used to create MOLCAP is publicly available in an open access format but licensing usage may vary for commercial applications 2 3 4. To create MOLCAP, \ufb01rst, we merge these independent datasets and keep unique molecules. Next, we convert all molecules to the canonical representation [50] and remove any molecules which do not have good smiles representations leaving 81,230,291 unique molecules. Table 1 provides detailed statistics on the sizes and attributes of the sources we use for MOLCAP. 
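The canonicalization and filtering step above, and the SELFIES representation used next, can both be illustrated with RDKit and the open-source selfies package by Krenn et al.; treat the exact selfies function names as a best-effort assumption if your installed version differs.

```python
from rdkit import Chem
import selfies as sf

# Different depth-first traversals of ethanol collapse to one canonical SMILES,
# which is how duplicate molecules are removed when building MOLCAP.
variants = ["C(O)C", "CC(O)", "CCO", "OCC"]
print({Chem.MolToSmiles(Chem.MolFromSmiles(s)) for s in variants})  # e.g. {'CCO'}

# Strings that do not correspond to valid molecules simply fail to parse
# and can be filtered out.
print(Chem.MolFromSmiles("C(("))      # None

# SMILES <-> SELFIES is a deterministic translation for valid molecules.
ethanol = sf.encoder("CCO")           # e.g. '[C][C][O]'
print(ethanol, "->", sf.decoder(ethanol))
```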
We produce character and SELFIES based dataset and following the \ufb01ndings of CHEMBERTA [10], we train a variety of Byte Pair Encoding (BPE) tokenizers for vocabulary size 100, 200, 500, 2000, and 20000. BPE is a hybrid between character and word-level representations, which can handle the diversity and scale of natural language corpora. BPE tokenizers are created by training custom BPE models for speci\ufb01c corpora size using Hugging Face\u2019s Tokenizers library 5. We produce different corpora to study how the modern tokenization approach can work with the world of molecule descriptors. BPE tokenization methods have shown to be incredibly effective in natural language as they leverage the composition nature of words and have become a popular processing method for SMILES strings. Next, we select 1,000,00 molecules at random for our training corpus and 5000 for our evaluation/validation dataset, and we use RDKIT 6 to create a 256x256 image for each molecule and produce a tokenized caption using the six tokenization methods previously described. Our work 2https://pubchemdocs.ncbi.nlm.nih.gov/downloads 3https://www.ebi.ac.uk/chembl/g/browse/compounds 4https://gdb.unibe.ch/downloads/ 5https://github.com/huggingface/tokenizers 6https://github.com/rdkit/rdkit 5 \fdoes not study how well models deal with variations in image creation and image size as we found most molecule drawing software leverages RDKit for image creation and early experiments with augmenting data by varying size did not show major differences in output. We have kept the evaluation portion of the dataset small as we did not see increased sensitivity with larger samples, evaluation on larger samples was slow, and this is the size of validation on other image captioning datasets such as MS-COCO. Each set of 1,000,000 molecules creates a 200GB dataset so the full image dataset for MOLCAP would be 16.24 TB which was size prohibitive for our initial experiments. 3.2 Evaluation Methodology In traditional image captioning, the evaluation metrics focus on the overlap in words present in the candidate caption compared to the reference caption. Following the standard evaluation methodology in image captioning, we employ ROUGE [24], and BLEU [35] to measure the overlap between candidates. BLEU and ROUGE measure the token overlap on tokenized SMILES strings, and as we did not see signi\ufb01cant differences based on variation in tokenizer, we use the tokenizer with a vocabulary size of 200. Since SMILES is a character-level representation, we also explore the use of Levenshtein Distance (LD) [31] as an evaluation metric. Spelling prediction or captioning task commonly uses LD to measure the edit distance in characters between the reference and caption. Additionally, we use the concept of an exact match, which we calculate by turning both target and candidate molecules into by their canonical form and searching for a direct match. It is important to note that the exact match metric is quite hard and even minor deviations in predicted molecular structure will be treated as complete failures. Besides traditional Natural Language Processing methods, we evaluate models based on task-speci\ufb01c metrics like caption validity and caption generation. Our \ufb01rst metric is the image captioning percentage since in molecule description generation, systems are not always able to recognize valid molecules, and as a result, it is common for them not to generate any caption whatsoever. 
Our second task-speci\ufb01c metric is the molecular validity of generated captions. As covered in our discussion of SMILES, caption generation is complex because few SMILES strings are valid molecules. Before evaluating how good the generated molecules descriptions are, we must evaluate how often systems can generate captions and how often these captions generate valid molecular graphs. Finally, we evaluate systems based on traditional molecular similarity methods using Molecule Fingerprinting and Tanimoto Similarity. Molecule \ufb01ngerprinting is a common practice of representing molecule structures with bit strings using various heuristics. In our experiments, we use RDK, Morgan, and MACCS \ufb01ngerprinting. RDK Fingerprinting [42] is a topological Fingerprinting is an enumeration of linear fragments from size 1 to 7, which produces a string 2048 bits long. Morgan \ufb01ngerprinting [40] is a form of Extended Connectivity Fingerprinting in which a molecule is decomposed into atoms and its neighbors and folded into a 2048 bit representation. MACCS \ufb01ngerprinting [12] consists of representing the presence of 166 molecular substructure in each molecule. For each of these molecules, we calculate the Tanimoto Similarity as shown in equation 2 (commonly known as Jaccard index), which measures the set intersection between two representations. Molecules with a Tanimoto similarity of over 0.85 have similar characteristics [5] and activities [11]. J(A, B) = A \u2229B A \u222aB = A \u2229B |A| + |B| \u2212|A \u2229B| (2) 4 Experiment In order to evaluate the validity of our model and our dataset, we have trained a wide variety of models and compared their performance to the existing benchmark of OSRA 7. To evaluate OSRA, we leverage the python-based implementation built by Beard et al. [6] using their suggested con\ufb01dence level of 70 percent and captioning each image independent. In order to understand how our various metrics are related and produce a naive baseline we select a random set of 5000 molecules and compute metrics on every molecule pair in the set. This random baseline is meant to simulate the effect of choosing a molecule at random from possible molecules when generating a caption. To 7We experimented with using neural network-based methods such as DECIMER, but the model failed to generate successful captions for any molecules in the MOLCAP dataset which we attribute to the lack of task speci\ufb01c tuning. 6 \fcompare to neural methods such as DECIMER, we leveraged their open source code 8 and their best performing pretrained model without any task speci\ufb01c training. 4.1 IMG2SMI Model Description As shown in Figure 1 IMG2SMI consists of an image encoder, often referred to as backbone, and a caption generation decoder. Our implementation builds on Carion et al.\u2019s [8] work on image captioning. The image encoder consists of the 4th layer of a RESNET-101 [18] model pretrained on IMAGENET [41]. By keeping only convolutions, the remaining model produces a 2048 dimensional vector for each molecule which represents the molecule in a dense feature space. The decoder builds on the encoder-decoder structure of Vaswani et al. [47] and has three stacked layers of transformer encoders and decoders, eight attention heads, and 2048 dimensional for the feed-forward networks. 
We train with a batch size of 32 for five epochs, with dropout set to 0.1 and a layer norm epsilon of 1e-12; we use AdamW [27] as our optimizer with a weight decay of 1e-4, an initial learning rate of 5e-5, and a random seed of 42. Each model was trained on a single 2080 Ti GPU, and training lasted approximately 5 hours per epoch (or about a day for a full training run). Our code builds on the open source implementation of DETR (https://github.com/facebookresearch/detr), and the configurable model size and code make it simple to reproduce our runs and to improve on our methods. In our hyperparameter sweep we explored initial learning rates of 1e-4, 5e-4, 1e-5, 3e-5, 5e-5, and 1e-6, finding 5e-5 to be optimal. We evaluated our highest performing model with a variety of random seeds, including 82, 56, 11, and 47, but saw no major variation in performance. We perform a deep ablation study in which we vary the image encoder and decoder, vary the structure of the tokenized caption, and combine variations of the two. To vary the image encoder, we experiment with fine-tuning the encoder and with keeping it fixed. To explore variation in the decoder, we use an LSTM + attention decoder identical to that used by Xu et al. [52].
4.2 Results
As we can see from the results in Table 2, IMG2SMI outperforms all other existing systems, exceeding the OSRA baseline's MACCS Fingerprint Tanimoto Similarity (FTS) by over 163%. On other metrics like ROUGE, a similar story is told: IMG2SMI achieves almost a 10x improvement. It is worth calling out that these sizable gaps are due to the difficulty of the MOLCAP dataset. Since the median MOLCAP caption is over 45 characters long, the associated molecules are much larger and more complex than those on which OSRA and DECIMER were tested. Moving our focus to molecular similarity under the various fingerprinting methods, it becomes quite clear how well the transformer encoder is able to create relevant molecule descriptions despite variability in the input. It is worth noting that, despite major changes in most metrics, the Levenshtein distance stays quite high, at 21. We believe this is due to IMG2SMI's tendency to produce longer captions: on the evaluation set, IMG2SMI's average caption is 47.66 characters compared to the reference average of 43.94. We believe that with larger variation in molecular caption input and data size, IMG2SMI would come closer to approximating the true average length. One surprising pair of metrics, which really shows how much more work is required in the field, is the exact match percentage and the BLEU score. Despite high performance on most metrics, exact match is still under 10% and the BLEU score improves by only 17%. Qualitative analysis points to minor variations in SMILES strings that have a large edit distance, usually caused by alterations in double bonds that produce near-identical molecules with major variations in SMILES. It is worth noting that, independent of which method the model uses for molecule generation (SELFIES, DeepSMILES, SMILES, or BPE variations of SMILES), our comparisons are performed on the translated SMILES strings. It is important to compare the baseline performance to random molecule selection, as this highlights the performance of existing systems. Essentially, existing methods predict molecule descriptions at only a slightly better rate than choosing a molecule at random. This means that using existing systems on complex molecules is not feasible.
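The fingerprint Tanimoto similarities (FTS) discussed in these results can be computed with RDKit along the lines of the sketch below. This is our illustration, not the paper's evaluation code; the Morgan radius of 2 is an assumption, while the 2048-bit RDK/Morgan sizes and MACCS keys follow Section 3.2.

from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem, MACCSkeys

def tanimoto_scores(ref_smiles, cand_smiles):
    """Return MACCS/RDK/Morgan fingerprint Tanimoto similarities, or None if either
    string does not parse into a valid molecular graph (an invalid caption)."""
    ref, cand = Chem.MolFromSmiles(ref_smiles), Chem.MolFromSmiles(cand_smiles)
    if ref is None or cand is None:
        return None
    return {
        "MACCS":  DataStructs.TanimotoSimilarity(MACCSkeys.GenMACCSKeys(ref),
                                                 MACCSkeys.GenMACCSKeys(cand)),
        "RDK":    DataStructs.TanimotoSimilarity(Chem.RDKFingerprint(ref, maxPath=7, fpSize=2048),
                                                 Chem.RDKFingerprint(cand, maxPath=7, fpSize=2048)),
        # radius 2 is a common default, not a value confirmed by the paper
        "Morgan": DataStructs.TanimotoSimilarity(AllChem.GetMorganFingerprintAsBitVect(ref, 2, nBits=2048),
                                                 AllChem.GetMorganFingerprintAsBitVect(cand, 2, nBits=2048)),
    }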
Extending this comparison to IMG2SMI, we can see that IMG2SMI is not only much better but is also above the Tanimoto similarity target of 0.85, at which molecules tend to behave similarly. This means that, unlike previous systems, on average IMG2SMI will produce a caption whose molecule at least behaves similarly to the desired molecule.
4.2.1 Ablation Experiments
To further study how IMG2SMI performs, we explore how tokenization strategies and encoder and decoder variations affect model performance. In our first experiments we vary the decoder to use an RNN + attention decoder as used by Xu et al. [52]. As shown in Table 3, the choice of decoder has a huge effect, as the RNN-based model is only able to slightly outperform the random molecule baseline. Building on the use of either an RNN or a transformer, we explore the effect of fixing the weights of the feature extractor during training; in other words, we use a feature extractor that is trained for image classification without any task-specific training. While the performance of the RNN + fixed encoder drops beneath our random baseline, surprisingly, the transformer with a fixed encoder still outperforms all existing methods. We believe this is because the learned shape and edge extractor provides enough of a signal that the transformer can learn some form of representation that comes close to approximating molecule attributes when measured by Tanimoto MACCS similarity. The improvement found when we fine-tune our feature extractor provides compelling evidence that the feature extractor is able to represent molecules in a dense feature space extremely well and likely could be applied to many other tasks. To evaluate the effect of the various tokenization strategies, we use our tokenized datasets and train with the exact same parameters. Looking at Table 4, we find the tokenization method has a deep effect on molecular description generation. Few captions generated by the models using anything but SELFIES are actually valid molecules. Manual inspection of generated captions demonstrates issues similar to those discussed by Krenn et al. [22], as many captions have unmatched parentheses or generally irregular bonds. Fine-tuning the image encoder improves performance for all models, as shown in Table 3. It is worth noting that in our experiments we attempted to alleviate this problem by leveraging beam search, but this did not improve performance, as increasing the beam size and branching the beam at each decoding step actually led to decreased performance. Using these results, we recommend future researchers focus their efforts on SELFIES, as it is likely a more fruitful representation. If researchers must use some form of BPE tokenization, we recommend a vocabulary size of 2000, as it produces the best results on average.
4.3 Metrics for Molecule Description Generation
Broadly focusing on the variation found in our evaluation metrics, we can see that some metrics are more suited for evaluation than others. Despite massive variation in other metrics, BLEU performance is virtually unchanged across systems and is a poor predictor of molecular similarity. Unlike BLEU, ROUGE provides a more sensitive metric, showing nearly a 10x difference between OSRA and IMG2SMI. We believe the difference between BLEU and ROUGE is related to the low exact match shared across systems.
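For concreteness, the string-level metrics discussed in this section (exact match over canonical forms and Levenshtein distance) can be computed as in the following sketch. This is our illustration, not the paper's evaluation code.

from rdkit import Chem

def levenshtein(a, b):
    # classic dynamic-programming edit distance over characters
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,            # deletion
                           cur[j - 1] + 1,         # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def exact_match(ref_smiles, cand_smiles):
    # compare canonical SMILES so equivalent spellings of the same molecule count as a match
    ref, cand = Chem.MolFromSmiles(ref_smiles), Chem.MolFromSmiles(cand_smiles)
    if ref is None or cand is None:
        return False
    return Chem.MolToSmiles(ref) == Chem.MolToSmiles(cand)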
IMG2SMI recalls almost 63% of tokens but has a low precision indicating that IMG2SMI is including additional tokens. Molecule \ufb01ngerprinting metric agree in direction but diverge in magnitude which we attribute to the varying sensitivity of each method. MACCS only had 166 (bits) substructures thus is a simpler representation compared to Morgan or Path(2048 bits). It is worth noting \ufb01ngerprinting methods metrics are recall focused as they measure the shared presence of a substructure and does not account for potential duplication\u2019s of a structure. Model Exact Match(%) Levenshtein BLEU-1 ROUGE MACCS FTS RDK FTS Morgan FTS OSRA 0.0004 32.76 0.0511 0.0684 0.3600 0.279 0.2677 Random Molecule 0.0000 38.32 0.0532 0.0422 0.3378 0.2229 0.1081 DECIMER 0.0000 54.00 0.0000 0.0000 0.0000 0.0000 0.0000 IMG2SMI 7.240 21.13 0.0615 0.6240 0.9475 0.902 0.8707 Table 2: Performance of systems for molecule description generation on MOLCAP. Model Exact Match(%) Levenshtein BLEU-1 ROUGE MACCS FTS RDK FTS Morgan FTS Random Molecule 0.0000 38.32 0.0532 0.0422 0.3378 0.2229 0.1081 IMG2SMI(RNN) 0.0000 53.02 0.0289 0.0225 0.1526 0.0954 0.0451 IMG2SMI(RNN)-F 0.0000 33.63 0.0549 0.0624 0.4180 0.2309 0.1328 IMG2SMI(Transformer) 2.460 24.70 0.0584 0.5668 0.7674 0.5724 0.4944 IMG2SMI(Transformer)-F 7.240 21.13 0.0615 0.6240 0.9475 0.902 0.8707 Table 3: Effects of variation of encoder and decoder on molecule description generation using SELFIES 8 \fVocab Fine-tune Encoder Images Captioned (%) Valid Captions (%) OSRA(Baseline) N/A 86.5 65.2 SELFIES No 61.8 61.1 SELFIES Yes 99.4 99.4 Character No 1.3 1.0 Character Yes 2.2 2.1 BPE-100 Yes 3.7 3.5 BPE-100 No 2.9 2.9 BPE-200 No 3.7 3.5 BPE-200 Yes 21.5 21.1 BPE-500 No 3.3 3.1 BPE-500 Yes 8.7 8.5 BPE-2000 No 10.2 10.0 BPE-2000 Yes 20.2 20.0 BPE-20000 No 5.5 5.4 BPE-20000 Yes 18.1 18.0 Table 4: Effects of SMILES Tokenization Strategy. SELFIES vastly outperforms other systems but if a BPE is to be used a vocabulary size of 2000 performs best when average with \ufb01xed and \ufb01ne-tuned encoder 4.4 Limitations Despite competitive accuracy on molecular similarity metrics, even the best performing IMG2SMI model shows a sizable gap in terms of exact match, ROUGE, and LD. As IMG2SMI cannot have a complete exact match on even 10% of images we believe model performance is likely able to improve drastically. It is also important to note that unlike traditional image processing methods, neuralnetwork based methods do not perform well on data that falls outside of the training distribution. Since MOLCAP is built around complex molecules and the average molecular length > 40 tokens, the model does not provide good captions for short molecules. Since traditional systems like OSRA do well with short molecules we believe using the two systems in conjunction is likely to yield the best results. In the future we seek to improve IMG2SMI and MOLCAP by providing a more broad distribution of molecule sizes. 5" + }, + { + "url": "http://arxiv.org/abs/2108.02170v1", + "title": "Curriculum learning for language modeling", + "abstract": "Language Models like ELMo and BERT have provided robust representations of\nnatural language, which serve as the language understanding component for a\ndiverse range of downstream tasks.Curriculum learning is a method that employs\na structured training regime instead, which has been leveraged in computer\nvision and machine translation to improve model training speed and model\nperformance. 
While language models have proven transformational for the natural\nlanguage processing community, these models have proven expensive,\nenergy-intensive, and challenging to train. In this work, we explore the effect\nof curriculum learning on language model pretraining using various\nlinguistically motivated curricula and evaluate transfer performance on the\nGLUE Benchmark. Despite a broad variety of training methodologies and\nexperiments we do not find compelling evidence that curriculum learning methods\nimprove language model training.", + "authors": "Daniel Campos", + "published": "2021-08-04", + "updated": "2021-08-04", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "main_content": "Introduction Seeking to represent natural language, researchers have found language models (LM) with Sesame Street-inspired names [1] [2] [3] to be incredibly effective methods of producing language representations (LR). These LM\u2019s have leverage transfer learning by training on a large text corpus to learn a good representation of language which can then be used in a down steam task like Question Answering or Entity Resolution. While these LMs have shown to be excellent methods to enable language understanding, the ability to train these models is becoming increasingly computationally expensive [4]. Since model performance is closely tied to the size of training data, model size, and compute used to train [5] the bulk of existing research has focused on scaling these aspects without much focus on increasing ef\ufb01ciency of training. Seeking to explore what methods could be used to make LM training more ef\ufb01cient we study the effect of curriculum learning by training ELMo with a wide variety of curricula. Curriculum learning (CL) is a training methodology which applies structure to a models training data. CL has been studied broadly in natural language processing and has been very successful in domains like Neural Machine Translation (NMT) where CL based models are able to train faster and produce better results [6] [7] [8] than unstructured, stochastic sampling. Focusing on LMs, Xu et al. [9] showed that CL can be used in LM \ufb01netuning as a way to improve task performance. Despite an abundance of work exploring CL and LMs to the best of our knowledge we are the \ufb01rst to examine the effect of curriculum learning in LM pre-training and transfer performance. To evaluate the effect of CL on LMs we train ELMo with a variety of curricula on the wikitext-2 and wikitext-103 [10] without modi\ufb01cation of training time or model hyperparameters. We evaluate model performance on the pre-training task and on the GLUE Benchmark [11] building on the work of Competence Based Curriculum Learning [12] by modifying training sampler within the LM to produce a dataset with gradually increasing dif\ufb01culty 2. The contributions of our work are: \u2022 Exploration of the effects of curriculum learning for language modeling \ufb01nding no clear improvement to models that use curriculum methods for training. \u2022 Experiments suggesting random curriculum in which the structure of the training regime is random can be just as effective as linguistically motivated methods. 
\u2217Work done pursing Masters Degree at University of Washington 2Code and results available at https://github.com/spacemanidol/CurriculumLearningForLanguageModels arXiv:2108.02170v1 [cs.CL] 4 Aug 2021 \fCurriculum Learning for Language Modeling 2 Related Work 2.1 Curriculum Learning CL subset of training regimes which introduce structure to improve training ef\ufb01ciency, model performance, or model model robustness by optimizing what kind of information a model has access at each training step. Experiments with RNNs [13] suggested that learning of complex grammatical structure improves when the initial examples the models learn with are more easier. Experiments in modifying language modeling training data \ufb01nd a lower loss can be achieved by training on incrementally more dif\ufb01cult data [14]. Recently, competence based curriculum [6] has been used to improve machine translation progressively modifying the training corpus until it matches the original distribution. It has been used to reduce training time by up to 70% and improve BLEU performance by 2.2 points on the WMT dataset. For further readings about curriculum learning, applications and current bottlenecks we recommend Soviany et al.\u2019s survey [15] 2.2 Language Modeling Language modeling is a way to assign a probability distribution over some textual representation. In other words, if the task is to model n-grams, the probability of a current input is the probability of a token wi given the previous i tokens. Language Models like ELMo [1] and BERT [2] leverage large text corpora to learn language representations that can be used for downstream tasks like text classi\ufb01cation or question answering. While LMs lead to large improvement in performance for downstream tasks they are both expensive and complex to train. A single training run of a model like GPT-2 can cost upward of $40,000, the architecture search and hyperparameter tuning can be upwards of $3,000,000, and the C02 released by training one of these models can be similar to the C02 released in the entire life-cycle of a car [16]. 3 Method Language modeling is a way to assign a probability distribution over some textual representation. This probability distribution is commonly modeled as the probability of a current token wi given the previous i tokens as formally represented in equation 2. Using language modeling as a pre-training method, LMs learn representations which can be used in downstream tasks. Since language has structure, we believe that structuring the pre-training methodology can lead to improved model performance. To introduce structure into LM training we leverage Platanios et al.\u2019s competence based curriculum (CBC)[6] as shown in Algorithm 1. CBC uses a notion of model competence and sample dif\ufb01culty to control what a model learns. First, the corpus, X, a collection of samples S, where each sample si is a sequence of words si = wi o, wi 1, ..., wi n is sorted by dif\ufb01culty using a which Using a heuristic like sentence length or unigram rarity is assigned a dif\ufb01culty \u03f5si = [0, 1]. Given a processed corpus, a model is assigned a initial competence \u03bb0 and a competence increment lambdaincrement. A model\u2019s competence score is a representation of how far along in a training regime the model is. At each training step, a model samples from data that is lower than its current competence, updates its weights, and increases its competence. 
The model is only able to train on samples whose difficulty satisfies εsi ≤ λt. Using CBC we explore 8 proxies for sample difficulty: no curriculum, random, sample length, unigram sample probability, bigram sample probability, trigram sample probability, part-of-speech diversity (POS), and sample dependency parse complexity (DEP).
Algorithm 1: CBC Training Regime
Input: X, λ0, λincrement
Result: Model trained with a competence-based curriculum
  Compute difficulty εsi for each si ∈ X;
  Compute the cumulative density of εsi;
  λt = λ0;
  for training step t = 1, ..., n do
    Sample batch b from X such that εsi < λt;
    Train on batch b;
    λt+1 = λt + λincrement;
  end
For each methodology, for each si in X, we compute a difficulty value εsi and then sort the dataset by this difficulty score. Using the sorted dataset we compute the cumulative density function (CDF), giving each sample a difficulty score εsi ∈ [0, 1]. The no-curriculum setting uses λ0 = 1, which means the model trains on samples stochastically; it serves as a baseline. A random curriculum is generated by assigning values of εsi at random and establishes the effect of any arbitrary structure. The remaining six heuristics are based on common NLP difficulty metrics and linguistically motivated heuristics.
3.0.1 Sample Length
Sample length builds on the idea that it is harder to model longer sentences, as longer sentences require better tracking of dependencies. It is calculated by creating a CDF on sentence-length-εsi = length(si).
3.0.2 Sentence Entropy: N-gram difficulty
Sentence entropy builds on the notion that words of widely varying frequency in the corpus can be difficult to model. Models, if assumed to behave like humans, would find it difficult to understand the meaning of a word if they do not see it in the corpus or do not have a diversity of usages from which to infer its meaning. Since the statistical strength of training samples with rare words is low, and the word embeddings learned early in training are likely to have high variance, exposing a model to rare words early can result in badly estimated representations. To quantify this difficulty we propose producing a sentence entropy for each sentence with respect to its unigram, bigram, and trigram probabilities. These are calculated using standard sample entropy calculations as shown below. The sample entropy for each n-gram can be thought of as the probability of the sample occurring under an approximate, naive language model that assumes words are sampled independently. Samples are scored by calculating the product of the n-gram log likelihoods of the sample. Note that we are not calculating the conditional probability of each word given the preceding N words, but the probability of the n-gram given the text corpus. The calculation of εsi is shown in equation 1, where u_c, b_c, and t_c are the counts of unique unigrams, bigrams, and trigrams in the corpus, C is the corpus, c(y) is the count of y in a sample, x ∈ C is a sample in the corpus, wi ∈ x is a word in a line, and l(x) is the length of x in n-grams.
\[
p(w_n) = \frac{\sum_{x \in C} c(w_n)}{u_c}, \qquad
p(w_n, w_m) = \frac{\sum_{x \in C} c(w_n, w_m)}{b_c}, \qquad
p(w_n, w_m, w_j) = \frac{\sum_{x \in C} c(w_n, w_m, w_j)}{t_c},
\]
\[
\text{unigram-}\epsilon(s_i) = \prod_{n=0}^{l(s_i)} \log(p(w_n)), \quad
\text{bigram-}\epsilon(s_i) = \prod_{n=0}^{l(s_i)-1} \log(p(w_{n-1}, w_n)), \quad
\text{trigram-}\epsilon(s_i) = \prod_{n=0}^{l(s_i)-2} \log(p(w_n, w_{n+1}, w_{n+2})) \quad (1)
\]
3.0.3 Dependency Tree
Sentences are often modeled as dependency trees to capture the interaction between words and groups of words in a text sample. While not infallible, sentences that have a deeper tree are usually more complex and, as a result, more difficult. We leverage the language processing framework spaCy (spacy.io) to generate parse trees for each sample and measure the depth of each tree. This information is then used to calculate difficulty as dep-εsi = depth(si). Since there are fewer unique values for tree depth, this method can be inferred to have a high commonality with random difficulty.
3.1 Part of Speech Diversity
Another core part of language complexity can be derived from the diversity of parts of speech in a sentence. We believe that more difficult sentences feature a higher diversity of parts of speech (POS), and we use spaCy's part-of-speech tagger to produce the set of POS tags in each sample and calculate difficulty as pos-εsi = len(set(pos(si))).
\[
P(w_1, \ldots, w_m) = \prod_{i=1}^{m} P(w_i \mid w_1, \ldots, w_{i-1}) \approx \prod_{i=1}^{m} P(w_i \mid w_{i-(n-1)}, \ldots, w_{i-1}) \quad (2)
\]
4 Experiments
To evaluate the effect of curriculum learning on language modeling, we train ELMo models, varying the training corpus and using our aforementioned difficulty proxies to generate various language models. Training leverages the well-established language modeling benchmarks wikitext-2 and wikitext-103 [10]; details can be found in Table 1. These datasets collect verified good and featured articles from English Wikipedia, contain 2 million and 103 million tokens respectively, and were selected for their variation in size and speed of training. After training, each language model is evaluated on its performance on the pre-training corpus (measured in perplexity) and its transfer ability on the General Language Understanding Evaluation benchmark (GLUE) [11]. GLUE is a set of resources focused on the evaluation of natural language understanding systems. This benchmark pools eleven sentence-level language understanding tasks.
Corpus Name | Vocabulary Size | Tokens | Lines | Sentences
wikitext-2 | 33278 | 2507007 | 44836 | 131262
wikitext-103 | 267735 | 103690236 | 1809468 | 5343947
1B Word Benchmark | 793471 | 829250940 | N/A | N/A
Table 1: Training corpus details
4.1 Pre-Training Details
Using the 16 curricula (8 for each corpus) we train an ELMo model using the original code (https://github.com/allenai/bilm-tf) with a modified batch sampler created for competence-based sampling. For baselines, we train ELMo models without our modified CBC sampling on wikitext-2 and wikitext-103. Following the original work, we train each curriculum-model variant for 10 epochs on the pre-training corpus, use 2 stacked 4096-dimensional BiLSTMs, a dropout of 0.1, a batch size of 128, and a context window of 20 tokens. Training was performed using 3 Nvidia 2080 Ti GPUs, requiring about 30 hours for wikitext-103 and about an hour for wikitext-2. For the CBC training hyperparameters, we performed a grid search over λincrement and λ0, finding the lowest training perplexity at λ0 = 1e-1 for wikitext-2 and λ0 = 1e-3, λincrement = 1e-5 for wikitext-103.
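To make the training regime concrete, the following is a minimal sketch (ours, not the released code) of unigram difficulty scoring and the competence-gated batching of Algorithm 1. We read Equation 1 for unigrams as the log-likelihood of a sample under independent unigram draws, negated so that rarer words yield higher difficulty; the corpus handling and the naive per-step recomputation of the eligible pool are simplifications.

import math
import random
from collections import Counter

def unigram_difficulty(corpus):
    """corpus: list of tokenized samples (each a list of tokens).
    Returns a difficulty score in (0, 1] per sample via the empirical CDF."""
    counts = Counter(w for sample in corpus for w in sample)
    total = sum(counts.values())
    # higher negative log-likelihood = rarer words = harder sample (our reading of eq. 1)
    raw = [-sum(math.log(counts[w] / total) for w in sample) for sample in corpus]
    order = sorted(range(len(corpus)), key=lambda i: raw[i])
    cdf = [0.0] * len(corpus)
    for rank, idx in enumerate(order, 1):
        cdf[idx] = rank / len(corpus)
    return cdf

def competence_batches(corpus, difficulty, batch_size, lam0=1e-3, lam_inc=1e-5, steps=10000):
    """Algorithm 1: at step t, only samples with difficulty <= lambda_t are eligible;
    batches are drawn with replacement from that growing pool."""
    lam = max(lam0, min(difficulty))   # guarantee at least one eligible sample
    for _ in range(steps):
        eligible = [s for s, d in zip(corpus, difficulty) if d <= lam]  # recomputed naively for clarity
        yield random.choices(eligible, k=batch_size)                    # sampling with replacement
        lam = min(1.0, lam + lam_inc)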
In the original implementation, the training loader loads a file, shuffles all the lines, and samples batches by iterating through the shuffled corpus. Our method loads the full corpus and then selects a batch at random from the examples that meet the model's current competence. This changes data sampling from unconstrained random sampling without replacement to sampling with replacement. Since our competence-based sampling yields a variety of sample lengths, we use the padding token <PAD>, as is common in NMT. All samples are padded to length 20 and the loss on these padding tokens is set to zero. Because of padding, for wikitext-103 we introduce 12,204,311 additional tokens, equating to approximately 12 percent more FLOPs.
4.2 Transfer Learning
After the models have been pretrained, we evaluate GLUE performance using the JIANT toolkit [17]. JIANT is an open-source tool for conducting multi-task and transfer learning experiments in English, which we use to implement the GLUE benchmark. JIANT builds on the notion of a configuration, which provides all settings needed to run and reproduce an experiment in a simple text file. JIANT provides consistent data processing, classifier implementation, and evaluation to ensure that users of the framework can focus on the outputs and not worry about implementing benchmarking tasks like GLUE. JIANT uses the pretrained model weights along with a multi-layer perceptron with 512 hidden dimensions to train on each GLUE task. Each JIANT experiment fixes training identically across tasks and inputs, using a batch size of 8, a random seed of 42, an initial learning rate of 1e-1, and dropout of 0.2. Training of each model continues until the learning rate dips below 1e-6 or the model performance has not improved in 10 epochs. Unless another metric is explicitly mentioned, the GLUE sub-task metric is accuracy.
4.3 Experimental Results
Focusing on model pretraining performance, despite varying λ0 and λincrement, no implementation of CBC is able to approach the baselines in terms of perplexity on the held-out portions of the wikitext-* datasets. Complete graphs can be found in the appendix, but all curricula perplexities, including the baseline, are orders of magnitude higher than stochastic sampling. On wikitext-2, the best performance is achieved by the curricula baseline (λ0 = 1) with a perplexity of 770, followed by random with a perplexity of 2105, both well above the stochastic baseline's 151. We believe this is caused by the change in dataset distribution introduced by our curriculum learning implementation. Similar effects are seen on wikitext-103, where, unlike stochastic sampling, which achieves a perplexity of 36, curriculum methods are unable to achieve a perplexity under one thousand. Surprisingly, as the data size scales, we see larger volatility in validation perplexity during training, which we attribute to the constantly shifting sample distribution caused by curriculum methods. As we move our focus to GLUE results for the wikitext-2-based models in Table 2, we find that curriculum methods generally outperform stochastic sampling by 10%. We do not find strong evidence that the structure of the curriculum matters, as the no-curriculum setting (λ0 = 1) performs better than four other curricula and the baseline.
Perhaps most surprising, random outperforms the baseline when measured by overall GLUE score, despite there being no formal structure in the training regime. Observing variability at the individual task level, we find that only CoLA, STS-B, and SST show broad variability in performance. We believe this is because these tasks are smaller and more linguistically challenging. Focusing on results for the larger corpus in Table 3, we find the trends from wikitext-2 no longer hold, as top performance is achieved by the unmodified stochastic baseline. We also note that the ordering of system performance does not hold across datasets, and as the pretraining dataset grows the variability between models decreases. Similar to the smaller corpus, we find the highest sensitivity in CoLA, and the variability in SST and STS-B becomes more muted. Surprisingly, given that it had the worst pretraining perplexity, the trigram curriculum generates the best transfer performance. Overall, we find that CBC training provides worse validation perplexity but improved performance on transfer tasks when the pretraining corpus is small. We believe this reinforces the importance of the size of the pretraining corpus, since a large corpus allows the model to learn better language representations without any structured training. We also find a large disconnect between model pretraining perplexity and transfer task performance, as performance on one is not predictive of the other.
Method | Overall | CoLA | SST | MRPC | STS-B | QQP | MNLI | QNLI | RTE | WNLI | DX
dep | 0.63 | 0.19 | 0.73 | 0.85/0.78 | 0.71/0.71 | 0.74/0.78 | 0.60 | 0.75 | 0.58 | 0.56 | 0.11
unigram | 0.63 | 0.18 | 0.77 | 0.86/0.78 | 0.68/0.67 | 0.74/0.79 | 0.60 | 0.75 | 0.56 | 0.54 | 0.13
trigram | 0.63 | 0.15 | 0.76 | 0.84/0.76 | 0.70/0.69 | 0.73/0.78 | 0.62 | 0.76 | 0.54 | 0.56 | 0.14
length | 0.63 | 0.19 | 0.75 | 0.84/0.77 | 0.66/0.65 | 0.73/0.78 | 0.60 | 0.75 | 0.57 | 0.56 | 0.13
no curricula | 0.62 | 0.15 | 0.75 | 0.84/0.77 | 0.71/0.71 | 0.73/0.78 | 0.61 | 0.72 | 0.54 | 0.56 | 0.12
bigram | 0.62 | 0.18 | 0.77 | 0.86/0.78 | 0.68/0.67 | 0.74/0.79 | 0.60 | 0.75 | 0.56 | 0.44 | 0.13
random | 0.61 | 0.00 | 0.76 | 0.85/0.78 | 0.70/0.70 | 0.72/0.78 | 0.61 | 0.75 | 0.58 | 0.56 | 0.14
pos | 0.61 | 0.00 | 0.74 | 0.84/0.77 | 0.66/0.66 | 0.71/0.77 | 0.61 | 0.75 | 0.59 | 0.56 | 0.16
baseline | 0.59 | 0.00 | 0.70 | 0.85/0.78 | 0.66/0.66 | 0.70/0.75 | 0.59 | 0.72 | 0.54 | 0.56 | 0.13
length | 0.53 | 0.01 | 0.75 | 0.81/0.67 | 0.71/0.71 | 0.54/0.68 | 0.33 | 0.51 | 0.59 | 0.52 | 0.01
Table 2: GLUE results for CBC models trained on wikitext-2.
4.4 Failure of Competence Based Curriculum
In our experiments we were quite surprised by our implementation of the competence-based curriculum failing to learn the training data, as shown by the high perplexity on the wikitext-* datasets. Based on the changes in validation perplexity, we believe the model is over-fitting on the altered training data. We believe the cause of this is our hyperparameter selection for λ0 and λincrement. We note that since each method is effectively sampling from a different training distribution, training perplexities are not directly comparable. Additionally, if we look at the differences in the validation perplexity curves of the various methods, it is apparent that they are not learning at the same rate. Some methods, like DEP and POS, do not see major fluctuations, indicating the chosen curriculum parameters work well, while many of the n-gram methods consistently fluctuate in a similar fashion, indicating the chosen hyperparameters are suboptimal for them.
Given the non trivial computational cost to explore \u03bb0 and \u03bbincrement for each method 5 \fCurriculum Learning for Language Modeling Method Overall Cola SST MRPC STS-B QQP MNLI QNLI RTE WNLI DX baseline 0.67 0.28 0.86 0.87/0.80 0.77/0.77 0.72/0.76 0.64 0.76 0.61 0.54 0.14 trigram 0.66 0.21 0.85 0.87/0.80 0.78/0.78 0.75/0.80 0.66 0.77 0.56 0.55 0.14 no curriculum 0.66 0.21 0.83 0.87/0.8 0.77/0.77 0.75/0.79 0.64 0.77 0.58 0.56 0.15 bigram 0.66 0.18 0.83 0.85/0.79 0.77/0.77 0.75/0.79 0.65 0.77 0.56 0.56 0.14 length 0.66 0.21 0.82 0.85/0.72 0.77/0.77 0.73/0.79 0.63 0.75 0.58 0.56 0.14 unigram 0.65 0.19 0.82 0.86/0.79 0.76/0.75 0.75/0.79 0.63 0.75 0.57 0.56 0.13 random 0.65 0.18 0.84 0.86/0.79 0.77/0.77 0.75/0.80 0.64 0.77 0.58 0.49 0.14 pos 0.65 0.16 0.83 0.86/0.79 0.76/0.76 0.75/0.79 0.63 0.73 0.57 0.56 0.14 dep 0.64 0.23 0.85 0.86/0.78 0.78/0.78 0.75/0.79 0.64 0.76 0.54 0.42 0.14 Table 3: GLUE results for CBC methods trained on wikitext-103. and the disconnect seen between pre-training perplexity and performance on GLUE we did not perform additional hyperparameter optimization. 5" + }, + { + "url": "http://arxiv.org/abs/1908.01386v1", + "title": "Reconstruction of the magnetic field for a Schr\u00f6dinger operator in a cylindrical setting", + "abstract": "In this thesis we consider a magnetic Schr\\\"odinger inverse problem over a\ncompact domain contained in an infinite cylindrical manifold. We show that,\nunder certain conditions on the electromagnetic potentials, we can recover the\nmagnetic field from boundary measurements in a constructive way. A fundamental\ntool for this procedure is a global Carleman estimate for the magnetic\nSchr\\\"odinger operator. We prove this by conjugating the magnetic operator\nessentially into the Laplacian, and using the Carleman estimates for it proven\nby Kenig-Salo-Uhlmann in the anisotropic setting, see [KSU11a]. The conjugation\nis achieved through pseudodifferential operators over the cylinder, for which\nwe develop the necessary results.\n The main motivations to attempt this question are the following results\nconcerning the magnetic Schr\\\"odinger operator: first, the solution to the\nuniqueness problem in the cylindrical setting in [DSFKSU09], and, second, the\nreconstruction algorithm in the Euclidean setting from [Sal06]. We will also\nborrow ideas from the reconstruction of the electric potential in the\ncylindrical setting from [KSU11b]. These two new results answer partially the\nCarleman estimate problem (Question 4.3.) proposed in [Sal13] and the\nreconstruction for the magnetic Schr\\\"odinger operator mentioned in the\nintroduction of [KSU11b]. To our knowledge, these are the first global Carleman\nestimates and reconstruction procedure for the magnetic Schr\\\"odinger operator\navailable in the cylindrical setting.", + "authors": "Daniel Campos", + "published": "2019-08-04", + "updated": "2019-08-04", + "primary_cat": "math.AP", + "cats": [ + "math.AP" + ], + "main_content": "Introduction 2 1.1 Setting and main results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4 1.2 Structure of the thesis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5 2 Preliminaries 6 2.1 Fourier analysis and distributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6 2.1.1 Distributions and Sobolev spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . 6 2.1.2 Fourier analysis on smooth functions . . . . . . . . . . . . . . . . . . . . . . . . . 
7 2.1.3 Fourier analysis on tempered distributions . . . . . . . . . . . . . . . . . . . . . . 9 2.2 Function spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9 2.3 Dirichlet problem: de\ufb01nitions and basic facts . . . . . . . . . . . . . . . . . . . . . . . . 10 2.3.1 Trace operators and Sobolev spaces . . . . . . . . . . . . . . . . . . . . . . . . . 11 2.3.2 Weak solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11 2.3.3 Inhomogeneous problem and extension operator . . . . . . . . . . . . . . . . . . . 11 2.3.4 Solution to the Dirichlet problem . . . . . . . . . . . . . . . . . . . . . . . . . . . 12 2.3.5 Dirichlet-to-Neumann map and normal derivatives . . . . . . . . . . . . . . . . . 12 Department of Mathematics, University of Chicago, Chicago, IL, 60637, USA Escuela de Matem\u00b4 atica, Universidad de Costa Rica, 2060 San Jos\u00b4 e, Costa Rica E-mail address: campos@math.uchicago.edu 1 \f3 Semiclassical pseudodi\ufb00erential operators over R \u00d7 Td 14 3.1 De\ufb01nitions and elementary properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14 3.1.1 Semiclassical Fourier transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14 3.1.2 Semiclassical pseudodi\ufb00erential operators . . . . . . . . . . . . . . . . . . . . . . 15 3.2 Boundedness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17 3.3 Composition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19 3.4 Appendices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20 3.4.1 Some facts about weighted spaces . . . . . . . . . . . . . . . . . . . . . . . . . . 20 3.4.2 Some facts about semiclassical pseudodi\ufb00erential calculus on R . . . . . . . . . . 22 4 Conjugation and Carleman Estimate 24 4.1 Equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26 4.2 Lemmas: ODEs and calculus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27 4.3 Estimates for the solutions of the equations . . . . . . . . . . . . . . . . . . . . . . . . . 30 4.4 Explicit de\ufb01nition of the symbol and properties . . . . . . . . . . . . . . . . . . . . . . . 35 4.5 Proof of the Carleman estimate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37 5 Equivalent formulations and boundary characterization 38 5.1 Green functions, operators, and layer potentials . . . . . . . . . . . . . . . . . . . . . . . 39 5.1.1 \u03c4-dependent Green function and operator . . . . . . . . . . . . . . . . . . . . . . 39 5.1.2 \u03c4-dependent single layer potential . . . . . . . . . . . . . . . . . . . . . . . . . . 42 5.2 Equivalent formulations and boundary characterization . . . . . . . . . . . . . . . . . . . 43 6 Reconstruction of the magnetic \ufb01eld 46 6.1 Construction of CGOs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47 6.2 Transforms and integrals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49 6.3 Determination of the Fourier coe\ufb03cients of the magnetic \ufb01eld . . . . . . . . . . . . . . . 51 6.3.1 Relation between the families {I(m, n)} and {J(m, n)} . . . . . . . . . . . . . . . 51 6.3.2 Curl vectors and Laplace transform . . . . . . . . . . . . . . . . . . . . . . . . . 53 6.4 Appendices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
54 6.4.1 Explicit relation between the families {I(m, n)} and {J(m, n)} . . . . . . . . . . 54 6.4.2 A linear algebra lemma . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55 6.4.3 Reconstruction of an entire function from values along a convergent sequence . . 57 1 Introduction Let us present the notion of an inverse problem through the following contrasting settings. A direct problem aims to determine, from the knowledge of the internal properties of a system, the reaction of it to certain stimuli. For example, knowing the conductivity of a medium and the voltage potential at the boundary we can determine the voltage induced in the interior of the domain and, therefore, the current \ufb02owing through the boundary. In contrast, an inverse problem looks to deduce properties of the system from the knowledge of the reactions to the stimuli. For instance, in his seminal paper [1], Calder\u00b4 on proposes to study the uniqueness and the subsequent reconstruction of the conductivity of a medium from the voltage\u2013to\u2013current measurements at the boundary. This problem came to be known as the Calder\u00b4 on inverse conductivity problem. Since then, this and other related problems have attracted a great deal of attention; see the survey [26]. Various examples of inverse problems are also presented in [27] and [7]. For a domain M \u2286 Rd, the isotropic conductivity equation can be expressed as the boundary value problem \u001a div(\u03b3\u2207u) = 0 in M, u = f on \u2202M, where the unknown conductivity \u03b3 is a function in M. The known data is the boundary measurement \u039b\u03b3 : f 7\u2192\u03b3\u2202\u03bdu|\u2202M, which maps the voltage potential at the boundary to the current \ufb02owing through 2 \fthe boundary due to the induced voltage in the interior. As mentioned before, the Calder\u00b4 on inverse problem consists in recovering the function \u03b3 from the map \u039b\u03b3. After a change of variables the conductivity equation can be expressed in the form H0,W v := (D2 + W)v = 0, where D = \u2212i\u2207is the gradient, D2 := D \u00b7 D = \u2212div \u00b7 \u2207is the (negative) Laplacian, and W is a function; we refer to H0,W as a (electric) Schr\u00a8 odinger operator. In greater generality, we can consider a magnetic Schr\u00a8 odinger operator, which has a structure similar to the previous operator but contains \ufb01rst order terms in the form HV,W := (D+V )2+W. In any of these cases, the inverse problem consists in recovering information about either (or both) of the electromagnetic potentials V and W, in the interior of the domain, from boundary measurements. We elaborate this with more detail in the following section. One of the reasons why this problem is interesting and relevant is its relation to the inverse scattering problem at \ufb01xed energy from quantum mechanics; see the introduction of the Ph.D. thesis by Haberman [6] for a detailed presentation on this. As mentioned before, there is a signi\ufb01cant body of work surrounding these problems. In the Euclidean setting, the uniqueness (or identi\ufb01ability) problem for the electric Schr\u00a8 odinger operator was explicitly addressed by Nachman\u2013Sylvester\u2013Uhlmann in [14], but it was implicitly used in the proof of the uniqueness for the conductivity problem by Sylvester\u2013Uhlmann in [23]. 
Their proof uses the construction of many special solutions inspired by the complex exponential solutions introduced by Calder\u00b4 on in [1]; this method of construction relies on a global Carleman estimate for the Laplacian. The Carleman estimates are a kind of parameter\u2013dependent weighted inequalities, originally introduced in the setting of unique continuation problems. The reconstruction of the electric potential is due to Nachman, see [13], and uses the uniqueness for the global Carleman estimate from [23] in two ways. First, it is shown that the uniqueness \u201cat in\ufb01nity\u201d implies a uniqueness property at the boundary, and this allows to determine the boundary values of the special solutions. Second, the smallness that is established in the estimate makes it possible to disregard certain correction terms. Later we will elaborate more carefully on this. For the magnetic operator, the uniqueness has been established in a series of papers under di\ufb00erent assumptions. This was started with the work of Sun, in [22], under smallness conditions on the magnetic \ufb01eld; then the smallness condition was replaced by a smoothness condition by Nakamura\u2013Sun\u2013Uhlmann in [15]. Further improvements of these include the results by Salo in [18] and Krupchyk\u2013Uhlmann in [11]. For a more detailed account of the available results, see [6]. Moreover, in [18], Salo carries out a constructive procedure to recover the electromagnetic parameters. As before, the reconstruction uses the existence of many special solutions which are constructed through a Carleman estimate for the magnetic Schr\u00a8 odinger operator. We will follow closely the arguments from this paper. Moving away from the Euclidean setting, the Calder\u00b4 on problem, or its corresponding problem for the Schr\u00a8 odinger operator, can also be formulated in the context of Riemannian manifolds. This problem arises as a model for electrical imaging in anisotropic media, and it is one of the most basic inverse problems in a geometric setting; for the basic results in this context we refer to [19]. Motivated by the results in the Euclidean setting, we are interested in proving analogous Carleman estimates on manifolds. Looking to deduce such an estimate, in [2] it is proven that the existence of a limiting Carleman weight implies some kind of product structure on the manifold. Since then, it has been usual to consider a cylindrical manifold, as we will do with T = R \u00d7 Td, and the Carleman weight x1; for instance, see [8] or [9]. Our setting will be slightly di\ufb00erent from the so\u2013called admissible Riemannian manifolds from [2]. The solution to the uniqueness problem for the magnetic operator was established by Dos Santos Ferreira\u2013Kenig\u2013Salo\u2013Uhlmann in [2], and the reconstruction problem for the electric Schr\u00a8 odinger operator is elaborated in [9]. For a more complete exposition of the results either in the Euclidean or Riemannian setting we refer to the surveys [26] and [27]. In this thesis we prove a global Carleman estimate for the magnetic Schr\u00a8 odinger operator and propose a reconstruction procedure for the magnetic \ufb01eld. The main motivations to attempt this question are the following results concerning the magnetic Schr\u00a8 odinger operator: \ufb01rst, the solution to the uniqueness problem in the cylindrical setting in [2], and, second, the reconstruction algorithm in the Euclidean setting from [18]. 
We will also borrow ideas from the reconstruction of the electric potential 3 \fin the cylindrical setting from [9]. These two new results answer partially the Carleman estimate problem (Question 4.3.) proposed in [19] and the reconstruction for the magnetic Schr\u00a8 odinger operator mentioned in the introduction of [9]. To our knowledge, these are the \ufb01rst global Carleman estimates and reconstruction algorithms for the magnetic Schr\u00a8 odinger operator available in the cylindrical setting. 1.1 Setting and main results Let Td = Rd/ Zd be the d-dimensional torus with standard metric g0 and let e be the Euclidean metric on R. Consider the cylinder T = R \u00d7 Td with the standard product metric g = e \u2295g0. We denote the points in the cylinder T by (x1, x\u2032), meaning that x1 \u2208 R and x\u2032 \u2208 Td. Let (M, g) \u2286T be a smooth connected compact (d + 1)-submanifold. Let \u2202M denote its smooth d-dimensional boundary, let M\u2212:= M \\\u2202M and M+ = T \\M. We call M\u2212and M+ the interior and exterior of M, respectively. Let Dx1 = \u2202x1/(2\u03c0i) and Dx\u2032 = \u2207x\u2032/(2\u03c0i), and de\ufb01ne the gradient D := (Dx1, Dx\u2032) and Laplacian \u2212\u2206g := D2 = D2 x1 + D2 x\u2032. We denote \u2212\u2206g0 := D2 x\u2032, so that its eigenvalues on Td consist of the set Spec(\u2212\u2206g0) := {|k|2 : k \u2208 Zd}. Let F, G1, . . . , Gd, W be functions in M, and consider the vector \ufb01eld V := (F, G) := (F, G1, . . . , Gd). We call V and W the magnetic and electric potentials, respectively. We consider the magnetic Schr\u00a8 odinger operator HV,W := (D + V )2 + W = D2 + 2V \u00b7 D + (V 2 + D \u00b7 V + W), and its associated Dirichlet problem \u001a HV,W u = 0 in M\u2212, u = f on \u2202M. (\u2217) In Chapter 2. Prelimaries we will introduce the necessary notation and motivate the following de\ufb01nitions. For f \u2208H1/2(\u2202M), we say that u \u2208H1(M) is a weak solution to the Dirichlet problem (\u2217) if tr\u2212(u) = f and Z M \u2212Du \u00b7 D\u03d5 + V \u00b7 (\u03d5Du \u2212uD\u03d5) + (V 2 + W)u\u03d5 = 0, (1) for all test functions \u03d5 \u2208H1 0(M). Under certain conditions on the potentials, which we later elaborate, there exists a unique weak solution to the Dirichlet problem (\u2217). We de\ufb01ne the Dirichlet-to-Neumann (DN) map \u039bV,W as follows: if f, g \u2208H1/2(\u2202M) and v \u2208H1(M) is any function extending g, i.e. tr\u2212(v) = g, then \u27e8\u039bV,W f, g\u27e9:= Z M \u2212Du \u00b7 Dv + V \u00b7 (vDu \u2212uDv) + (V 2 + W)uv, (2) where u \u2208H1(M) is the weak solution of (\u2217). Formally, the DN map corresponds to the boundary measurement \u039bV,Wf = i 2\u03c0 \u03bd \u00b7 (D + V )u \f \f \u2202M. The reconstruction problem then consists in using measurements at the boundary of the domain, such as the DN map \u039bV,W , to recover information about the potentials in the interior of it. Before we proceed to formulate the results, let us recall the gauge invariance of the DN map observed in [22]. The conjugation identity e\u22122\u03c0i\u03d5De2\u03c0i\u03d5 = D + \u2207\u03d5 gives that HV +\u2207\u03d5,W = e\u22122\u03c0i\u03d5HV,W e2\u03c0i\u03d5, which implies that if 0 is not an eigenvalue of HV,W on M and \u03d5 \u2208C\u221e(T ), then 0 is also not an eigenvalue of the operator HV +\u2207\u03d5,W . 
Indeed, e u is a solution of the Dirichlet problem \u001a HV +\u2207\u03d5,W e u = 0 in M\u2212, e u = g on \u2202M, 4 \fif and only if u = e2\u03c0i\u03d5e u solves \u001a HV,W u = 0 in M\u2212, u = e2\u03c0i\u03d5g on \u2202M. A routine computation yields that \u039bV +\u2207\u03d5,W = e\u22122\u03c0i\u03d5|\u2202M\u039bV,We2\u03c0i\u03d5 which gives the gauge invariance \u039bV +\u2207\u03d5,W = \u039bV,W if \u03d5|\u2202M = 0. Therefore, it is not possible to determine the magnetic potential V from the knowledge of \u039bV,W. Let us note, however, that the magnetic \ufb01elds are the same, i.e. curl V = curl (V + \u2207\u03d5). The main result from the thesis is that it is possible to reconstruct the magnetic \ufb01eld curl V under the following smoothness, support, and vanishing moment conditions: V \u2208C\u221e c (M\u2212), W \u2208L\u221e(M), supp(W) \u2286M, Z R V (x1, x\u2032)dx1 = 0 for all x\u2032 \u2208 Td. (\u2020) Theorem 1.1. Let M \u2286T be as before, with d \u22653. Assume that the potentials V, W satisfy (\u2020) and 0 is not an eigenvalue of HV,W in M. Then the magnetic \ufb01eld curl V can be reconstructed from the knowledge of the Dirichlet-to-Neumann map \u039bV,W . A fundamental step in the reconstruction of curl V from the DN map \u039bV,W is the construction of many special solutions to the equation HV,W u = 0. Following Sylvester\u2013Uhlmann\u2019s method of complex geometric optics (CGOs), see [23], the solutions consist in appropriate corrections, depending on a large parameter, of harmonic functions. The standard technique to perform these constructions has been the use of Carleman estimates. Following [13] and [9], we use a uniqueness result for these kind of estimates, as mentioned in the previous section, for a twofold purpose: \ufb01rst, to characterize the boundary values of the CGOs from the DN map; second, to \u201cdisregard\u201d the correction terms as the parameter grows. The other main result of the thesis is the following Carleman estimate, which holds under the following conditions on the potentials: V \u2208C\u221e c (T ), supp(V ) \u2286[\u2212R, R]\u00d7 Td, \u27e8x1\u27e92\u03b4W \u2208L\u221e(T ), Z R V (x1, x\u2032)dx1 = 0 for all x\u2032 \u2208 Td. (\u22c6) Theorem 1.2. Let 1/2 < \u03b4 < 1 and let V, W satisfy (\u22c6). There exists \u03c40 \u22651, such that if |\u03c4| \u2265\u03c40 and \u03c4 2 / \u2208Spec(\u2212\u2206g0), then for any f \u2208L2 \u03b4(T ) there exists a unique u \u2208H2 \u2212\u03b4(T ) which solves e2\u03c0\u03c4x1HV,W e\u22122\u03c0\u03c4x1u = f. Moreover, this solution satis\ufb01es the estimates \u2225u\u2225Hs \u2212\u03b4(T ) \u2272|\u03c4|s\u22121\u2225f\u2225L2 \u03b4(T ), for s = 0, 1, 2. The constant of the inequality is independent of \u03c4. In the next chapter we introduce the weighted Sobolev spaces L2 \u03b4(T ) and Hs \u2212\u03b4(T ). The solution to this equation is based on a reduction to the case of the Laplacian, i.e. when there are no electromagnetic potentials. The gain of one derivative in the estimate, meaning the constant \u03c4\u22121, allows to deduce the estimate of Theorem 1.2 in the presence of an electric potential alone through perturbative methods. The reconstruction procedure of the electric potential has been given in [13] for the Euclidean case and in [9] for the cylindrical case. However, the gain of one derivative is not enough to deal with the magnetic potential beyond the perturbative regime, i.e. when the norm of the magnetic potential may not be small. 
Following the ideas in [16], [15], and especially [18], we prove this by conjugating the equation through pseudodi\ufb00erential operators in order to \u201cessentially eliminate\u201d the magnetic potential. To do this we consider the small parameter \u210f= \u03c4\u22121 and use the results for semiclassical analysis on R. 1.2 Structure of the thesis In Chapter 2. Preliminaries we recall some de\ufb01nitions and results on Fourier analysis, introduce the function spaces that will appear through the problem, and present the basic facts necessary to formulate the magnetic Schr\u00a8 odinger inverse problem. 5 \fIn Chapter 3. Semiclassical pseudodi\ufb00erential operators over R \u00d7 Td we de\ufb01ne these operators over T and prove the usual results speci\ufb01c to our cylindrical setting. These results do not seem to be explicitly stated in the standard references, see [25] or [29], so, we elaborate the necessary theory for it. For zero order pseudodi\ufb00erential operators we prove an analog of the Calder\u00b4 on\u2013Vaillancourt L2\u2013boundedness theorem, as well as a norm estimate for the \ufb01rst order expansion of the composition of two such operators. In Chapter 4. Conjugation and Carleman estimate we carry out the construction of the conjugation as well as the proof of Theorem 1.2. The conjugation requires the solution of a \ufb01rst order di\ufb00erential equation, together with the appropriate estimates. In our cylindrical setting, through the expansion in Fourier series, this equation can be reduced to the solution of multiple ODEs. The ideas follow closely the results from [18]. In Chapter 5. Equivalent formulations and boundary characterization we use Theorem 1.2 to construct many solutions of the equation HV,W u = 0. Starting from a harmonic solution, we construct a unique solution (CGO) to the equation that \u201cbehaves like\u201d it at in\ufb01nity. We show that the uniqueness at in\ufb01nity implies a uniqueness property at the boundary, and so the boundary values of the CGOs can be characterized as solutions to boundary integral equations involving only the knowledge of the DN map \u039bV,W and not the unknown electromagnetic potentials. We follow the presentation from [9]. In Chapter 6. Reconstruction of the magnetic \ufb01eld we restrict the attention to CGOs that result from correcting the harmonic functions e\u00b12\u03c0|m|x1em(x\u2032). We prove that such CGOs can also be written in the form e\u00b12\u03c0|m|x1em(x\u2032)am + e\u22122\u03c0\u03c4x1rm,\u03c4, for an appropriate amplitude am making the correction term have better estimates. Then we de\ufb01ne an analog of the scattering transform from [13] and [18], and use it together with the correction estimates to obtain integrals that are basically a mixed (Laplace\u2013Fourier) transform of terms involving the magnetic potential. Finally, we show that it is possible to recover the magnetic \ufb01eld curl V from these integrals. These steps require some linear algebra lemmas over Q and the reconstruction formula for an entire function, which we prove in the appendix of the chapter. This is perhaps the most interesting chapter: not only the methods require playful ideas, but the results obtained are somewhat di\ufb00erent from analogous previous ones. 2 Preliminaries Consider the cylinder T = R \u00d7 Td with standard product metric g = e \u2295g0. The points in T are denoted by x = (x1, x\u2032), meaning that x1 \u2208 R and x\u2032 \u2208 Td. 
Let (M, g) \u2286T is a smooth connected compact (d + 1)-submanifold with boundary \u2202M. We denote the volume element in T and M by dx = dx1dx\u2032 and the surface measure in \u2202M by d\u03c3. Let Dx1 = \u2202x1/(2\u03c0i) and Dx\u2032 = \u2207x\u2032/(2\u03c0i), and de\ufb01ne the gradient D := (Dx1, Dx\u2032) and Laplacian \u2212\u2206g := D2 = D2 x1 + D2 x\u2032. For a multiindex \u03b1 = (\u03b11, \u03b1\u2032) = (\u03b11, \u03b1\u2032 1, . . . , \u03b1\u2032 d), we denote |\u03b1| = \u03b11 + \u03b1\u2032 1 + . . . + \u03b1\u2032 d and D\u03b1 = D\u03b11 x1 D\u03b1\u2032 1 x\u2032 1 . . . D\u03b1\u2032 d x\u2032 d . In what follows we de\ufb01ne several functions spaces over T , and we mention when the de\ufb01nitions allow for analogous spaces over R, Td, or M. Most of the de\ufb01nitions and results from this chapter can be found in [18], [24], [25], [29]. 2.1 Fourier analysis and distributions 2.1.1 Distributions and Sobolev spaces We consider the space of smooth compactly supported functions D(T ) := C\u221e c (T ) with the family of seminorms \u2225f\u2225k,l = sup{|D\u03b1f(x1, x\u2032)| : |x1| \u2264k, x\u2032 \u2208 Td, |\u03b1| \u2264l}, 6 \fwith k, l \u2208 N. We say a linear functional \u03d5 : D(T ) \u2192 C is continuous, if for all k \u2208 N there exist l \u2208 N and C > 0, both possibly depending of k, such that |\u27e8\u03d5, f\u27e9| \u2264C\u2225f\u2225k,l for all f \u2208D(T ). We call distributions to these functionals and denote its space by D\u2032(T ). In addition, we de\ufb01ne the space of Schwartz functions S(T ) as the space of rapidly decaying smooth functions with the family of seminorms \u2225f\u2225k = sup{\u27e8x1\u27e9k|D\u03b1f(x1, x\u2032)| : (x1, x\u2032) \u2208T, 0 \u2264|\u03b1| \u2264k}, with k \u2208 N. We say that fj \u2192f in S(T ) if \u2225fj \u2212f\u2225k \u21920 for all k. We say a linear functional \u03d5 : S(T ) \u2192 C is continuous if \u27e8\u03d5, fj\u27e9\u2192\u27e8\u03d5, f\u27e9whenever fj \u2192f in S(T ). We call tempered distributions to these functionals and denote its space by S\u2032(T ). We de\ufb01ne that \u03d5j \u21c0\u03d5 in S\u2032(T ) if \u27e8\u03d5j, f\u27e9\u2192\u27e8\u03d5, f\u27e9for all f \u2208S(T ). A well\u2013known result in functional analysis says that if \u03d5 \u2208S\u2032(T ), then there exist k \u2208 N and C > 0 such that |\u27e8\u03d5, f\u27e9| \u2264C\u2225f\u2225k for all f \u2208S(T ). The space of tempered distributions S\u2032(T ) is a subspace of the distributions D\u2032(T ). The de\ufb01nitions of the spaces S( R) and S\u2032( R) are analogous. For 1 \u2264p \u2264\u221e, let Lp(T ) = Lp(T, dx1dx\u2032) denote the standard Lp space in T . For a nonnegative integer s, we consider the Lp Sobolev spaces W s,p(T ) with norm given by \u2225f\u2225W s,p(T ) := P |\u03b1|\u2264s \u2225D\u03b1f\u2225Lp(T ). Similarly, we also consider the spaces W s,p( R) and W s,p( Td). 2.1.2 Fourier analysis on smooth functions For a function f \u2208L1( R) we de\ufb01ne its Fourier transform by b f(\u03be) := R R e\u22122\u03c0ix1\u03bef(x1)dx. Proposition 2.1 ([24], [29]). If f \u2208S( R), then its Fourier transform b f satis\ufb01es the following: a). the transform and its derivatives have polynomial decay bounds |D\u03b1 \u03be b f(\u03be)| \u2272\u27e8\u03be\u27e9\u22122m\u2225x\u03b1 1 f\u2225W 2m,1(R), where \u27e8\u03be\u27e9:= (1 + \u03be2)1/2 and the constant of the inequality may depend on m, b). 
b f \u2208S( R) and we have the inversion formula f(x1) = R R e2\u03c0ix1\u03be b f(\u03be)d\u03be, with pointwise absolute uniform convergence, as well as for its derivatives, c). Plancherel\u2019s identity holds, \u2225f\u2225L2(R) = \u2225b f\u2225L2(R). For k \u2208 Zd, let ek(x\u2032) := e2\u03c0ik\u00b7x\u2032. For a function f \u2208L1( Td) we de\ufb01ne its k-th Fourier coe\ufb03cient by fk := R Td e\u2212k(x\u2032)f(x\u2032)dx\u2032. Proposition 2.2 ([24]). If f \u2208C\u221e( Td), then its Fourier coe\ufb03cients and series satisfy the following: a). the coe\ufb03cients have polynomial decay bound |fk| \u2272\u27e8k\u27e9\u22122m\u2225f\u2225W 2m,1(Td), where \u27e8k\u27e9:= (1 + |k|2)1/2 and the constant of the inequality may depend on m and d, b). there is pointwise absolute uniform convergence of the Fourier series f(x\u2032) = P k\u2208Zd fkek(x\u2032), as well as for of its derivatives, c). Plancherel\u2019s identity holds, \u2225f\u22252 L2(Td) = P k\u2208Zd |fk|2. Similarly, for a function f \u2208S(T ) we de\ufb01ne its k-th Fourier coe\ufb03cient by fk(x1) := R Td e\u2212k(x\u2032)f(x1, x\u2032)dx\u2032. The previous results can be combined as follows. Proposition 2.3. If f \u2208S(T ), then its Fourier coe\ufb03cients fk are in S( R). Moreover, these satisfy the following: 7 \fa). the coe\ufb03cients have polynomial decay bounds \u2225fk\u2225L1(R) \u2272\u27e8k\u27e9\u22122m\u2225f\u2225W 2m,1(T ), where the constant of the inequality may depend on m and d, b). the transform of the coe\ufb03cients and its derivatives have polynomial decay bounds |D\u03b1 \u03be b fk(\u03be)| \u2272\u27e8\u03be, k\u27e9\u22122m\u2225x\u03b1 1 f\u2225W 2m,1(T ), where \u27e8\u03be, k\u27e9:= (1 + \u03be2 + |k|2)1/2 and the constant of the inequality may depend on m and d, c). the inversion formula holds, f(x1, x\u2032) = X k\u2208Zd fk(x1)ek(x\u2032) = X k\u2208Zd Z R e2\u03c0ix1\u03beek(x\u2032) b fk(\u03be)d\u03be, with pointwise absolute uniform convergence, as well as for its derivatives, d). Plancherel\u2019s identity holds, \u2225f\u22252 L2(T ) = P k\u2208Zd \u2225fk\u22252 L2(R) = P k\u2208Zd \u2225b fk\u22252 L2(R). e). for any k \u2208 Zd, the function fk(x1)ek(x\u2032) is in S(T ), and we have that \u2225fkek\u2225l \u2272\u27e8k\u27e9\u2212(2m\u2212l)\u2225f\u2225l+2m, where the constant may depend on l, m, and d. Moreover, the partial sums of the Fourier series, SNf(x1, x\u2032) := P |k|\u2264N fk(x1)ek(x\u2032), converge to f in S(T ), Proof. For \u03b1 \u2264m we have that \u27e8x1\u27e9m|D\u03b1 x1fk| \u2264\u2225\u27e8x1\u27e9mD\u03b1 x1f(x1, \u00b7)\u2225L1(Td) \u2264\u2225f\u2225m, where \u2225f\u2225m is the seminorm de\ufb01ned above for functions in S(T ). This proves that the Fourier coef\ufb01cients fk are in S( R). Moreover, using the identity \u27e8k\u27e92me\u2212k(x\u2032) = \u27e8Dx\u2032\u27e92me\u2212k(x\u2032) and integrating by parts yields that \u27e8k\u27e92m|fk(x1)| = \f \f \f \f Z Td\u27e8Dx\u2032\u27e92m(e\u2212k(x\u2032))f(x1, x\u2032)dx\u2032 \f \f \f \f = \f \f \f \f Z Td e\u2212k(x\u2032)\u27e8Dx\u2032\u27e92mf(x1, x\u2032)dx\u2032 \f \f \f \f \u2272 Z Td |\u27e8Dx\u2032\u27e92mf(x1, x\u2032)|dx\u2032. Integrating over R gives the \ufb01rst result. 
Similarly, using the identities D\u03b1 \u03be e\u22122\u03c0ix1\u03be = (\u2212x1)\u03b1e\u22122\u03c0ix1\u03be, \u27e8\u03be, k\u27e92m(e\u22122\u03c0ix1\u03bee\u2212k(x\u2032)) = \u27e8D\u27e92m(e\u22122\u03c0ix1\u03bee\u2212k(x\u2032)), and integrating by parts we conclude that \u27e8\u03be, k\u27e92m|D\u03b1 \u03be b fk(\u03be)| = \f \f \f \f Z T \u27e8D\u27e92m(e\u22122\u03c0ix1\u03bee\u2212k(x\u2032))x\u03b1 1 fdx1dx\u2032 \f \f \f \f = \f \f \f \f Z T e\u22122\u03c0ix1\u03bee\u2212k(x\u2032)\u27e8D\u27e92m(x\u03b1 1 f)dx1dx\u2032 \f \f \f \f \u2272\u2225x\u03b1 1 f\u2225W 2m,1(T ). Moreover, this result proves that the Fourier decomposition converges absolutely, and so the inversion and Plancherel formulas follow from Proposition 2.1 and Proposition 2.2. Finally, we can bound \u2225fkek\u2225l = sup{\u27e8x1\u27e9l|D\u03b1(fkek)| : |\u03b1| \u2264l} \u2272\u27e8k\u27e9l sup{\u27e8x1\u27e9l|D\u03b11 x1 fk| : \u03b11 \u2264l} \u2272\u27e8k\u27e9\u2212(2m\u2212l)\u2225f\u2225l+2m, where we have used Proposition 2.2 for the last step. The convergence of the partial sums in S(T ) follows from this. 8 \f2.1.3 Fourier analysis on tempered distributions From Fubini\u2019s theorem we have that R R fb g = R R b fg, for f, g \u2208S( R). This suggests to de\ufb01ne the Fourier transform of \u03d5 \u2208S\u2032( R) by \u27e8b \u03d5, f\u27e9:= \u27e8\u03d5, b f\u27e9. To see indeed that b \u03d5 \u2208S\u2032( R) we use Proposition 2.1 to get that b fn \u2192b f in S( R) if fn \u2192f in S( R). It is clear that this de\ufb01nition extends the above de\ufb01nition of Fourier transform in S( R). Finally, we proceed to de\ufb01ne the Fourier coe\ufb03cients of a tempered distribution. Let us consider the operators \u03c0k : S(T ) \u2192S( R) and \u03c8k : S( R) \u2192S(T ) given by \u03c0kf := fk and \u03c8kg := g(x1)ek(x\u2032). The Fourier inversion formula on S(T ) can be written formally as I = P k\u2208Zd \u03c8k\u03c0k. Moreover, proceeding as in the proof of Proposition 2.3 we have that \u2225\u03c0kf\u2225l \u2272\u27e8k\u27e9\u22122m\u2225f\u2225l+2m, \u2225\u03c8kg\u2225l \u2272\u27e8k\u27e9l\u2225g\u2225l, for all l, m \u22650. By duality, this gives rise to the adjoint operators \u03c0\u2217 k : S\u2032( R) \u2192S\u2032(T ) and \u03c8\u2217 k : S\u2032( T) \u2192S\u2032( R), de\ufb01ned by \u27e8\u03c0\u2217 k\u03c6, f\u27e9:= \u27e8\u03c6, \u03c0kf\u27e9, \u27e8\u03c8\u2217 k\u03d5, g\u27e9:= \u27e8\u03d5, \u03c8kg\u27e9. For a distribution \u03d5 \u2208S\u2032(T ), we de\ufb01ne its k-th Fourier coe\ufb03cient by \u03d5k := \u03c8\u2217 \u2212k\u03d5 \u2208S\u2032( R). This de\ufb01nition extends that of Fourier coe\ufb03cients for functions in S(T ). Below, we prove the formal dual of the inversion formula above, which reads I = P k\u2208Zd \u03c0\u2217 k\u03c8\u2217 k = P k\u2208Zd \u03c0\u2217 \u2212k\u03c8\u2217 \u2212k. Proposition 2.4. Let f \u2208S(T ) and \u03d5 \u2208S\u2032(T ). The Fourier coe\ufb03cients satisfy the following: a). the usual di\ufb00erentiation properties hold, i.e. (D\u03b1 x\u2032\u03d5)k = k\u03b1\u03d5k, b). Parseval\u2019s identity holds, \u27e8\u03d5, f\u27e9= P k\u2208Zd\u27e8\u03d5k, fk\u27e9, with absolute convergence. c). the partial sums of the Fourier series, SN\u03d5 = P |k|\u2264N \u03c0\u2217 \u2212k\u03d5k, converge to \u03d5 in S\u2032(T ). Proof. 
If g \u2208S( R), then we have that \u27e8(D\u03b1 x\u2032\u03d5)k, g\u27e9= \u27e8D\u03b1 x\u2032\u03d5, ge\u2212k\u27e9= (\u22121)\u03b1\u27e8\u03d5, D\u03b1 x\u2032(ge\u2212k)\u27e9= (\u22121)\u03b1\u27e8\u03d5, (\u2212k)\u03b1ge\u2212k\u27e9= k\u03b1\u27e8\u03d5k, g\u27e9, i.e. (D\u03b1 x\u2032\u03d5)k = k\u03b1\u03d5. To prove Parseval\u2019s identity we \ufb01rst prove that the series converges absolutely. We know that there exists l \u2208 N such that |\u27e8\u03d5, g\u27e9| \u2272\u2225g\u2225l for all g \u2208S(T ). From a remark above we have that |\u27e8\u03d5k, fk\u27e9| = |\u27e8\u03d5, fkek\u27e9| \u2272\u2225fkek\u2225l \u2272\u27e8k\u27e9\u2212(2m\u2212l)\u2225f\u2225l+2m, so it follows that the series converges absolutely by choosing m large. Recalling from Proposition 2.3 that SNf \u2192f in S(T ), we conclude that, \u27e8\u03d5, f\u27e9= lim N\u2192+\u221e X |k|\u2264N \u27e8\u03d5, fkek\u27e9= lim N\u2192+\u221e X |k|\u2264N \u27e8\u03d5k, fk\u27e9= X k\u2208Zd \u27e8\u03d5k, fk\u27e9. The convergence SN\u03d5 \u21c0\u03d5 in S\u2032(T ) follows from Plancherel\u2019s identity and the fact that fk = (f)\u2212k. 2.2 Function spaces Recall that for a nonnegative integer s we considered the Sobolev spaces W s,p(T ) with norm \u2225f\u2225W s,p(T ) := P |\u03b1|\u2264s \u2225D\u03b1f\u2225Lp(T ). For p = 2 we denote Hs(T ) := W s,2(T ). The de\ufb01nitions of these spaces over R, Td, and M are analogous. In the case of T and R, these spaces are also the completions of the corresponding space of Schwartz functions under the respective norm, while for M these spaces are the completions of restrictions to M of the Schwartz functions S(T ) under the W s,p(M) norm. 9 \fSince T has no boundary we can de\ufb01ne the dual space H\u22121(T ) := (H1(T ))\u2217; we leave the de\ufb01nition of H\u22121(M) to the next section. By Plancherel\u2019s theorem, from Proposition 2.3, we see that if s is a nonnegative integer and f \u2208S(T ), then \u2225f\u22252 Hs(T ) \u2243 X |\u03b1|\u2264s \u2225D\u03b1f\u22252 L2(T ) \u2243 X k\u2208Zd Z R \u27e8\u03be, k\u27e92s| b fk(\u03be)|2d\u03be. This allows to extend the de\ufb01nition of the spaces Hs(T ) to any s \u2208 R. Moreover, observe that this extension coincides also with the previous de\ufb01nition of H\u22121(T ). Analogous extensions can also be de\ufb01ned for R and Td. On the boundary \u2202M, we consider the usual L2 space L2(\u2202M, d\u03c3) and its corresponding Sobolev spaces Hs(\u2202M); we elaborate more on the Sobolev spaces in the next section. We also de\ufb01ne the Sobolev subspaces Hs loc(T ) := {f : f \u2208Hs([\u2212R, R] \u00d7 Td) for any R > 0}, Hs c(T ) := {f \u2208Hs(T ) : f(x1, x\u2032) = 0 when |x1| \u2265R for some R > 0}, and its analogs over R. For \u03b4 \u2208 R we de\ufb01ne the L2 weighted spaces L2 \u03b4(T ) := {f : \u27e8x1\u27e9\u03b4f \u2208L2(T )}, with the norm \u2225f\u2225L2 \u03b4(T ) := \u2225\u27e8x1\u27e9\u03b4f\u2225L2(T ). Similarly, we also de\ufb01ne L2 \u03b4( R). It follows from Proposition 2.2 that we also have Plancherel\u2019s identity for weighted spaces, \u2225f\u2225L2 \u03b4(T ) = Z T \u27e8x1\u27e92\u03b4|f(x1, x\u2032)|2dx1dx\u2032 = Z R \u27e8x1\u27e92\u03b4 X k\u2208Zd |fk(x1)|2dx1 = X k\u2208Zd \u2225fk\u22252 L2 \u03b4(R). (3) For a nonnegative integer s the weighted Sobolev spaces have two equivalent de\ufb01nitions, Hs \u03b4(T ) := {f \u2208L2 \u03b4(T ) : D\u03b1f \u2208L2 \u03b4(T ) for |\u03b1| \u2264s} = {f \u2208L2 \u03b4(T ) : \u27e8x1\u27e9\u03b4f \u2208Hs(T )}. 
We consider the norm \u2225f\u2225Hs \u03b4 (T ) as any of the two equivalent norms: P |\u03b1|\u2264s \u2225D\u03b1f\u2225L2 \u03b4(T ) or \u2225\u27e8x1\u27e9\u03b4f\u2225Hs(T ). We also consider the analogs of these spaces over R. By (3) we get that \u2225f\u22252 Hs \u03b4 (T ) \u2243 X |\u03b1|\u2264s \u2225D\u03b1f\u22252 L2 \u03b4(T ) \u2243 s X m=0 X k\u2208Zd \u27e8k\u27e92s\u22122m\u2225Dm x1fk\u22252 L2 \u03b4(T ). We can endow the space Hs \u03b4 (T ) with another norm by considering a (small) real parameter \u210fand de\ufb01nining \u2225f\u2225Hs \u03b4,\u210f(T ) := X |\u03b1|\u2264s \u2225(\u210fD)\u03b1f\u2225L2 \u03b4(T ) \u2243 \u0012 s X m=0 X k\u2208Zd \u27e8\u210fk\u27e92s\u22122m\u2225(\u210fDx1)mfk\u22252 L2 \u03b4(T ) \u00131/2 . We call this the semiclassical weighted Sobolev space. Analogously, we de\ufb01ne the semiclassical Sobolev spaces W s,p \u210f(T ) and their norm. 2.3 Dirichlet problem: de\ufb01nitions and basic facts In this section we introduce the necessary de\ufb01nitions give a precise formulation of the Dirichlet problem \u001a HV,W u = 0 in M\u2212, u = f on \u2202M, (\u2217) where HV,W = (D + V )2 + W = D2 + 2V \u00b7 D + (V 2 + D \u00b7 V + W). 10 \f2.3.1 Trace operators and Sobolev spaces We call the trace operator that which restricts functions in the cylinder T to its boundary values in \u2202M, and we denote it by tr. For s > 1/2, the operator tr : Hs(T ) \u2192Hs\u22121/2(\u2202M) is continuous. For functions de\ufb01ned only either in the interior or exterior of M, denoted by M\u2212and M+ respectively, there are also the operators tr\u00b1 : Hs(M\u00b1) \u2192Hs\u22121/2(\u2202M) for s > 1/2. On the boundary we de\ufb01ne the dual space H\u22121/2(\u2202M) = (H1/2(\u2202M))\u2217. The continuity of the trace operator tr : H1(T ) \u2192H1/2(\u2202M) gives the existence of the adjoint operator tr\u2217: H\u22121/2(\u2202M) \u2192 H\u22121(T ). If \u03d5 \u2208H1(T ) is supported away from \u2202M, then tr(\u03d5) = 0; this implies that the adjoint tr\u2217actually maps H\u22121/2(\u2202M) into H\u22121 c (T ). The adjoint is supported on \u2202M and, formally, we have tr\u2217\u03d5 = \u03d5d\u03c3. We also consider the space H1 0(M) := {u \u2208H1(M) : tr\u2212(u) = 0} and its dual H\u22121(M) := (H1 0(M))\u2217. The space H1 0(M) is also the closure of C\u221e c (M\u2212) under the H1(M) norm. 2.3.2 Weak solutions The de\ufb01nitions below of weak solution and Dirichlet-to-Neumann map are natural after we formally integrate by parts, Z M (HV,W u)v = Z M D \u00b7 [(D + V )u]v + V \u00b7 [(D + V )u]v + Wuv = Z M \u2212[(D + V )u] \u00b7 Dv + V \u00b7 [(D + V )u]v + Wuv + 1 2\u03c0i Z \u2202M \u03bd \u00b7 [(D + V )u]v = Z M \u2212Du \u00b7 Dv + V \u00b7 (vDu \u2212uDv) + (V 2 + W)uv \u2212i 2\u03c0 Z \u2202M \u03bd \u00b7 [(D + V )u]v. For f \u2208H1/2(\u2202M), we say that u \u2208H1(M) is a weak solution to the Dirichlet problem (\u2217) if tr\u2212(u) = f and Z M \u2212Du \u00b7 D\u03d5 + V \u00b7 (\u03d5Du \u2212uD\u03d5) + (V 2 + W)u\u03d5 = 0, (1) for all test functions \u03d5 \u2208H1 0(M). 2.3.3 Inhomogeneous problem and extension operator The \ufb01rst step towards solving the Dirichlet problem (\u2217) is the solution to the inhomogeneous boundary value problem D2u = f \u2208H\u22121(M), u \u2208H1 0(M). (\u2217\u2217) We say that u is a solution to (\u2217\u2217) if for any \u03d5 \u2208H1 0(M) we have \u27e8f, \u03d5\u27e9= Z M \u2212Du \u00b7 D\u03d5. Proposition 2.5 ([24]). 
For any f \u2208H\u22121(M) there exists a unique solution u \u2208H1 0(M) to the boundary value problem D2u = f. If T f := u denotes the solution operator, then T : Hs(M) \u2192 Hs+2(M) \u2229H1 0(M) is bounded for any s \u2265\u22121. In [24] it is shown an explicit construction of a bounded extension operator E : Hs\u22121/2(\u2202M) \u2192Hs(M) for all s \u22651, such that tr\u2212\u25e6E = I. Moreover, for any N \u2208 N and s \u2264N there is an extension Hs(M) \u2192Hs(T ) (that may depend on N), so that we have an extension E : Hs\u22121/2(\u2202M) \u2192Hs(T ) with tr \u25e6E = I; in particular, the trace operator tr : Hs(T ) \u2192Hs\u22121/2(\u2202M) is surjective for s \u22651. Moreover, by cutting o\ufb00the extension with an appropriate \ufb01xed smooth function we can assume that Ef is supported on some \ufb01xed compact set of T , containing M, for any f \u2208Hs(\u2202M). Remark. We will be concerned with values of s in a \ufb01xed range, so we will avoid to refer constantly to the integer associated to the extension. 11 \f2.3.4 Solution to the Dirichlet problem The existence of the extension allows to turn the Dirichlet problem (\u2217) into the boundary value problem (\u2217\u2217) for which we know the existence, uniqueness, and regularity properties. Proposition 2.6 ([24]). Assume that the potentials satisfy V, W \u2208L\u221e(M) and D \u00b7 V \u2208L\u221e(M). If 0 is not a Dirichlet eigenvalue of HV,W in M, then for any f \u2208H1/2(\u2202M) there exists a unique weak u \u2208H1(M) solution to the Dirichlet problem (\u2217). If we denote DV,W f := u, then DV,W : Hs(\u2202M) \u2192 Hs+1/2(M) is bounded for 1/2 \u2264s \u22643/2. Proof. Under the conditions on the potentials we have that the \ufb01rst order di\ufb00erential operator X := HV,W \u2212D2 = 2V \u00b7 D + (V 2 + D \u00b7 V + W) maps H1(M) into L2(M). We \ufb01rst consider the case f \u2208H1/2(\u2202M), so that Ef \u2208H1(M). Then, u \u2208H1(M) solves the Dirichlet problem (\u2217) if and only if v := u \u2212Ef \u2208H1 0(M) solves the boundary value problem HV,W v = \u2212HV,WEf. From Proposition 2.5 we can look for a solution of the form v = T w, with w \u2208H\u22121(M), leaving us to solve the equation (I + XT )w = \u2212HV,WEf \u2208H\u22121(M). From Proposition 2.5 and the conditions on the potentials we know that the operator XT : H\u22121(M) \u2192L2(M) is continuous, and by Rellich\u2019s theorem we have that XT is a compact operator on H\u22121(M). If 0 is not a Dirichlet eigenvalue of HV,W in M, then the Dirichlet problem (\u2217) has at most one solution and therefore I + XT is injective. It follows then from Fredholm\u2019s alternative that I + XT is bijective, and by the Open Mapping theorem that its inverse is continuous. Then, \u2225v\u2225H1 0 (M) \u2272\u2225w\u2225H\u22121(M) \u2272\u2225HV,W Ef\u2225H\u22121(M) \u2272\u2225Ef\u2225H1(M) \u2272\u2225f\u2225H1/2(\u2202M). Therefore, u = v + Ef \u2208H1(M) and \u2225u\u2225H1(M) \u2272\u2225v\u2225H1(M) + \u2225Ef\u2225H1(M) \u2272\u2225f\u2225H1/2(\u2202M), as desired. To prove the higher\u2013order regularity of the solutions, all we need to modify in the proof is the fact that for f \u2208H3/2(\u2202M) we have Ef \u2208H2(M), and therefore HV,W Ef \u2208L2(M). The higher\u2013order regularity properties of T from Proposition 2.5 imply that T : L2(M) \u2192H1(M) is compact, and so XT is compact on L2(M). After this the proof carries out exactly as before. 
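Remark. It may be convenient to record the solution operator produced by the proof in closed form (this expression is implicit in the argument above): for f ∈ H^{1/2}(∂M),
D_{V,W} f = Ef + T(I + XT)^{−1}(−H_{V,W} Ef),
where E is the extension operator, T is the solution operator from Proposition 2.5, and X = H_{V,W} − D². The stated mapping properties of D_{V,W} then follow from the corresponding properties of E, T, and (I + XT)^{−1}.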
2.3.5 Dirichlet-to-Neumann map and normal derivatives We de\ufb01ne the Dirichlet-to-Neumann (DN) map \u039bV,W as follows: if f, g \u2208H1/2(\u2202M) and v \u2208H1(M) is any function extending g, i.e. tr\u2212(v) = g, then \u27e8\u039bV,W f, g\u27e9:= Z M \u2212Du \u00b7 Dv + V \u00b7 (vDu \u2212uDv) + (V 2 + W)uv, (2) where u = DV,W f \u2208H1(M) is the weak solution of (\u2217). The de\ufb01nition of the weak solution implies that the DN map is well-de\ufb01ned, i.e. it depends only on g an not on the choice of extension. Formally we have that \u039bV,Wf = i 2\u03c0 \u03bd \u00b7 (D + V )u \f \f \u2202M. Before proving the boundedness properties of the DN map we record a Green identity that will be useful now and in Chapter 5. This is just slightly more general than saying that the divergence theorem holds for vector \ufb01elds in W 1,1(M). Proposition 2.7. If supp(V ) \u2286M\u2212, V \u2208L\u221e(M), D \u00b7 V \u2208L\u221e(M), w \u2208W 1,1(M), then Z M V \u00b7 Dw + (D \u00b7 V )w = 0. Proof. This proof is taken from [18], Lemma 5.2. Let N = d + 1. Since L\u221e(M) does not have good approximation properties, we start proving it for V \u2208LN(M), D\u00b7V \u2208LN/2(M), w = W 1,N/(N\u22121)(M). From the Sobolev embedding we have that w \u2208W N/(N\u22122)(M), so that the integral is in fact convergent. 12 \fGiven that supp(V ) \u2286M\u2212we can \ufb01nd a compact set K \u2286M\u2212and smooth functions {Vk} such that supp(Vk) \u2286K, Vk \u2192V in LN(M) and D \u00b7 Vk \u2192D \u00b7 V in LN/2(M). Moreover, Vkw \u2208W 1,N/(N\u22121)(M) and supp(Vkw) \u2286K. The divergence theorem holds for vector \ufb01elds in W 1,1(M), and so we get Z M V \u00b7 Dw + (D \u00b7 V )w = lim k\u2192+\u221e Z M Vk \u00b7 Dw + (D \u00b7 Vk)w = lim k\u2192+\u221e Z M D \u00b7 (Vkw) = lim k\u2192+\u221e 1 2\u03c0i Z \u2202M \u03bd \u00b7 (Vkw) = 0. The conditions V \u2208LN(M) and D \u00b7 V \u2208LN/2(M) are satis\ufb01ed if we assume V \u2208L\u221e(M) and D \u00b7 V \u2208L\u221e(M). Finally, the integral only takes place in supp(V ) \u2286M\u2212. We know that there exist smooth functions {wk} such that wk \u2192w in L1(M) and Dwk \u2192Dw in L1(supp(V )), and thus the conclusion follows. Remark. The condition supp(V ) \u2286M\u2212is not necessary; in [18] this is proven under weaker conditions whose analogs would be supp(V ) \u2286M and D \u00b7 V \u2208L\u221e(T ). Before we continue, we need to de\ufb01ne the interior and exterior normal derivative of a function. This represents no problem if the function u is in H2(M) or H2 loc(M+), as the gradient Du is in H1(M) or H1 loc(M+) and so its trace is in H1/2(\u2202M). Moreover, for \u03d5 \u2208C\u221e c (T ) it satis\ufb01es either Z \u2202M (\u2202\u00b1 \u03bd u)\u03d5 = \u22134\u03c02 Z M\u00b1 (D2u)\u03d5 + Du \u00b7 D\u03d5. These identities suggest that we can de\ufb01ne the normal derivatives for harmonic functions in H1(M) or H1 loc(M+). We say u, in H1(M) or H1 loc(M+), is harmonic if for any \u03d5 \u2208C\u221e c (M\u00b1) we have Z M\u00b1 Du \u00b7 D\u03d5 = 0, (4) as it corresponds. By continuity these de\ufb01nitions extend to all test functions \u03d5 \u2208H1(M\u00b1) with tr\u00b1(\u03d5) = 0. If f \u2208H1/2(\u2202M) and v \u2208H1(M\u00b1) is any function extending f, i.e. tr\u00b1(v) = f, then we de\ufb01ne the normal derivatives as the functionals \u27e8\u2202\u00b1 \u03bd u, f\u27e9:= \u22134\u03c02 Z M\u00b1 Du \u00b7 Dv. (5) The condition (4) ensures that this is well-de\ufb01ned, i.e. 
it depends only on f and not on the choice of the extension. In particular, taking v = Ef \u2208H1 c (T ) and using the boundedness and support properties of Ef we can conclude that \u2202\u00b1 \u03bd u \u2208H\u22121/2(\u2202M). Proposition 2.8. Assume that the potentials satisfy V, W \u2208L\u221e(M) and D \u00b7 V \u2208L\u221e(M). Suppose in addition that supp(V ) \u2286M\u2212. If 0 is not a Dirichlet eigenvalue of HV,W in M, then the Dirichletto-Neumann map \u039bV,W : Hs(\u2202M) \u2192Hs\u22121(\u2202M) is bounded for 1/2 \u2264s \u22643/2. Moreover, if f \u2208H3/2(\u2202M) and u = DV,W f \u2208H2(M), then we have \u039bV,Wf = 1 4\u03c02 \u2202\u2212 \u03bd u \f \f \u2202M. Proof. We \ufb01rst prove that \u039bV,W : H1/2(\u2202M) \u2192H\u22121/2(\u2202M) is bounded. If f, g \u2208H1/2(\u2202M), then we have to show that |\u27e8\u039bV,W f, g\u27e9| \u2272\u2225f\u2225H1/2(\u2202M)\u2225g\u2225H1/2(\u2202M). For u, v \u2208H1(M) we have that \f \f \f \f Z M \u2212Du \u00b7 Dv + V \u00b7 (vDu \u2212uDv) + (V 2 + W)uv \f \f \f \f \u2272\u2225u\u2225H1(M)\u2225v\u2225H1(M). In particular, taking u = DV,W f \u2208H1(M) and v = Eg \u2208H1(M), we conclude from (2) that |\u27e8\u039bV,W f, g\u27e9| \u2272\u2225u\u2225H1(M)\u2225v\u2225H1(M) \u2272\u2225f\u2225H1/2(\u2202M)\u2225g\u2225H1/2(\u2202M), 13 \fwhere we used in the last inequality the boundedness of DV,W and E. Now we prove the result when f \u2208H3/2(\u2202M). Let g, v be as before. From Proposition 2.6 we have that u = DV,W f \u2208H2(M), and so \u2202\u2212 \u03bd u \u2208H1/2(\u2202M). Moreover, we can integrate by parts to obtain Z M (D2u)v = Z M \u2212Du \u00b7 Dv \u2212 1 4\u03c02 Z \u2202M (\u2202\u2212 \u03bd u)g In addition, for u \u2208H2(M), v \u2208H1(M) we have that uv \u2208W 1,1(M), so that we obtain R M D\u00b7(V u)v = \u2212 R M V \u00b7 (uDv). from Proposition 2.7. From the previous identities and HV,W u = 0 we get that 0 = Z M D \u00b7 [(D + V )u]v + V \u00b7 [(D + V )u]v + Wuv = Z M \u2212Du \u00b7 Dv + V \u00b7 (vDu \u2212uDv) + (V 2 + W)uv \u2212 1 4\u03c02 Z \u2202M (\u2202\u2212 \u03bd u)g, i.e. \u039bV,W f = \u2202\u2212 \u03bd u/4\u03c02, and \u2225\u039bV,Wf\u2225H1/2(\u2202M) \u2272\u2225Du\u2225H1(M) \u2272\u2225u\u2225H2(M) \u2272\u2225f\u2225H3/2(\u2202M), as we wanted to prove. An important application of the previous theorem is the case of the Laplacian H0,0 = D2. We know that 0 is not a Dirichlet eigenvalue of the Laplacian in M, and so we have the DN map \u039b0,0 de\ufb01ned by \u27e8\u039b0,0f, g\u27e9:= Z M \u2212Du \u00b7 Dv, (6) where u = D0,0f \u2208H1(M) and v \u2208H1(M) is any function extending g \u2208H1/2(\u2202M). We will not use the result for s > 3/2, but it can be shown that for s \u22651/2, the map \u039b0,0 : Hs(\u2202M) \u2192Hs\u22121(\u2202M) is bounded. Moreover, the symmetry in (6) implies the symmetry of the DN map, i.e. we have \u27e8\u039b0,0f, g\u27e9= \u27e8\u039b0,0g, f\u27e9for f, g \u2208H1/2(\u2202M). 3 Semiclassical pseudodi\ufb00erential operators over R \u00d7 Td We denote the points in the cylinder T = R \u00d7 Td by (x1, x\u2032), meaning that x1 \u2208 R and x\u2032 \u2208 Td. As it has been usual in the inverse problem literature, instead of the large parameter \u03c4 (appearing in the Carleman estimate) we consider a small parameter \u210f= 1/\u03c4 > 0, and use the standard notation and results from semiclassical analysis. We use the notation \u210finstead of h to prevent confusion with the later use of h for a harmonic function. 
In this section we define and prove the necessary results for pseudodifferential operators on the cylinder T = R × Td. We will use the definition and basic properties of these operators on R and Td, for which we refer to [21], [29], [18], [20], [17]. Some of the results below may hold in greater generality than we consider here; we restrict ourselves to proving the results that we will need.

3.1 Definitions and elementary properties

3.1.1 Semiclassical Fourier transform

We will use the ideas from semiclassical analysis only for the real variable x1, as the term τx1 = x1/ℏ appears in the limiting Carleman weight, and expressions of the form e^{2πτx1}D_{x1}e^{−2πτx1} = D_{x1} + iτ = τ(ℏD_{x1} + i) will continue to appear throughout the problem. For this reason, throughout the present chapter, we define the semiclassical Fourier transform, for functions in L^1(R), by
f̂^ℏ(ξ) := ∫_R e^{−2πix1ξ/ℏ} f(x1)dx1,
i.e. f̂^ℏ(ℏξ) = f̂(ξ). We can rewrite the results from Proposition 2.3 as follows.

Proposition 3.1. If f ∈ S(T), then its Fourier coefficients f_k are in S(R). Moreover, these satisfy the following: a). the transform of the coefficients and its derivatives have polynomial decay bounds |(ℏD_ξ)^α f̂^ℏ_k(ξ)| ≲ ⟨ξ, ℏk⟩^{−2m} ∥x^α_1 f∥_{W^{2m,1}_ℏ(T)}, where ⟨ξ, ℏk⟩ := (1 + ξ² + |ℏk|²)^{1/2} and the constant of the inequality may depend on m and d, b). the inversion formula holds,
f(x1, x′) = Σ_{k∈Zd} f_k(x1)e_k(x′) = (1/ℏ) Σ_{k∈Zd} ∫_R e^{2πix1ξ/ℏ} e_k(x′) f̂^ℏ_k(ξ)dξ,
with pointwise absolute uniform convergence, as well as for its derivatives, c). Plancherel's identity holds, ∥f∥²_{L²(T)} = Σ_{k∈Zd} ∥f_k∥²_{L²(R)} = ℏ^{−1} Σ_{k∈Zd} ∥f̂^ℏ_k∥²_{L²(R)}.

3.1.2 Semiclassical pseudodifferential operators

For the differential operator a_{α,β}(x1, x′)(ℏD_{x1})^α(ℏD_{x′})^β on T and f ∈ S(T) we have the Fourier inversion relation
[a_{α,β}(x1, x′)(ℏD_{x1})^α(ℏD_{x′})^β]f(x1, x′) = (1/ℏ) Σ_{k∈Zd} ∫_R e^{2πix1ξ/ℏ} e_k(x′)[a_{α,β}(x1, x′)ξ^α(ℏk)^β] f̂^ℏ_k(ξ)dξ.
We refer to the function a(x1, x′, ξ, k) = a_{α,β}(x1, x′)ξ^α(ℏk)^β as the symbol of the differential operator. In what follows we show that we can admit symbols more general than polynomials (in the dual variables ξ and k). Finally, although we only need to define the symbol over R × Td × R × Zd, it may be convenient also to allow for symbols over R × Td × R × Rd. We denote the points in R × Td × R × Rd by (x1, x′, ξ, t), and we call ξ and t the dual real and toroidal variables, respectively.

Definition 3.2. We say that a = a(x1, ξ; ℏ) is a (semiclassical) m-th order symbol over R × R if there exists ℏ0 such that if 0 < ℏ ≤ ℏ0, then for any M ≥ 0 there exists a constant A_M such that
|D^α_{x1}D^β_ξ a(x1, ξ; ℏ)| ≤ A_M⟨ξ⟩^m, whenever α + |β| ≤ M.
The associated pseudodi\ufb00erential operator is de\ufb01ned by Af(x1) := Op\u210f(a)f(x1) = 1 \u210f Z R e2\u03c0ix1\u03be/\u210fa(x1, \u03be; \u210f)c g\u210f(\u03be)d\u03be. De\ufb01nition 3.3. We say that a = a(x1, x\u2032, \u03be, t; \u210f) is a (semiclassical) m-th order symbol over R\u00d7 Td \u00d7 R \u00d7 Rd if there exists \u210f0 such that if 0 < \u210f\u2264\u210f0, then for any M \u22650 there exists a constant AM such that |D\u03b1 x1D\u03b2 x\u2032D\u03b3 \u03be a(x1, x\u2032, \u03be, t; \u210f)| \u2264AM\u27e8\u03be, \u210ft\u27e9m, whenever \u03b1 + |\u03b2| + \u03b3 \u2264M. The associated pseudodi\ufb00erential operator is de\ufb01ned by Af(x1, x\u2032) := Op\u210f(a)f(x1, x\u2032) := 1 \u210f X k\u2208Zd Z R e2\u03c0ix1\u03be/\u210fek(x\u2032)a(x1, x\u2032, \u03be, k; \u210f)c f \u210f k (\u03be)d\u03be. Remark. Observe that we do not require the order of the factor \u27e8\u03be\u27e9or \u27e8\u03be, \u210ft\u27e9to decrease whenever we di\ufb00erentiate with respect to \u03be. This would be the case if the symbol were a polynomial or a rational function, but we will be considering more general symbols. In the notation of [21], these would correspond to symbols in Sm 0,0. 15 \fRemark. Note that we do not require any condition on the di\ufb00erences (or derivatives) with respect to the dual toroidal variables. In a later section, Composition, we will need these symbols and refer to them as special. Remark. To avoid unnecessary notation, we may occasionally drop the dependance of the symbol on the semiclassical parameter and just write a(x1, x\u2032, \u03be, t). Example. With this de\ufb01nition, the functions \u03be and \u210ftj are symbols of order 1. Moreover, we have that \u210fDx1 = Op\u210f(\u03be) and \u210fDx\u2032 j = Op\u210f(\u210ftj) as (\u210fDx1)f(x1, x\u2032) = 1 \u210f X k\u2208Zd Z R e2\u03c0ix1\u03be/\u210fek(x\u2032)(\u03be)c f \u210f k (\u03be)d\u03be, (\u210fDx\u2032 j)f(x1, x\u2032) = 1 \u210f X k\u2208Zd Z R e2\u03c0ix1\u03be/\u210fek(x\u2032)(\u210fkj)c f \u210f k (\u03be)d\u03be. Example. The function \u27e8\u03be, \u210ft\u27e9\u22122 := 1/(\u03be2 + |\u210ft|2 + 1) is a symbol of order \u22122. Proposition 3.4. If a, b are symbols of order m and n, then D\u03b1 x1D\u03b2 x\u2032D\u03b3 \u03be a, a + b, and ab are symbols of order m, max{m, n}, and m + n, respectively. The seminorms of each of these symbols are bounded by those of a, the maximum of those of a and b, and products of those of a and b, respectively. Proof. This is a routine argument. Proposition 3.5. If A = Op\u210f(a) is a pseudodi\ufb00erential operator over T , then A maps the space of Schwartz functions S(T ) into itself. Proof. For this proof we will use the notation Dx = (Dx1, Dx\u2032). Let f \u2208S(T ). The polynomial control of the symbol a and its derivatives, together with the rapid decay of c f \u210f k (\u03be) from Proposition 3.1 give that Af \u2208C\u221e(T ), and it is bounded together with its derivatives. Moreover, di\ufb00erentiating the expression we see that (the vector) (\u210fDx)Af equals \u210fDxAf(x1, x\u2032) = 1 \u210f X k\u2208Zd Z R \u210fDx(e2\u03c0ix1\u03be/\u210fek(x\u2032)a)c f \u210f k (\u03be)d\u03be = 1 \u210f X k\u2208Zd Z R e2\u03c0ix1\u03be/\u210fek(x\u2032)[(\u03be, \u210fk)a + \u210fDxa]c f \u210f k (\u03be)d\u03be, and so it is a pseudodi\ufb00erential operator corresponding to the symbol (\u03be, \u210ft)a + \u210fDxa. By induction the same is true for higher order derivatives. 
Therefore, in order to show that Af \u2208S(T ), it su\ufb03ces to show that \u27e8x1\u27e92m|Af| \u2264Cm for all m \u22650. Integrating by parts we obtain that \u27e8x1\u27e92mAf(x1, x\u2032) = 1 \u210f X k\u2208Zd Z R \u27e8\u210fD\u03be\u27e92m(e2\u03c0ix1\u03be/\u210f)ek(x\u2032)ac f \u210f k (\u03be)d\u03be = 1 \u210f X k\u2208Zd Z R e2\u03c0ix1\u03be/\u210fek(x\u2032)\u27e8\u210fD\u03be\u27e92m[ac f \u210f k (\u03be)]d\u03be. Again, the polynomial control of the symbol a and its derivatives, together with the rapid decay of the derivatives of c f \u210f k (\u03be) from Proposition 3.1 give that this is bounded, from where the conclusion follows. Proposition 3.6. Let A = Op\u210f(a) be a pseudodi\ufb00erential operator over T . Then, it satis\ufb01es the following identities, \u210fDx1A = Op\u210f(\u03bea + \u210fDx1a), \u210f2D2 x1A = Op\u210f(\u03be2a + 2\u210f\u03beDx1a + \u210f2D2 x1a), \u210fDx\u2032 jA = Op\u210f(\u210ftja + \u210fDx\u2032 ja), \u210f2D2 x\u2032A = Op\u210f(|\u210ft|2a + 2\u210f(\u210ft \u00b7 Dx\u2032a) + \u210f2D2 x\u2032a), A \u25e6\u210fDx1 = Op\u210f(\u03bea), A \u25e6\u210f2D2 x1 = Op\u210f(\u03be2a), A \u25e6\u210f2D2 x\u2032 = Op\u210f(|\u210ft|2a). 16 \fProof. These results follow directly from the de\ufb01nition. The simplest case when dealing with pseudodi\ufb00erential operators in Rd, is when the symbol has spatial compact support, see Chapter 6, Section 2.1 in [21]. This is always the case for symbols on the torus, so in analogy to [21], we decompose the symbol in its Fourier series. With uniform convergence (in x1, x\u2032, \u03be, and l), we have that a(x1, x\u2032, \u03be, l; \u210f) = P k\u2208Zd ak(x1, \u03be, l; \u210f)ek(x\u2032), so we can rewrite the operator A = Op\u210f(a) as Af(x1, x\u2032) = 1 \u210f X l\u2208Zd Z R e2\u03c0ix1\u03be/\u210fel(x\u2032)a(x1, x\u2032, \u03be, l)c f \u210f l (\u03be)d\u03be = 1 \u210f X k,l\u2208Zd Z R e2\u03c0ix1\u03be/\u210fek+l(x\u2032)ak(x1, \u03be, l)c f \u210f l (\u03be)d\u03be = X k,l\u2208Zd \u00121 \u210f Z R e2\u03c0ix1\u03be/\u210fak\u2212l(x1, \u03be, l)c f \u210f l (\u03be)d\u03be \u0013 ek(x\u2032). (7) If a is a symbol over R \u00d7 Td \u00d7 R \u00d7 Rd, then for \ufb01xed k, l \u2208 Zd we can de\ufb01ne a symbol over R \u00d7 R by ak,l(x1, \u03be; \u210f) := ak\u2212l(x1, \u03be, l; \u210f). We will elaborate below on the properties of this symbol. Let us de\ufb01ne Akl := Op\u210f(ak,l) on S( R). For f \u2208S(T ), we de\ufb01ne Aklf(x1, x\u2032) := Aklfl(x1)ek(x\u2032), so that (7) can be expressed as the decomposition Af(x1, x\u2032) = X k,l\u2208Zd Aklfl(x1)ek(x\u2032) = X k,l\u2208Zd Aklf(x1, x\u2032). (8) 3.2 Boundedness In this section we prove a weighted version of the Calder\u00b4 on\u2013Vaillancourt theorem for pseudodi\ufb00erential operators over T . It is interesting to observe that we do not need to control the di\ufb00erences over the dual toroidal variables; this had already been noted in [20], [17]. Recall from before that for the symbol a(x1, x\u2032, \u03be, l; \u210f) over R \u00d7 Td \u00d7 R \u00d7 Zd, we de\ufb01ned the symbol ak,l(x1, \u03be; \u210f) := ak\u2212l(x1, \u03be, l; \u210f) over R \u00d7 R. Proposition 3.7. 
If a(x1, x\u2032, \u03be, l; \u210f) is a semiclassical zero order symbol over R \u00d7 Td \u00d7 R \u00d7 Zd, then ak,l(x1, \u03be; \u210f) is a semiclassical zero order symbol over R \u00d7 R with seminorm bounds |D\u03b1 x1D\u03b2 \u03be ak,l(x1, \u03be; \u210f)| \u2272AM+2N\u27e8k \u2212l\u27e9\u22122N, whenever \u03b1 + \u03b2 \u2264M and any N \u22650. Proof. From Proposition 2.2 we have that |D\u03b1 x1D\u03b2 \u03be ak,l(x1, \u03be)| = |D\u03b1 x1D\u03b2 \u03be ak\u2212l(x1, \u03be, l)| \u2272\u27e8k \u2212l\u27e9\u22122N\u2225D\u03b1 x1D\u03b2 \u03be a(x1, \u00b7, \u03be, l)\u2225W 2N,1(Td). Given that a is a zero order symbol, then for any N \u22650 we can bound \u2225D\u03b1 x1D\u03b2 \u03be a(x1, \u00b7, \u03be, l)\u2225W 2N,1(Td) \u2272 AM+2N, whenever \u03b1 + \u03b2 \u2264M, as we wanted to prove. We use this to show that the decomposition from (8) actually converges. The \ufb01rst step is to recall the standard boundedness properties of pseudodi\ufb00erential operators on weighted spaces over R. We will elaborate a little more on the quantitative aspect of the bound in the appendix at the end of the chapter. Proposition 3.8 ([18]). Let 0 < \u210f\u22641. Let a(x1, \u03be; \u210f) be a semiclassical zero order symbol over R \u00d7 R. For any \u03b4 \u2208 R the operator Op\u210f(a) is bounded in L2 \u03b4( R). Moreover, if |\u03b4| \u2264\u03b40, then the operator norms \u2225Op\u210f(a)\u2225L2 \u03b4(R)\u2192L2 \u03b4(R) are uniformly bounded (in \u03b4 and \u210f) by a multiple (depending on \u03b40) of some seminorm of a. 17 \fRemark. From Proposition 3.7 and Proposition 3.8 we obtain that if |\u03b4| \u2264\u03b40, then we can uniformly bound \u2225Akl\u2225L2 \u03b4(R)\u2192L2 \u03b4(R) \u2272AM+2N\u27e8k \u2212l\u27e9\u22122N, for some value of M and any N \u22650. Let us consider the elliptic di\ufb00erential operator \u27e8\u210fD\u27e92 = (\u210fD)2 + 1 = Op\u210f(\u27e8\u03be, \u210ft\u27e92), and the multiplier operator \u27e8\u210fD\u27e9\u22122 := Op\u210f(\u27e8\u03be, \u210ft\u27e9\u22122). These are pseudodi\ufb00erential operators, so by Proposition 3.5 they map S(T ) to itself. Moreover, these are inverses to each other on S(T ). The proof the following result is presented in the appendix. Proposition 3.9. Let |\u03b4| \u2264\u03b40 and let 0 < \u210f\u22641. The di\ufb00erential operator \u27e8\u210fD\u27e92 : H2 \u03b4,\u210f(T ) \u2192L2 \u03b4(T ) and the multiplier operator \u27e8\u210fD\u27e9\u22122 : L2 \u03b4(T ) \u2192H2 \u03b4,\u210f(T ) are uniformly bounded (in \u03b4 and \u210f) operators, and inverses to each other. The bounds of the operators may depend in \u03b40. Proposition 3.10. Let 0 < \u210f\u22641. If A = Op\u210f(a) is an m-th order pseudodi\ufb00erential operator, then \u27e8\u210fD\u27e92A\u27e8\u210fD\u27e9\u22122 is also an m-th order pseudodi\ufb00erential operator with symbol e a := a + 2\u210f(\u03be, \u210ft) \u00b7 (Dx1a, Dx\u2032a) + \u210f2(D2 x1a + D2 x\u2032a) \u27e8\u03be, \u210ft\u27e92 . Moreover, the seminorms of e a are bounded by seminorms of a. Proof. 
The \ufb01rst part is a direct computation, \u27e8\u210fD\u27e92A\u27e8\u210fD\u27e9\u22122f(x1, x\u2032) = \u27e8\u210fD\u27e92 \u00121 \u210f X k\u2208Zd Z R e2\u03c0ix1\u03be/\u210fek(x\u2032)a c f \u210f k (\u03be) \u27e8\u03be, \u210fk\u27e92 d\u03be \u0013 = 1 \u210f X k\u2208Zd Z R \u27e8\u210fD\u27e92(e2\u03c0ix1\u03be/\u210fek(x\u2032)a) c f \u210f k (\u03be) \u27e8\u03be, \u210fk\u27e92 d\u03be = 1 \u210f X k\u2208Zd Z R e2\u03c0ix1\u03be/\u210fek(x\u2032) \u0012 a + 2\u210f(\u03be, \u210fk) \u00b7 (Dx1a, Dx\u2032a) + \u210f2(D2 x1a + D2 x\u2032a) \u27e8\u03be, \u210fk\u27e92 \u0013 c f \u210f k (\u03be)d\u03be. The symbols (\u03be, \u210ft)/(\u03be2 + |\u210ft|2 + 1) and 1/(\u03be2 + |\u210ft|2 + 1) have order \u22121 and \u22122, respectively, with uniformly bounded (in \u210f) seminorms. The conclusion follows then from Proposition 3.4. Theorem 3.11. Let 0 < \u210f\u22641. Let a be a zero order symbol over R \u00d7 Td \u00d7 R \u00d7 Rd. For s = 0, 2 and |\u03b4| \u2264\u03b40, the operator Op\u210f(a) is uniformly bounded (in \u03b4 and \u210f) on Hs \u03b4,\u210f(T ). The bounds depend on d, \u03b40, and (linearly) in some seminorm of the symbol, but are independent of \u210f. Proof. We prove \ufb01rst the result on the weighted spaces L2 \u03b4(T ), and then conjugate by \u27e8\u210fD\u27e92 to show the result in H2 \u03b4,\u210f(T ). Recall the decomposition A = P k,l Akl from (8), where Aklf(x1, x\u2032) := Aklfl(x1)ek(x\u2032). Choosing N = N(d) large enough and using the bounds for the operators Akl, from the remark after Proposition 3.8, we obtain that sup k\u2208Zd X l\u2208Zd \u2225Akl\u2225L2 \u03b4(R)\u2192L2 \u03b4(R), sup l\u2208Zd X k\u2208Zd \u2225Akl\u2225L2 \u03b4(R)\u2192L2 \u03b4(R) \u2272AM+2N. By Plancherel\u2019s theorem and Schur\u2019s criterion it follows that \u2225Af\u22252 L2 \u03b4(T ) = X k\u2208Zd \r \r \r \r X l\u2208Zd Aklfl \r \r \r \r 2 L2 \u03b4(R) \u2264 X k\u2208Zd \u0012 X l\u2208Zd \u2225Akl\u2225L2 \u03b4(R)\u2192L2 \u03b4(R)\u2225fl\u2225L2 \u03b4(R) \u00132 \u2272A2 M+2N X l\u2208Zd \u2225fl\u22252 L2 \u03b4(R) = A2 M+2N\u2225f\u22252 L2 \u03b4(T ). This gives that A is a bounded operator on L2 \u03b4(T ) for |\u03b4| \u2264\u03b40. From Proposition 3.9 we get that the boundedness of A on H2 \u03b4,\u210f(T ) is equivalent to the boundedness of \u27e8\u210fD\u27e92A\u27e8\u210fD\u27e9\u22122 on L2 \u03b4(T ). We know from Proposition 3.10 that this is a zero order pseudodi\ufb00erential operator with seminorms bounded by those of a, and thus the conclusion follows. 18 \fRemark. The result proven above will be su\ufb03cient for our purposes, but the result can be extended to Hs \u03b4,\u210f(T ) for any 0 \u2264s \u22642 by complex interpolation. 3.3 Composition Let a, b be zero order symbols, and let A = Op\u210f(a) and B = Op\u210f(b). We know from Proposition 3.4 that ab is also a zero order symbol. The following result provides a relation between the operator Op\u210f(ab) and the composition AB. This will be used to obtain the invertibility of pseudodi\ufb00erential operators corresponding to certain zero order symbols. In contrast to Theorem 3.11, in this case we need our symbols to satisfy bounds for the di\ufb00erences in the dual toroidal variables. For (u, v, w) \u2208 R\u00d7 Rd\u00d7 Rd we denote \u27e8u, v, w\u27e9:= (1 + u2 + |v|2 + |w|2)1/2. De\ufb01nition 3.12. 
We say a zero order symbol a = a(x1, x\u2032, \u03be, t; \u210f) is special if, in addition to the conditions from De\ufb01nition 3.3, for any M \u22650 there exists a constant A\u2032 M such that |D\u03b1 x1D\u03b2 x\u2032D\u03b3 \u03be (a(\u00b7, t1) \u2212a(\u00b7, t2))| \u2264\u210fA\u2032 M|t1 \u2212t2|, whenever \u03b1 + |\u03b2| + \u03b3 \u2264M. Remark. If the symbol is di\ufb00erentiable with respect to t and satis\ufb01es |D\u03b1 x1D\u03b2 x\u2032D\u03b3 \u03be Dta| \u2264\u210fA\u2032 M, whenever \u03b1 + |\u03b2| + \u03b3 \u2264M, then the symbol is be special. Theorem 3.13. Let 0 < \u210f\u22641 and \u03b40 \u22650. Let a, b be zero order symbols over R \u00d7 Td \u00d7 R \u00d7 Rd. There exists a zero order symbol c such that Op\u210f(a)Op\u210f(b) = Op\u210f(c). If the symbols are special, then, for s = 0, 2 and |\u03b4| \u2264\u03b40, we have \u2225Op\u210f(a)Op\u210f(b) \u2212Op\u210f(ab)\u2225Hs \u03b4,\u210f(T )\u2192Hs \u03b4,\u210f(T ) = \u2225Op\u210f(c \u2212ab)\u2225Hs \u03b4,\u210f(T )\u2192Hs \u03b4,\u210f(T ) \u2272\u210f, where the constant of the inequality is a multiple (depending on d, \u03b40) of the product of some seminorm of the symbols, but is independent of \u210f. Proof. Let A = Op\u210f(a) and B = Op\u210f(b). Let us decompose A = P Ajk and B = P Blm as in (8). We have that AjkBlm = 0 if k \u0338= l, so that ABf(x1, x\u2032) = X j,k,l\u2208Zd AjkBklf(x1, x\u2032), and AjkBklf(x1, x\u2032) = AjkBklfl(x1)ej(x\u2032). We know that there exists a zero order symbol cj,k,l over R \u00d7 R such that AjkBkl = Op\u210f(cj,k,l), see [21], [29]. Thus, ABf(x1, x\u2032) = X j,k,l\u2208Zd AjkBklf(x1, x\u2032) = X j,k,l\u2208Zd AjkBklfl(x1)ej(x\u2032) = X j,k,l\u2208Zd Op\u210f(cj,k,l)fl(x1)ej(x\u2032). From Proposition 3.7 and Proposition 3.19, which we prove in the appendix, there exists some K such that for any N \u22650 we have that |D\u03b1 x1D\u03b2 \u03be cj,k,l| \u2272AK+M+2NBK+M+2N\u27e8j \u2212k\u27e9\u22122N\u27e8k \u2212l\u27e9\u22122N, (9) |D\u03b1 x1D\u03b2 \u03be (cj,k,l \u2212aj,kbk,l)| \u2272\u210fAK+M+2NBK+M+2N\u27e8j \u2212k\u27e9\u22122N\u27e8k \u2212l\u27e9\u22122N, (10) whenever \u03b1 + \u03b2 \u2264M. Recall that for a symbol s we denote sk,l(x1, \u03be) = sk\u2212l(x1, \u03be, l). Using this notation and the decomposition from (8), we see that if c were a symbol such that AB = Op\u210f(c), then we must have cj,l(x1, \u03be) = P k\u2208Zd cj,k,l(x1, \u03be). Thus we de\ufb01ne c(x1, x\u2032, \u03be, l) = X j\u2208Zd cj(x1, \u03be, l)ej(x\u2032) = X j\u2208Zd cj+l,l(x1, \u03be)ej(x\u2032) = X j\u2208Zd cj,l(x1, \u03be)ej\u2212l(x\u2032) := X j,k\u2208Zd cj,k,l(x1, \u03be)ej\u2212l(x\u2032). 19 \fIt follows from (9) that c is a zero order symbol. Moreover, c(x1, x\u2032, \u03be, l) \u2212a(x1, x\u2032, \u03be, l)b(x1, x\u2032, \u03be, l) = X j,k\u2208Zd (cj,k,l(x1, \u03be) \u2212aj\u2212k(x1, \u03be, l)bk\u2212l(x1, \u03be, l))ej\u2212l(x\u2032) = X j,k\u2208Zd (cj,k,l(x1, \u03be) \u2212aj,k(x1, \u03be)bk,l(x1, \u03be))ej\u2212l(x\u2032) + (aj\u2212k(x1, \u03be, k) \u2212aj\u2212k(x1, \u03be, l))bk,l(x1, \u03be)ej\u2212l(x\u2032). From (10) and the inequality \u27e8x+y\u27e9\u2272\u27e8x\u27e9\u27e8y\u27e9, we obtain that the \ufb01rst di\ufb00erence is a zero order symbol with seminorms bounded by (appropriate) multiples of \u210f. 
For the second di\ufb00erence we use that the symbol is special and Proposition 2.2 (as in the proof of Proposition 3.7) to obtain |D\u03b1 x1D\u03b2 \u03be (aj\u2212k(x1, \u03be, k) \u2212aj\u2212k(x1, \u03be, l))| \u2272\u210f|k \u2212l|A\u2032 M+2N\u27e8j \u2212k\u27e9\u22122N, any N \u22650, whenever \u03b1 + \u03b2 \u2264M. Therefore, |D\u03b1 x1D\u03b2 \u03be [(aj\u2212k(x1, \u03be, k) \u2212aj\u2212k(x1, \u03be, l))bk,l(x1, \u03be)]| \u2272\u210fA\u2032 M+2NBM+2N\u27e8j \u2212k\u27e9\u22122N\u27e8k \u2212l\u27e9\u22122N+1, for any N \u22650, whenever \u03b1 + \u03b2 \u2264M. As before, we conclude that the second di\ufb00erence is a zero order symbol with seminorms bounded by (appropriate) multiples of \u210f, and the result follows from Theorem 3.11. Proposition 3.14. Let a and b be special zero order symbols over R \u00d7 Td \u00d7 R \u00d7 Rd. Then, a). D\u03b1 x1D\u03b2 x\u2032D\u03b3 \u03be a, a + b, and ab are also special zero order symbols. Moreover, their seminorms are controlled by the products of the seminorms of a and b. b). the function ea is also a special zero order symbol, c). for small enough \u210f, depending on a, the function log(1 + \u210fa) is also a special zero order symbol. Proof. This is a routine argument. Corollary 3.15. Let a be a special zero order symbol over R \u00d7 Td \u00d7 R \u00d7 Rd and let \u03b40 > 0. Then there exists \u210f0 > 0, such that if 0 < \u210f\u2264\u210f0, then Op\u210f(ea) is an invertible operator in Hs \u03b4,\u210f(T ) for |\u03b4| \u2264\u03b40 and s = 0, 2. Moreover, the norms in Hs \u03b4,\u210f(T ) of the operator and its inverse are uniformly bounded (in \u03b4 and \u210f). Proof. We know from Proposition 3.14 that e\u00b1a are zero order symbols. From Theorem 3.13 we have that \u2225Op\u210f(ea)Op\u210f(e\u2212a) \u2212I\u2225Hs \u03b4,\u210f(T )\u2192Hs \u03b4,\u210f(T ), \u2225Op\u210f(e\u2212a)Op\u210f(ea) \u2212I\u2225Hs \u03b4,\u210f(T )\u2192Hs \u03b4,\u210f(T ) \u2272\u210f. This implies that Op\u210f(ea) has left and right inverses and the conclusion follows. 3.4 Appendices 3.4.1 Some facts about weighted spaces Let us recall the multiplier operator \u27e8\u210fD\u27e9\u22122 := Op\u210f(\u27e8\u03be, \u210ft\u27e9\u22122), i.e. \u27e8\u210fD\u27e9\u22122f(x1, x\u2032) = 1 \u210f X k\u2208Zd Z R e2\u03c0ix1\u03be/\u210fek(x\u2032) 1 \u03be2 + |\u210fk|2 + 1 c f \u210f k (\u03be)d\u03be = X k\u2208Zd \u00121 \u210f Z R e2\u03c0ix1\u03be/\u210f \u03be2 + |\u210fk|2 + 1 c f \u210f k (\u03be)d\u03be \u0013 ek(x\u2032) If \u03bb > 0, then we have the classical Fourier transform Z R e2\u03c0ix1\u03be \u03be2 + \u03bb2 d\u03be = \u03c0 \u03bbe\u22122\u03c0\u03bb|x1|, 20 \fso that the multiplier operator is also given by a convolution with the convergent Fourier series \u03c0 \u210f X k\u2208Zd 1 \u27e8\u210fk\u27e9e\u22122\u03c0\u27e8\u210fk\u27e9|x1|/\u210fek(x\u2032). (11) In the following results we study the properties of convolutions with functions of the form e\u2212\u03bb|x1| to give a proof to Proposition 3.9. Proposition 3.16. Let |\u03b4| \u2264\u03b40 and g \u2208L1 \u03b40( R). For any f \u2208L2 \u03b4( R) we have that \u2225f \u2217g\u2225L2 \u03b4 \u2272 \u2225f\u2225L2 \u03b4\u2225g\u2225L1 \u03b40, where the constant of the inequality may depend on \u03b40. Proof. 
Using that \u27e8a\u27e9\u27e8b\u27e9\u22121 \u2272\u27e8a \u2212b\u27e9and \u27e8a\u27e9\u22651 we obtain that \u27e8x1\u27e9\u03b4|f \u2217g|(x1) \u2264\u27e8x1\u27e9\u03b4(|f| \u2217|g|)(x1) = Z R \u27e8y1\u27e9\u03b4|f(y1)|\u27e8x1 \u2212y1\u27e9\u03b40|g(x1 \u2212y1)|[\u27e8x1\u27e9\u03b4\u27e8y1\u27e9\u2212\u03b4\u27e8x1 \u2212y1\u27e9\u2212\u03b40]dy1 \u2272(\u27e8\u00b7\u27e9\u03b4|f|) \u2217(\u27e8\u00b7\u27e9\u03b40|g|)(x1). The conclusion then follows from Young\u2019s inequality. Proposition 3.17. Let \u03bb \u22651. Then e\u2212\u03bb|x1| \u2208L1 \u03b40( R) for all \u03b40 \u22650, and satis\ufb01es \u2225e\u2212\u03bb|x1|\u2225L1 \u03b40 \u2272\u03bb\u22121, with the constant depending on \u03b40. Proof. Integrating by parts we obtain that if n \u22650 is an integer, then Z \u221e 0 e\u2212\u03bbx1xn 1 dx1 = n! \u03bbn+1 \u22721 \u03bb. We have that \u27e8x1\u27e9n \u22721 + |x1|n, so that if \u03b40 \u2264n, then Z R e\u2212\u03bb|x1|\u27e8x1\u27e9\u03b40dx1 \u2264 Z R e\u2212\u03bb|x1|\u27e8x1\u27e9ndx1 \u2272 Z \u221e 0 e\u2212\u03bbx1(1 + xn 1)dx1 \u22721 \u03bb, as we wanted to prove. Proposition 3.18. Let 0 < \u210f\u22641, |\u03b4| \u2264\u03b40, and \u03bb \u22651. If f \u2208L2 \u03b4( R) and we de\ufb01ne T\u03bbf := e\u22122\u03c0\u03bb|x1|/\u210f\u2217f, then T\u03bbf \u2208H2 \u03b4,\u210f( R) and \u2225(\u210fDx1)mT\u03bbf\u2225L2 \u03b4 \u2272\u210f\u03bbm\u22121\u2225f\u2225L2 \u03b4 for 0 \u2264m \u22642. Proof. Di\ufb00erentiating we observe that \u210fDx1(e\u22122\u03c0\u03bb|x1|/\u210f) = \u03bbisgn(x1)e\u22122\u03c0\u03bb|x1|/\u210f, (\u210fDx1)2(e\u22122\u03c0\u03bb|x1|/\u210f) = \u210f\u03bb \u03c0 \u03b40 \u2212\u03bb2e\u22122\u03c0\u03bb|x1|/\u210f, so that we have \u210fDx1T\u03bbf = \u03bbi(sgn(x1)e\u22122\u03c0\u03bb|x1|/\u210f) \u2217f, (\u210fDx1)2T\u03bbf = \u210f\u03bb \u03c0 f \u2212\u03bb2T\u03bbf. From Proposition 3.16 and Proposition 3.17 we obtain that \u2225T\u03bbf\u2225L2 \u03b4 \u2272\u210f \u03bb\u2225f\u2225L2 \u03b4, \u2225\u210fDx1T\u03bbf\u2225L2 \u03b4 \u2272\u210f\u2225f\u2225L2 \u03b4, \u2225(\u210fDx1)2T\u03bbf\u2225L2 \u03b4 \u2272\u210f\u03bb\u2225f\u2225L2 \u03b4. Proposition 3.9. Let |\u03b4| \u2264\u03b40 and let 0 < \u210f\u22641. The di\ufb00erential operator \u27e8\u210fD\u27e92 : H2 \u03b4,\u210f(T ) \u2192L2 \u03b4(T ) and the multiplier operator \u27e8\u210fD\u27e9\u22122 : L2 \u03b4(T ) \u2192H2 \u03b4,\u210f(T ) are uniformly bounded (in \u03b4 and \u210f) operators, and inverses to each other. The bounds of the operators may depend in \u03b40. 21 \fProof. It is clear that the di\ufb00erential operator \u27e8\u210fD\u27e92 : H2 \u03b4,\u210f(T ) \u2192L2 \u03b4(T ) is a bounded operator. Using the notation of Proposition 3.18, we get from (11) that F(x1, x\u2032) := \u27e8\u210fD\u27e9\u22122f(x1, x\u2032) = \u03c0 \u210f X k\u2208Zd 1 \u27e8\u210fk\u27e9T\u27e8\u210fk\u27e9fk(x1)ek(x\u2032). We have that \u2225F\u22252 H2 \u03b4,\u210f(T ) \u2243P k\u2208Zd\u27e8\u210fk\u27e94\u2225Fk\u22252 L2 \u03b4(R) + \u27e8\u210fk\u27e92\u2225\u210fDx1Fk\u22252 L2 \u03b4(R) + \u2225(\u210fDx1)2Fk\u22252 L2 \u03b4(R). The bound \u2225F\u22252 H2 \u03b4,\u210f(T ) \u2272\u2225f\u22252 L2 \u03b4(T ) then follows from Proposition 3.18. We have proven that both of these maps are uniformly bounded. The fact that these maps are inverses to each other on S(T ), together with the density of S(T ) in L2 \u03b4(T ) and Hs \u03b4,\u210f(T ), implies the desired result. 
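Remark. For completeness, we indicate the standard computation behind the classical Fourier transform used to obtain (11): for λ > 0,
∫_R e^{−2πix1ξ} e^{−2πλ|x1|} dx1 = 1/(2π(λ + iξ)) + 1/(2π(λ − iξ)) = λ/(π(λ² + ξ²)),
and since this function of ξ is integrable, the Fourier inversion formula gives
∫_R e^{2πix1ξ} (ξ² + λ²)^{−1} dξ = (π/λ) e^{−2πλ|x1|}.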
3.4.2 Some facts about semiclassical pseudodi\ufb00erential calculus on R In the results above we needed some quantitative results for the bounds of the symbols and operators over R. They are implicitly hinted in the literature, but, for the sake of completeness, we state them explicitly. We start with the operator bounds for zero order pseudodi\ufb00erential operators. To avoid unnecessary notation, we denote x1 \u2208 R simply by x. Proposition 3.8 ([18]). Let 0 < \u210f\u22641. Let a(x1, \u03be; \u210f) be a semiclassical zero order symbol over R \u00d7 R. For any \u03b4 \u2208 R the operator Op\u210f(a) is bounded in L2 \u03b4( R). Moreover, if |\u03b4| \u2264\u03b40, then the operator norms \u2225Op\u210f(a)\u2225L2 \u03b4(R)\u2192L2 \u03b4(R) are uniformly bounded (in \u03b4 and \u210f) by a multiple (depending on \u03b40) of some seminorm of a. Proof. Let us write Op\u210f(a)f(x) := 1 \u210f Z R e2\u03c0ix\u03be/\u210fa(x, \u03be)c f \u210f(\u03be)d\u03be = Z R e2\u03c0ix\u03bea(x, \u210f\u03be) b f(\u03be)d\u03be. The symbols a\u210f(x, \u03be) := a(x, \u210f\u03be) satisfy the same seminorm estimates |D\u03b1 xD\u03b2 \u03be a\u210f| \u2264AM, whenever \u03b1 + \u03b2 \u2264M. Therefore, it su\ufb03ces to prove this estimate for the case \u210f= 1. The case \u03b4 = 0, i.e. in L2( R), is the Calder\u00b4 on\u2013Vaillancourt theorem, and the bound for it in terms of the seminorms is stated in [21] at the end of Section 2.4, Chapter 6, or Section 4.5 in [29] (in the semiclassical setting for the Weyl quantization, but the method of proof is the same). We prove \ufb01rst the case \u03b4 > 0 for \u03b4 = 2n, with n a positive integer. Using the identities \u27e8x\u27e92ne2\u03c0ix\u03be = \u27e8D\u03be\u27e92ne2\u03c0ix\u03be, Dk \u03be b f(\u03be) = (\u22121)k\\ (xkf)(\u03be), and integrating by parts we obtain that if f \u2208S( R), then \u27e8x\u27e92nOp(a)f = Z R \u27e8D\u03be\u27e92n(e2\u03c0ix\u03be)a(x, \u03be) b f(\u03be)d\u03be = Z R e2\u03c0ix\u03be\u27e8D\u03be\u27e92n(a(x, \u03be) b f(\u03be))d\u03be = 2n X k=0 Z R e2\u03c0ix\u03beak(x, \u03be)\\ (xkf)(\u03be)d\u03be = 2n X k=0 Op(ak)(xkf), for some zero order symbols ak(x, \u03be), with seminorms controlled by those of a. From this and the Calder\u00b4 on\u2013Vaillancourt theorem we get that \u2225\u27e8x\u27e92nOp(a)f\u2225L2(R) \u2264 2n X k=0 \u2225Op(ak)(xkf)\u2225L2(R) \u2272 2n X k=0 \u2225xkf\u2225L2(R) \u2272\u2225\u27e8x\u27e92nf\u2225L2(R), with the constant of the inequality depending on n and some seminorm of a. We have shown that Op(a) is bounded on L2 2n( R). The intermediate values 0 < \u03b4 < 2n are obtained by complex interpolation. 22 \fNow, let us consider the case \u03b4 < 0 for \u03b4 = \u22122n, with n a positive integer. Integrating by parts we obtain that if f \u2208S( R), then \u27e8x\u27e9\u22122nOp(a)\u27e8x\u27e92nf = \u27e8x\u27e9\u22122n Z R e2\u03c0ix\u03bea(x, \u03be)\u27e8D\u03be\u27e92n b f(\u03be)d\u03be = \u27e8x\u27e9\u22122n Z R \u27e8D\u03be\u27e92n(e2\u03c0ix\u03bea(x, \u03be)) b f(\u03be)d\u03be = Op(e a)f, for some zero order symbol e a(x, \u03be), with seminorms controlled by those of a. This identity can be rewritten as \u27e8x\u27e9\u22122nOp(a) = Op(e a)\u27e8x\u27e9\u22122n, and so the Calder\u00b4 on\u2013Vaillancourt theorem gives the boundedness on L2 \u22122n( R). Again, the intermediate values \u22122n < \u03b4 < 0 are obtained by complex interpolation. We also prove the following result for the symbol of the composition, which we used in the proof of Theorem 3.13. 
Proposition 3.19. Let a(x, \u03be; \u210f) and b(x, \u03be; \u210f) be symbols over R \u00d7 R satisfying |D\u03b1 xD\u03b2 \u03be a| \u2264AM, |D\u03b1 xD\u03b2 \u03be b| \u2264BM, whenever \u03b1 + \u03b2 \u2264M. If c = c(x1, \u03be; \u210f) is the symbol such that Op\u210f(c) = Op\u210f(a)Op\u210f(b), then there exists some K such that for any M \u22650 the symbol satis\ufb01es |D\u03b1 xD\u03b2 \u03be c| \u2272AK+MBK+M, |D\u03b1 xD\u03b2 \u03be (c \u2212ab)| \u2272\u210fAK+MBK+M, whenever \u03b1 + \u03b2 \u2264M, where the constants of the inequalities may depend on M but are independent of \u210f. Proof. Proceeding as in [21], see Chapter 6, Section 3, it su\ufb03ces to show the estimates for compactly supported symbols and prove that these are independent of the size of the support. Let us recall the integral kernel representation of a pseudodi\ufb00erential operator, Op\u210f(s)f(x) = 1 \u210f Z R e2\u03c0ix\u03be/\u210fs(x, \u03be)c f \u210f(\u03be)d\u03be = 1 \u210f Z R \u0012Z R e2\u03c0i(x\u2212y)\u03be/\u210fs(x, \u03be)d\u03be \u0013 f(y)dy. Then, the composition has an integral kernel representation given by Op\u210f(a)Op\u210f(b)f(x) = 1 \u210f Z R \u0012Z R e2\u03c0i(x\u2212y)\u03be/\u210fa(x, \u03be)d\u03be \u0013 Op\u210f(b)f(y)dy = 1 \u210f Z R \u00121 \u210f Z R2 e2\u03c0i(x\u2212y)\u03be/\u210fe2\u03c0i(y\u2212z)\u03b7/\u210fa(x, \u03be)b(y, \u03b7)d\u03bed\u03b7dy \u0013 f(z)dz = 1 \u210f Z R \u0012Z R e2\u03c0i(x\u2212z)\u03b7/\u210fc(x, \u03b7)d\u03b7 \u0013 f(z)dz, and therefore the symbol of Op\u210f(a)Op\u210f(b) is equal to c(x, \u03b7) := 1 \u210f Z R2 e2\u03c0i(x\u2212y)(\u03be\u2212\u03b7)/\u210fa(x, \u03be)b(y, \u03b7)d\u03bedy = 1 \u210f Z R2 e\u22122\u03c0iy\u03be/\u210fa(x, \u03b7 + \u03be)b(x + y, \u03b7)d\u03bedy. From the inversion formula we have that a(x, \u03b7) = 1 \u210f Z R2 e\u22122\u03c0iy\u03be/\u210fa(x, \u03b7 + \u03be)d\u03bedy, which implies that c(x, \u03b7) \u2212a(x, \u03b7)b(x, \u03b7) = 1 \u210f Z R2 ye\u22122\u03c0iy\u03be/\u210fa(x, \u03b7 + \u03be) \u00b7 b(x + y, \u03b7) \u2212b(x, \u03b7) y d\u03bedy. 23 \fNow we use the identity D\u03be\u27e8\u210fD\u03be\u27e92 \u27e8y\u27e92 e\u22122\u03c0iy\u03be/\u210f= \u22121 \u210fye\u22122\u03c0iy\u03be/\u210f and integrate by parts to obtain that c(x, \u03b7) \u2212a(x, \u03b7)b(x, \u03b7) = Z R2 e\u22122\u03c0iy\u03be/\u210f(D\u03be\u27e8\u210fD\u03be\u27e92a(x, \u03b7 + \u03be)) \u0012b(x + y, \u03b7) \u2212b(x, \u03b7) y\u27e8y\u27e92 \u0013 d\u03bedy =: Z R2 e\u22122\u03c0iy\u03be/\u210fA(x, \u03b7, \u03be)B(x, \u03b7, y)d\u03bedy. Let us observe that the integral above is absolutely convergent because A has compact support in \u03be and B is integrable in y. Therefore, we can exchange the order of integration and obtain that \f \f \f \f Z R2e\u22122\u03c0iy\u03be/\u210fA(x, \u03b7, \u03be)B(x, \u03b7, y)d\u03bedy \f \f \f \f = \u210f \f \f \f \f Z R2 e\u22122\u03c0iy\u00b5A(x, \u03b7, \u210f\u00b5)B(x, \u03b7, y)dyd\u00b5 \f \f \f \f = \u210f \f \f \f \f Z R A(x, \u03b7, \u210f\u00b5)B(x, \u03b7, b \u00b5)d\u00b5 \f \f \f \f \u2272\u210f\u2225A(x, \u03b7, \u00b7)\u2225L\u221e(R)\u2225\u27e8Dy\u27e92B(x, \u03b7, \u00b7)\u2225L1(R), where we used in the last inequality that Z R | b f(\u00b5)|d\u00b5 = Z R |\u27e8\u00b5\u27e92 b f(\u00b5)| \u27e8\u00b5\u27e92 d\u00b5 = Z R | \\ \u27e8D\u27e92f(\u00b5)| \u27e8\u00b5\u27e92 d\u00b5 \u2272\u2225\\ \u27e8D\u27e92f\u2225L\u221e(R) \u2272\u2225\u27e8D\u27e92f\u2225L1(R). 
Similarly, for \u03b1 + \u03b2 \u2264M we can bound |D\u03b1 xD\u03b2 \u03b7 (c \u2212ab)| \u2272\u210f \f \f \f \f Z R D\u03b1 xD\u03b2 \u03b7 (A(x, \u03b7, \u210f\u00b5)B(x, \u03b7, b \u00b5))d\u00b5 \f \f \f \f \u2272\u210f sup \u03b10+\u03b20\u2264M \u2225D\u03b10 x D\u03b20 \u03b7 A(x, \u03b7, \u00b7)\u2225L\u221e(R) sup \u03b11+\u03b21\u2264M \u03b31\u22642 \u2225D\u03b11 x D\u03b21 \u03b7 D\u03b31 y B(x, \u03b7, \u00b7)\u2225L1(R). To \ufb01nish the estimate we use that |D\u03b1 xD\u03b2 \u03b7 A(x, \u03b7, \u03be)| \u2272AM+3, |D\u03b1 xD\u03b2 \u03b7D\u03b3 yB(x, \u03b7, y)| \u2272BM+3 \u27e8y\u27e92 , whenever \u03b1 + \u03b2 \u2264M and \u03b3 \u22642. The di\ufb00erential inequalities for the di\ufb00erence c \u2212ab imply the results for the symbol c, and this completes the proof. 4 Conjugation and Carleman Estimate The results of this chapter are an adaptation of those from [18] in the case of Rd to the case of the cylinder T = R\u00d7 Td. As brie\ufb02y mentioned in the setting, the proof of the magnetic Carleman estimate Theorem 1.2 is reduced to the case with no potentials. The proof of this Carleman estimate in [8] for R\u00d7M0, where M0 is a Riemannian manifold with boundary, is realized by an eigenfunction expansion and the solution of \ufb01rst order linear constant coe\ufb03cient ODEs. This can be carried out in the exact same way for the torus Td, so we have the following result. Theorem 4.1 ([8]). Let \u03b4 > 1/2. There exists \u03c40 \u22651 such that if |\u03c4| \u2265\u03c40 and \u03c4 2 / \u2208Spec(\u2212\u2206g0), then for any f \u2208L2 \u03b4(T ) there exists a unique u \u2208H1 \u2212\u03b4(T ) which solves e2\u03c0\u03c4x1D2e\u22122\u03c0\u03c4x1u = f. Moreover, this solution is in H2 \u2212\u03b4(T ) and satis\ufb01es the estimates \u2225u\u2225Hs \u2212\u03b4(T ) \u2272|\u03c4|s\u22121\u2225f\u2225L2 \u03b4(T ), for 0 \u2264s \u22642, with the constant of the inequality independent of \u03c4. 24 \fRemark. It is important to note that the constant in the inequality only requires the condition that \u03c4 2 does not belong to Spec(\u2212\u2206g0); it is not necessary to ensure any distance condition to the spectrum. We can easily see that the uniqueness would fail if \u03c42 \u2208Spec(\u2212\u2206g0) as u = em(x\u2032) \u2208H2 \u2212\u03b4(T ), with \u03c4 2 = |m|2, is a solution of the homogeneous problem. The theorem above allows to de\ufb01ne the operator G\u03c4 : L2 \u03b4(T ) \u2192H2 \u2212\u03b4(T ) by G\u03c4f := u, so that \u2206\u03c4G\u03c4 = I on L2 \u03b4(T ), where \u2206\u03c4 = e2\u03c0\u03c4x1D2e\u22122\u03c0\u03c4x1. The reduction from our problem to this one is accomplished through a conjugation by two invertible pseudodi\ufb00erential operators, i.e. essentially through the construction of an integrating factor. The construction of these operators is the main content of this chapter. Let us consider the relevant terms from the expression e2\u03c0\u03c4x1HV,W e\u22122\u03c0\u03c4x1: \u2206\u03c4 := e2\u03c0\u03c4x1D2e\u22122\u03c0\u03c4x1 = D2 x1 + 2i\u03c4Dx1 \u2212\u03c4 2 + D2 x\u2032, V\u03c4 := e2\u03c0\u03c4x1(V \u00b7 D)e\u22122\u03c0\u03c4x1 = e2\u03c0\u03c4x1(FDx1 + G \u00b7 Dx\u2032)e\u22122\u03c0\u03c4x1 = F(Dx1 + i\u03c4) + G \u00b7 Dx\u2032. Remark. Observe that we have absorbed the negative sign of the Laplacian into the de\ufb01nition of \u2206\u03c4. 
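For the reader's convenience, here is the elementary computation behind these conjugation identities; it is a routine check using the convention $D = \frac{1}{2\pi i}\partial$ implicit in the Fourier conventions above. Since
$$D_{x_1}\big(e^{-2\pi\tau x_1} f\big) = \tfrac{1}{2\pi i}\big(-2\pi\tau\,e^{-2\pi\tau x_1} f + e^{-2\pi\tau x_1}\partial_{x_1} f\big) = e^{-2\pi\tau x_1}\,(D_{x_1} + i\tau)f,$$
we have $e^{2\pi\tau x_1} D_{x_1} e^{-2\pi\tau x_1} = D_{x_1} + i\tau$, and therefore
$$\Delta_\tau = (D_{x_1} + i\tau)^2 + D_{x'}^2 = D_{x_1}^2 + 2i\tau D_{x_1} - \tau^2 + D_{x'}^2, \qquad V_\tau = F(D_{x_1} + i\tau) + G\cdot D_{x'},$$
in agreement with the expressions displayed above.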
If we use semiclassical notation, with \u210f= 1/\u03c4 a small parameter, we can denote \u2206\u210f:= \u03c4 \u22122\u2206\u03c4 = \u210f2D2 x1 + 2i\u210fDx1 \u22121 + \u210f2D2 x\u2032, V\u210f:= \u03c4 \u22121V\u03c4 = F(\u210fDx1 + i) + G \u00b7 \u210fDx\u2032. Equivalently, we could have de\ufb01ned \u2206\u210f:= e2\u03c0x1/\u210f(\u210fD)2e\u22122\u03c0x1/\u210f, V\u210f:= e2\u03c0x1/\u210f[V \u00b7 (\u210fD)]e\u22122\u03c0x1/\u210f. Then we have that \u210f2(\u2206\u03c4 + 2V\u03c4) = \u2206\u210f+ 2\u210fV\u210f. A signi\ufb01cant part of this chapter is devoted to prove that we can conjugate this operator into the Laplacian plus a suitable error, as we state next. This construction follows closely the ideas from [18]. Theorem 4.2. Let 1/2 < \u03b4 < 1. Let V satisfy (\u22c6). There are \u03b5 > 0 and 0 < \u210f0 \u22641 such that for 0 < |\u210f| \u2264\u210f0 there exist zero order semiclassical pseudodi\ufb00erential operators A, B, R over the cylinder T , so that the following conjugation identity holds, (\u2206\u210f+ 2\u210fV\u210f)A = B\u2206\u210f+ \u210f1+\u03b5R. Moreover, the operators A and B are invertible, uniformly bounded (in \u210f) together with its inverses in Hs \u00b1\u03b4,\u210f(T ), for s = 0, 2, and R : L2 \u2212\u03b4(T ) \u2192L2 \u03b4(T ) is uniformly bounded (in \u210f). With the semiclassical notation we have that \u2206\u210f= Op\u210f(\u03be2 + 2i\u03be \u22121 + |\u210ft|2) = Op\u210f((\u03be + i)2 + |\u210ft|2). Moreover, from Proposition 3.6 we have that \u2206\u210fA = (\u210f2D2 x1 + 2i\u210fDx1 \u22121 + \u210f2D2 x\u2032)A = Op\u210f(\u03be2a + 2\u210f\u03beDx1a + \u210f2D2 x1a) + 2iOp\u210f(\u03bea + \u210fDx1a) \u2212Op\u210f(a) + Op\u210f(|\u210ft|2a + 2\u210f(\u210ft \u00b7 Dx\u2032a) + \u210f2D2 x\u2032a) = Op\u210f([(\u03be + i)2 + |\u210ft|2]a) + 2\u210fOp\u210f((\u03be + i)Dx1a + \u210ft \u00b7 Dx\u2032a) + \u210f2Op\u210f(D2a) = A\u2206\u210f+ 2\u210fOp\u210f((\u03be + i)Dx1a + \u210ft \u00b7 Dx\u2032a) + \u210f2Op\u210f(D2a), V\u210fA = (F(\u210fDx1 + i) + G \u00b7 \u210fDx\u2032)A = Op\u210f((\u03be + i)Fa + \u210fF \u00b7 Dx1a) + Op\u210f(\u210ft \u00b7 Ga + \u210fG \u00b7 Dx\u2032a) = Op\u210f([(\u03be + i)F + \u210ft \u00b7 G]a) + \u210fOp\u210f(V \u00b7 Da), 25 \fso, we obtain (\u2206\u210f+ 2\u210fV\u210f)A = A\u2206\u210f+ 2\u210fOp\u210f((\u03be + i)Dx1a + \u210ft \u00b7 Dx\u2032a + (\u03be + i)Fa + \u210ft \u00b7 Ga) + \u210f2Op\u210f(D2a + 2V \u00b7 Da). If a has nice properties, then the last operator already has the form we look for the remainder term in Theorem 4.2. Then, roughly speaking, we are left to make the operator 2\u210fOp\u210f((\u03be + i)Dx1a + \u210ft \u00b7 Dx\u2032a + (\u03be + i)Fa + \u210ft \u00b7 Ga) (12) suitably small. In order to do that, we split it in two parts: we make one part of it vanish and the remainder will be supported on a set where the operator \u2206\u210fis elliptic. The remainder will be subsumed by the expression A\u2206\u210fbecoming into B\u2206\u210f. In order for the operator A to be invertible it is usual to look for the symbol to be of the form a = e\u2212u, so that the symbol (12) becomes (\u03be + i)Dx1a + \u210ft \u00b7 Dx\u2032a + (\u03be + i)Fa + \u210ft \u00b7 Ga = a[\u2212(\u03be + i)Dx1u \u2212\u210ft \u00b7 Dx\u2032u + (\u03be + i)F + \u210ft \u00b7 G], leaving us to solve the equation (\u03be + i)Dx1u + \u210ft \u00b7 Dx\u2032u = (\u03be + i)F + \u210ft \u00b7 G, (13) for t \u2208 Zd. 
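For orientation, let us record what the equation (13) buys us: if $u$ solved (13) for every $(x, \xi, t)$, then with $a = e^{-u}$ (so that $Da = -a\,Du$) the first-order term above would vanish identically, and the conjugation identity would reduce to
$$(\Delta_\hbar + 2\hbar V_\hbar)A = A\,\Delta_\hbar + \hbar^2\,\mathrm{Op}_\hbar\big(D^2 a + 2V\cdot Da\big).$$
The precise construction, carried out in Section 4.4 with suitable cutoffs, only imposes (13) where it is actually needed, and the remaining terms are absorbed into the operators $B$ and $R$ of Theorem 4.2.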
In the following sections we deal with the appropriate existence and uniqueness of solutions to these equations, as well as with their estimates. Recall that the symbol of $\Delta_\hbar$ is $(\xi+i)^2 + |\hbar t|^2$, and note that this vanishes if and only if $\xi = 0$ and $|\hbar t| = 1$. The symbol is elliptic away from this set. Therefore, for the construction of the solution to the equation we will be mostly interested in working in a neighborhood of this vanishing set.

Finally, let us mention some difficulties of our problem which do not seem to be present in the Euclidean setting, as in [22] or [18]. Observe that (13) can be rewritten as $(\xi + i, \hbar t)\cdot Du = (\xi + i, \hbar t)\cdot V$. Near the vanishing set of the symbol of $\Delta_\hbar$, i.e. $\xi = 0$ and $|\hbar t| = 1$, this equation resembles a higher dimensional version of the $\bar\partial$-equation. It has been usual to reduce all such equations to a $\bar\partial$-equation through a rotation; see for instance [22] or [18]. In our setting this is not immediately possible, in part because $\mathbb{Z}^d$ does not admit non-trivial rotations. One way to try to remedy this could be as follows. In [28], it is shown that for any $k \in \mathbb{Z}^d \setminus \{0\}$ there is a matrix $A \in \mathrm{SL}_n(\mathbb{Z}^d)$, i.e. a linear automorphism of $\mathbb{T}^d$, such that $Ak = \gcd(k)e_1$. This allows us to make a change of variables so that the directional derivative $\hbar k\cdot D_{x'}$ becomes an "exact" partial derivative $\hbar\gcd(k)D_{y'_1}$, and so the equation reduces from $\mathbb{R}\times\mathbb{T}^d$ to $(\mathbb{R}\times\mathbb{T}^1)\times\mathbb{T}^{d-1}$, where the last $(d-1)$ toroidal variables do not intervene. In this case, the coefficient $\hbar\gcd(k)$ plays a role in the estimates, and this seems difficult to handle. In addition to this inconvenience, we would need to estimate the differences of the solutions when $k$ varies; since it is not clear how the change of variables (i.e. the matrix) depends on $k$, we will refrain from using this idea. Instead, we will proceed using the decomposition in Fourier series.

4.1 Equation

We start by recalling the assumptions on the magnetic potential $V = (F, G)$:
$$V \in C^\infty_c(T), \qquad \mathrm{supp}(V) \subseteq [-R, R]\times\mathbb{T}^d, \qquad \int_{\mathbb{R}} V(x_1, x')\,dx_1 = 0 \ \text{ for all } x' \in \mathbb{T}^d. \tag{$\star$}$$
Under these conditions, we will see that there is no difference between working over $\mathbb{R}\times\mathbb{T}^d\times\mathbb{R}\times\mathbb{Z}^d$ or $\mathbb{R}\times\mathbb{T}^d\times\mathbb{R}\times\mathbb{R}^d$, and so, to avoid suggesting that there is something special about the former, we will work over the latter. In this section we state the properties of the solution of the equation (13) and motivate the reasons for assuming $(\star)$. Let us recall that for $(u, v) \in \mathbb{R}\times\mathbb{R}^d$ and $(u, v, w) \in \mathbb{R}\times\mathbb{R}^d\times\mathbb{R}^d$ we use the notation $\langle u, v\rangle := (1 + u^2 + |v|^2)^{1/2}$ and $\langle u, v, w\rangle := (1 + u^2 + |v|^2 + |w|^2)^{1/2}$.

Theorem 4.3. Let $\hbar > 0$ and let $V = (F, G)$ satisfy $(\star)$. For fixed $(\xi, t) \in \mathbb{R}\times\mathbb{R}^d$, the equation
$$(\xi + i)D_{x_1}u + \hbar t\cdot D_{x'}u = (\xi + i)F + \hbar t\cdot G, \tag{13}$$
has a unique solution $u(\cdot, \xi, t; \hbar) \in C^\infty(T)$ with the decay condition $\|u(x_1, \cdot, \xi, t)\|_{L^\infty(\mathbb{T}^d)} \to 0$ as $x_1 \to \pm\infty$.
Moreover, we have that u(\u00b7, t) \u2208C\u221e( R \u00d7 Td \u00d7 R) and it satis\ufb01es the bounds |D\u03b1 x1D\u03b2 x\u2032D\u03b3 \u03be u| \u2272\u27e8\u03be, \u210ft\u27e9, |D\u03b1 x1D\u03b2 x\u2032D\u03b3 \u03be (u(\u00b7, \u03be, t1) \u2212u(\u00b7, \u03be, t2))| \u2272\u210f|t1 \u2212t2|\u27e8\u03be, \u210ft1, \u210ft2\u27e9. (14) For |x1| \u22652R we have the linear decay bound |D\u03b1 x1D\u03b2 x\u2032D\u03b3 \u03be u| \u2272\u27e8\u03be, \u210ft\u27e9 |x1| . (15) The constants of the inequalities may depend on \u03b1, \u03b2, \u03b3, d, R, \u2225V \u2225W N,1(T ) for some N = N(\u03b1, \u03b2, d), but are independent of \u210f, \u03be, t. Remark. Under some conditions on t, like t \u2208 Zd or other arithmetic properties, it may be possible to show that u(\u00b7, \u03be, t) \u2208S(T ). We do not need such a strong result, so we do not intend to prove it. The equation (13) has constant coe\ufb03cients in (x1, x\u2032), so we can decompose u, F, G in its Fourier series u = X m\u2208Zd um(x1, \u03be, t; \u210f)em(x\u2032), F(x1, x\u2032) = X m\u2208Zd Fm(x1)em(x\u2032), G(x1, x\u2032) = X m\u2208Zd Gm(x1)em(x\u2032), and look for um to solve the equation (\u03be + i)Dx1um + \u210ft \u00b7 mum = (\u03be + i)Fm + \u210ft \u00b7 Gm. (16) For instance, in order to prove (14), it would su\ufb03ce to show inequalities of the form |D\u03b1 x1D\u03b2 \u03be um| \u2272\u27e8m\u27e9\u2212M\u27e8\u03be, \u210ft\u27e9, |D\u03b1 x1D\u03b2 \u03be (um(\u00b7, \u00b7, t1) \u2212um(\u00b7, \u00b7, t2))| \u2272\u210f\u27e8m\u27e9\u2212M|t1 \u2212t2|\u27e8\u03be, \u210ft1, \u210ft2\u27e9, for some su\ufb03ciently large M. We will prove these in a later section. Before we proceed, let us motivate the conditions that we are requiring for V . Considering the Fourier transform (no longer semiclassical) in (16) gives that c um(\u03b7) = (\u03be + i) c Fm(\u03b7) + \u210ft \u00b7 d Gm(\u03b7) (\u03be + i)\u03b7 + \u210ft \u00b7 m . The denominator vanishes only if \u03b7 = 0 and \u210ft \u00b7 m = 0. This suggests that the case \u210ft \u00b7 m \u0338= 0 may be less problematic than the case \u210ft \u00b7 m = 0. Indeed, we already see from (16) that there is not a unique solution, and even when de\ufb01ning such a solution it may not decay. The simplest way to avoid the problem of the denominator vanishing is to require c Fm(0) = d Gm(0) = 0, which is the same as the vanishing moments from (\u22c6). Uniqueness and decay are not necessarily required, but we will see that the decay estimates play a role in the construction of the conjugation (namely on the properties of R) and the reconstruction procedure. 4.2 Lemmas: ODEs and calculus In this section we prove some boundedness estimates for the solution of an ODE of the form (16), as well as some other necessary calculus facts. To avoid unnecessary notation, in this section we will denote the variable x1 simply by x. We start with the most elementary estimate for solutions of an ODE, and then improve it under the hypothesis of a vanishing moment. The reason why we will be dealing only with L1 and L\u221eestimates is that these are useful to iterate and they relate through the Fundamental Theorem of Calculus (i.e. as a 1-dimensional version of the Gagliardo\u2013Nirenberg\u2013Sobolev inequality). 27 \fLemma 4.4. Let a, \u03be \u2208 R, a \u0338= 0, and let H \u2208S( R). Consider the equation (\u03be + i)Dxv + av = H. Then there exists a unique solution in the sense of tempered distributions. 
Moreover, the solution belongs to S( R) and satis\ufb01es the estimates \u2225v\u2225L1 \u2264\u27e8\u03be\u27e9 |a| \u2225H\u2225L1, \u2225v\u2225L\u221e\u2272\u2225Dxv\u2225L1 \u2272\u2225H\u2225L1. The constants in the inequality are independent of a, \u03be, H. Proof. Taking the Fourier transform yields that b v(\u03b7) = b H(\u03b7) (\u03be + i)\u03b7 + a. We can bound the norm of the denominator by |(\u03be + i)\u03b7 + a|2 = \u27e8\u03be\u27e92\u03b72 + 2\u03be\u03b7a + a2 = \u0012 \u27e8\u03be\u27e9\u03b7 + \u03bea \u27e8\u03be\u27e9 \u00132 + a2 \u27e8\u03be\u27e92 \u2265a2 \u27e8\u03be\u27e92 > 0. Therefore, the denominator is a non-vanishing smooth function, and we obtain the existence and uniqueness in the sense of distributions. Moreover, the rapid decay of b H(\u03b7) and the bound for the denominator give that v \u2208S( R). Let \u00b5 := 2\u03c0ia/(\u03be + i) = 2\u03c0a(1 + i\u03be)/\u27e8\u03be\u27e92, so that Re(\u00b5) = 2\u03c0a/\u27e8\u03be\u27e92. The form of the solution depends on the sign of Re(\u00b5). The cases a > 0 and a < 0 are analogous, so we only consider one of these. If we assume that a > 0, then the solution is given by v(x) = 2\u03c0i \u03be + i Z x \u2212\u221e e\u2212\u00b5(x\u2212s)H(s)ds, as Re(\u00b5) = 2\u03c0a/\u27e8\u03be\u27e92 > 0. This gives that \u2225v\u2225L1 \u22642\u03c0 \u27e8\u03be\u27e9 Z R Z x \u2212\u221e e\u22122\u03c0a(x\u2212s)/\u27e8\u03be\u27e92|H(s)|dsdx = 2\u03c0 \u27e8\u03be\u27e9 Z R Z \u221e s e\u22122\u03c0a(x\u2212s)/\u27e8\u03be\u27e92|H(s)|dxds = \u27e8\u03be\u27e9 a \u2225H\u2225L1. (17) Using this together with the equation gives \u2225Dxv\u2225L1 \u22641 \u27e8\u03be\u27e9(a\u2225v\u2225L1 + \u2225H\u2225L1) \u2264\u27e8\u03be\u27e9+ 1 \u27e8\u03be\u27e9 \u2225H\u2225L1 \u2272\u2225H\u2225L1, as we wanted. The bound \u2225v\u2225L\u221e\u2272\u2225Dxv\u2225L1 follows from the Fundamental Theorem of Calculus and the fact that v \u2208S( R). Remark. It is also possible to show that \u2225v\u2225L\u221e\u2272\u27e8\u03be\u27e9 |a| \u2225H\u2225L\u221e. The following result shows that a vanishing moment assumption allows to consider the missing case a = 0 and also gives an improvement of the L1-estimates for the solution. Lemma 4.5. Let a, \u03be \u2208 R and let H \u2208S( R). The equation (\u03be + i)Dxv + av = DxH. has a unique solution v \u2208S( R) and it satis\ufb01es \u2225v\u2225L1 \u2272\u2225H\u2225L1, \u2225v\u2225L\u221e\u2272\u2225Dxv\u2225L1 \u2272\u2225DxH\u2225L1. The constants in the inequality are independent of a, \u03be, H. Moreover, if a \u0338= 0, then there exists w \u2208S( R) such that v = Dxw. 28 \fProof. If a = 0 then v = H/(\u03be +i) \u2208S( R), and the result follows immediately. For a \u0338= 0 the existence and uniqueness follow from Lemma 4.4. The cases a > 0 and a < 0 are analogous, so we only consider one of these. Assume that a > 0 and let \u00b5 := 2\u03c0a/(\u03be + i) be as in the previous proof. Integrating by parts yields that v(x) = 2\u03c0i \u03be + i Z x \u2212\u221e e\u2212\u00b5(x\u2212s)DsH(s)ds = 1 \u03be + i \u0012 H(x) \u2212\u00b5 Z x \u2212\u221e e\u2212\u00b5(x\u2212s)H(s)ds \u0013 . (18) We use the estimate (17) for the integral term, to conclude that \u2225v\u2225L1 \u22641 \u27e8\u03be\u27e9 \u0012 1 + |\u00b5|\u27e8\u03be\u27e92 a \u0013 \u2225H\u2225L1 \u2272\u2225H\u2225L1. The L\u221ebound follows trivially if a = 0, and from Lemma 4.4 if a \u0338= 0. 
Finally, if a \u0338= 0, then v = 1 aDx(H \u2212(\u03be + i)v) = Dxw, with w = (H \u2212(\u03be + i)v)/a \u2208S( R). Remark. The solutions in C1( R) to the equation (\u03be+i)Dxv+av = 0 are multiples of e\u2212\u00b5x. Therefore, there is also uniqueness under the weaker conditions v \u2208C1( R) and v(x) \u21920 as x \u2192\u00b1\u221e. Remark. From (18) it is also possible to show that \u2225v\u2225L\u221e\u22641 \u27e8\u03be\u27e9 \u0012 1 + |\u00b5|\u27e8\u03be\u27e92 a \u0013 \u2225H\u2225L\u221e\u2272\u2225H\u2225L\u221e. In the proof of the main result of the next section we will remark why focusing only in the L\u221e\u2192L\u221e estimates may not be so convenient. In the following proposition we show that the exponential function appearing from the integrating factor of the di\ufb00erential equations is bounded, together with all its derivatives. In proving the boundedness results from the following section we will use this result, as well as the idea of the proof. Lemma 4.6. Let \u03b7, \u03be \u2208 R with \u03b7 \u22650. Then, for any polynomial p we have \f \f \f \fe\u2212i\u03b7/(\u03be+i)p \u0012 \u03b7 \u27e8\u03be\u27e92 \u0013\f \f \f \f \u2264C(p), for some constant C(p) depending on the polynomial. Moreover, |D\u03b2 \u03be (e\u2212i\u03b7/(\u03be+i))| \u2264C\u03b2 for any \u03b2 \u22650. The constants C(p) and C\u03b2 are independent of \u03b7 and \u03be. Proof. Let \u00b5 = i\u03b7/(\u03be + i) = \u03b7(1 + i\u03be)/\u27e8\u03be\u27e92, so that Re(\u00b5) = \u03b7/\u27e8\u03be\u27e92 > 0. The \ufb01rst inequality follows from the triangle inequality and the bound e\u2212xxn \u2264n! for x \u22650. In order to di\ufb00erentiate with respect to \u03be, we \ufb01rst observe that D\u03be\u00b5 = \u2212\u00b5/(2\u03c0i(\u03be + i)). We show by induction that D\u03b2 \u03be (e\u2212\u00b5) = e\u2212\u00b5P\u03b2(\u00b5) (\u03be + i)\u03b2 , where P\u03b2 is a polynomial of degree \u03b2, whose coe\ufb03cients depend only on \u03b2. For \u03b2 = 0 it is clear. Moreover, D\u03be \u0012e\u2212\u00b5P\u03b2(\u00b5) (\u03be + i)\u03b2 \u0013 = e\u2212\u00b5 \u0012 \u00b5 2\u03c0i(\u03be + i) P\u03b2(\u00b5) (\u03be + i)\u03b2 \u2212 \u00b5 2\u03c0i(\u03be + i) P \u2032 \u03b2(\u00b5) (\u03be + i)\u03b2 \u2212 \u03b2P\u03b2(\u00b5) 2\u03c0i(\u03be + i)\u03b2+1 \u0013 . Thus, by de\ufb01ning the polynomial P\u03b2+1(z) := (zP\u03b2(z)\u2212zP \u2032 \u03b2(z)\u2212\u03b2P\u03b2)/(2\u03c0i) we complete the induction. Since |1/(\u03be + i)| \u22641 and P\u03b2 has degree \u03b2, we obtain that there exists some polynomial e P\u03b2 of degree \u03b2 such that \f \f \f \f P\u03b2(\u00b5) (\u03be + i)\u03b2 \f \f \f \f \u2264e P\u03b2 \u0012 \u03b7 \u27e8\u03be\u27e92 \u0013 . With this we conclude that |D\u03b2 \u03be e\u2212\u00b5| = \f \f \f \f e\u2212\u00b5P\u03b2(\u00b5) (\u03be + i)\u03b2 \f \f \f \f \u2264 \f \f \f \fe\u2212\u00b5 e P\u03b2 \u0012 \u03b7 \u27e8\u03be\u27e92 \u0013\f \f \f \f \u2264C\u03b2. 29 \f4.3 Estimates for the solutions of the equations The purpose of this section is to \ufb01nally prove Theorem 4.3. We start by proving the estimates for the ODEs (16) which result from expanding in Fourier series the equation (13). We start with the following elementary result. Proposition 4.7. If f \u2208S( R) is such that R R f(x1)dx1 = 0, then there exists a unique g \u2208S( R) such that Dx1g = f. Moreover, if f is compactly supported, then so is g. 
Similarly, if f \u2208S(T ) is such that R R f(x1, x\u2032)dx1 = 0 for all x\u2032 \u2208 Td, then there exists g \u2208S(T ) such that Dx1g = f. Moreover, its Fourier coe\ufb03cients fk \u2208S( R) satisfy that R R fk(x1) = 0 and Dx1gk = fk. Proof. Let us \ufb01rst consider the case f \u2208S( R). The uniqueness follows from the Schwartz condition. For the existence we de\ufb01ne g(x1) := Z x1 \u2212\u221e f(y)dy = \u2212 Z \u221e x1 f(y)dy. To show that g \u2208S( R) we use that if L \u22651, then for any positive m > 0 we have Z \u2212L \u2212\u221e 1 \u27e8y\u27e9m+1 dy, Z \u221e L 1 \u27e8y\u27e9m+1 dy \u2272 1 Lm . If f were compactly supported, then the de\ufb01nition above shows that g shares the same support. For the case of T , the \ufb01rst part of the proof follows exactly as before. Moreover, by Fubini\u2019s theorem we have that Z R fk(x1)dx1 = Z Td e\u2212k(x\u2032) \u0012Z R f(x1, x\u2032)dx1 \u0013 dx\u2032 = 0, and Dx1gk(x1) = Z Td Dx1g(x1, x\u2032)e\u2212k(x\u2032)dx\u2032 = Z Td f(x1, x\u2032)e\u2212k(x\u2032)dx\u2032 = fk(x1). If f and g are as in Proposition 4.7, then we de\ufb01ne D\u22121 x1 f := g. Theorem 4.8. Let \u210f> 0 and let V = (F, G) satisfy (\u22c6). For \ufb01xed (m, \u03be, t) \u2208 Zd\u00d7 R\u00d7 Rd the equation (\u03be + i)Dx1um + \u210ft \u00b7 mum = (\u03be + i)Fm + \u210ft \u00b7 Gm (16) has a unique solution um(\u00b7, \u03be, t; \u210f) \u2208C1( R) with the decay condition um(x1, \u03be, t) \u21920 as x1 \u2192\u00b1\u221e. Moreover, we have that um(\u00b7, \u03be, t) \u2208S( R), um(\u00b7, \u00b7, t) \u2208C\u221e( R \u00d7 R), and for any \u03b1, \u03b2 \u22650 it satis\ufb01es that D\u03b1 x1D\u03b2 \u03be um(\u00b7, \u03be, t) \u2208S( R). If \u210ft \u00b7 m = 0, then um is supported on |x1| \u2264R. If \u210ft \u00b7 m \u0338= 0, then um vanishes in one of the components of |x1| > R, and decays exponentially (depending on the product \u210ft \u00b7 m) on the other component. In addition, it satis\ufb01es the bounds |D\u03b1 x1D\u03b2 \u03be um| \u2272\u27e8\u03be, \u210ft\u27e9\u2225D\u03b1 x1Vm\u2225L1, (19) |D\u03b1 x1D\u03b2 \u03be (um(\u00b7, t1) \u2212um(\u00b7, t2))| \u2272\u210f|t1 \u2212t2|(\u2225D\u03b1 x1Vm\u2225L1 + \u27e8\u03be, \u210ft1, \u210ft2\u27e9|m|\u2225D\u03b1\u22121 x1 Vm\u2225L1). (20) Moreover, for |x1| \u22652R we have the linear decay bound |D\u03b1 x1D\u03b2 \u03be um(x1)| \u2272\u27e8\u03be, \u210ft\u27e9 |x1| \u2225D\u03b1\u22121 x1 Vm\u2225L1. (21) The constants in the inequalities may depend on \u03b1, \u03b2, d, but are independent of \u210f, m, \u03be, t, R. Proof. The uniqueness of such a solution follows from the remark after Lemma 4.5. From (\u22c6) and Lemma 4.5 we obtain the existence and that it belongs to S( R). As in the previous proofs, let \u00b5 = 2\u03c0i(\u210ft \u00b7 m)/(\u03be + i), so that Re(\u00b5) = 2\u03c0(\u210ft \u00b7 m)/\u27e8\u03be\u27e92. We know that the form of the solution depends on the sign of Re(\u00b5); more explicitly, proceeding as in the proof of Lemma 4.5, we have that: 30 \f1. if \u210ft \u00b7 m = 0, then um(x1, \u03be, t) = 1 \u03be + i[(\u03be + i)D\u22121 x1 Fm(x1) + \u210ft \u00b7 D\u22121 x1 Gm(x1)], 2. if \u210ft \u00b7 m > 0, then um(x1, \u03be, t) = 1 \u03be + i \u0014 (\u03be + i)D\u22121 x1 Fm(x1) + \u210ft \u00b7 D\u22121 x1 Gm(x1) \u2212\u00b5 Z x1 \u2212\u221e e\u2212\u00b5(x1\u2212y1)[(\u03be + i)D\u22121 y1 Fm(y1) + \u210ft \u00b7 D\u22121 y1 Gm(y1)]dy1 \u0015 , 3. 
if \u210ft \u00b7 m < 0, then um(x1, \u03be, t) = 1 \u03be + i \u0014 (\u03be + i)D\u22121 x1 Fm(x1) + \u210ft \u00b7 D\u22121 x1 Gm(x1) + \u00b5 Z \u221e x1 e\u2212\u00b5(x1\u2212y1)[(\u03be + i)D\u22121 y1 Fm(y1) + \u210ft \u00b7 D\u22121 y1 Gm(y1)]dy1 \u0015 . From Proposition 4.7 we know that (\u03be + i)D\u22121 x1 Fm + \u210ft \u00b7 D\u22121 x1 Gm is a smooth compactly supported function. In particular, this implies that in the \ufb01rst case the solution is compactly supported. In the second and third case, this implies that the solutions vanish if x1 < \u2212R and x1 > R, respectively, and are decaying exponentials if x1 > R and x1 < \u2212R, respectively. Moreover, all the terms involved (\u03be + i, \u00b5, e\u2212\u00b5(x1\u2212y1), and (\u03be + i)D\u22121 x1 Fm + \u210ft \u00b7 D\u22121 x1 Gm) are di\ufb00erentiable with respect to x1 and \u03be, and the possible di\ufb00erent cases depend only on m and t (namely on the sign of \u210ft \u00b7 m), and not on x1 or \u03be. This implies that um(\u00b7, \u00b7, t) \u2208C\u221e( R \u00d7 R) and D\u03b1 x1D\u03b2 \u03be um(\u00b7, \u03be, t) \u2208S( R). To prove the estimates (19) we succesively di\ufb00erentiate the equation (16) to show that Dx1D\u03beum(\u00b7, \u03be, t) (which we know is in S( R)) solves certain ODE, and then use the estimates for the unique solution in S( R) from Lemma 4.5. Di\ufb00erentiating the equation, we see that if \u03b1 \u22650 then D\u03b1 x1um solves the equation (\u03be + i)Dx1[D\u03b1 x1um] + \u210ft \u00b7 m[D\u03b1 x1um] = (\u03be + i)D\u03b1 x1Fm + \u210ft \u00b7 D\u03b1 x1Gm. (22) By (\u22c6) and Lemma 4.5 we can bound \u2225D\u03b1 x1um\u2225L1 \u2272\u27e8\u03be\u27e9\u2225D\u03b1\u22121 x1 Fm\u2225L1 + |\u210ft|\u2225D\u03b1\u22121 x1 Gm\u2225L1 \u2272\u27e8\u03be, \u210ft\u27e9\u2225D\u03b1\u22121 x1 Vm\u2225L1, (23) |D\u03b1 x1um| \u2272\u2225D\u03b1+1 x1 um\u2225L1 \u2272\u27e8\u03be, \u210ft\u27e9\u2225D\u03b1 x1Vm\u2225L1. (24) Di\ufb00erentiating (22) with respect to \u03be gives that D\u03b1 x1D\u03beum solves the equation (\u03be + i)Dx1[D\u03b1 x1D\u03beum] + \u210ft \u00b7 m[D\u03b1 x1D\u03beum] = 1 2\u03c0i(D\u03b1 x1Fm \u2212D\u03b1+1 x1 um). (25) By (\u22c6), Lemma 4.5, and (23) we can bound \u2225D\u03b1 x1D\u03beum\u2225L1 \u2272\u2225D\u03b1\u22121 x1 Fm\u2225L1 + \u2225D\u03b1 x1um\u2225L1 \u2272\u27e8\u03be, \u210ft\u27e9\u2225D\u03b1\u22121 x1 Vm\u2225L1, (26) |D\u03b1 x1D\u03beum| \u2272\u2225D\u03b1+1 x1 D\u03beum\u2225L1 \u2272\u27e8\u03be, \u210ft\u27e9\u2225D\u03b1 x1Vm\u2225L1. (27) By induction on \u03b2 \u22652, di\ufb00erentiating (25) with respect to \u03be gives that D\u03b1 x1D\u03b2 \u03be um solves the equation (\u03be + i)Dx1[D\u03b1 x1D\u03b2 \u03be um] + \u210ft \u00b7 m[D\u03b1 x1D\u03b2 \u03be um] = \u2212\u03b2 2\u03c0iD\u03b1+1 x1 D\u03b2\u22121 \u03be um. (28) From (28), Lemma 4.5, and (26) we obtain \u2225D\u03b1 x1D\u03b2 \u03be um\u2225L1 \u2272\u2225D\u03b1 x1D\u03b2\u22121 \u03be um\u2225L1 \u2272. . . \u2272\u2225D\u03b1 x1D\u03beum\u2225L1 \u2272\u27e8\u03be, \u210ft\u27e9\u2225D\u03b1\u22121 x1 Vm\u2225L1, (29) 31 \f|D\u03b1 x1D\u03b2 \u03be um| \u2272\u2225D\u03b1+1 x1 D\u03b2 \u03be um\u2225L1 \u2272\u27e8\u03be, \u210ft\u27e9\u2225D\u03b1 x1Vm\u2225L1. (30) We have shown (19) through (24), (27), and (30). Now we prove (20). Let us denote uj m(x1, \u03be) := um(x1, \u03be, tj) and recall that D\u03b1 x1D\u03b2 \u03be uj m(\u00b7, \u03be) \u2208S( R) for any \u03b1, \u03b2 \u22650. 
There are two cases to consider: when both products \u210ftj \u00b7 m vanish, and when at least one of them does not vanish. In the \ufb01rst case we have that uj m(x1, \u03be) = 1 \u03be + i[(\u03be + i)D\u22121 x1 Fm(x1) + \u210ftj \u00b7 D\u22121 x1 Gm(x1)], and so u1 m(x1, \u03be) \u2212u2 m(x1, \u03be) = \u210f(t1 \u2212t2) \u03be + i \u00b7 D\u22121 x1 Gm(x1). It follows directly from this and the Fundamental Theorem of Calculus that |D\u03b1 x1D\u03b2 \u03be (u1 m \u2212u2 m)| \u2272\u210f|t1 \u2212t2|\u2225D\u03b1 x1Vm\u2225L1. (31) Suppose now that \u210ft1 \u00b7 m \u0338= 0. Substracting the equations (22) for D\u03b1 x1uj m we obtain that (\u03be + i)Dx1[D\u03b1 x1(u1 m \u2212u2 m)] + \u210ft2 \u00b7 m[D\u03b1 x1(u1 m \u2212u2 m)] = \u210f(t1 \u2212t2) \u00b7 D\u03b1 x1Gm \u2212\u210f(t1 \u2212t2) \u00b7 mD\u03b1 x1u1 m. (32) By (\u22c6) and Lemma 4.5 we have that the condition \u210ft1 \u00b7 m \u0338= 0 implies that u1 m = Dx1w for some w \u2208S( R). Therefore, the di\ufb00erence D\u03b1 x1(u1 m \u2212u2 m) (which we know is in S( R)) is the unique solution in S( R) to a di\ufb00erential equation, (32), as in the setting of Lemma 4.5. By (\u22c6), (32), Lemma 4.5, and (23) we obtain |D\u03b1 x1(u1 m \u2212u2 m)| \u2272\u2225D\u03b1+1 x1 (u1 m \u2212u2 m)\u2225L1 \u2272\u210f|t1 \u2212t2|(\u2225D\u03b1 x1Gm\u2225L1 + |m|\u2225D\u03b1 x1u1 m\u2225L1) \u2272\u210f|t1 \u2212t2|(\u2225D\u03b1 x1Vm\u2225L1 + \u27e8\u03be, \u210ft1\u27e9|m|\u2225D\u03b1\u22121 x1 Vm\u2225L1). (33) Remark. This step shows why it is useful to have at disposal the L1 \u2192L\u221eestimates and not the L\u221e\u2192L\u221eestimates alone. Similarly, substracting the equations (25) for D\u03b1 x1D\u03beuj m we obtain that (\u03be + i)Dx1[D\u03b1 x1D\u03be(u1 m \u2212u2 m)] + \u210ft2 \u00b7 m[D\u03b1 x1D\u03be(u1 m \u2212u2 m)] = \u22121 2\u03c0iD\u03b1+1 x1 (u1 m \u2212u2 m) \u2212\u210f(t1 \u2212t2) \u00b7 mD\u03b1 x1D\u03beu1 m. For \u03b2 \u22652 the equation takes the same form. Indeed, substracting the equations (28) for D\u03b1 x1D\u03b2 \u03be uj m we obtain that (\u03be + i)Dx1[D\u03b1 x1D\u03b2 \u03be (u1 m \u2212u2 m)] + \u210ft2 \u00b7 m[D\u03b1 x1D\u03b2 \u03be (u1 m \u2212u2 m)] = \u2212\u03b2 2\u03c0iD\u03b1+1 x1 D\u03b2\u22121 \u03be (u1 m \u2212u2 m) \u2212\u210f(t1 \u2212t2) \u00b7 mD\u03b1 x1D\u03b2 \u03be u1 m. (34) Since u1 m = Dx1w, then we are in the setting of Lemma 4.5 as before. By (34), Lemma 4.5, (26), and (29) we can bound \u2225D\u03b1+1 x1 D\u03b2 \u03be (u1 m \u2212u2 m)\u2225L1 \u2272\u2225D\u03b1+1 x1 D\u03b2\u22121 \u03be (u1 m \u2212u2 m)\u2225L1 + \u210f|t1 \u2212t2||m|\u2225D\u03b1 x1D\u03b2 \u03be u1 m\u2225L1 \u2272\u2225D\u03b1+1 x1 D\u03b2\u22121 \u03be (u1 m \u2212u2 m)\u2225L1 + \u210f|t1 \u2212t2|\u27e8\u03be, \u210ft1\u27e9|m|\u2225D\u03b1\u22121 x1 Vm\u2225L1. Iterating this and using (33) we obtain that \u2225D\u03b1+1 x1 D\u03b2 \u03be (u1 m \u2212u2 m)\u2225L1 \u2272. . . \u2272\u2225D\u03b1+1 x1 (u1 m \u2212u2 m)\u2225L1 + \u210f|t1 \u2212t2|\u27e8\u03be, \u210ft1\u27e9|m|\u2225D\u03b1\u22121 x1 Vm\u2225L1 \u2272\u210f|t1 \u2212t2|(\u2225D\u03b1 x1Vm\u2225L1 + \u27e8\u03be, \u210ft1\u27e9|m|\u2225D\u03b1\u22121 x1 Vm\u2225L1). 32 \fFrom this and the Fundamental Theorem of Calculus we conclude that |D\u03b1 x1D\u03b2 \u03be (u1 m\u2212u2 m)| \u2272\u2225D\u03b1+1 x1 D\u03b2 \u03be (u1 m\u2212u2 m)\u2225L1 \u2272\u210f|t1\u2212t2|(\u2225D\u03b1 x1Vm\u2225L1+\u27e8\u03be, \u210ft1\u27e9|m|\u2225D\u03b1\u22121 x1 Vm\u2225L1). 
(35) We have shown (20) through (31), (33), and (35). Finally, we prove the decay estimates (21) for the solution. In the case \u210ft \u00b7 m = 0 there is nothing to prove, as the solution is compactly supported on |x1| \u2264R. The other two cases are analogous, so we only consider the case \u210ft\u00b7m > 0, so that Re(\u00b5) > 0. For this one we know the solution vanishes if x1 < \u2212R, so we only have to deal with x1 > R. We can rewrite the solution for x1 > R as um(x1, \u03be, t) = \u2212\u00b5e\u2212\u00b5(x1\u2212R) \u03be + i Z R \u2212R e\u2212\u00b5(R\u2212y1)[(\u03be + i)D\u22121 y1 Fm(y1) + \u210ft \u00b7 D\u22121 y1 Gm(y1)]dy1. Integrating by parts we obtain that D\u03b1 x1um = \u0012 \u2212\u00b5 2\u03c0i \u0013\u03b1 um = \u2212\u00b5e\u2212\u00b5(x1\u2212R) \u03be + i Z R \u2212R [(\u2212Dy1)\u03b1e\u2212\u00b5(R\u2212y1)][(\u03be + i)D\u22121 y1 Fm(y1) + \u210ft \u00b7 D\u22121 y1 Gm(y1)]dy1 = \u2212\u00b5e\u2212\u00b5(x1\u2212R) \u03be + i Z R \u2212R e\u2212\u00b5(R\u2212y1)[(\u03be + i)D\u03b1\u22121 y1 Fm(y1) + \u210ft \u00b7 D\u03b1\u22121 y1 Gm(y1)]dy1 =: \u03d5\u03c8. We will prove that \u03d5 and its derivatives (with respect to \u03be) have the required decay, while \u03c8 and its derivatives remain bounded. Let us observe that \u00b5(R \u2212y1) = i\u03b7/(\u03be + i) with \u03b7 \u22650 for y1 \u2208[\u2212R, R], so that we are in the setting of Lemma 4.6. It follows from this that |D\u03b2 \u03be \u03c8| \u2272\u27e8\u03be, \u210ft\u27e9\u2225D\u03b1\u22121 x1 Vm\u2225L1, where we allow the constant of the inequality to depend on \u03b2. By a similar induction as in the proof of Lemma 4.6, we obtain that D\u03b2 \u03be \u0012\u00b5e\u2212\u00b5s \u03be + i \u0013 = \u00b5e\u2212\u00b5sQ\u03b2(\u00b5s) (\u03be + i)\u03b2+1 , where Q\u03b2 is a polynomial of degree \u03b2 with coe\ufb03cients depending only on \u03b2. Multiplying by s we obtain sD\u03b2 \u03be \u0012\u00b5e\u2212\u00b5s \u03be + i \u0013 = e\u2212\u00b5sQ\u2217 \u03b2(\u00b5s) (\u03be + i)\u03b2+1 , where Q\u2217 \u03b2(z) = zQ\u03b2(z) is a polynomial of degree \u03b2 + 1. Again, as |\u03be + i| \u22651, there exists a polynomial e Q\u2217 \u03b2 of degree \u03b2 + 1 such that \f \f \f \f Q\u2217 \u03b2(\u00b5s) (\u03be + i)\u03b2+1 \f \f \f \f \u2264e Q\u2217 \u03b2 \u0012|\u00b5s| \u27e8\u03be\u27e9 \u0013 . We have that \u00b5(x1 \u2212R) = i\u03b7/(\u03be + i) with \u03b7 \u22650 for x1 \u2265R, so that we are in the setting of Lemma 4.6. It follows from the previous inequalities that |(x1 \u2212R)D\u03b2 \u03be \u03d5| \u2264 \f \f \f \fe\u2212i\u03b7/(\u03be+i) e Q\u2217 \u03b2 \u0012 \u03b7 \u27e8\u03be\u27e92 \u0013\f \f \f \f\u22721, where we allow the constant in the last inequality to depend on \u03b2. In particular, for x1 \u22652R we have that x1 \u2212R \u2265x1/2, so that the previous inequality gives |D\u03b2 \u03be \u03d5| \u22721/|x1|. Combining the bounds for \u03c8 and \u03d5 gives the decay estimate that we wanted. Remark. It may not be relevant for this particular problem, but the condition m \u2208 Zd does not seem to intervene in the proof of the result. Remark. It does not seem relevant, but the constants in the inequalities are independent of R. It only plays a role when we need that |x1| \u22652R to have the decay. 33 \fLet us discuss brie\ufb02y why the vanishing moment conditions were important in the previous proof. First, they appear in proving the di\ufb00erences estimates. 
If t1, t2 \u2208 Rd are such that \u210ft1 \u00b7 m > 0 > \u210ft2 \u00b7 m, then for x1 > R we would have u2 m(x1) = 0 and u1 m(x1) = 2\u03c0ie\u2212\u00b51(x1\u2212R) \u03be + i Z R \u2212R e\u2212\u00b51(R\u2212y1)[(\u03be + i)Fm(y1) + \u210ft1 \u00b7 Gm(y1)]dy1. It seems that there is no way to estimate u1 m \u2212u2 m in terms of the di\ufb00erence \u210f|t1 \u2212t2|. For instance, letting \u00b51 \u21920, we obtain that the di\ufb00erence would be approximately 2\u03c0i \u03be + i Z R \u2212R [(\u03be + i)Fm(y1) + \u210ft1 \u00b7 Gm(y1)]dy1, which suggests the need of the vanishing moment condition. They also show up with a crucial improvement for the decay estimates. In the case \u210ft \u00b7 m > 0 and x1 > R, without the vanishing moment condition, we would only have the exponential decay um(x1) = 2\u03c0ie\u2212\u00b5(x1\u2212R) \u03be + i Z R \u2212R e\u2212\u00b5(R\u2212y1)[(\u03be + i)Fm(y1) + \u210ft \u00b7 Gm(y1)]dy1. It may happen that \u00b5 is small, for instance if t \u00b7 m = 1, making the exponential decay very slow. In this case, the best estimates we seem to obtain are |um| \u2272 \u27e8\u03be\u27e9 \u00b5|x1 \u2212R|, with 1/\u00b5 potentially being as big as 1/\u210f. In our proof, what allows us to get better estimates is the presence of the factor \u00b5 in front of the integral. This factor comes from integrating by parts using the vanishing moment condition. We rewrite the previous estimates to depend on V and no longer on the Fourier coe\ufb03cients Vm. Corollary 4.9. Let V = (F, G) satisfy (\u22c6), and let um be the solution from Theorem 4.8. Then, for any M \u22650 we have |D\u03b1 x1D\u03b2 \u03be um| \u2272\u27e8\u03be, \u210ft\u27e9\u27e8m\u27e9\u22122M\u2225V \u2225W \u03b1+2M,1(T ), (36) |D\u03b1 x1D\u03b2 \u03be (um(\u00b7, t1) \u2212um(\u00b7, t2))| \u2272\u210f|t1 \u2212t2|\u27e8\u03be, \u210ft1, \u210ft2\u27e9\u27e8m\u27e9\u22122M+1\u2225V \u2225W \u03b1+2M,1(T ). (37) Moreover, for |x1| \u22652R we have the linear decay bound |D\u03b1 x1D\u03b2 \u03be um(x1)| \u2272\u27e8\u03be, \u210ft\u27e9 |x1| \u27e8m\u27e9\u22122M\u2225V \u2225W \u03b1+2M,1(T ). (38) The constants in the inequalities may depend on \u03b1, \u03b2, M, d, R, but are independent of \u210f, m, \u03be, t. Proof. We use that V is compactly supported and the Fundamental Theorem of Calculus to get that \u2225D\u03b1\u22121 x1 Vm\u2225L1 \u2272\u2225D\u03b1\u22121 x1 Vm\u2225L\u221e\u2272\u2225D\u03b1 x1Vm\u2225L1, where we allow the constant of the inequality to depend on R. For \u03b1 \u22650 and any M \u22650, we have from Proposition 2.3 that \u2225D\u03b1 x1Vm\u2225L1(R) \u2272\u27e8m\u27e9\u22122M\u2225D\u03b1 x1V \u2225W 2M,1(T ) \u2272\u27e8m\u27e9\u22122M\u2225V \u2225W \u03b1+2M,1(T ), where we allow the constant of the inequality to depend on M. Then the conclusion follows from Theorem 4.8. 34 \fProof of Theorem 4.3. Let u(\u00b7, \u03be, t; \u210f) \u2208C\u221e(T ) solve the equation (13) and satisfy the decay condition \u2225u(x1, \u00b7, \u03be, t)\u2225L\u221e(Td) \u21920 as x1 \u2192\u00b1\u221e. Then its Fourier coe\ufb03cients must solve the ODEs (16) and satisfy the decay conditions as x1 \u2192\u00b1\u221e. Under (\u22c6), we know from Theorem 4.8 the existence and uniqueness of such solutions. Let {um} be such and de\ufb01ne u(x1, x\u2032, \u03be, t; \u210f) := X m\u2208Zd um(x1, \u03be, t; \u210f)em(x\u2032). The control on derivatives of um from Corollary 4.9 implies that u(\u00b7, t) \u2208C\u221e( R \u00d7 Td \u00d7 R) and it solves (13). 
Moreover, we can bound |D\u03b1 x1D\u03b2 x\u2032D\u03b3 \u03be u| \u2264 X m\u2208Zd |m|\u03b2|D\u03b1 x1D\u03b3 \u03be um| \u2272 X m\u2208Zd |m|\u03b2\u27e8m\u27e9\u22122M\u27e8\u03be, \u210ft\u27e9\u2272\u27e8\u03be, \u210ft\u27e9, by taking M = M(\u03b2, d) su\ufb03ciently large. The constants in the inequalities may depend on the admissible quantities, but are independent of \u210f, \u03be, t. In the same way we prove the di\ufb00erence estimate from (14) and the decay estimate (15); \ufb01nally, the existence of a solution satisfying the decay condition follows from (15). 4.4 Explicit de\ufb01nition of the symbol and properties The purpose of this section is to prove the conjugation from Theorem 4.2. Recall that we have (\u2206\u210f+ 2\u210fV\u210f)A = A\u2206\u210f+ 2\u210fOp\u210f((\u03be + i)Dx1a + \u210ft \u00b7 Dx\u2032a + (\u03be + i)Fa + \u210ft \u00b7 Ga) + \u210f2Op\u210f(D2a + 2V \u00b7 Da). Let us de\ufb01ne a = e\u2212u\u03c6, where u(x1, x\u2032, \u03be, t; \u210f) is the solution from Theorem 4.3 to (\u03be + i)Dx1u + \u210ft \u00b7 Dx\u2032u = (\u03be + i)F + \u210ft \u00b7 G, and \u03c6(x1, \u03be, t; \u210f) is de\ufb01ned as follows. Let \u03c8(s) be a (nonnegative, even, decreasing) smooth function such that \u03c8(s) \u22611 for |s| \u22641 and \u03c8(s) \u22610 for |s| \u22652. De\ufb01ne \u03c60(\u03be, t; \u210f) := \u03c8(\u03be)\u03c8(4(|\u210ft| \u22121)), \u03c6(x1, \u03be, t; \u210f) := \u03c8(\u210f\u03b8x1)\u03c60(\u03be, t; \u210f), with \u03b8 > 0 to be de\ufb01ned later. Remark. Note that \u03c60 and \u03c6 vanish if |\u210ft| \u22641/2, in particular they vanish for t near the origin and so these are smooth in all their variables. Moreover, it is a special zero order symbol. Remark. The cuto\ufb00in x1 is unnecessary for the properties of A and B, but it is actually needed for the properties of R. The estimates from Theorem 4.3 give that u\u03c6 is a special zero order symbol. From Corollary 3.15 we obtain that, for small \u210f, the operator A := Op\u210f(a) is invertible and its inverse is uniformly bounded (in \u210f) in Hs \u00b1\u03b4,\u210f(T ). We also have (\u03be + i)Dx1a + \u210ft \u00b7 Dx\u2032a + (\u03be + i)Fa + \u210ft \u00b7 Ga = a[\u2212(\u03be + i)Dx1(u\u03c6) \u2212\u210ft \u00b7 Dx\u2032(u\u03c6) + (\u03be + i)F + \u210ft \u00b7 G] = a(1 \u2212\u03c6)[(\u03be + i)F + \u210ft \u00b7 G] \u2212(\u03be + i)auDx1\u03c6 = a(1 \u2212\u03c6)[(\u03be + i)F + \u210ft \u00b7 G] \u2212 1 2\u03c0i\u210f\u03b8(\u03be + i)au\u03c8\u2032(\u210f\u03b8x1)\u03c60. Note that (1 \u2212\u03c6) vanishes if |x1| \u2264\u210f\u2212\u03b8, |\u03be| \u22641 and ||\u210ft| \u22121| \u22641/4. Let us rewrite 1 \u2212\u03c6 = (1 \u2212\u03c60) + \u03c60 \u00b7 (1 \u2212\u03c8(\u210f\u03b8x1)), and observe that 1 \u2212\u03c8(\u210f\u03b8x1) vanishes for |x1| \u2264\u210f\u2212\u03b8. Because F and G are supported on |x1| \u2264R, then, for small enough \u210f, say \u210f\u2212\u03b8 \u2265R, we have a(1 \u2212\u03c6)[(\u03be + i)F + \u210ft \u00b7 G] = a(1 \u2212\u03c60)[(\u03be + i)F + \u210ft \u00b7 G]. The function (1 \u2212\u03c60) vanishes if |\u03be| \u22641 and ||\u210ft| \u22121| \u22641/4, and outside of this set the operator \u2206\u210f is elliptic, i.e. its symbol (\u03be + i)2 + |\u210ft|2 does not vanish. 35 \fProposition 4.10. Outside of the set {|\u03be| \u22641} \u2229{||\u210ft| \u22121| \u22641/4}, we can bound |(\u03be + i)2 + |\u210ft|2| \u2273\u27e8\u03be, \u210ft\u27e92 Proof. Let s := (\u03be + i)2 + |\u210ft|2. 
Let us observe that |s| = ([\u03be2 + |\u210ft|2 \u22121]2 + 4\u03be2)1/2 = (\u03be4 + 2\u03be2(|\u210ft|2 + 1) + (|\u210ft|2 \u22121)2)1/2 \u2265\u03be2 + ||\u210ft|2 \u22121|. This proves that |s| \u2265\u03be2. If |\u03be| \u22651, this also proves |s| \u22651. If |\u03be| \u22641, then we must have ||\u210ft| \u22121| \u22651/4, and this gives |s| \u2265||\u210ft| + 1|/4 \u22651/4. We have shown that |s| \u22731. Thus, all that remains to prove is that |s| \u2273|\u210ft|2. If |\u210ft| \u22642, then this follows from before. If |\u210ft| \u22652, then ||\u210ft|2 \u22121| \u2265|\u210ft|2/2, and the conclusion follows. Let us consider the function r(x1, x\u2032, \u03be, t; \u210f) := 1 \u2212\u03c60 (\u03be + i)2 + |\u210ft|2 [(\u03be + i)F + \u210ft \u00b7 G] = (1 \u2212\u03c60)\u27e8\u03be, \u210ft\u27e92 (\u03be + i)2 + |\u210ft|2 \u00b7 (\u03be + i)F + \u210ft \u00b7 G \u27e8\u03be, \u210ft\u27e92 =: q1 \u00b7 q2. The functions \u03be/\u27e8\u03be, \u210ft\u27e92 and \u210ft/\u27e8\u03be, \u210ft\u27e92 are special zero order symbols, and so q2 is a special zero order symbol. We know that q1 is supported outside of {|\u03be| \u22641} \u2229{||\u210ft| \u22121| \u22641/4} and 1 \u2212\u03c60 is a special zero order symbol. By induction we can show that D\u03b1 \u03be D\u03b2 t \u27e8\u03be, \u210ft\u27e92 (\u03be + i)2 + |\u210ft|2 = \u210f\u03b2P\u03b1,\u03b2(\u03be, \u210ft) ((\u03be + i)2 + |\u210ft|2)\u03b1+\u03b2+1 , for some polynomial P\u03b1,\u03b2 of degree at most \u03b1 + \u03b2 + 2. Outside of {|\u03be| \u22641} \u2229{||\u210ft| \u22121| \u22641/4}, from Proposition 4.10, we can bound \f \f \f \fD\u03b1 \u03be D\u03b2 t \u27e8\u03be, \u210ft\u27e92 (\u03be + i)2 + |\u210ft|2 \f \f \f \f = \f \f \f \f \u210f\u03b2P\u03b1,\u03b2(\u03be, \u210ft) ((\u03be + i)2 + |\u210ft|2)\u03b1+\u03b2+1 \f \f \f \f \u2272\u210f\u03b2\u27e8\u03be, \u210ft\u27e9\u03b1+\u03b2+2 \u27e8\u03be, \u210ft\u27e92(\u03b1+\u03b2+1) \u2264\u210f\u03b2. This proves that q1 is a special zero order symbol, and by Proposition 3.14 so is r. Let us de\ufb01ne the symbol b by b := a + 2\u210fa 1 \u2212\u03c60 (\u03be + i)2 + |\u210ft|2 [(\u03be + i)F + \u210ft \u00b7 G] = a(1 + 2\u210fr). Since r is a special zero order symbol, by Proposition 3.14 we have that, for small enough \u210f, v := log(1 + 2\u210fr) is also a special zero order symbol. Therefore b = a(1 + 2\u210fr) = e\u2212u\u03c6+v and \u2212u\u03c6 + v a special zero order symbol. From Corollary 3.15, we conclude that for small \u210f, the operator B := Op\u210f(b) is invertible and its inverse is uniformly bounded (in \u210f) in Hs \u00b1\u03b4,\u210f(T ). Finally, we are left to consider the expression \u210f2(D2a + 2V \u00b7 Da) \u2212 2 2\u03c0i\u210f1+\u03b8(\u03be + i)au\u03c8\u2032(\u210f\u03b8x1)\u03c60 =: r1 + r2. In order to prove that the pseudodi\ufb00erential operator R, from Theorem 4.2, is bounded from L2 \u2212\u03b4(T ) to L2 \u03b4(T ), we will show that \u27e8x1\u27e92\u03b4ri are zero order symbols. Recall that u\u03c6 is supported on |x1| \u22642\u210f\u2212\u03b8 and so a = e\u2212u\u03c6 \u22611 if |x1| \u22652\u210f\u2212\u03b8. Therefore, r1 is supported on |x1| \u22642\u210f\u2212\u03b8, and we have the bounds |D\u03b1 x1D\u03b2 x\u2032D\u03b3 \u03be r1| \u2272\u210f2, |D\u03b1 x1D\u03b2 x\u2032D\u03b3 \u03be (\u27e8x1\u27e92\u03b4r1)| \u2272\u210f2\u22122\u03b4\u03b8. The term \u03c8\u2032(\u210f\u03b8x1) gives that r2 is supported on \u210f\u2212\u03b8 \u2264|x1| \u22642\u210f\u2212\u03b8. 
The decay estimate from Theorem 4.3 gives that $|D^\alpha_{x_1}D^\beta_{x'}D^\gamma_\xi u| \lesssim \hbar^{\theta}\langle\xi, \hbar t\rangle$. Moreover, $a$ is a zero order symbol and $(\xi + i)\psi'(\hbar^{\theta}x_1)\varphi_0$ is a zero order symbol supported on $\{|\xi| \le 2\} \cap \{||\hbar t| - 1| \le 1/2\}$. We obtain that
$$|D^\alpha_{x_1}D^\beta_{x'}D^\gamma_\xi r_2| \lesssim \hbar^{1+2\theta}, \qquad |D^\alpha_{x_1}D^\beta_{x'}D^\gamma_\xi(\langle x_1\rangle^{2\delta} r_2)| \lesssim \hbar^{1+2\theta-2\delta\theta}.$$
We can ensure that $2 - 2\delta\theta,\ 1 + 2\theta - 2\delta\theta > 1$ by taking $\theta = 1/2$ and $\delta < 1$. If we let $\varepsilon = 1 - \delta > 0$, then we obtain the conjugation identity
$$(\Delta_\hbar + 2\hbar V_\hbar)A = B\Delta_\hbar + \hbar^{1+\varepsilon}R.$$

Remark. The restriction $\delta < 1$ does not appear in [18]. This may be because in our case we need the operator $R$ to be bounded from $L^2_{-\delta}(T)$ to $L^2_\delta(T)$, so we need a gain of a factor $\langle x_1\rangle^{2\delta}$. In [18], boundedness is only needed from $L^2_\delta(\mathbb{R}^d)$ to $L^2_{\delta+1}(\mathbb{R}^d)$, so just a factor $\langle x\rangle$ is required.

4.5 Proof of the Carleman estimate

Let us rewrite the statements of Theorem 4.1 and Theorem 1.2 in semiclassical notation.

Theorem 4.11. Let $\delta > 1/2$. There exists $\hbar_0 \le 1$ such that if $0 < |\hbar| \le \hbar_0$ and $\hbar^{-2} \notin \mathrm{Spec}(-\Delta_{g_0})$, then for any $f \in L^2_\delta(T)$ there exists a unique $u \in H^1_{-\delta,\hbar}(T)$ which solves
$$e^{2\pi x_1/\hbar}(\hbar D)^2 e^{-2\pi x_1/\hbar}u = \hbar^2 f.$$
Moreover, this solution is in $H^2_{-\delta,\hbar}(T)$ and satisfies the estimate
$$\|u\|_{H^2_{-\delta,\hbar}(T)} \lesssim \hbar\,\|f\|_{L^2_\delta(T)},$$
with the constant of the inequality independent of $\hbar$.

This theorem allows us to define $G_\hbar : L^2_\delta(T) \to H^2_{-\delta,\hbar}(T)$ by $G_\hbar f := u$, so that $\Delta_\hbar G_\hbar = \hbar^2 I$ on $L^2_\delta(T)$.

Theorem 4.12. Let $1/2 < \delta < 1$ and let $V, W$ satisfy $(\star)$. There exists $\hbar_0 \le 1$ such that if $0 < |\hbar| \le \hbar_0$ and $\hbar^{-2} \notin \mathrm{Spec}(-\Delta_{g_0})$, then for any $f \in L^2_\delta(T)$ there exists a unique $u \in H^2_{-\delta,\hbar}(T)$ which solves
$$e^{2\pi x_1/\hbar}(\hbar^2 H_{V,W})e^{-2\pi x_1/\hbar}u = \hbar^2 f.$$
Moreover, this solution satisfies the estimate
$$\|u\|_{H^2_{-\delta,\hbar}(T)} \lesssim \hbar\,\|f\|_{L^2_\delta(T)},$$
with the constant of the inequality independent of $\hbar$.

Proof. We prove this only for $\hbar > 0$, as the other case is analogous. Let us write
$$(\Delta_\hbar + 2\hbar V_\hbar + \hbar^2\widetilde W)u = e^{2\pi x_1/\hbar}(\hbar^2 H_{V,W})e^{-2\pi x_1/\hbar}u = \hbar^2 f, \tag{39}$$
where
$$\Delta_\hbar = e^{2\pi x_1/\hbar}(\hbar D)^2 e^{-2\pi x_1/\hbar}, \qquad V_\hbar = e^{2\pi x_1/\hbar}[V\cdot(\hbar D)]e^{-2\pi x_1/\hbar}, \qquad \widetilde W := V^2 + D\cdot V + W.$$
To avoid repetition throughout the proof we recall from Theorem 4.2 that $A$ and $B$ are uniformly bounded invertible operators on $H^s_{\pm\delta,\hbar}(T)$ with uniformly bounded inverses, and that $R : L^2_{-\delta}(T) \to L^2_\delta(T)$ is bounded. Also, $\widetilde W : L^2_{-\delta}(T) \to L^2_\delta(T)$ is bounded, because it is bounded and compactly supported. We start by showing the existence and the estimates for the solution.
We look for a solution of the form u = AG\u210fg \u2208L2 \u2212\u03b4(T ), with g \u2208L2 \u03b4(T ), and use Theorem 4.2 to rewrite the expression as (\u2206\u210f+ 2\u210fV\u210f+ \u210f2f W)u = (\u2206\u210f+ 2\u210fV\u210f+ \u210f2f W)AG\u210fg = \u210f2(B + \u210f\u22121+\u03b5RG\u210f+ f WAG\u210f)g. Let C := \u210f\u22121+\u03b5RG\u210f+ f WAG\u210f. We claim that C : L2 \u03b4(T ) \u2192L2 \u03b4(T ) is a small perturbation of the invertible operator B, so that B + C is also invertible. Using the boundedness properties of G\u210f, from Theorem 4.11 we obtain that \u2225\u210f\u22121+\u03b5RG\u210f\u2225L2 \u03b4(T )\u2192L2 \u03b4(T ) \u2272\u210f\u03b5, \u2225f WAG\u210f\u2225L2 \u03b4(T )\u2192L2 \u03b4(T ) \u2272\u210f. We observe that B + C = (I + CB\u22121)B, from where we conclude that B + C is invertible in L2 \u03b4(T ) as claimed, and its inverse has uniformly bounded norms. Thus if we de\ufb01ne g := (B + C)\u22121f \u2208L2 \u03b4(T ), then we obtain that u := AG\u210fg = AG\u210f(B + C)\u22121f solves the equation (39). The estimates for G\u210ffrom Theorem 4.11 give \u2225u\u2225H2 \u2212\u03b4,\u210f(T ) = \u2225AG\u210f(B + C)\u22121f\u2225H2 \u2212\u03b4,\u210f(T ) \u2272\u210f\u2225f\u2225L2 \u03b4(T ), as we wanted. Now we address the uniqueness. Assume that u \u2208H2 \u2212\u03b4,\u210f(T ) solves (\u2206\u210f+ 2\u210fV\u210f+ \u210f2f W)u = 0. 37 \fLet v := A\u22121u \u2208H2 \u2212\u03b4,\u210f(T ), so that v satis\ufb01es (B\u2206\u210f+ \u210f1+\u03b5R + \u210f2f WA)v = 0, or equivalently \u2206\u210fv = \u2212\u210f2B\u22121(\u210f\u22121+\u03b5R + f WA)v. The right-hand side is in L2 \u03b4(T ) and \u2225\u210f\u22121+\u03b5B\u22121Rv\u2225L2 \u03b4(T ) \u2272\u210f\u22121+\u03b5\u2225v\u2225L2 \u2212\u03b4(T ), \u2225B\u22121f WAv\u2225L2 \u03b4(T ) \u2272\u2225v\u2225L2 \u2212\u03b4(T ). The uniqueness from Theorem 4.11 implies that v = \u2212G\u210fB\u22121(\u210f\u22121+\u03b5R + f WA)v. Using the estimates for G\u210f, from Theorem 4.11, and the bound from above we obtain that \u2225v\u2225L2 \u2212\u03b4(T ) \u2272\u210f\u00b7 \u210f\u22121+\u03b5\u2225v\u2225L2 \u2212\u03b4(T ) = \u210f\u03b5\u2225v\u2225L2 \u2212\u03b4(T ). Taking \u210fsmall enough yields that v \u22610, from where we conclude that u \u22610. Remark. The uniqueness does not follow directly from the equation and perturbative arguments: if we rewrite the equation as \u2206\u210fu = \u2212\u210f2(2\u210f\u22121V\u210f+ f W)u, then the right-hand side is in L2 \u03b4(T ), so that u = \u2212G\u210f(2\u210f\u22121V\u210f+ f W)u, but we obtain no contradiction as we can only say \u2225G\u210f(2\u210f\u22121V\u210fu)\u2225L2 \u2212\u03b4(T ) \u2272\u2225u\u2225L2 \u2212\u03b4(T ). Remark. Let f \u2208L2 \u03b4(T ) and let u \u2208H2 \u2212\u03b4,\u210f(T ) be the unique solution to the equation (\u2206\u210f+ 2\u210fV\u210f+ \u210f2f W)u = \u210f2f. We can rewrite this as \u2206\u210fu = \u210f2f \u2212(2\u210fV\u210f+ f W)u, and observe that (2\u210fV\u210f+ \u210f2f W)u \u2208L2 \u03b4(T ). Therefore, u = G\u210fe g, for some e g \u2208L2 \u03b4(T ). In the proof of existence of the solution, we showed that u takes the form AG\u210fg for some g \u2208L2 \u03b4(T ). This and the last observation yield that the operator A maps the subspace G\u210fL2 \u03b4(T ) \u2286L2 \u2212\u03b4(T ) to itself. 
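For completeness, we record the elementary Neumann series underlying the invertibility of $B + C$ in the existence part of the proof above: writing $B + C = (I + CB^{-1})B$ and using that $\|CB^{-1}\|_{L^2_\delta(T)\to L^2_\delta(T)} \lesssim \hbar^{\varepsilon} < 1$ for $\hbar$ small enough, we have
$$(B + C)^{-1} = B^{-1}\,(I + CB^{-1})^{-1} = B^{-1}\sum_{n\ge 0}\big(-CB^{-1}\big)^{n},$$
with operator norm bounded uniformly in $\hbar$, since $B^{-1}$ is uniformly bounded.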
5 Equivalent formulations and boundary characterization As mentioned in the introduction, in order to reconstruct the electromagnetic parameters we are interested in constructing many solutions to the equation HV,W u = 0. The result from Theorem 1.2 can be used to construct a unique solution that \u201cbehaves like\u201d a harmonic function at in\ufb01nity. Indeed, let h \u2208H2 loc(T ) be harmonic and let us look for a solution of the form u = h + e\u22122\u03c0\u03c4x1r; such u solves HV,W u = 0 if and only if the correction term r solves the equation e2\u03c0\u03c4x1HV,W e\u22122\u03c0\u03c4x1r = \u2212e2\u03c0\u03c4x1Xh, where X := HV,W \u2212D2 = 2V \u00b7 D + (V 2 + D \u00b7 V + W) is a \ufb01rst order di\ufb00erential operator supported in M. The conditions (\u2020) imply that Xh \u2208L2 c(T ), and so e2\u03c0\u03c4x1Xh \u2208L2 \u03b4(T ). From Theorem 1.2 we obtain a unique solution r \u2208H2 \u2212\u03b4(T ), and so there is a unique solution to HV,W u = 0 which \u201cbehaves like\u201d the harmonic function. As has been usual, we call these functions the complex geometrical optics (CGO) solutions. The purpose of this section is to show that the boundary values of the CGO can be characterized as the unique solution to a certain boundary integral equation. The passage from the uniqueness problem at the boundary to a uniqueness problem at in\ufb01nity was \ufb01rst explicitly noticed by Nachman in [13], and has become standard since then; for instance, see [18] or [9]. The uniqueness of this corrected solution is crucial for our problem; the lack of such is what prevents the local Carleman estimate for the magnetic Schr\u00a8 odinger operator in [2] from being useful in the reconstruction procedure. In this section we follow closely the presentation from [18], as the operators are translation invariant, and [9]. However, we have to proceed slightly di\ufb00erent, as 0 \u2208Spec(\u2212\u2206g0) and the Laplacian in T does not have a bounded inverse, i.e. for f \u2208L2(T ) (or f \u2208H\u22121(T )) there may not be u \u2208H2(T ) (or u \u2208H1(T )) such that D2u = f. 38 \f5.1 Green functions, operators, and layer potentials 5.1.1 \u03c4-dependent Green function and operator The di\ufb00erential operator \u2206\u03c4 = D2 x1 + 2i\u03c4Dx1 \u2212\u03c4 2 + D2 x\u2032 has constant coe\ufb03cients, in particular it is translation invariant, and so it is its right inverse G\u03c4 from Theorem 4.1. Since G\u03c4 : L2 \u03b4(T ) \u2192L2 \u2212\u03b4(T ) is bounded, there exists a tempered distribution g\u03c4 \u2208S\u2032(T ) such that G\u03c4f = g\u03c4 \u2217f for Schwartz functions f \u2208S(T ), where the convolution is considered over the whole cylinder. The purpose of this section is to understand the properties of g\u03c4 and other related distributions. We have that \u2206\u03c4g\u03c4 = \u03b4T (0), so the Fourier expansion of this distribution is given by d g\u03c4,k(\u03be) = 1 \u03be2 + 2i\u03c4\u03be \u2212\u03c42 + |k|2 = 1 (\u03be + i\u03c4)2 + |k|2 , (40) and therefore g\u03c4,k(x1) = Z R e2\u03c0ix1\u03be (\u03be + i\u03c4)2 + |k|2 d\u03be. (41) Let us note that this integral converges absolutely, as the denominator is quadratic in \u03be and never vanishes because \u03c42 / \u2208Spec(\u2212\u2206g0). These integrals can be computed explicitly as follows. Proposition 5.1. 
The Fourier coe\ufb03cients g\u03c4,k(x1) of the distribution g\u03c4 are given by g\u03c4,k(x1) = \u03c0e2\u03c0\u03c4x1 \uf8f1 \uf8f2 \uf8f3 \u22122\u03c0(|x1| \u2212sgn(\u03c4)x1) if k = 0, (e\u22122\u03c0|k||x1| \u2212e\u22122\u03c0|k|sgn(\u03c4)x1)/|k| if 0 < |k| < |\u03c4|, e\u22122\u03c0|k||x1|/|k| if |k| > |\u03c4|. The distribution g\u03c4 is actually smooth away from (0, 0) \u2208T . For any \u03b5 > 0 and |x1| > \u03b5, the function g\u03c4(x1, x\u2032) and all its derivatives are uniformly bounded, i.e. we have |D\u03b1 x1D\u03b2 x\u2032g\u03c4(x1, x\u2032)| \u2264C, for some constant C = C(\u03b1, \u03b2, \u03c4, \u03b5). Proof. Instead of (41), let us consider the expression g\u03c4(x1, \u03bb) := Z R e2\u03c0ix1\u03be (\u03be + i\u03c4)2 + \u03bb2 d\u03be, (42) with \u03bb \u22650 and \u03bb \u0338= |\u03c4|, so that the denominator does not vanish. We start with the following two observations: \ufb01rst, g\u2212\u03c4(\u2212x1, \u03bb) = g\u03c4(x1, \u03bb), so it su\ufb03ces to consider the case \u03c4 > 0; second, for \ufb01xed \u03c4 and x1, the function g\u03c4(x1, \u00b7) is continuous, so the case \u03bb = 0 follows from the case \u03bb > 0. We would like to relate (42) to the classical integral Z R e2\u03c0ix1\u03be \u03be2 + \u03bb2 d\u03be = \u03c0 \u03bbe\u22122\u03c0\u03bb|x1|, which can be obtained by direct computation and the inversion formula. Consider the meromorphic function f(z) = e2\u03c0ix1z/(z2 + \u03bb2), with simple poles at z = \u00b1\u03bbi, and the rectangular contour bounded by the lines Re(z) = \u00b1L, for some large L > 0, and Im(z) = 0, Im(z) = \u03c4. The pole \u2212\u03bbi is outside of this domain since we are assuming \u03c4 > 0. The pole \u03bbi is inside this domain if 0 < \u03bb < \u03c4 and outside if \u03bb > \u03c4. Moreover, the residue of f at z = \u03bbi is equal to e\u22122\u03c0\u03bbx1/(2\u03bbi). The vertical segments of the contour have length \u03c4, and over them the numerator of f is bounded (uniformly in L), while the denominator is of order L2. From the residue theorem, after letting L \u2192+\u221e, we deduce that \u03c0 \u03bbe\u22122\u03c0\u03bb|x1| \u2212e\u22122\u03c0\u03c4x1g\u03c4(x1, \u03bb) = Z R f(z)dz \u2212 Z R f(z + i\u03c4)dz = 2\u03c0i \u001a e\u22122\u03c0\u03bbx1/(2\u03bbi) if 0 < \u03bb < \u03c4, 0 if \u03bb > \u03c4, which gives that g\u03c4(x1, \u03bb) = \u03c0e2\u03c0\u03c4x1 \u03bb \u001a e\u22122\u03c0\u03bb|x1| \u2212e\u22122\u03c0\u03bbx1 if 0 < \u03bb < \u03c4, e\u22122\u03c0\u03bb|x1| if \u03bb > \u03c4, 39 \fFor the case \u03bb = 0, we let \u03bb \u21920+, to conclude g\u03c4(x1, 0) = \u03c0e2\u03c0\u03c4x1 lim \u03bb\u21920+ e\u22122\u03c0\u03bb|x1| \u2212e\u22122\u03c0\u03bbx1 \u03bb = \u22122\u03c02e2\u03c0\u03c4x1(|x1| \u2212x1). We have proven the formulas for the Fourier coe\ufb03cients. To show the regularity, let us observe that D2(e\u22122\u03c0\u03c4x1g\u03c4) = e\u22122\u03c0\u03c4x1\u2206\u03c4g\u03c4 = \u03b4T (0). From Weyl\u2019s regularity lemma for distributions, see Chapter 10 in [5], it follows that e\u22122\u03c0\u03c4x1g\u03c4 is a smooth function away from (0, 0) \u2208T . Therefore, g\u03c4 is also smooth away from (0, 0). 
Assuming that \u03c4 > 0, for x1 > 0 we have that g\u03c4(x1, x\u2032) = \u03c0 X |k|>\u03c4 e\u22122\u03c0(|k|\u2212\u03c4)|x1| |k| ek(x\u2032), while for x1 < 0 we have that g\u03c4(x1, x\u2032) = 4\u03c02e\u22122\u03c0\u03c4|x1|x1 + \u03c0 X |k|\u0338=0 e\u22122\u03c0(\u03c4+|k|)|x1| |k| ek(x\u2032) \u2212\u03c0 X 0<|k|<\u03c4 e\u22122\u03c0(\u03c4\u2212|k|)|x1| |k| ek(x\u2032). The uniform boundedness of g\u03c4 and its derivatives, for |x1| > \u03b5, follows from the fact that g\u03c4(x1, x\u2032) is a sum of negative exponentials on each half of |x1| > \u03b5. Remark. The coe\ufb03cient g\u03c4,k(x1) can also be computed by solving the equation (D2 x1 + 2i\u03c4Dx1 \u2212|\u03c4|2 + |k|2)g\u03c4,k(x1) = \u03b4 R(0). This can be solved in each half x1 > 0 and x1 < 0 as sum of exponentials, and then using decay conditions lim|x1|\u2192\u00b1\u221eg\u03c4,k(x1) = 0 and the jump condition at x1 = 0. As in the previous proof, we consider the distribution \u0393\u03c4 := e\u22122\u03c0\u03c4x1g\u03c4 \u2208D\u2032(T ), which is no longer a tempered distribution, and satis\ufb01es D2\u0393\u03c4 = \u03b4T (0). In principle, it may not make sense to talk about the Fourier transform of \u0393\u03c4 as it is not a tempered distribution. However, from Proposition 5.1, we could formally say that the Fourier coe\ufb03cients of \u0393\u03c4 are given by e\u22122\u03c0\u03c4x1g\u03c4,k(x1) = \uf8f1 \uf8f2 \uf8f3 \u22122\u03c02(|x1| \u2212sgn(\u03c4)x1) if k = 0, \u03c0(e\u22122\u03c0|k||x1| \u2212e\u22122\u03c0|k|sgn(\u03c4)x1)/|k| if 0 < |k| < |\u03c4|, \u03c0e\u22122\u03c0|k||x1|/|k| if |k| > |\u03c4|. Based on the formal Fourier coe\ufb03cients above, we consider the harmonic function H\u03c4(x1, x\u2032) := 2\u03c02sgn(\u03c4)x1 \u2212\u03c0 X 0<|k|<|\u03c4| e\u22122\u03c0|k|sgn(\u03c4)x1 |k| ek(x\u2032). We have that H\u03c4 \u2208D\u2032(T ) because it is a smooth function, and so \u03930 := \u0393\u03c4 \u2212H\u03c4 \u2208D\u2032(T ) as well. Let us de\ufb01ne the distributions \u03930 0 := \u22122\u03c02|x1| and \u0393\u2217 0 := \u03930 \u2212\u03930 0. Formally, the Fourier coe\ufb03cients of \u03930 and \u0393\u2217 0 are given by \u03930,k(x1) = \u001a \u22122\u03c02|x1| if k = 0, \u03c0e\u22122\u03c0|k||x1|/|k| if k \u0338= 0, \u0393\u2217 0,k(x1) = \u001a 0 if k = 0, \u03c0e\u22122\u03c0|k||x1|/|k| if k \u0338= 0. (43) As in the proof of Proposition 5.1, we have that if k \u0338= 0, then d \u0393\u2217 0,k(\u03be) = 1 \u03be2 + |k|2 . (44) 40 \fProposition 5.2. The distributions \u03930 0 and \u0393\u2217 0 are tempered distributions, thus so is \u03930. Proof. It is clear that \u03930 0 is a tempered distribution. From (44) we actually obtain that \u0393\u2217 0 \u2208Hs(T ) for all s < \u2212(d \u22123)/2, and the conclusion follows. Proposition 5.3. Let \u0393\u03c4(x, y) := \u0393\u03c4(x \u2212y). Then, D2\u0393\u03c4(x, \u00b7) = \u03b4T (x), \u0393\u03c4(\u00b7, \u00b7) is smooth in T \u00d7 T away from the diagonal. Proof. As mentioned in the proof of Proposition 5.1, the fact that D2\u0393\u03c4 = \u03b4T (0) and Weyl\u2019s regularity lemma imply that \u0393\u03c4 is smooth away from (0, 0) \u2208T . This gives that D2\u0393\u03c4(x, \u00b7) = \u03b4T (x) and the smoothness of \u0393\u03c4(\u00b7, \u00b7) away from the diagonal. Despite the reasons not being apparent at this moment, we consider the following de\ufb01nition. 
We will see later how this operator appears naturally when we try to reformulate the di\ufb00erential equation the solution HV,W u = 0 as an integral equation. For the moment, in Proposition 5.5 below, we show how this operator relates to the distribution \u0393\u03c4. De\ufb01nition 5.4. Let |\u03c4| \u2265\u03c40, \u03c42 / \u2208Spec(\u2212\u2206g0), so that \u2206\u03c4 has a right inverse G\u03c4 from Theorem 4.1. For functions in L2 c(T ), we de\ufb01ne the operator K\u03c4 := e\u22122\u03c0\u03c4x1G\u03c4e2\u03c0\u03c4x1. Proposition 5.5. The operator K\u03c4 maps L2 c(T ) into H2 loc(T ), is translation invariant, commutes with di\ufb00erentiation, and satis\ufb01es D2K\u03c4 = I on L2 c(T ) and K\u03c4D2 = I on H2 c (T ). Moreover, its distributional kernel is \u0393\u03c4(\u00b7, \u00b7), i.e. K\u03c4f(x) = \u0393\u03c4 \u2217f(x) = \u27e8\u0393\u03c4(x, \u00b7), f\u27e9for f \u2208C\u221e c (T ). Proof. The \ufb01rst claim follows because G\u03c4 maps L2 \u03b4(T ) into H2 \u2212\u03b4(T ). The translation invariance of K\u03c4 follows from the conjugation structure and the translation invariance of G\u03c4. Indeed, if we denote the translation operators by tyf(x) := f(x + y) and note that ty(e\u03bbxf) = e\u03bb(x+y)tyf, then tyK\u03c4 = e\u22122\u03c0\u03c4(x1+y1)tyG\u03c4e2\u03c0\u03c4x1 = e\u22122\u03c0\u03c4(x1+y1)G\u03c4tye2\u03c0\u03c4x1 = e\u22122\u03c0\u03c4(x1+y1)G\u03c4e2\u03c0\u03c4(x1+y1)ty = e\u22122\u03c0\u03c4x1G\u03c4e2\u03c0\u03c4x1ty = K\u03c4ty, as we wanted to prove. The commutativity with di\ufb00erentiation follows from the translation invariance. In addition, if f \u2208L2 c(T ), then e2\u03c0\u03c4x1f \u2208L2 \u03b4(T ), and so D2K\u03c4f = e\u22122\u03c0\u03c4x1(e2\u03c0\u03c4x1D2e\u22122\u03c0\u03c4x1)G\u03c4(e2\u03c0\u03c4x1f) = e\u22122\u03c0\u03c4x1(\u2206\u03c4G\u03c4)(e2\u03c0\u03c4x1f) = f. If f \u2208H2 c (T ), then D2f \u2208L2 c(T ), and the commutativity with di\ufb00erentiation yields that K\u03c4D2f = D2K\u03c4f = f. Finally, for f \u2208C\u221e c (T ) we have that e2\u03c0\u03c4x1f \u2208C\u221e c (T ) \u2286S(T ), and so K\u03c4f(x) = e\u22122\u03c0\u03c4x1G\u03c4(e2\u03c0\u03c4x1f)(x) = e\u22122\u03c0\u03c4x1 Z T g\u03c4(x1 \u2212y1, x\u2032 \u2212y\u2032)e2\u03c0\u03c4y1f(y1, y\u2032)dy1dy\u2032 = Z T e\u22122\u03c0\u03c4(x1\u2212y1)g\u03c4(x1 \u2212y1, x\u2032 \u2212y\u2032)f(y1, y\u2032)dy1dy\u2032 = Z T \u0393\u03c4(x, y)f(y)dy, (45) as we wanted to prove. The purpose of what follows is to show that the mapping properties of K\u03c4 from L2 c(T ) into H2 loc(T ) can be extended to H\u22121 c (T ) into H1 loc(T ). To show this, let us consider the operators K0 0f := \u03930 0 \u2217f, K\u2217 0f := \u0393\u2217 0 \u2217f, R\u03c4f := H\u03c4 \u2217f, with the above de\ufb01nitions for \u03930 0, \u0393\u2217 0, and H\u03c4. Given that \u0393\u03c4 = \u03930 0 + \u0393\u2217 0 + H\u03c4 we have that K\u03c4 = K0 0 + K\u2217 0 + R\u03c4, and so it su\ufb03ces to show that each of these maps H\u22121 c (T ) into H1 loc(T ). Proposition 5.6. The operator K0 0 maps H\u22121 c (T ) into H1 loc(T ). 41 \fProof. Let \u03d5 \u2208H\u22121 c (T ), so that \u03d50 \u2208H\u22121 c ( R) \u2286H\u22121 c (T ). Since |x1| does not depend on x\u2032, we have that |x1| \u2217\u03d5 = |x1| \u2217\u03d50, and it remains to show that |x1| \u2217\u03d50 \u2208H1 loc( R). Let supp(\u03d50) \u2286[\u2212L, L] and let \u03c6 \u2208C\u221e c ( R) be such that \u03c6 \u22611 on [\u2212L, L] and \u03c6 \u22610 outside of [\u22122L, 2L]. 
Let us show that for \ufb01xed x1 \u2208 R, \u03c6|x1 \u2212\u00b7| \u2208H1( R). Indeed, by Leibniz\u2019 rule we obtain \u2225\u03c6|x1 \u2212\u00b7|\u2225H1(R) \u2272\u2225\u03c6\u2225C1(R)\u2225|x1 \u2212\u00b7|\u2225H1(\u22122L,2L) \u2272L + |x1|, where we allow the constants of the inequality to depend on L and \u03c6. Let \u03a6(x1) := |x1| \u2217\u03d50. Then, |\u03a6(x1)| = |\u27e8\u03d50, |x1 \u2212\u00b7|\u27e9| = |\u27e8\u03c6\u03d50, |x1 \u2212\u00b7|\u27e9| = |\u27e8\u03d50, \u03c6|x1 \u2212\u00b7|\u27e9| \u2264\u2225\u03d50\u2225H\u22121(R)\u2225\u03c6|x1 \u2212\u00b7|\u2225H1(R) \u2272\u2225\u03d50\u2225H\u22121(R)(L + |x1|). This shows that \u03a6 \u2208L\u221e loc( R) \u2286L2 loc( R), which implies that \u03a6\u2032 \u2208H\u22121 loc( R). Moreover, because |x1|\u2032\u2032 = 2\u03b4 R(0), then we have that \u03a6\u2032\u2032 = 2\u03d50 \u2208H\u22121( R). Let \u03b7 \u2208C\u221e c ( R) and \u03c1 = \u03b7\u03a6, so that \u03c1 \u2208L2 c( R). We have to show that \u03c1 \u2208H1( R), which is equivalent to showing that \u27e8\u03be\u27e9b \u03c1(\u03be) \u2208L2( R). Since \u03c1 \u2208L2( R), we have \u27e8\u03be\u27e9b \u03c1(\u03be) \u2208L2(|\u03be| \u22641). Moreover, because \u03a6 \u2208L2 loc( R) and \u03a6\u2032, \u03a6\u2032\u2032 \u2208H\u22121 loc( R), then \u03c1\u2032\u2032 = \u03b7\u2032\u2032\u03a6 + 2\u03b7\u2032\u03a6\u2032 + \u03b7\u03a6\u2032\u2032 \u2208H\u22121( R), Therefore, \u27e8\u03be\u27e9\u22121\u03be2b \u03c1(\u03be) \u2208L2( R), from where we conclude that \u27e8\u03be\u27e9b \u03c1(\u03be) \u2208L2(|\u03be| \u22651). This proves the result. Proposition 5.7. The operator K\u2217 0 : Hs(T ) \u2192Hs+2(T ) is bounded for any s \u2208 R. Proof. For \u03d5 \u2208Hs(T ), let us consider the Fourier series \u03d5(x1, x\u2032) = P k\u2208Zd \u03d5k(x1)ek(x\u2032). From (43) we have that \\ (K\u2217 0\u03d5)0(\u03be) = \\ (\u0393\u2217 0 \u2217\u03d5)0(\u03be) = 0, and for k \u0338= 0 we have | \\ (K\u2217 0\u03d5)k(\u03be)| = | \\ (\u0393\u2217 0 \u2217\u03d5)k(\u03be)| = | d \u0393\u2217 0,k(\u03be)||c \u03d5k(\u03be)| = 1 \u03be2 + |k|2 |c \u03d5k(\u03be)| \u2272\u27e8\u03be, k\u27e9\u22122|c \u03d5k(\u03be)|, where we used in the last step that |k| \u22651 for all k \u0338= 0. Therefore we conclude that \u2225K\u2217 0\u03d5\u22252 Hs+2(T ) = X k\u0338=0 Z R \u27e8\u03be, k\u27e92s+4| \\ (K\u2217 0\u03d5)k(\u03be)|2d\u03be \u2272 X k\u0338=0 Z R \u27e8\u03be, k\u27e92s|c \u03d5k(\u03be)|2d\u03be = \u2225\u03d5\u22252 Hs(T ). Proposition 5.8. The operator K0 := K0 0 + K\u2217 0 maps L2 c(T ) into H2 loc(T ) and H\u22121 c (T ) into H1 loc(T ), satis\ufb01es D2K0 = I on L2 c(T ) and K0D2 = I on H2 c (T ), and is symmetric, i.e. \u27e8K0f, g\u27e9= \u27e8K0g, f\u27e9 for any f, g \u2208H\u22121 c (T ). Proof. The operator R\u03c4 maps L2 c(T ) into C\u221e(T ), because the kernel H\u03c4 is a smooth function. Therefore, K0 = K\u03c4 \u2212R\u03c4 also maps L2 c(T ) into H2 loc(T ). Moreover, we have that K0 0 and K\u2217 0 map H\u22121 c (T ) into H1 loc(T ) from Proposition 5.6 and Proposition 5.7, and therefore so does K0. The identities with the Laplacian follow from those of Proposition 5.5, since the kernel of R\u03c4 is a harmonic function. Finally, the symmetry of the operator follows from the symmetry of the kernel \u03930 := \u03930 0 + \u0393\u2217 0. Proposition 5.9. The operator K\u03c4 maps H\u22121 c (T ) into H1 loc(T ). Proof. Recall that K\u03c4 = K0 + R\u03c4. 
The result follows from Proposition 5.8 and the fact that R\u03c4 maps H\u22121 c (T ) into C\u221e(T ). 5.1.2 \u03c4-dependent single layer potential Recall that we have the boundedness of tr : Hs(T ) \u2192Hs\u22121/2(\u2202M), for s > 1/2. In particular we have tr : H1(T ) \u2192H1/2(\u2202M) and its adjoint tr\u2217: H\u22121/2(\u2202M) \u2192H\u22121 c (T ). The results from Proposition 5.8 and Proposition 5.9 allow for the following de\ufb01nition. De\ufb01nition 5.10. De\ufb01ne the single layer operator S0 := K0tr\u2217: H\u22121/2(\u2202M) \u2192H1 loc(T ). Similarly, for |\u03c4| \u2265\u03c40, \u03c4 2 / \u2208Spec(\u2212\u2206g0), we de\ufb01ne the \u03c4-dependent single layer operator S\u03c4 := K\u03c4tr\u2217: H\u22121/2(\u2202M) \u2192H1 loc(T ). 42 \fProposition 5.11. Let S denote either of the single layer operators S0 or S\u03c4 from De\ufb01nition 5.10, and let \u0393 and K denote either of \u03930 and K0 or \u0393\u03c4 and K\u03c4, as it corresponds. For \u03d5 \u2208H\u22121/2(\u2202M), the single layer potential S\u03d5 \u2208H1 loc(T ) satis\ufb01es the following properties: a). for x / \u2208\u2202M we have the integral representation S\u03d5(x) = \u27e8\u03d5, tr(\u0393(x, \u00b7))\u27e9, b). S\u03d5 is harmonic in M\u00b1, c). S\u03d5 has no jump at the boundary, tr+(S\u03d5) = tr\u2212(S\u03d5), and therefore has a well-de\ufb01ned trace, d). the normal derivatives of S\u03d5 satisfy that \u2202\u2212 \u03bd S\u03d5 \u2212\u2202+ \u03bd S\u03d5 = 4\u03c02\u03d5 on \u2202M, e). if \u03d5 \u2208H1/2(\u2202M), then S\u03d5|M \u2208H2(M), S\u03d5|M+ has an extension in H2 loc(T ), and tr \u25e6S maps H1/2(\u2202M) into H3/2(\u2202M). Proof. Let \u03d5 \u2208H\u22121/2(\u2202M). For a \ufb01xed x / \u2208\u2202M there exists an open neighborhood N \u2286T , such that \u2202M \u2286N and x / \u2208N. From Proposition 5.3 and the fact that H\u03c4 is smooth we have that \u0393(x, \u00b7) is smooth in N and so tr(\u0393(x, \u00b7)) \u2208H1/2(\u2202M). Therefore, \u27e8\u03d5, tr(\u0393(x, \u00b7))\u27e9= \u27e8tr\u2217\u03d5, \u0393(x, \u00b7)\u27e9= \u0393 \u2217tr\u2217\u03d5(x) = Ktr\u2217\u03d5(x) = S\u03d5(x). The harmonicity of S\u03d5 in M\u00b1 follows from the previous result as \u0393(\u00b7, y) is harmonic in M\u00b1 for any y \u2208\u2202M. The existence of a well-de\ufb01ned trace follows from the fact that S\u03d5 \u2208H1 loc(T ). Given that S\u03d5 \u2208H1 loc(T ) is harmonic in M\u00b1, there are well-de\ufb01ned normal derivatives as elements of H\u22121/2(\u2202M). Moreover, K\u03c4 \u2212K0 = R\u03c4 maps H\u22121 c (T ) into C\u221e(T ), so it su\ufb03ces to show the jump condition for S = S0. Let g \u2208H3/2(\u2202M) and let v \u2208H2 c (T ) be some function extending g. The de\ufb01nition of normal derivatives (5), integration by parts, and Proposition 5.8 give that \u27e8(\u2202\u2212 \u03bd \u2212\u2202+ \u03bd )S0\u03d5, g\u27e9= 4\u03c02 Z T \u2212DS0\u03d5 \u00b7 Dv = 4\u03c02 Z T S0\u03d5D2v = 4\u03c02\u27e8K0tr\u2217\u03d5, D2v\u27e9= 4\u03c02\u27e8tr\u2217\u03d5, K0D2v\u27e9= 4\u03c02\u27e8tr\u2217\u03d5, v\u27e9= 4\u03c02\u27e8\u03d5, g\u27e9. The density of H3/2(\u2202M) in H1/2(\u2202M) implies the jump condition of the normal derivatives. Finally, as in [9], we invoke the transmission property from [12] to prove the higher regularity properties of the single layer potential. 
Namely, the harmonicity of S\u03d5|M\u00b1 and the jump conditions at the boundary give that if \u03d5 \u2208H1/2(\u2202M), then S\u03d5|M\u00b1 is in H2(M\u00b1 \u2229N), for some neighborhood N \u2286T of \u2202M. The interior regularity of harmonic functions gives that S\u03d5|M \u2208H2(M) and S\u03d5|M+ \u2208H2 loc(M+), and the boundary regularity allows to construct the extension of S\u03d5|M+ to H2 loc(T ). Remark. We will not need this, but the map tr\u25e6S\u03c4 : Hs(\u2202M) \u2192Hs+1(\u2202M) is bounded for s \u2265\u22121/2. 5.2 Equivalent formulations and boundary characterization For the rest of the section we assume that 0 is not an eigenvalue of the magnetic Schr\u00a8 odinger operator HV,W on M. Let |\u03c4| \u2265\u03c40, \u03c4 2 / \u2208Spec(\u2212\u2206g0) as in Theorem 1.2, and let h \u2208H2 loc(T ) be a harmonic function. Theorem 5.12. All the following problems have a unique solution: (DE): u = h + e\u22122\u03c0\u03c4x1r, with r \u2208H2 \u2212\u03b4(T ), solves the di\ufb00erential equation HV,W u = 0 in T, (IE): u \u2208H2 loc(T ) solves the integral equation u + K\u03c4Xu = h in T, (EP): e u \u2208H2 loc(M+) is harmonic, has an extension in H2 loc(T ) of the form h + e\u22122\u03c0\u03c4x1r with r \u2208H2 \u2212\u03b4(T ), and \u2202+ \u03bd e u = 4\u03c02\u039bV,W (tr+(e u)), (BE): f \u2208H3/2(\u2202M) solves the boundary equation (I + tr \u25e6S\u03c4(\u039bV,W \u2212\u039b0,0))f = tr(h). 43 \fThese problems are equivalent in the following sense: (DE) \u21d4(IE): u solves (DE) if and only if u solves (IE), (DE) \u21d4(EP): if u solves (DE), then u|M+ solves (EP), and if e u solves (EP), then there exists an extension u to T that solves (DE), (DE) \u21d2(BE): if u solves (DE), then tr(u) solves (BE), (BE) \u21d2(EP): if f solves (BE), then there is an extension e u to M+ that solves (EP). Proof. From Theorem 1.2 we know that (DE) has a unique solution. It remains to show the equivalence between the existence of solutions, as the equivalence of the uniqueness follows from this. We start proving that the problems (DE) and (IE) are equivalent. Assume that a solution to the equation HV,W u = 0, has the form u = h + e\u22122\u03c0\u03c4x1r, with r \u2208H2 \u2212\u03b4(T ). Then u \u2208H2 loc(T ) and we see that r solves \u2206\u03c4r = e2\u03c0\u03c4x1D2(u \u2212h) = e2\u03c0\u03c4x1D2u = \u2212e2\u03c0\u03c4x1Xu, and e2\u03c0\u03c4x1Xu \u2208L2 c(M). Since r \u2208H2 \u2212\u03b4(T ), the uniqueness from Theorem 1.2 implies that r = \u2212G\u03c4(e2\u03c0\u03c4x1Xu), and so h = u\u2212e\u22122\u03c0\u03c4x1r = u+K\u03c4Xu. Conversely, if u \u2208H2 loc(T ) satis\ufb01es u+K\u03c4Xu = h, then u = h + e\u22122\u03c0\u03c4x1r with r := \u2212G\u03c4(e2\u03c0\u03c4x1Xu) \u2208H2 \u2212\u03b4(T ). This gives that e2\u03c0\u03c4x1D2u = e2\u03c0\u03c4x1D2(u \u2212h) = \u2206\u03c4r = \u2212e2\u03c0\u03c4x1Xu, and thus HV,W u = 0. Now we show that the problems (DE) and (EP) are equivalent. Assume that a solution to the equation HV,W u = 0, has the form u = h+e\u22122\u03c0\u03c4x1r, with r \u2208H2 \u2212\u03b4(T ), so that u \u2208H2 loc(T ). If we let e u := u|M+, then e u \u2208H2 loc(M+), and, given that V, W are supported in M, we have that e u is harmonic in M+. 
If g \u2208H1/2(\u2202M) and v \u2208H1 c (T ) is some function extending g, then from the de\ufb01nitions (5) and (2) we have \u27e8\u2202+ \u03bd e u, g\u27e9= \u22124\u03c02 Z M+ \u2212De u \u00b7 Dv, \u27e8\u039bV,W (tr\u2212(u)), g\u27e9= Z M \u2212Du \u00b7 Dv + V \u00b7 (vDu \u2212uDv) + (V 2 + W)uv. Since u is a solution to HV,W u = 0 in T , and V, W are supported in M, we obtain that \u2212 Z M+ \u2212De u \u00b7 Dv = \u2212 Z M+ \u2212Du \u00b7 Dv = Z M \u2212Du \u00b7 Dv + V \u00b7 (vDu \u2212uDv) + (V 2 + W)uv which gives that \u2202+ \u03bd e u = 4\u03c02\u039bV,W(tr\u2212(u)). Thus we conclude that \u2202+ \u03bd e u = 4\u03c02\u039bV,W (tr\u2212(u)) = 4\u03c02\u039bV,W (tr+(u)) = 4\u03c02\u039bV,W (tr+(e u)). Conversely, suppose that e u \u2208H2 loc(M+) is harmonic in M+, satis\ufb01es \u2202+ \u03bd e u = 4\u03c02\u039bV,W (tr+(e u)), and is such that e u has an extension in H2 loc(T ) of the form h + e\u22122\u03c0\u03c4x1r on M+ with r \u2208H2 \u2212\u03b4(T ). We want to extend e u to the interior of M in order to solve HV,W u = 0 in T . Let u = DV,W(tr+(e u)) \u2208H2(M), i.e. the solution of the problem \u001a HV,W u = 0 in M\u2212, u = tr+(e u) on \u2202M. and de\ufb01ne u|M+ = e u \u2208H2 loc(M+) and u|M = u \u2208H2(M). Then we have tr+(u) = tr+(e u) = tr\u2212(u) = tr\u2212(u), \u2202+ \u03bd u = \u2202+ \u03bd e u = 4\u03c02\u039bV,W (tr+(e u)) = 4\u03c02\u039bV,W (tr\u2212(u)) = \u2202\u2212 \u03bd u = \u2202\u2212 \u03bd u, 44 \fwhere we used the result from Proposition 2.8. This implies that u is in H2 loc(T ) and solves HV,W u = 0. Moreover, u = h + e\u22122\u03c0\u03c4x1r in T , with r \u2208H2 \u2212\u03b4(T ), where r|M+ = r|M+ and r|M = e2\u03c0\u03c4x1(u \u2212h)|M. Now we prove that (DE) implies (BE). Assume that a solution to the equation HV,W u = 0 has the form u = h+e\u22122\u03c0\u03c4x1r, with r \u2208H2 \u2212\u03b4(T ), so that u \u2208H2 loc(T ) and tr(u) \u2208H3/2(\u2202M). The equivalence between (DE) and (IE) yields that u + K\u03c4Xu = h and so tr(u) + trK\u03c4Xu = tr(h). Taking (exterior) traces in Proposition 5.13 below, gives that trK\u03c4Xu = tr \u25e6S\u03c4(\u039bV,W \u2212\u039b0,0)tr(u), which implies that tr(u) solves (BE). Finally, we show that (BE) implies (EP). Suppose f \u2208H3/2(\u2202M) solves the boundary equation (I+tr\u25e6 S\u03c4(\u039bV,W \u2212\u039b0,0))f = tr(h). Motivated by Proposition 5.13 below, we de\ufb01ne e u := h\u2212S\u03c4(\u039bV,W \u2212\u039b0,0)f. The boundary equation gives that tr(e u) = f. From Proposition 5.11 we know that the restrictions e u|M\u00b1 are in H2 loc(M\u00b1) and are harmonic in M\u00b1, respectively. Since e u|M is harmonic in M and tr(e u) = f, then we have \u2202\u2212 \u03bd e u = 4\u03c02\u039b0,0f. Given that h \u2208H2 loc(T ), the de\ufb01nition of e u, and the jump condition of the normal derivatives from Proposition 5.11 we obtain that \u2202+ \u03bd e u = \u2202\u2212 \u03bd e u + 4\u03c02(\u039bV,W \u2212\u039b0,0)f = 4\u03c02\u039bV,W f. From Proposition 5.11 we know that e u|M+ has an extension in H2 loc(T ). All that remains is to show that e u|M+ has an extension in H2 loc(T ) of the form h+e\u22122\u03c0\u03c4x1r, with r \u2208H2 \u2212\u03b4(T ). Given that e u := h\u2212S\u03c4\u03c6 with \u03c6 \u2208H1/2(\u2202M), all we have to show is that e2\u03c0\u03c4x1S\u03c4\u03c6|M+ has an extension in H2 \u2212\u03b4(T ). 
From Proposition 5.11 we know that it has an extension in H2 loc(T ), so it su\ufb03ces to show that e2\u03c0\u03c4x1S\u03c4\u03c6 is in H2 \u2212\u03b4(|x1| \u2265L) for some large L. From the integral representation in Proposition 5.11 we see that e2\u03c0\u03c4x1S\u03c4\u03c6(x) = e2\u03c0\u03c4x1\u27e8\u03c6, tr(\u0393\u03c4(x, \u00b7))\u27e9= \u27e8e2\u03c0\u03c4y1\u03c6, tr(g\u03c4(x, \u00b7))\u27e9, where we have used that \u0393\u03c4(x, y) = e\u22122\u03c0\u03c4(x1\u2212y1)g\u03c4(x, y). From Proposition 5.1 we have that the restrictions {tr(D\u03b1 xg\u03c4(x, \u00b7))} are uniformly bounded for |x1| \u2265L with L large. This implies that D\u03b1(e2\u03c0\u03c4x1S\u03c4\u03c6) is uniformly bounded for |x1| \u2265L, and we conclude that e2\u03c0\u03c4x1S\u03c4\u03c6 \u2208H2 \u2212\u03b4(|x1| \u2265L), as desired. The following identity, which follows by integration by parts, is at the core of the results of this section, and we consider it interesting in its own. Proposition 5.13. Let u \u2208H2(M) satisfy HV,W u = 0 in M. Let J : H2(M) \u0589 H1(M) be the compact embedding, and let E : L2(M) \u2192L2(T ) denote the extension by zero, so that EXJu \u2208L2 c(T ). For x \u2208M+ we have the identity K\u03c4(EXJu)(x) = S\u03c4[(\u039bV,W \u2212\u039b0,0)tr\u2212(u)](x). (46) Proof. Let x \u2208M+ be \ufb01xed, so that \u0393\u03c4(x, \u00b7) is smooth and harmonic in a neighborhood M. From the integral representation (45) and the fact that EXJu is supported in M we get that K\u03c4(EXJu)(x) = Z T \u0393\u03c4(x, \u00b7)EXJu = Z M \u0393\u03c4(x, \u00b7)Xu = Z M \u0393\u03c4(x, \u00b7)(2V \u00b7Du+(V 2+D\u00b7V +W)u). (47) From the integral representation in Proposition 5.11 and the de\ufb01nition of the DN map (2) we have that S\u03c4(\u039bV,W tr\u2212(u))(x) = \u27e8\u039bV,W tr\u2212(u), tr(\u0393\u03c4(x, \u00b7))\u27e9 = Z M \u2212Du \u00b7 D\u0393\u03c4(x, \u00b7) + V \u00b7 (\u0393\u03c4(x, \u00b7)Du \u2212uD\u0393\u03c4(x, \u00b7)) + (V 2 + W)u\u0393\u03c4(x, \u00b7). From the integral representation in Proposition 5.11, the harmonicity of \u0393\u03c4(x, \u00b7) in M, the de\ufb01nition (6) of the DN map \u039b0,0 and its symmetry we have that S\u03c4(\u039b0,0tr\u2212(u))(x) = \u27e8\u039b0,0tr\u2212(u), tr(\u0393\u03c4(x, \u00b7))\u27e9= \u27e8\u039b0,0tr(\u0393\u03c4(x, \u00b7)), tr\u2212(u)\u27e9= Z M \u2212D\u0393\u03c4(x, \u00b7) \u00b7 Du. 45 \fTherefore, we obtain S\u03c4[(\u039bV,W \u2212\u039b0,0)tr\u2212(u)](x) = Z M V \u00b7 (\u0393\u03c4(x, \u00b7)Du \u2212uD\u0393\u03c4(x, \u00b7)) + (V 2 + W)u\u0393\u03c4(x, \u00b7). (48) From Proposition 2.7 we have that Z M \u0393\u03c4(x, \u00b7)(V \u00b7 Du + (D \u00b7 V )u) + V \u00b7 (uD\u0393\u03c4(x, \u00b7)) = 0, which implies the equality of (47) and (48) as we wanted. Proposition 5.14. The operator tr \u25e6S\u03c4(\u039bV,W \u2212\u039b0,0) in H3/2(\u2202M) is compact. Proof. Recall that for f \u2208H3/2(\u2202M) we have DV,W f := u \u2208H2(M) as the solution to the Dirichlet problem \u001a HV,W u = 0 in M\u2212, u = f on \u2202M. Let J : H2(M) \u0589 H1(M) be the compact embedding, and let E : L2(M) \u2192L2(T ) denote the extension by zero. Then we have EXJu \u2208L2 c(T ), and so K\u03c4EXJu \u2208H2 loc(T ). For x \u2208M+ we can rewrite the result from Proposition 5.13 as S\u03c4(\u039bV,W \u2212\u039b0,0)f(x) = K\u03c4EXJu(x). (49) The trace of a single layer potential is well-de\ufb01ned, so we can take traces on both sides of (49) to obtain tr \u25e6S\u03c4(\u039bV,W \u2212\u039b0,0) = trK\u03c4EXJDV,W. 
(50)

To prove the result it suffices to express the right-hand side of (50) as a composition of bounded operators, together with the compact operator $J$. Recall that $K_\tau = e^{-2\pi\tau x_1}G_\tau e^{2\pi\tau x_1}$. All of the following are continuous operators:
\[ D_{V,W} : H^{3/2}(\partial M) \to H^2(M), \quad J : H^2(M) \hookrightarrow H^1(M), \quad X : H^1(M) \to L^2(M), \]
\[ e^{2\pi\tau x_1}E : L^2(M) \to L^2_\delta(T), \quad G_\tau : L^2_\delta(T) \to H^2_{-\delta}(T), \quad \mathrm{tr} \circ e^{-2\pi\tau x_1} : H^2_{-\delta}(T) \to H^{3/2}(\partial M), \]
and this completes the proof.

Corollary 5.15. The operator $I + \mathrm{tr} \circ S_\tau(\Lambda_{V,W} - \Lambda_{0,0})$ in $H^{3/2}(\partial M)$ is continuous and invertible. In particular, the boundary values of the CGO, constructed as $u = h + e^{-2\pi\tau x_1}r$, can be determined by boundary measurements as
\[ \mathrm{tr}(u) = \big(I + \mathrm{tr} \circ S_\tau(\Lambda_{V,W} - \Lambda_{0,0})\big)^{-1}\mathrm{tr}(h). \]

Proof. The uniqueness of the solution to (BE) in Theorem 5.12 implies that the operator $I + \mathrm{tr} \circ S_\tau(\Lambda_{V,W} - \Lambda_{0,0})$ is injective. From Proposition 5.14 and Fredholm's alternative it follows that it is bijective, and therefore invertible by the Open Mapping Theorem. The fact that the boundary values of the CGO are given by the expression above follows from Theorem 5.12.

6 Reconstruction of the magnetic field

As mentioned in the introduction and the previous chapter, the purpose of proving the Carleman estimate of Theorem 1.2 is to use it to construct many special solutions to the equation $H_{V,W}u = 0$, in order to recover the magnetic field $\mathrm{curl}\, V$. In contrast to the previous chapter, we restrict our attention to a particular kind of harmonic function and show that we can find an amplitude, i.e. a correction factor, that gives appropriate estimates for the remainder term. The choice of the special harmonic functions $h = e^{\pm 2\pi|m|x_1}e_m(x')$, and not any arbitrary harmonic function, comes from the fact that $(Dh)^2 = 0$. We elaborate more on this in a remark after the construction in Proposition 6.2. These ideas follow the so-called WKB method, and are presented systematically for more general settings in Sections 2 and 5 in [2]; see also Section 4 in [10] or Sections 2 and 3 in [3]. We proceed analogously to the proof of Lemma 6.1 in [18] in the Euclidean setting. After the amplitude has been constructed, we define an analog of the scattering transform from [13] and [18], and show that the estimates for the remainder term allow us to disregard it, so that from the boundary measurements we are able to recover integrals involving the magnetic potential. After some work, we will show that this allows for the reconstruction of the magnetic field. The exposition here follows closely the method from [18], until the part involving the analog of the scattering transform. The difference between the methods at this point is due to the fact that the integrals contain terms that are real exponentials (as in the Laplace transform), rather than complex exponentials (as in the Fourier transform). This difference seems difficult, if not impossible, to reconcile.

As in the previous chapter, we denote by $X := 2V \cdot D + (V^2 + D \cdot V + W)$ the compactly supported first order differential operator, so that $H_{V,W} = D^2 + X$. For the rest of the chapter, the potentials $V, W$ and the constants $R, \delta$ are fixed. Any quantities involving them, for instance the constants in the inequalities from the previous chapters, will be regarded as constants.
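To make the role of the condition $(Dh)^2 = 0$ concrete before the construction, here is a short verification. It is only a sketch: it assumes the convention $D = (2\pi i)^{-1}\nabla$ and $e_m(x') = e^{2\pi i m\cdot x'}$, which is consistent with the conjugation identity $e^{2\pi\tau x_1}D^2e^{-2\pi\tau x_1} = \Delta_\tau$ used in the proof of Proposition 5.5, but is not spelled out at this point of the text. For $h = e^{-2\pi|m|x_1}e_m(x')$,
\[ D_{x_1}h = \frac{1}{2\pi i}\,\partial_{x_1}h = i|m|\,h, \qquad D_{x'_j}h = \frac{1}{2\pi i}\,\partial_{x'_j}h = m_j\,h, \]
so $Dh = (i|m|, m)\,h$ and therefore
\[ D^2h = \big((i|m|)^2 + |m|^2\big)h = 0, \qquad (Dh)^2 = \big((i|m|)^2 + |m|^2\big)h^2 = 0; \]
for $h = e^{+2\pi|m|x_1}e_m(x')$ one gets $Dh = (-i|m|, m)\,h$ and the same two identities. This is what makes the square $[(i|m|, m) + D]^2$ in the conjugation identity (51) below produce no zeroth-order term of size $|m|^2$, leaving only a first-order term of size $|m|$ for the amplitude $a_m$ to handle.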
6.1 Construction of CGOs A special family of harmonic solutions in T is given by the products e\u00b12\u03c0|m|x1em(x\u2032) for any m \u2208 Zd. These solutions are analogous to the Calder\u00b4 on complex exponential solutions e2\u03c0i\u03b6\u00b7x, where \u03b6 \u2208 Cd and \u03b6 \u00b7 \u03b6 = 0. In our case, \u03b6 \u2208 Cd is replaced by (\u00b1i|m|, m) \u2208i R \u00d7 Zd. We construct the correction terms for these harmonic functions in order to solve the equation HV,W u = 0, and make more explicit the corresponding estimates for the correction terms. Proposition 6.1. Let 1/2 < \u03b4 < 1 and assume that 0 is not an eigenvalue of HV,W in M. Let m \u2208 Zd and let \u03c4 > 0 be such that \u03c42 / \u2208Spec(\u2212\u2206g0). Then there exists a unique rm,\u03c4 \u2208H2 \u2212\u03b4(T ) such that um,\u03c4 := e\u22122\u03c0|m|x1em(x\u2032) + e\u22122\u03c0\u03c4x1rm,\u03c4 satis\ufb01es HV,W um,\u03c4 = 0. Moreover, the correction term satis\ufb01es the estimates \u2225rm,\u03c4\u2225L2 \u2212\u03b4(T ) \u2272e2\u03c0|\u03c4\u2212|m||R\u27e8m\u27e9 \u03c4 , \u2225rm,\u03c4\u2225H1 \u2212\u03b4(T ) \u2272e2\u03c0|\u03c4\u2212|m||R\u27e8m\u27e9. In particular, we obtain \u2225rm,\u03c4\u2225L2 \u2212\u03b4(T ) \u22721 if |\u03c4 \u2212|m|| \u22721. Proof. Given that D2(e\u22122\u03c0|m|x1em(x\u2032)) = 0, we have that um,\u03c4 solves HV,W um,\u03c4 = 0 if and only if rm,\u03c4 solves the equation e2\u03c0\u03c4x1HV,W e\u22122\u03c0\u03c4x1rm,\u03c4 = \u2212e2\u03c0\u03c4x1HV,W (e\u22122\u03c0|m|x1em(x\u2032)) = \u2212e2\u03c0\u03c4x1X(e\u22122\u03c0|m|x1em(x\u2032)). The right-hand side is compactly supported and thus in L2 \u03b4(T ). Therefore, Theorem 1.2 gives the existence and uniqueness of a solution in H2 \u2212\u03b4(T ). Finally, we observe that the right-hand side equals f := \u2212e2\u03c0\u03c4x1X(e\u22122\u03c0|m|x1em(x\u2032)) = \u2212[2V \u00b7 (i|m|, m) + (V 2 + D \u00b7 V + W)]e2\u03c0(\u03c4\u2212|m|)x1em(x\u2032), so we can bound it by \u2225f\u2225L2 \u03b4(T ) \u2272e2\u03c0|\u03c4\u2212|m|||R|\u27e8m\u27e9. The estimate for the correction term rm,\u03c4 follows from Theorem 1.2. The estimates of Proposition 6.1 for the correction term are not sharp enough to allow us to neglect them in a later \u201casymptotic expansion\u201d. In order to improve the estimates for the correction term, we need to modify the harmonic function e\u22122\u03c0|m|x1em(x\u2032) appropriately as we show next. 47 \fProposition 6.2. Let 1/2 < \u03b4 < 1. There exist \u03b5, \u03c3 > 0 such that for any m \u2208 Zd, with |m| su\ufb03ciently large, there is a smooth function am(x1, x\u2032), such that am \u22121 is supported on |x1| \u22642|m|\u03c3, and bm(x1, x\u2032) := e2\u03c0|m|x1HV,W e\u22122\u03c0|m|x1em(x\u2032)am, is supported on |x1| \u22642|m|\u03c3 with \u2225bm\u2225L2 \u03b4(T ) \u2272|m|1\u2212\u03b5. Remark. For the rest of the chapter, the notation am, bm does not represent the Fourier coe\ufb03cients of some functions as in previous chapters. Proof. We compute the conjugated operators e2\u03c0|m|x1De\u22122\u03c0|m|x1em(x\u2032) = em(x\u2032)[(i|m|, m) + D], e2\u03c0|m|x1D2e\u22122\u03c0|m|x1em(x\u2032) = em(x\u2032)[(i|m|, m) + D]2 = em(x\u2032)[2(i|m|, m) \u00b7 D + D2]. Therefore, we have the conjugation identity for operators e2\u03c0|m|x1HV,W e\u22122\u03c0|m|x1em(x\u2032) = em(x\u2032)[2(i|m|, m) \u00b7 (D + V ) + HV,W ]. (51) We could de\ufb01ne am := exp(vm), where vm is the solution of the equation (i|m|, m) \u00b7 (Dvm + V ) = 0. 
This equation can be rewritten as iDx1vm + m |m|Dx\u2032vm = \u2212 \u0012 iF + m |m|G \u0013 . From Theorem 4.3 we know that this equation has a unique solution which decays, and is bounded with bounded derivatives of all orders. The only inconvenient with this is that the term D2vm may not be in L2 \u03b4(T ). Therefore, we are left to rede\ufb01ne am := exp(wm), where wm := vm\u03c8(x1/|m|\u03c3), with \u03c3 > 0 to be determined and \u03c8 a cuto\ufb00function such that \u03c8(t) \u22611 if |t| \u22641 and \u03c8(t) \u22610 if |t| \u22652. With this we have am \u22121 is supported on |x1| \u22642|m|\u03c3 and (D + V )am = am \u0014 (Dvm + V )\u03c8 \u0012 x1 |m|\u03c3 \u0013 + \u0012 1 \u2212\u03c8 \u0012 x1 |m|\u03c3 \u0013\u0013 V + vm 2\u03c0i|m|\u03c3 \u03c8\u2032 \u0012 x1 |m|\u03c3 \u0013 (1, 0, . . . , 0) \u0015 . Because V is compactly supported, we see that the second term vanishes if |m| is su\ufb03ciently large. Moreover, the dot product of (i|m|, m) with \ufb01rst term vanishes (by construction). From this and (51) we are left with bm = em(x\u2032)[2(i|m|, m) \u00b7 (D + V )am + HV,W am] = em(x\u2032) \u0014 am 2i|m|vm 2\u03c0i|m|\u03c3 \u03c8\u2032 \u0012 x1 |m|\u03c3 \u0013 + HV,W am \u0015 . The \ufb01rst term is supported on |m|\u03c3 \u2264|x1| \u22642|m|\u03c3. Using the boundedness of am and the decay estimates for vm from Theorem 4.3, we can bound the L2 \u03b4(T ) norm of the \ufb01rst term by |m| |m|\u03c3 \u0012Z 2|m|\u03c3 |m|\u03c3 1 |x1|2 \u27e8x1\u27e92\u03b4dx1 \u00131/2 \u2272|m| |m|\u03c3 \u00b7 |m|\u03c3(2\u03b4\u22121)/2 = |m|1+\u03c3(2\u03b4\u22123)/2. For the second term, we use that HV,W am = (D2 + X)am. The term Xam represents no problem, as X is compactly supported (in |x1| \u2264R) and am has bounded derivatives of all orders by Theorem 4.3. We are left with D2am = am(D2wm + (Dwm)2), which is supported on |x1| \u22642|m|\u03c3. In addition to the boundedness of am, from Theorem 4.3 we also know that |Dwm|, |D2wm| \u2272\u27e8x1\u27e9\u22121. Therefore we can bound the L2 \u03b4(T ) norms of these terms by \u2225D2am\u2225L2 \u03b4(T ) \u2264\u2225amD2wm\u2225L2 \u03b4(T ) + \u2225am(Dwm)2\u2225L2 \u03b4(T ) \u2272 \u0012Z 2|m|\u03c3 0 1 \u27e8x1\u27e92 \u27e8x1\u27e92\u03b4dx1 \u00131/2 + \u0012Z 2|m|\u03c3 0 1 \u27e8x1\u27e94 \u27e8x1\u27e92\u03b4dx1 \u00131/2 \u2272 \u0012Z 2|m|\u03c3 0 1 \u27e8x1\u27e92 \u27e8x1\u27e92\u03b4dx1 \u00131/2 \u2272|m|\u03c3(2\u03b4\u22121)/2. Taking any \u03c3 > 1 we obtain that \u03c3(2\u03b4 \u22121)/2 > 1 + \u03c3(2\u03b4 \u22123)/2. For instance, if \u03c3 = 2, then we ensure that all these exponents are less than 1, as we wanted to prove. 48 \fRemark. In the setting of Proposition 6.1, the choice am \u22611 gives compact support for bm, but we only obtain \u2225bm\u2225L2 \u03b4(T ) \u2272|m|. Remark. Observe that if h = e\u22122\u03c0|m|x1em(x\u2032), then the condition (Dh)2 = 0 makes the higher order terms in (51) disappear, leaving only to appropriately disregard the next order terms (in this case of order |m|). We use Proposition 6.2 to construct another solution to the equation HV,W u = 0, whose correction term has small norm. We observe that the \u201cmain terms\u201d (e\u22122\u03c0|m|x1em(x\u2032) and e\u22122\u03c0|m|x1em(x\u2032)am) of the two solutions coincide for |x1| \u22652|m|\u03c3, and we later prove that the corrected solutions must coincide. Proposition 6.3. Let 1/2 < \u03b4 < 1, and let \u03b5, \u03c3 > 0 be as in Proposition 6.2. 
Let m \u2208 Zd, with |m| su\ufb03ciently large, and let \u03c4 > 0 such that \u03c42 / \u2208Spec(\u2212\u2206g0). There exists a unique function e rm,\u03c4 \u2208H2 \u2212\u03b4(T ), such that e um,\u03c4 := e\u22122\u03c0|m|x1em(x\u2032)am + e\u22122\u03c0\u03c4x1e rm,\u03c4, satis\ufb01es HV,W e um,\u03c4 = 0. Moreover, the correction term satis\ufb01es the estimates \u2225e rm,\u03c4\u2225L2 \u2212\u03b4(T ) \u2272e4\u03c0|\u03c4\u2212|m|||m|\u03c3|m|1\u2212\u03b5 \u03c4 , \u2225e rm,\u03c4\u2225H1 \u2212\u03b4(T ) \u2272e4\u03c0|\u03c4\u2212|m|||m|\u03c3|m|1\u2212\u03b5. In particular, if |\u03c4 \u2212|m|||m|\u03c3 \u22721, then \u2225e rm,\u03c4\u2225L2 \u2212\u03b4(T ) \u2272|m|\u2212\u03b5, \u2225e rm,\u03c4\u2225H1 \u2212\u03b4(T ) \u2272|m|1\u2212\u03b5. In addition, if K \u2286T is a compact set, then \u2225e2\u03c0(|m|\u2212\u03c4)x1e rm,\u03c4\u2225L2(K) \u2272|m|\u2212\u03b5, \u2225e2\u03c0|m|x1D(e\u22122\u03c0\u03c4x1e rm,\u03c4)\u2225L2(K) \u2272|m|1\u2212\u03b5, where the constant of the inequality may depend on K. Proof. We have that HV,W e um,\u03c4 = 0 if and only if there exists e rm,\u03c4 which solves e2\u03c0\u03c4x1HV,W e\u22122\u03c0\u03c4x1e rm,\u03c4 = \u2212e2\u03c0(\u03c4\u2212|m|)x1bm. The conclusion follows from Theorem 1.2 and Proposition 6.2. Proposition 6.4. The solutions to the equation HV,W u = 0 constructed in Proposition 6.1 and Proposition 6.3 are equal. Proof. We write e um,\u03c4 = e\u22122\u03c0|m|x1em(x\u2032)am + e\u22122\u03c0\u03c4x1e rm,\u03c4 = e\u22122\u03c0|m|x1em(x\u2032) + e\u22122\u03c0\u03c4x1(e rm,\u03c4 + e2\u03c0(\u03c4\u2212|m|)x1em(x\u2032)(am \u22121)). We have that am \u22121 is a smooth bounded function supported on |x1| \u22642|m|\u03c3; in particular, e2\u03c0(\u03c4\u2212|m|)x1em(x\u2032)(am \u22121) \u2208H2 \u2212\u03b4(T ). The fact that HV,W e um,\u03c4 = 0 and the uniqueness from Proposition 6.1 give that we must have e um,\u03c4 = um,\u03c4. 6.2 Transforms and integrals Recall that for the Laplacian H0,0 := D2 in M there is a well-de\ufb01ned Dirichlet-to-Neumann map \u039b0,0. Moreover, this map is symmetric. If u and \u03c6 are solutions to HV,W u = 0 and H0,0\u03c6 = 0, respectively, then we have the integral identities \u27e8\u039bV,W tr(u), tr(\u03c6)\u27e9= Z M \u2212Du \u00b7 D\u03c6 + V \u00b7 (\u03c6Du \u2212uD\u03c6) + (V 2 + W)u\u03c6, 49 \f\u27e8\u039b0,0tr(u), tr(\u03c6)\u27e9= \u27e8\u039b0,0tr(\u03c6), tr(u)\u27e9= Z M \u2212Du \u00b7 D\u03c6, and so we obtain \u27e8(\u039bV,W \u2212\u039b0,0)tr(u), tr(\u03c6)\u27e9= Z M V \u00b7 (\u03c6Du \u2212uD\u03c6) + (V 2 + W)u\u03c6. (52) Let m, n \u2208 Zd, m, n \u0338= 0, be \ufb01xed. Let mN := Nm \u2208 Zd, where N > 0 is a large integer parameter. Observe \ufb01rst, that mN/|mN| = m/|m|. With the notation from the last section, we see from Theorem 4.3 that vmN = vm, as mN/|mN| = m/|m| and both functions are the decaying solutions to the equation iDx1v + m |m| \u00b7 Dx\u2032v = \u2212 \u0012 iF + m |m| \u00b7 G \u0013 . According to the construction in Proposition 6.2, if N is large enough (depending only on R and \u03c3) and |x1| \u2264R, then amN (x1, x\u2032) := exp \u0012 vmN \u03c8 \u0012 x1 |mN|\u03c3 \u0013\u0013 = exp \u0012 vm\u03c8 \u0012 x1 |mN|\u03c3 \u0013\u0013 = exp(vm) =: e am(x1, x\u2032). (53) Let umN,\u03c4 be the solution to HV,W u = 0 constructed in the previous section as the correction of the harmonic function e\u22122\u03c0|mN|x1emN(x\u2032). 
We choose \u03c4 = \u03c4(m, N, \u03c3) to satisfy |\u03c4 \u2212|mN|||mN|\u03c3 \u22721, so we have the last estimates in Proposition 6.3 for the correction e rmN,\u03c4 on the compact set M. For the choice of test function we consider the harmonic function \u03c6mN ,n = e2\u03c0|mN+n|x1e\u2212(mN+n)x\u2032. Using (52) we de\ufb01ne the transform T (m, n, N) := \u27e8(\u039bV,W \u2212\u039b0,0)tr(umN,\u03c4), tr(\u03c6mN ,n)\u27e9 = Z M V \u00b7 (\u03c6mN ,nDumN,\u03c4 \u2212umN,\u03c4D\u03c6mN,n) + (V 2 + W)umN,\u03c4\u03c6mN ,n, From Corollary 5.15 we obtain that the transform T (m, n, N) is determined by the knowledge of M and \u039bV,W . In [13] and [18] this is referred as the scattering transform; that name does not seem appropriate in our setting. Let us look at each term of the previous expression on M \u2286[\u2212R, R] \u00d7 Td. From Proposition 6.3 and (53) we have that umN,\u03c4 = e\u22122\u03c0|mN|x1emN(x\u2032)e am + e\u22122\u03c0\u03c4x1e rmN,\u03c4, DumN,\u03c4 = e\u22122\u03c0|mN|x1emN(x\u2032)e am[(i|mN|, mN) + Dvm] + D(e\u22122\u03c0\u03c4x1e rmN ,\u03c4), where we have used that e am := exp(vm) for the second expression. From Theorem 4.3 and Proposition 6.3 we obtain that umN,\u03c4 = e\u22122\u03c0|mN|x1emN(x\u2032)e am + e\u22122\u03c0|mN|x1R1, DumN,\u03c4 = e\u22122\u03c0|mN|x1emN(x\u2032)e am(i|mN|, mN) + e\u22122\u03c0|mN|x1R2, with \u2225Ri\u2225L2(M) = o(N). We also have D\u03c6mN ,n = (\u2212i|mN + n|, \u2212(mN + n))\u03c6mN ,n. Therefore, \u03c6mN ,nDumN,\u03c4 = e2\u03c0(|mN+n|\u2212|mN|)x1e\u2212n(x\u2032)e am(i|mN|, mN) + e2\u03c0(|mN+n|\u2212|mN|)x1 e R1, umN,\u03c4D\u03c6mN ,n = e2\u03c0(|mN+n|\u2212|mN|)x1e\u2212n(x\u2032)e am(\u2212i|mN + n|, \u2212(mN + n)) + e2\u03c0(|mN+n|\u2212|mN|)x1 e R2, umN,\u03c4\u03c6mN ,n = e2\u03c0(|mN+n|\u2212|mN|)x1e\u2212n(x\u2032)e am + e2\u03c0(|mN+n|\u2212|mN|)x1 e R3, with \u2225e Ri\u2225L2(M) = o(N). Finally, observe that |mN + n| \u2212|mN| = (|mN|2 + 2mN \u00b7 n + |n|2) \u2212|mN|2 |mN + n| + |mN| = N N \u00b7 2m \u00b7 n + |n|2 N |m + n N | + |m| \u2192m \u00b7 n |m| =: \u00b5m,n, 50 \fas N \u2192+\u221e. These computations and the estimates from Theorem 4.3 and Proposition 6.3 give that \u2225\u03c6mN,nDumN,\u03c4 \u2212e2\u03c0\u00b5m,nx1e\u2212n(x\u2032)e am(i|mN|, mN)\u2225L2(M) = o(N), \u2225umN,\u03c4D\u03c6mN ,n + e2\u03c0\u00b5m,nx1e\u2212n(x\u2032)e am(i|mN|, mN)\u2225L2(M) = o(N), \u2225(V 2 + W)umN,\u03c4\u03c6m,n\u2225L2(M) = o(N). Thus, from the knowledge of the transform we are able to obtain the integrals I(m, n) := lim N\u2192+\u221e T (m, n, N) 2N = Z M e2\u03c0\u00b5m,nx1e\u2212n(x\u2032)(i|m|, m) \u00b7 V e am. We regard these integrals as a \u201cmixed non-linear transform\u201d, in the sense that we have Laplace and Fourier transforms in the real and toroidal variables, respectively, and an additional term e am(x1, x\u2032). 6.3 Determination of the Fourier coe\ufb03cients of the magnetic \ufb01eld In order to reconstruct the curl of V , we could try to remove the \u201cnon-linear\u201d term e am from the mixed transform, i.e. to determine the integrals J(m, n) := Z M e2\u03c0\u00b5m,nx1e\u2212n(x\u2032)(i|m|, m) \u00b7 V, and relate them to the integrals I(m, n). These integrals contain real exponentials, instead of only complex exponentials as in [18]. This will turn out in a signi\ufb01cantly di\ufb00erent result. In the appendix, we introduce the necessary notation and prove the following result. Theorem 6.5. 
We have the following cases depending on the sign of the dot product m \u00b7 n: 1. if m \u00b7 n = 0, then J(m, n) = 0, 2. if m \u00b7 n > 0, then J(m, n) = \u221e X j=1 1 j \u0012\u22122\u03c0 |m| \u0013j\u22121 I\u2212 j (m, n) 3. if m \u00b7 n < 0, then J(m, n) = \u221e X j=1 1 j \u0012 2\u03c0 |m| \u0013j\u22121 I+ j (m, n) Moreover, if m \u00b7 n = \u00b11, then J(m, n) = I(m, n). 6.3.1 Relation between the families {I(m, n)} and {J(m, n)} In this subsection will not be concerned with the explicit relations between these two families of integrals, but rather on the existence of such relation. Let [p, q] \u2286 R be any interval containing [\u2212R, R] so that M \u2286[p, q] \u00d7 Td. The condition supp(V ) \u2286M implies that I(m, n) := Z M e2\u03c0\u00b5m,nx1e\u2212n(x\u2032)(i|m|, m) \u00b7 V e am = Z [p,q]\u00d7 Td e2\u03c0\u00b5m,nx1e\u2212n(x\u2032)(i|m|, m) \u00b7 V e am. Recall from (53) that e am := exp(vm) and (i|m|, m)\u00b7(Dvm+V ) = 0, so that (i|m|, m)\u00b7(De am+V e am) = 0. This and the fact that (i|m|, m) \u00b7 D(e2\u03c0\u00b5m,nx1e\u2212n(x\u2032)) = 0 allow us to rewrite I(m, n) = \u2212 Z [p,q]\u00d7 Td e2\u03c0\u00b5m,nx1e\u2212n(x\u2032)(i|m|, m) \u00b7 De am = \u2212 Z [p,q]\u00d7 Td(i|m|, m) \u00b7 D(e2\u03c0\u00b5m,nx1e\u2212n(x\u2032)e am) = \u2212|m| 2\u03c0 \u0012 e2\u03c0\u00b5m,nx1 Z Td e\u2212n(x\u2032)e am(x1, x\u2032)dx\u2032 \u0013\f \f \f \f x1=q x1=p , (54) 51 \fwhere the last equality follows from the Fundamental Theorem of Calculus and the fact that the torus Td has no boundary. Recall that we are interested in determining the integrals J(m, n) := Z M e2\u03c0\u00b5m,nx1e\u2212n(x\u2032)(i|m|, m) \u00b7 V = Z [p,q]\u00d7 Td e2\u03c0\u00b5m,nx1e\u2212n(x\u2032)(i|m|, m) \u00b7 V. Using that (i|m|, m) \u00b7 (Dvm + V ) = 0, we can proceed as before to obtain J(m, n) = \u2212 Z [p,q]\u00d7 Td e2\u03c0\u00b5m,nx1e\u2212n(x\u2032)(i|m|, m) \u00b7 Dvm = \u2212 Z [p,q]\u00d7 Td(i|m|, m) \u00b7 D(e2\u03c0\u00b5m,nx1e\u2212n(x\u2032)vm) = \u2212|m| 2\u03c0 \u0012 e2\u03c0\u00b5m,nx1 Z Td e\u2212n(x\u2032)vm(x1, x\u2032)dx\u2032 \u0013\f \f \f \f x1=q x1=p . (55) Now we show that we can determine the integrals in (55) from the knowledge of the integrals in (54). First, let us observe that these equalities hold for any p, q such that M \u2286[p, q] \u00d7 Td. Therefore, if necessary we may only consider the case when p, q are large. In addition, observe that for determining the integrals in (55) it su\ufb03ces to determine vm(x1, x\u2032) for |x1| large. Moreover, by Theorem 4.3 we have |vm(x1, x\u2032)| \u21920 as |x1| \u2192+\u221e(uniformly in x\u2032), thus the knowledge of e am(x1, x\u2032) = exp(vm(x1, x\u2032)) \u2192 1 for |x1| large and the invertibility of exp(z) near z = 0 are su\ufb03cent to determine vm(x1, x\u2032). More concretely, we can recover vm(x1, x\u2032) by the power series vm(x1, x\u2032) = log(e am(x1, x\u2032)) = \u221e X j=1 (\u22121)j\u22121 j (e am(x1, x\u2032) \u22121)j. Then, the problem reduces to recover e am(x1, x\u2032) for |x1| large from the knowledge of the integrals in (54). Let us consider the Fourier series vm(x1, x\u2032) = X k\u2208Zd vm,k(x1)ek(x\u2032), so that the Fourier coe\ufb03cient vm,k(x1) solves the equation iDx1vm,k + m \u00b7 k |m| vm,k = \u2212 \u0012 iFk + m |m| \u00b7 Gk \u0013 . By Theorem 4.8, the solution vm,k(x1) vanishes in (\u2212\u221e, \u2212R] or [R, +\u221e) depending whether m \u00b7 k \u22650 or m \u00b7 k \u22640, respectively. 
Thus, for |x1| \u2265R we have vm(x1, x\u2032) = \u001a v+ m(x1, x\u2032) := P m\u00b7k>0 vm,k(x1)ek(x\u2032) if x1 \u2265R, v\u2212 m(x1, x\u2032) := P m\u00b7k<0 vm,k(x1)ek(x\u2032) if x1 \u2264\u2212R. (56) From this and (55) we obtain J(m, n) = \u2212|m| 2\u03c0 \u00b7 \uf8f1 \uf8f2 \uf8f3 e2\u03c0\u00b5m,nqvm,n(q) if m \u00b7 n > 0, \u2212e2\u03c0\u00b5m,npvm,n(p) if m \u00b7 n < 0, 0 if m \u00b7 n = 0. (57) Moreover, we also have e am(x1, x\u2032) = exp(vm(x1, x\u2032)) = \u001a e a+ m(x1, x\u2032) := exp(v+ m(x1, x\u2032)) if x1 \u2265R, e a\u2212 m(x1, x\u2032) := exp(v\u2212 m(x1, x\u2032)) if x1 \u2264\u2212R. Let us consider the Fourier series e am(x1, x\u2032) = X k\u2208Zd e am,k(x1)ek(x\u2032). 52 \fGiven the form of v\u00b1 m from (56) and the fact that the exponential is a power series, we conclude that e am(x1, x\u2032) = \u001a e a+ m(x1, x\u2032) = 1 + P m\u00b7k>0 e am,k(x1)ek(x\u2032) if x1 \u2265R, e a\u2212 m(x1, x\u2032) = 1 + P m\u00b7k<0 e am,k(x1)ek(x\u2032) if x1 \u2264\u2212R, (58) This and (54) give that I(m, n) = \u2212|m| 2\u03c0 \u0012 e2\u03c0\u00b5m,nx1 Z Td e\u2212n(x\u2032)e am(x1, x\u2032)dx\u2032 \u0013\f \f \f \f x1=q x1=p = \u2212|m| 2\u03c0 \u00b7 \uf8f1 \uf8f2 \uf8f3 e2\u03c0\u00b5m,nqe am,n(q) if m \u00b7 n > 0, \u2212e2\u03c0\u00b5m,npe am,n(p) if m \u00b7 n < 0, 0 if m \u00b7 n = 0. Recall that this holds for any p, q such that [\u2212R, R] \u2286[p, q]. From this and (58) we conclude that e am(x1, x\u2032) = 1 + 2\u03c0 |m| \u00b7 \u001a \u2212P m\u00b7n>0 I(m, n)e\u22122\u03c0\u00b5m,nx1en(x\u2032) if x1 \u2265R, P m\u00b7n<0 I(m, n)e\u22122\u03c0\u00b5m,nx1en(x\u2032) if x1 \u2264\u2212R, (59) Therefore, we have shown that from the integrals I(m, n) we are able to determine e am(x1, x\u2032) for |x1| \u2265R, which in turn determines vm(x1, x\u2032) for |x1| \u2265R, and so the integrals J(m, n). The explicit dependence of J(m, n) on the family of integrals {I(m, k)} is shown in the appendix. 6.3.2 Curl vectors and Laplace transform Let us show how we can use the integrals J(m, n) to recover the Fourier coe\ufb03cients of curl V . Using that supp(V ) \u2286M, we integrate by parts to compute the mixed transform of the terms involved in the magnetic \ufb01eld curl V , Z [p,q]\u00d7 Tde2\u03c0\u00b5m,nx1e\u2212n(x\u2032)Dx1Gj = Z [p,q]\u00d7 Td e2\u03c0\u00b5m,nx1e\u2212n(x\u2032)i\u00b5m,nGj = Z [p,q]\u00d7 Td e2\u03c0\u00b5m,nx1e\u2212n(x\u2032) \u0012 0, im \u00b7 n |m| \u03b4j \u0013 \u00b7 V, Z [p,q]\u00d7 Tde2\u03c0\u00b5m,nx1e\u2212n(x\u2032)Dx\u2032 jF = Z [p,q]\u00d7 Td e2\u03c0\u00b5m,nx1e\u2212n(x\u2032)njF = Z [p,q]\u00d7 Td e2\u03c0\u00b5m,nx1e\u2212n(x\u2032)(nj, 0) \u00b7 V, Z [p,q]\u00d7 Tde2\u03c0\u00b5m,nx1e\u2212n(x\u2032)Dx\u2032 jGk = Z [p,q]\u00d7 Td e2\u03c0\u00b5m,nx1e\u2212n(x\u2032)njGk = Z [p,q]\u00d7 Td e2\u03c0\u00b5m,nx1e\u2212n(x\u2032)(0, nj\u03b4k) \u00b7 V, where \u03b41, . . . , \u03b4d are the standard basis vectors in Rd. This means that we are interested in determining the integrals Z [p,q]\u00d7 Td e2\u03c0\u00b5m,nx1e\u2212n(x\u2032)(Dx1Gj \u2212Dx\u2032 jF) = Z [p,q]\u00d7 Td e2\u03c0\u00b5m,nx1e\u2212n(x\u2032) \u0012 \u2212nj, im \u00b7 n |m| \u03b4j \u0013 \u00b7 V, Z [p,q]\u00d7 Td e2\u03c0\u00b5m,nx1e\u2212n(x\u2032)(Dx\u2032 jGk \u2212Dx\u2032 kGj) = Z [p,q]\u00d7 Td e2\u03c0\u00b5m,nx1e\u2212n(x\u2032)(0, nj\u03b4k \u2212nk\u03b4j) \u00b7 V. 
Therefore, the problem reduces to obtain the \u201ccurl vectors\u201d \u001a\u0012 \u2212nj, im \u00b7 n |m| \u03b4j \u0013 , (0, nj\u03b4k \u2212nk\u03b4j) \u001b 53 \fas linear combinations of vectors {(i|m|, m)} while keeping n and \u00b5m,n \ufb01xed. Moreover, we would like to have this result for many values of \u00b5. We show in Lemma 6.8, from the following section, that this is indeed the case, so that for \ufb01xed n \u0338= 0, the knowledge of the integrals J(m, n) allows to determine the integrals Z [p,q]\u00d7 Td e2\u03c0\u00b5m,nx1e\u2212n(x\u2032)(Dx1Gj \u2212Dx\u2032 jF) = Z q p e2\u03c0\u00b5m,nx1 \u0012Z Td e\u2212n(x\u2032)(Dx1Gj \u2212Dx\u2032 jF)dx\u2032 \u0013 dx1, Z [p,q]\u00d7 Td e2\u03c0\u00b5m,nx1e\u2212n(x\u2032)(Dx\u2032 jGk \u2212Dx\u2032 kGj) = Z q p e2\u03c0\u00b5m,nx1 \u0012Z Td e\u2212n(x\u2032)(Dx\u2032 jGk \u2212Dx\u2032 kGj)dx\u2032 \u0013 dx1. for a sequence of values of \u00b5m,n = gcd(n)/K converging to 0. For f \u2208C\u221e c ([p, q]), its Laplace transform F(\u00b5) := Z q p e2\u03c0\u00b5x1f(x1)dx1 is an entire function, and therefore its knowledge along a convergent sequence is enough to recover the entire function F over all C. We describe this reconstruction in Theorem 6.16 in the following section. The values of F over the imaginary axis correspond to the Fourier transform of f, and therefore it is possible to reconstruct f from the knowledge of F along a convergent sequence. This completes the reconstruction of the Fourier coe\ufb03cients of curl V . 6.4 Appendices 6.4.1 Explicit relation between the families {I(m, n)} and {J(m, n)} Let us prove Theorem 6.5. We mentioned before that we were interested in computing vm = log e am by a power series; in particular, we are concerned with expressions of the form (e am \u22121)k. In (59) we were able to express the Fourier series of e am in terms of the integrals I(m, n). In particular, for x1 \u2265R we have e am(x1, x\u2032) \u22121 = \u22122\u03c0 |m| X m\u00b7k>0 I(m, k)e\u22122\u03c0\u00b5m,kx1ek(x\u2032). Consider the set T + j (m, k) = {\u03ba = (\u03ba1, . . . , \u03baj) \u2208( Zd)j : m \u00b7 \u03bai > 0, \u03ba1 + . . . + \u03baj = k}. Observe that if \u03ba \u2208T + j (m, k), then m \u00b7 k = m \u00b7 (\u03ba1 + . . . + \u03baj) \u2265gcd(m) + . . . + gcd(m) = j gcd(m). This implies that T + j (m, k) is empty when j > m \u00b7 k/ gcd(m); in particular it is empty when m \u00b7 k < 0. Let us de\ufb01ne I+ j (m, k) = X \u03ba\u2208T + j (m,k) I(m, \u03ba1) \u00b7 I(m, \u03ba2) \u00b7 . . . \u00b7 I(m, \u03baj) By our previous observation, we also see that I+ j (m, k) = 0 if j > m \u00b7 k/ gcd(m). Finally, using that \u00b5m,k is a linear function of k we obtain (e am(x1, x\u2032) \u22121)j = \u0012\u22122\u03c0 |m| \u0013j X m\u00b7k>0 I+ j (m, k)e\u22122\u03c0\u00b5m,kx1ek(x\u2032) This implies that if x1 \u2265R, then vm(x1, x\u2032) = \u221e X j=1 (\u22121)j\u22121 j (e am(x1, x\u2032) \u22121)j = \u2212 X m\u00b7k>0 \u0012 \u221e X j=1 1 j \u0012 2\u03c0 |m| \u0013j I+ j (m, k) \u0013 e\u22122\u03c0\u00b5m,kx1ek(x\u2032). From this and (57) we conclude that if m \u00b7 n > 0, then J(m, n) = \u2212|m| 2\u03c0 e2\u03c0\u00b5m,nqvm,n(q) = \u221e X j=1 1 j \u0012 2\u03c0 |m| \u0013j\u22121 I+ j (m, n) (60) 54 \fSimilarly, if x1 \u2264\u2212R we have e am(x1, x\u2032) \u22121 = 2\u03c0 |m| X m\u00b7k<0 I(m, k)e\u22122\u03c0\u00b5m,kx1ek(x\u2032). We consider T \u2212 j (m, k) = {\u03ba = (\u03ba1, . . . , \u03baj) \u2208( Zd)j : m \u00b7 \u03bai < 0, \u03ba1 + . . . + \u03baj = k}. 
As before, we have that that T \u2212 j (m, k) is empty when j > \u2212m \u00b7 k/ gcd(m); in particular it is empty when m \u00b7 k > 0. Let us de\ufb01ne I\u2212 j (m, k) = X \u03ba\u2208T \u2212 j (m,k) I(m, \u03ba1) \u00b7 I(m, \u03ba2) \u00b7 . . . \u00b7 I(m, \u03baj) By our previous observation, we also see that I\u2212 j (m, k) = 0 if j > \u2212m \u00b7 k/ gcd(m). As before, we obtain (e am(x1, x\u2032) \u22121)j = \u0012 2\u03c0 |m| \u0013j X m\u00b7k<0 I\u2212 j (m, k)e\u22122\u03c0\u00b5m,kx1ek(x\u2032) This implies that if x1 \u2264\u2212R, then vm(x1, x\u2032) = \u221e X j=1 (\u22121)j\u22121 j (e am(x1, x\u2032) \u22121)j = X m\u00b7k<0 \u0012 \u221e X j=1 (\u22121)j\u22121 j \u0012 2\u03c0 |m| \u0013j I\u2212 j (m, k) \u0013 e\u22122\u03c0\u00b5m,kx1ek(x\u2032). From this and (57) we conclude that if m \u00b7 n < 0, then J(m, n) = |m| 2\u03c0 e2\u03c0\u00b5m,npvm,n(p) = \u221e X j=1 1 j \u0012\u22122\u03c0 |m| \u0013j\u22121 I\u2212 j (m, n) (61) It was shown above that I\u00b1 j (m, n) vanish when j > |m \u00b7 n|/ gcd(m), so the sum above is actually a \ufb01nite sum. In particular, if m, n are such that m \u00b7 n = \u00b11, which implies gcd(m) = 1, then we obtain that I\u00b1 j = 0 for j \u22652. Therefore, J(m, n) = I(m, n) if m \u00b7 n = \u00b11. Remark. The relation between these two families of integrals in other problems had already been noted by Eskin\u2013Ralston in [4], and was also used in [18]. In their setting the two families ended up being entirely equal, not as in our problem where this only seems to be true in certain cases. 6.4.2 A linear algebra lemma In the previous subsection we were concerned with determining the curl vectors \u001a\u0012 \u2212nj, im \u00b7 n |m| \u03b4j \u0013 , (0, nj\u03b4k \u2212nk\u03b4j) \u001b as linear combinations of vectors {(i|m|, m)} while keeping n and \u00b5m,n \ufb01xed. We observe that \u0012 \u2212nj, im \u00b7 n |m| \u03b4j \u0013 = i |m|(i|m|nj, (m \u00b7 n)\u03b4j), so we can regard the family of \u201ccurl vectors\u201d as {(i|m|nj, (m \u00b7 n)\u03b4j), (0, nj\u03b4k \u2212nk\u03b4j)}. Let n \u2208 Zd \\ {0}. Consider the set of points U(K) := {(i|m|, m) : m \u2208 Zd, m \u00b7 n = gcd(n), |m| = K}, where gcd(n) denotes the greatest common divisor of all the entries of n. We will show that if d \u22653, then we can construct in\ufb01nitely many K such that linear combinations of elements in U(K) generate all the curl vectors {(iKnj, gcd(n)\u03b4j), (0, ni\u03b4j \u2212nj\u03b4i)}. 55 \fRemark. It may su\ufb03ce to generate each curl vector for in\ufb01nitely many K, but we will show that we can do all of them simultaneously. The curl vectors and the conditions de\ufb01ning U(K) are homogeneous functions of the entries of n, so we can assume without loss of generality that gcd(n) = 1. Moreover, it su\ufb03ces to generate the \ufb01rst family of curl vectors, as we can express (0, ni\u03b4j \u2212nj\u03b4i) = ni(iKnj, \u03b4j) \u2212nj(iKni, \u03b4i). In addition, note that if \u03b4j = \u03b11k1 + . . . + \u03b1NkN, with (i|ki|, ki) \u2208U(K), then nj = \u03b4j \u00b7 n = (\u03b11k1 + . . . + \u03b1NkN) \u00b7 n = \u03b11 + . . . + \u03b1N, and so (iKnj, \u03b4j) = \u03b11(iK, k1) + . . . + \u03b1N(iK, kN). Therefore it su\ufb03ces to construct in\ufb01nitely many K such that the set V (K) := {k \u2208 Zd : k \u00b7 n = 1, |k| = K} has d linearly independent vectors. In what follows, we refer to the rank of a \ufb01nite set of vectors as the dimension of the subspace generated by them. 
We prove this in several steps. Proposition 6.6. Let d \u22653 and let n \u2208 Zd be such that gcd(n) = 1. Then there exist m1, m2 \u2208 Zd such that {m1, m2, n} are linearly independent and m1 \u00b7 n = m2 \u00b7 n = 1, |m1| = |m2|. Proof. Let p \u2208 Zd be such that p \u00b7 n = 1 and is linearly independent with n. Since d \u22653, there exists q \u2208 Zd \\ {0} orthogonal to both n and p. We can de\ufb01ne m1 := p \u2212q and m2 := p + q. With this we have mi \u00b7 n = (p \u00b1 q) \u00b7 n = 1 \u00b1 0 = 1, |mi|2 = |p \u00b1 q|2 = |p|2 \u00b1 2p \u00b7 q + |q|2 = |p|2 + |q|2. Moreover, the span of {m1, m2, n} is the same as the span of {p, q, n}, from where we conclude that these vectors are linearly independent. Remark. A curious observation is that the only vectors n \u2208 Z2 for which there exist m1, m2 \u2208 Z2 such that m1 \u00b7 n = m2 \u00b7 n = 1, |m1| = |m2|, are the eight vectors \u00b1{(1, 0), (0, 1), (1, 1), (1, \u22121)}. This problem appeared at the Olimpiada Iberoamericana de Matem\u00b4 atica Universitaria 2018. Proposition 6.7. Let d \u22653 and let {m1, m2, n} be as in Proposition 6.6. Consider the integers M := |m1|2 = |m2|2, N := |n|2, P = m1 \u00b7 m2. Then the vectors p1 = (NP \u22121)m1 + (1 \u2212MN)m2 + (M \u2212P)n, p2 = (1 \u2212MN)m1 + (NP \u22121)m2 + (M \u2212P)n, satisfy pi \u00b7 mi = pi \u00b7 n = 0 and |p1| = |p2|. Moreover, the rank of the set {m1, m2, p1, p2} is 3. Proof. The computations are direct: pi \u00b7 mi = [(NP \u22121)mi + (1 \u2212MN)mj + (M \u2212P)n] \u00b7 mi = (NP \u22121)M + (1 \u2212MN)P + (M \u2212P) = 0, pi \u00b7 n = [(NP \u22121)mi + (1 \u2212MN)mj + (M \u2212P)n] \u00b7 n = (NP \u22121) + (1 \u2212MN) + (M \u2212P)N = 0, |pi|2 = |(NP \u22121)mi + (1 \u2212MN)mj + (M \u2212P)n|2 = (NP \u22121)2M + (1 \u2212MN)2M + (M \u2212P)2N + 2[(NP \u22121)(1 \u2212MN)P + (NP \u22121)(M \u2212P) + (1 \u2212MN)(M \u2212P)]. To see that the rank is 3, we \ufb01rst observe that (M \u2212P) > 0, as m1 and m2 are linearly independent. This implies that n is contained in the span of {m1, m2, p1, p2}. As {m1, m2, n} are linearly independent the conclusion follows. 56 \fLemma 6.8. Let d \u22653 and let n \u2208 Zd \\ {0}. We can construct in\ufb01nitely many K for which there are d linearly independent vectors in the set V (K) := {k \u2208 Zd : k \u00b7 n = gcd(n), |k| = K}. Proof. We can assume without loss of generality that gcd(n) = 1. Let {m1, m2, p1, p2} be as in Proposition 6.7, and let {q4, . . . , qd} be an orthogonal set of vectors in Zd perpendicular in addition to {m1, m2, n}, so that it is also perpendicular to the set {m1, m2, p1, p2}. For \ufb01xed nonzero \u03b1, \u03b14, . . . , \u03b1d consider the vectors m1 \u00b1 \u03b1p1 \u00b1 \u03b14q4 \u00b1 . . . \u00b1 \u03b1dqd, m2 \u00b1 \u03b1p2 \u00b1 \u03b14q4 \u00b1 . . . \u00b1 \u03b1dqd. Note that the dot product of any of these vectors with n equals 1, as mi \u00b7 n = 1 and pi \u00b7 n = qj \u00b7 n = 0. Moreover, they have the same norm for any choice of signs, since all the terms are orthogonal to each other and |m1| = |m2| and |p1| = |p2|. Therefore, all these vectors belong to the same V (K) for any choice of signs. Linear combinations of these vectors allow to obtain the set {m1, m2, p1, p2, q4, . . . , qd}. The sets {m1, m2, p1, p2} and {q4, . . . , qd} are perpendicular and its combined rank is 3 + (d \u22123) = d. Di\ufb00erent choices of the integers \u03b1, \u03b14, . . . , \u03b1d give the in\ufb01nitely many values of K. 
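The construction above is explicit enough to be checked numerically. The following Python sketch verifies Propositions 6.6 and 6.7 and the conclusion of Lemma 6.8 in the case d = 3. It is only a sanity check, not part of the argument; the example n = (2, 1, 1), the auxiliary vectors p and q, and the value of alpha are hand-picked assumptions.

    # Numerical sanity check of Propositions 6.6-6.7 and Lemma 6.8 (d = 3).
    # Illustrative sketch only; n, p, q are chosen by hand so that p . n = 1
    # and q is orthogonal to both n and p.
    import numpy as np

    n = np.array([2, 1, 1])      # gcd(n) = 1
    p = np.array([1, 0, -1])     # p . n = 1, p not parallel to n
    q = np.array([1, -3, 1])     # q . n = q . p = 0

    m1, m2 = p - q, p + q        # Proposition 6.6: m_i . n = 1, |m1| = |m2|
    assert m1 @ n == m2 @ n == 1 and m1 @ m1 == m2 @ m2

    M, N, P = m1 @ m1, n @ n, m1 @ m2   # Proposition 6.7
    p1 = (N*P - 1)*m1 + (1 - M*N)*m2 + (M - P)*n
    p2 = (1 - M*N)*m1 + (N*P - 1)*m2 + (M - P)*n
    assert p1 @ m1 == p2 @ m2 == p1 @ n == p2 @ n == 0 and p1 @ p1 == p2 @ p2
    assert np.linalg.matrix_rank(np.stack([m1, m2, p1, p2])) == 3

    # Lemma 6.8 (for d = 3 there are no extra q_j's): for a fixed nonzero
    # alpha, the vectors m_i +/- alpha*p_i lie in one set V(K) and span R^3.
    alpha = 1
    family = [m1 + alpha*p1, m1 - alpha*p1, m2 + alpha*p2, m2 - alpha*p2]
    norms = {int(k @ k) for k in family}
    assert all(k @ n == 1 for k in family) and len(norms) == 1
    assert np.linalg.matrix_rank(np.stack(family)) == 3
    print("V(K) with K^2 =", norms.pop(), "contains d = 3 independent vectors")

Different hand-picked choices of p, q, and alpha reproduce the infinitely many admissible values of K described in the proof.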
6.4.3 Reconstruction of an entire function from values along a convergent sequence Suppose F : C \u2192 C is an entire function and {zn} \u2286 C is a known sequence such that zn \u21920 and the sequence {F(zn)} is also known. The Taylor coe\ufb03cients of F can be recovered recursively as follows, F (0)(0) = lim n\u2192+\u221eF(zn), F (m)(0) m! = lim n\u2192+\u221e 1 zm n \u0012 F(zn) \u2212 m\u22121 X k=0 F (k)(0) k! zk n \u0013 , and so F can be reconstructed like this. However, we would like to propose a di\ufb00erent approach based on Newton\u2019s method of divided di\ufb00erences for interpolation polynomials. De\ufb01nition 6.9. Let F (0)(z) := F(z) and de\ufb01ne the divided di\ufb00erences F (n)(z1, . . . , zn+1) := 1 zn \u2212zn+1 (F (n\u22121)(z1, . . . , zn\u22121, zn) \u2212F (n\u22121)(z1, . . . , zn\u22121, zn+1)). Remark. It is clear that divided di\ufb00erences are symmetric with respect to the last two elements, i.e. F (n)(z1, . . . , zn, zn+1) = F (n)(z1, . . . , zn+1, zn). We will not use this, but it is possible to show by the induction that the divided di\ufb00erences are indeed symmetric with respect to all its entries. We can see this in the particular following result. Proposition 6.10. Consider the power function pk(x) := xk. Then, p(n) k (z1, . . . , zn+1) = X |\u03b1|=k\u2212n z\u03b11 1 . . . z\u03b1n+1 n+1 . In particular, p(k) k = 1 and p(n) k = 0 if n > k. The number of monomials in the expression equals the binomial coe\ufb03cient \u0000k n \u0001 . Proof. We prove this by induction. For n = 0 this is true. Then p(n+1) k (z1, . . . , zn+1, zn+2) = X |\u03b1|=k\u2212n z\u03b11 1 . . . z\u03b1n n \u0012z\u03b1n+1 n+1 \u2212z\u03b1n+1 n+2 zn+1 \u2212zn+2 \u0013 = X |\u03b1|=k\u2212n |\u03b2|=\u03b1n+1\u22121 z\u03b11 1 . . . z\u03b1n n z\u03b21 n+1z\u03b22 n+2 = X |\u03b3|=k\u2212n\u22121 z\u03b31 1 . . . z\u03b3n+2 n+2 . 57 \fDe\ufb01nition 6.11. For an entire function F, we de\ufb01ne the n-th derivative majorant by the convergent series |F|(n)(R) := 1 n! \u221e X k=0 |F (n+k)(0)| k! Rk. Proposition 6.12. Let F be an entire function F and zi \u2208 C, |zi| \u2264R. Then the n-th derivative majorant dominates the divided di\ufb00erences, i.e. |F (n)(z1, . . . , zn+1)| \u2264|F|(n)(R). Proof. Let F(z) = P\u221e k=0 akzk. The divided di\ufb00erences are linear operators, so we obtain |F (n)(z1, . . . , zn+1)| \u2264 \u221e X k=0 |ak||p(n) k (z1, . . . , zn+1)| = \u221e X k=0 |an+k||p(n) n+k(z1, . . . , zn+1)|, where we used in the last equality that p(n) k = 0 if k < n from Proposition 6.10. Also from Proposition 6.10 we obtain the bound |p(n) n+k(z1, . . . , zn+1)| \u2264 \u0012n + k n \u0013 Rk. Therefore we conclude that |F (n)(z1, . . . , zn+1)| \u2264 \u221e X k=0 \f \f \f \f F (n+k)(0) (n + k)! \f \f \f \f \u0012n + k n \u0013 Rk = |F|(n)(R). Theorem 6.13. Let F be an entire function and let zi \u21920. Then, the Taylor coe\ufb03cients of F can be recovered by the divided di\ufb00erences: lim m\u2192+\u221eF (n)(zm+1, . . . , zm+n+1) = F (n)(0) n! . Proof. Proceeding as in the previous proof, we can bound |F (n)(zm+1, . . . , zm+n+1) \u2212an| \u2264 \u221e X k=1 |an+k||p(n) n+k(zm+1, . . . , zm+n+1)| \u2264|F|(n)(max{|zm+1|, . . . , |zm+n+1|}) \u2212|F|(n)(0). Taking limits as m \u2192+\u221egives the result. The previous result already allows for the reconstruction of the entire function F from the values along the convergent sequence. 
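Theorem 6.13 can also be illustrated numerically. The sketch below is an illustration added here; the test function F, the sequence z_m = 1/(m+2) and the shifts m are arbitrary choices. It implements the recursion of Definition 6.9 literally and shows the divided differences approaching the Taylor coefficients as m grows.

```python
import math

# Illustration of Theorem 6.13: divided differences of an entire function F
# along a sequence z_m -> 0 converge to its Taylor coefficients.
def F(z):
    return math.exp(z) + 3.0 * z**2        # entire; a_0 = 1, a_1 = 1, a_2 = 3.5, a_3 = 1/6

def divided_difference(f, zs):
    # Literal implementation of the recursion in Definition 6.9
    # (differences are taken in the last two entries).
    if len(zs) == 1:
        return f(zs[0])
    upper = divided_difference(f, zs[:-1])               # F^(n-1)(z_1, ..., z_n)
    lower = divided_difference(f, zs[:-2] + [zs[-1]])    # F^(n-1)(z_1, ..., z_{n-1}, z_{n+1})
    return (upper - lower) / (zs[-2] - zs[-1])

z = [1.0 / (m + 2) for m in range(200)]                  # z_m -> 0, pairwise distinct
exact = [1.0, 1.0, 3.5, 1.0 / 6.0]

for order in range(4):
    for m in (5, 20, 80):
        approx = divided_difference(F, z[m:m + order + 1])
        print(f"n={order}  m={m:3d}  divided difference = {approx:.6f}   a_n = {exact[order]:.6f}")
```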
However, we provide a slightly more explicit reconstruction for F using Newton\u2019s divided di\ufb00erences interpolation polynomials. De\ufb01nition 6.14. Given a function f and z1, . . . , zN, we de\ufb01ne the N-th interpolation polynomial by fN(z; z1, . . . , zN) = N\u22121 X n=0 f (n)(z1, . . . , zn+1) n Y m=1 (z \u2212zm). Proposition 6.15. The interpolation polynomials satisfy fN(zk; z1, . . . , zN) = f(zk) for k = 1, . . . , N. Proof. We prove the result by induction. For the base case we have f1(z) = f(z1). Assume the result is true for N. We have that fN+1(z; w1, . . . , wN, wN+1) = fN(z; w1, . . . , wN) + f (N)(w1, . . . , wN+1) N Y m=1 (z \u2212wm). 58 \fThis and the inductive hypothesis give that for k = 1, . . . , N we have fN+1(wk; w1, . . . , wN, wN+1) = fN(wk; w1, . . . , wN) = f(wk). In addition, directly from the de\ufb01nitions it follows that fN+1(z; z1, . . . , zN, zN+1) = fN+1(z; z1, . . . , zN+1, zN). These two observations imply the result. Theorem 6.16. Let F be an entire function and suppose that zi \u2208 C, zi \u21920. Then, F can be recovered as a limit of the interpolation polynomials: F(z) = lim N\u2192+\u221eFN(z; z1, . . . , zN) = \u221e X n=0 f (n)(z1, . . . , zn+1) n Y m=1 (z \u2212zm). The series converges absolutely and uniformly over compact sets. Proof. Let |zi| \u2264R. From Proposition 6.12 we can bound absolutely the series by \u221e X n=0 \f \f \f \fF (n)(z1, . . . , zn+1) n Y m=1 (z \u2212zm) \f \f \f \f \u2264 \u221e X n=0 |F|(n)(R)(|z| + R)n = \u221e X n=0 \u0012 1 n! \u221e X m=0 |F (m+n)(0)| m! Rm \u0013 (|z| + R)n = \u221e X k=0 |F (k)(0)| k! (|z| + 2R)k, where we used the binomial theorem in the last equality. The right-hand side is a uniformly convergent series over compact sets. It follows from Weierstrass\u2019 test that the convergence of the series \u221e X n=0 f (n)(z1, . . . , zn+1) n Y m=1 (z \u2212zm) is absolute and uniform over compact sets. Since the partial sums are polynomials, in particular entire, then the limit e F(z) must be entire as well. However, from Proposition 6.15 we know that e F(zk) = F(zk). Thus, F and e F are entire and coincide over a convergent sequence, and so F \u2261e F." + }, + { + "url": "http://arxiv.org/abs/0804.4070v1", + "title": "The effects of distributed life cycles on the dynamics of viral infections", + "abstract": "We explore the role of cellular life cycles for viruses and host cells in an\ninfection process. For this purpose, we derive a generalized version of the\nbasic model of virus dynamics (Nowak, M.A., Bangham, C.R.M., 1996. Population\ndynamics of immune responses to persistent viruses. Science 272, 74-79) from a\nmesoscopic description. In its final form the model can be written as a set of\nVolterra integrodifferential equations. We consider the role of age-distributed\ndelays for death times and the intracellular (eclipse) phase. These processes\nare implemented by means of probability distribution functions. The basic\nreproductive ratio $R_0$ of the infection is properly defined in terms of such\ndistributions by using an analysis of the equilibrium states and their\nstability. It is concluded that the introduction of distributed delays can\nstrongly modify both the value of $R_0$ and the predictions for the virus\nloads, so the effects on the infection dynamics are of major importance. We\nalso show how the model presented here can be applied to some simple situations\nwhere direct comparison with experiments is possible. Specifically,\nphage-bacteria interactions are analysed. 
The dynamics of the eclipse phase for\nphages is characterized analytically, which allows us to compare the\nperformance of three different fittings proposed before for the one-step growth\ncurve.", + "authors": "Daniel Campos, Vicen\u00e7 M\u00e9ndez, Sergei Fedotov", + "published": "2008-04-25", + "updated": "2008-04-25", + "primary_cat": "q-bio.PE", + "cats": [ + "q-bio.PE" + ], + "main_content": "Introduction The interactions between viruses and cells in an infection process can be seen as an ecological system within the infected host. The mathematical description of these systems has attracted increasing interest in the last years (Wodarz, 2006), especially concerning the characteristics of the immune response to a viral attack. A decade ago, Nowak and Bangham (1996) presented what has been called thereafter the Basic Model of Virus Dynamics (BMVD). This model has become quite popular among theorists and experimentalists (see Nowak and May (2000) and Perelson (2002) for some understanding reviews). The interplay between the BMVD and the e\ufb00ect of an immune response has proved useful to describe the dynamics of chronic HIV infections (Perelson, 2002). Furthermore, it has provided interesting results regarding topics as the performance of drug therapies (Bonhoe\ufb00er et al., 1997; Wodarz and Nowak, 1999), lymphocyte exhaustion (Wodarz et al., 1998), etc. The BMVD describes the time evolution of non-infected cells (X), infected cells (Y ) and viruses (V ) by the system of equations dX dt = \u03bb \u2212\u03b4X \u2212\u03b2XV dY dt = \u03b2XV \u2212aY dV dt = kY \u2212\u03b2XV \u2212uV. (1) The infection process is governed by the parameter \u03b2, which determines the rate of successful contacts between the target cells and the viruses. Mortality terms for the three species are considered with constant death rates \u03b4, a and u, respectively. The parameter k measures the rate at which virions are released 3 \ffrom a single infected cell. Finally, new target cells are produced by the host at a constant rate \u03bb. Despite the success achieved by the BMVD, it is clear that the model described in (1) is just a \ufb01rst approximation to the real underlying process. Probably the strongest simpli\ufb01cation made in the model is that it assumes that the death rates are exponentially distributed (i.e., mortalities are considered as Markovian random processes) and therefore do not take into account accurately the details of the cellular life cycles. However, delays and structured life cycles are expected to play a very signi\ufb01cant role in the dynamics of viral infections. For example, the infection process involves an intracellular phase of the virus, also known as the eclipse phase, which is not explicitly considered in (1). For this reason, in the recent years some works have explored the e\ufb00ects of constant and distributed delays in the BMVD, also in the case where an immune response is considered. Herz et al., (1996) showed for the \ufb01rst time the importance of delays in order to explain the virus loads observed in HIV patients under drug treatment. This delayed model was later explored from a more formal point of view by Tam (1999). Similar ideas, with di\ufb00erent expressions for the infection term, were considered by Culshaw and Ruan (2000), Fort and M\u00b4 endez (2002) and Li and Wanbiao (1999). The e\ufb00ect of distributed delays was explored for di\ufb00erent models of virus dynamics by Banks et al., (2003), Mittler et al., (1998) and Lloyd (2001). 
Finally, the role of a delayed immune response has been the subject of extensive research. Some examples are Buric et al., (2001), Canabarro et al., (2004), Wang et al., (2007) and the references therein, which focused on the chaotic patterns that can appear in these systems. The papers mentioned above have helped us to understand how delays can modify the cell-virus and virus-immune system dynamics. However, most of those works focused on the case where only one of the processes (usually the intracellular phase) is delayed. So, they did not consider the possibility of different delays for each process, whose combined contributions could modify the dynamical behavior of the system. On the other hand, the introduction of delays into virus dynamics has usually been based on phenomenological (not always rigorous) arguments, without providing a justification of the delayed equations proposed. Only in Banks et al., (2003), Fort and M\u00e9ndez (2002) and Wearing et al., (2005) was a more formal discussion provided. We stress that the implementation of delays into dynamical models is sometimes tricky, as memory effects can lead to the breakdown of hypotheses that are well established for Markovian processes. In fact, there is currently very active research on this subject from the point of view of statistical mechanics (see, for example, Allegrini et al., 2003, Allegrini et al., 2007, Rebenshtok and Barkai, 2007 and the references therein). According to these ideas, a rigorous mathematical approach is necessary to reach an accurate physical description of virus dynamics with delays. Here, we propose a system of Volterra integrodifferential equations which is a generalization of the BMVD. This system of equations is derived from a mesoscopic approach where balance equations for each species (X, Y and V) are considered explicitly. Mesoscopic descriptions such as the one considered here (based on Continuous-Time Random Walk processes) have become quite common tools for the description of physical and biological processes. So far, they have proved useful for the study of heat transport (Emmanuel and Berkowitz, 2007), biological invasions (M\u00e9ndez et al., unpublished), tumor cell growth (Fedotov and Iomin, 2007), solute transport in porous media (Berkowitz et al., 2000), earthquake dynamics (Helmstetter and Sornette, 2002), financial markets (Masoliver et al., 2006) and many others. Here we will explore for the first time their application to the field of virus dynamics. The aim of this paper is thus to use an integrodifferential approach to show how distributed delays can strongly influence the predictions of the BMVD. We find that the value of the basic reproductive ratio R0 and the values of the virus load can change drastically, in accordance with similar conclusions found in Lloyd (2001) from the analysis of the intracellular phase. Furthermore, the advantage of using such a general formalism as the one proposed here is that different situations of interest can be analyzed as particular cases of the model. According to this, we show how our model can be used to fit and characterize the one-step growth (osg) curve observed in phage-bacteria interactions. Three fittings proposed before by different authors are compared. We find that, although the three approaches fit the osg curve reasonably well, their predictions concerning the dynamics of the eclipse phase are slightly different.
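As a point of reference before the generalized model is introduced, the following sketch integrates the memoryless model (1) numerically and compares the long-time solution with its infected equilibrium. It is only an illustration: the parameter values, the initial condition and the use of scipy's solver are arbitrary choices, not taken from any data set.

```python
from scipy.integrate import solve_ivp

# Minimal numerical sketch of the basic model (1). All numbers are illustrative.
lam, delta, beta, a, k, u = 10.0, 0.1, 0.01, 0.5, 20.0, 5.0

def bmvd(t, state):
    X, Y, V = state
    dX = lam - delta * X - beta * X * V
    dY = beta * X * V - a * Y
    dV = k * Y - beta * X * V - u * V
    return [dX, dY, dV]

# Start from the infection-free state X = lam/delta plus a single virion.
sol = solve_ivp(bmvd, (0.0, 400.0), [lam / delta, 0.0, 1.0], rtol=1e-8, atol=1e-10)

# Infected equilibrium of (1); it exists when beta*lam*(k/a - 1) > delta*u.
X_eq = a * u / (beta * (k - a))
V_eq = (lam - delta * X_eq) / (beta * X_eq)
Y_eq = beta * X_eq * V_eq / a
print("numerical  (X, Y, V) at t = 400:", sol.y[:, -1])
print("analytical (X, Y, V) equilibrium:", (X_eq, Y_eq, V_eq))
```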
In the following, we show how a generalized version of the BMVD can be obtained using a mesoscopic description. In Section 2 we present our model, whose formal derivation is given in the Appendix for the sake of clarity. In Section 3 we explore the equilibrium states and their stability, which let us de\ufb01ne the basic reproductive ratio R0. After that, we consider speci\ufb01c situations of special interest in virus dynamics. We consider the e\ufb00ects of distributed delays in the phase eclipse (Section 4) and in the mortalities for cells and viruses (Section 5). We also show how the model derived in Section 2 works in the case of phages-bacteria dynamics (Section 6), and we provide some examples using experimental data extracted from the literature. Finally, the main conclusions 6 \fobtained from our study are summarized in Section 7. 2 The BMVD with distributed delays The model we consider here is depicted in Figure 1. It follows the same scheme as the BMVD but some of the random processes (those indicated by the dotted lines) are governed by their corresponding probability distribution functions (PDF). So that, \u03d5X(t) represents the probability that a target cell X dies at age t, with equivalent de\ufb01nitions for \u03d5Y (t) and \u03d5V (t) for infected cells and viruses. Similarly, the function \u03c6(t) determines the dynamics of the eclipse phase: a cell that becomes infected at time t0 can release \u03c6(t) viruses at time t0 + t. The Volterra integrodi\ufb00erential equations corresponding to the scheme in Figure 1 read dX(t) dt = \u03bb \u2212\u03b2X(t)V (t) \u2212 Z t 0 X(t \u2212t\u2032)\u03a8X(t\u2032)\u2126X(t \u2212t\u2032, t)dt\u2032 dY (t) dt = \u03b2X(t)V (t) \u2212 Z t 0 Y (t \u2212t\u2032)\u03a8Y (t\u2032)dt\u2032 dV (t) dt = \u2212\u03b2X(t)V (t) + Z t 0 \u03b2X(t \u2212t\u2032)V (t \u2212t\u2032)\u03c6(t\u2032)\u03a6Y (t\u2032)dt\u2032 \u2212 Z t 0 V (t \u2212t\u2032)\u03a8V (t\u2032)\u2126V (t \u2212t\u2032, t)dt\u2032. (2) The formal derivation of this model in terms of a mesoscopic description is provided in the Appendix. The functions \u03a8X, \u03a8Y , \u03a8V are de\ufb01ned by their Laplace transforms (we denote the Laplace transform of a function by the brackets [\u00b7]s with the conjugate variable s) [\u03a8X]s \u2261[\u03d5X]s [\u03a6X]s [\u03a8Y ]s \u2261[\u03d5Y ]s [\u03a6Y ]s [\u03a8V ]s \u2261[\u03d5V ]s [\u03a6V ]s , (3) 7 \fwhere \u03a6X(t) \u2261 R \u221e t \u03d5X(t\u2032)dt\u2032 is the survival probability for the cells of age t. Analogous de\ufb01nitions hold for \u03a6Y and \u03a6V . According to (3), the function \u03a8X(t) can be interpreted as the instantaneous death rate for a cell X of age t. Then, the term R t 0 X(t\u2212t\u2032)\u03a8X(t\u2032)\u2126X(t\u2212t\u2032, t)dt\u2032 represents a generalized death term in which age-distributed death rates are considered, and where \u2126X(t \u2212 t\u2032, t\u2032) is the probability that a particle X does not become infected during the time interval (t \u2212t\u2032, t). Similarly, the term R t 0 \u03b2X(t \u2212t\u2032)V (t \u2212t\u2032)\u03c6(t\u2032)\u03a6Y (t\u2032)dt\u2032 represents the release of new virions from those cells that became infected at time t \u2212t\u2032, provided that these cells have survived up to time t. The system of equations (2-3) represents our generalization of the BMVD to the case with distributed delays. An important conclusion from (2) is that the density of infected cells Y does not appear in the equations for X(t) and V (t). 
It means that the formalism introduced here allows us to reduce the BMVD to a 2-species model. We do not need to consider explicitly the density Y (t); the existence of the infected cells is implicitly considered by means of the function \u03a6Y appearing in the equation for V (t). 3 Equilibrium states and their stability The equilibrium states of the model (2) come from the analysis of the \ufb01xed points of the system at t \u2192\u221e. There are two possible equilibrium states: the \ufb01rst one is the trivial, infection-free state, given by (Xeq, Yeq, Veq) = (\u03bb\u03c4 X, 0, 0). (4) where we use \u03c4 i = R \u221e 0 \u03a6i(t)dt to denote the average lifetime of species i, with i = X, Y, V . The second state corresponds to the case of a successful infection 8 \fde\ufb01ned by Xeq Z \u221e 0 e\u2212\u03b2Xeqt\u03a6V (t)dt = \u03bb\u03c4 X R \u221e 0 e\u2212\u03b2\u03bb\u03c4 X t\u03a6V (t)dt R0 Yeq = \u03bb\u03c4 Y \u03b2Veq Z \u221e 0 e\u2212\u03b2Veqt\u03a6X(t)dt Z \u221e 0 e\u2212\u03b2Veqt\u03a6X(t)dt = Xeq \u03bb (5) where the equations (27,28) have been used, and we have de\ufb01ned R0 \u2261\u03b2\u03bb\u03c4 X \u0014Z \u221e 0 e\u2212\u03b2\u03bb\u03c4 X t\u03a6V (t)dt \u0015 \u0014Z \u221e 0 \u03c6(t)\u03a6Y (t)dt \u0015 . (6) As can be seen from (5), it is not possible to give explicit expressions for the equilibrium densities. However, it can be proved that the infected state only has biological meaning (Yeq > 0 and Veq > 0) if R0 > 1. To see this, note that the condition R0 > 1 applied to the \ufb01rst equation of (5) implies Xeq < \u03bb\u03c4 X, which means that the equilibrium density in the infected state is lower than in the trivial state. Using that condition, it follows that the third equation in (5) has necessarily a positive solution for Veq. Hence, R0 can be properly de\ufb01ned as the basic reproductive ratio, which is a key parameter in epidemiology and virus dynamics in order to predict the emergence of an infection (Anderson and May, 1991; Nowak and May, 2000). For R0 < 1 we have that every single virus generates statistically less than one new virus, so a permanent infection is not possible and the infected state does not exist. We also note that the case explored in the present paper, and so the expression (6), is more general than recent estimations for R0 where the possibility of a distributed intracellular period was also taken into account (He\ufb00ernan and Wahl, 2006). We will now explore the stability of the equilibrium states found. For this purpose, we will use the usual linear-stability analysis, so we introduce X(t) = Xeq + \u03b4X(t) and V (t) = Veq + \u03b4V (t). Inserting these de\ufb01nitions into (2) and 9 \flinearizing about the equilibrium state we obtain the following system for the perturbations d\u03b4X(t) dt = \u2212\u03b2Veq\u03b4X(t) \u2212\u03b2Xeq\u03b4V (t) \u2212 Z t 0 \u03b4X(t \u2212t\u2032)\u03a8X(t\u2032)dt\u2032 +\u03b2Xeq Z t 0 \u03b4V (t \u2212t\u2032)\u03a8X(t\u2032)t\u2032e\u2212\u03b2Veqt\u2032dt\u2032 d\u03b4V (t) dt = \u2212\u03b2Xeq\u03b4V (t) \u2212\u03b2Veq\u03b4X(t) + \u03b2Veq Z t 0 \u03b4X(t \u2212t\u2032)\u03c6(t\u2032)\u03a6Y (t\u2032)dt\u2032 +\u03b2Xeq Z t 0 \u03b4V (t \u2212t\u2032)\u03c6(t\u2032)\u03a6Y (t\u2032)dt\u2032 (7) \u2212 Z t 0 \u03b4V (t \u2212t\u2032)\u03a8V (t\u2032)dt\u2032 + \u03b2Veq Z t 0 \u03b4X(t \u2212t\u2032)\u03a8V (t\u2032)t\u2032e\u2212\u03b2Xeqt\u2032dt\u2032. 
Since this system is now linear, we can propose for the perturbations exponential solutions of the form e\u00b5t to get the characteristic equation 0 = \u0010 \u00b5 + \u03b2Xeq + [\u03a8X]\u00b5 \u0011 \u0010 \u00b5 + \u03b2Xeq \u2212\u03b2Xeq [\u03c6\u03a6Y ]\u00b5 + [\u03a8X]\u00b5 \u0011 \u2212\u03b22XeqVeq 1 \u2212d [\u03a8X]\u00b5 d\u00b5 ! 1 \u2212[\u03c6\u03a6Y ]\u00b5 \u2212d [\u03a8V ]\u00b5 d\u00b5 ! , (8) where we de\ufb01ne [f]\u00b5 \u2261 R e\u2212\u00b5tf(t)dt in accordance with the notation used above for the Laplace transform. a) Infection-free equilibrium state First we analyze the stability of the trivial state corresponding to the absence of viruses. Introducing (4) into (8) we obtain 1 = \u03b2Xeq [\u03a6V ]\u00b5+\u03b2Xeq [\u03c6\u03a6Y ]\u00b5 . (9) From (9), it is easy to \ufb01nd the necessary condition for the transition from stability to instability. In the BMVD it is known that the condition R0 \u22771 determines the stability of the infected-free state. From (9), it is possible to prove that, in general, this condition holds for any choice of the PDF\u2019s. The right hand side in that equation is a monotonically decreasing positive function 10 \fof \u00b5 and takes the value R0 at \u00b5 = 0. Then, if R0 > 1 both curves always intersect at a single point for a positive value of \u00b5, which is nothing but the su\ufb03cient condition for the state to be unstable, independently of the PDF\u2019s considered. If R0 < 1 both curves always intersect at a single point but now for a negative value of \u00b5. In this case the infection-free equilibrium state is linearly stable and infection dies out. b) Infected equilibrium state Using (5), the characteristic equation (8) for the infected state becomes extremely complicated to treat, and it makes impossible to determine analytically the stability of the infected state. However, we can still deduce the behavior of this state by imposing some conditions to prevent the system from behaving unrealistically. First, we mention again that the infected state does not exist for R0 < 1, so we only need to study the case R0 > 1. Second, we can rewrite the \ufb01rst equation in (5), using (6) and the de\ufb01nition of the Laplace transform, as [\u03d5V ]\u03b2Xeq = [\u03c6\u03a6Y ]\u00b5 \u22121 [\u03c6\u03a6Y ]\u00b5 . (10) Then, we conclude that there is only one possible positive solution for Xeq, as the left hand side of this equation is a monotonically decreasing function of Xeq. From that, similar arguments can be applied to the third equation in (5), so it follows that the solution for Veq is unique too. As a whole, we have that the infected state is always unique. This, together with the unstability of the non-infected state for R0 > 1, allows us to conclude that the infected state cannot be an unstable node or a saddle point, as it would imply that for some initial conditions the system would grow without control towards the state X \u2192\u221eand/or V \u2192\u221e. This unbounded behavior is not possible in 11 \four system. Then, the only possibility is that the infected state is stable for R0 > 1. The derivations presented in this Section show that the introduction of distributed delays does not modify the stability conditions of the BMVD. Although our mesoscopic model (2) is much more general that the original version (1), we \ufb01nd that the condition R0 \u22771 is always the one that determine the stability of the two possible equilibrium states. 
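Expression (6) is straightforward to evaluate numerically once the survival functions are specified. The sketch below is an illustration with arbitrary parameter values and variable names of our own choosing; it computes R0 for an eclipse-phase release function of the step form considered in the next section, first with exponential (memoryless) lifetimes and then with Gamma-distributed lifetimes of equal means, anticipating the comparison made in Section 5.

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import gamma

# Numerical evaluation of the basic reproductive ratio (6),
#   R0 = beta*lam*tau_X * [int_0^inf exp(-beta*lam*tau_X*t) Phi_V(t) dt]
#                       * [int_0^inf phi(t) Phi_Y(t) dt],
# with phi(t) = k*H(t - tau_delay). All numbers below are illustrative.
beta, lam, k, tau_delay = 0.01, 10.0, 20.0, 2.0
tau_X, tau_Y, tau_V = 10.0, 2.0, 0.2          # mean lifetimes of X, Y and V

def R0(Phi_Y, Phi_V):
    s = beta * lam * tau_X
    first, _ = quad(lambda t: np.exp(-s * t) * Phi_V(t), 0.0, np.inf)
    second, _ = quad(lambda t: k * Phi_Y(t), tau_delay, np.inf)
    return s * first * second

# (a) exponential (memoryless) lifetimes, as in the original BMVD
exp_Y = lambda t: np.exp(-t / tau_Y)
exp_V = lambda t: np.exp(-t / tau_V)
# (b) Gamma-distributed lifetimes with shape alpha = 3 and the same means
alpha = 3
gam_Y = lambda t: gamma.sf(t, alpha, scale=tau_Y / alpha)
gam_V = lambda t: gamma.sf(t, alpha, scale=tau_V / alpha)

print("R0, exponential lifetimes:", R0(exp_Y, exp_V))
print("R0, Gamma lifetimes      :", R0(gam_Y, gam_V))
```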
Note also that the condition to have an infected state of coexistence between viruses and cells (R0 > 1) can be interpreted as a threshold value for the contact rate \u03b2 > 1 \u03bb\u03c4 X hR \u221e 0 e\u2212\u03b2\u03bb\u03c4 X t\u03a6V (t)dt i [ R \u221e 0 \u03c6(t)\u03a6Y (t)dt] . (11) 4 The BMVD with a delayed eclipse phase We have presented a general model which takes into account distributed delays for the cellular death and the eclipse phase. However, the application of the general case requires knowing all the temporal distributions considered, which is not always possible at practice. Then, it can be useful to study some speci\ufb01c and simpler cases which have a special interest for application purposes. First, we consider the case where no age-distributed e\ufb00ects are introduced in the death process i.e. the probability of death is independent of the age of the cells. This corresponds to the situation used in the BMVD, which in our integrodi\ufb00erential model is recovered by assuming \u03d5X, \u03d5Y , \u03d5V as exponentially decaying functions (\u03d5X(t) = \u03b4e\u2212\u03b4t, \u03d5Y (t) = ae\u2212at, \u03d5V (t) = ue\u2212ut). For the eclipse phase, we can assume that when a cell is infected, it takes a \ufb01xed constant time \u03c4 until the \ufb01rst virion is released and after that, virions are 12 \fcontinuously released at a constant rate k. The delay \u03c4 is the time necessary to inject the viral core into the cell and make its genetic machinery start the reproduction process. So that, the function \u03c6(t) in our model will be taken as a step function \u03c6(t) = kH(t \u2212\u03c4),where H() is the Heaviside function. This speci\ufb01c example has been studied by some authors before (Herz et al, 1996; Tam, 1999; Culshaw and Ruan, 2000), so we can compare the predictions from our model with those previous approaches. Replacing the distribution functions \u03d5i(t), \u03c6(t) into the general model (2) we obtain dX dt = \u03bb \u2212\u03b4X \u2212\u03b2XV dY dt = \u03b2XV \u2212aY dV dt = Z t \u03c4 \u03b2X(t \u2212t\u2032)V (t \u2212t\u2032)ke\u2212at\u2032dt\u2032 \u2212\u03b2XV \u2212uV. (12) In the equation for V (t), the expression \u03b2X(t \u2212t\u2032)V (t \u2212t\u2032) represents those cells that became infected at time t \u2212t\u2032. So, the new virions appeared are equal to that expression multiplied by the rate k and by the probability e\u2212at\u2032 that the infected cells have survived from time t \u2212t\u2032 to t. The expression of R0 that one obtains for this case, from (6), is R0 = \u03b2\u03bb \u03b4u k ae\u2212a\u03c4 \u22121 ! . (13) Note that the system (12) is apparently di\ufb00erent to the previous models proposed before for the analysis of a delayed eclipse phase (Herz et al., 1996; Tam, 1999; Culshaw and Ruan, 2000). In those works a delayed term \u03b2X(t \u2212 \u03c4)V (t \u2212\u03c4) was introduced ad hoc in the evolution equation for Y (t): 13 \fdX dt = \u03bb \u2212\u03b4X \u2212\u03b2XV dY dt = \u03b2X(t \u2212\u03c4)V (t \u2212\u03c4)e\u2212a\u03c4 \u2212aY dV dt = kY \u2212\u03b2XV \u2212uV. (14) However, it is easy to see that the value of R0 for this model is exactly the expression (13), and the equilibrium states coincide with those found from our model too. Actually, both models represent the same underlying process except for one subtle detail. In the model (14), the fraction of cells \u03b2X(t\u2212\u03c4)V (t\u2212\u03c4) are considered as infected cells only after the time delay \u03c4. 
But during the period from $t-\tau$ to $t$ these cells 'disappear'; that is, they enter neither the equation for Y nor those for X or V. Instead, in our model the cells become Y cells at time $t-\tau$ and they start releasing the new virions at time $t$, so our approach is phenomenologically more correct. Regarding the dynamics of both models, the only difference between (12) and (14) will be in the solution for Y(t): the value predicted by the model (14) will always be below the real one, as some infected cells are not being counted. 5 The effect of age-distributed times for cellular death Now we try to study a more realistic case according to the experimental data available in the literature. We will consider that the eclipse phase follows the same dynamics as that in Section 4. But the death times are now assumed to follow Gamma distributions, which are quite standard curves used for fitting experimental data on cellular death times (see for example the recent work by Hawkins et al. (2007)). Hence, in this case we will use $\varphi(t) = kH(t-\tau)$, $\phi_i(t) = \dfrac{t^{\alpha_i-1}e^{-t/\tau_i^*}}{(\tau_i^*)^{\alpha_i}\,\Gamma(\alpha_i)}$ (15) for $i = X, Y, V$, where $\Gamma(\cdot)$ denotes the gamma function and $\alpha_i$ and $\tau_i^*$ are the characteristic parameters of the Gamma distribution for mortality, with the average lifetime given by $\tau_i = \tau_i^*\alpha_i$. Inserting these distributions into (6), the basic reproductive ratio R0 reads $R_0 = \dfrac{(1+\beta\lambda\tau_X\tau_V^*)^{\alpha_V}-1}{(1+\beta\lambda\tau_X\tau_V^*)^{\alpha_V}}\, k\tau_Y^*\, e^{-\tau/\tau_Y^*} \sum_{j=0}^{\alpha_Y-1}\left[\frac{\alpha_Y-j}{j!}\left(\frac{\tau}{\tau_Y^*}\right)^{j}\right]$ (16) for $\alpha_Y$ integer. From (16), it follows that the influence of distributed death ages can be important for the value of R0 and, as a result, it can strongly modify the value of the virus load at equilibrium. This effect is represented in Figure 2, which shows the numerical solution V(t) obtained from the model (2) for different values of the parameter $\alpha$ (for simplicity we define $\alpha \equiv \alpha_X = \alpha_Y = \alpha_V$). For $\alpha = 1$ we recover the case where the death probabilities are exponentially distributed, that is, the prediction of the BMVD. In the three curves shown, the average lifetimes for the three species are kept the same. This allows us to compare properly the effects of the mortality distributions on the virus load dynamics. Two main differences are observed between the curves in Figure 2. First, note that the virus loads decrease in time for $t < 2$; this is because we have used a delay $\tau = 2$ for the eclipse phase, so only after $t = \tau$ do the infected cells start to release the first virions, and then the virus load increases drastically. The minimum value observed at $t = 2$ is much lower in the case $\alpha = 1$. This is because the BMVD assumes unrealistically high probabilities of death for the early stage of the infection, an effect which can be corrected by the Gamma-distributed mortalities used here. This point is of great importance concerning the probability that a primary immune response successfully clears the infection.
Second, we also \ufb01nd important di\ufb00erences between the maximum virus loads reached at equilibrium; for the parameters used in Figure 2, the \ufb01nal virus load for \u03b1 = 1 is approximately 10-fold higher than in the case \u03b1 = 3. Therefore, we conclude that the BMVD underestimates the virus loads in the early stages of the infection and overestimates the peak of the virus load, if compared with the case of distributed mortalities considered here. In consequence, it turns out that we need to know with some detail the life cycle of viruses and cells to obtain an accurate picture of the infection dynamics. 6 Application to phage-bacteria interactions The interaction between phages and bacteria can be described as two consecutive steps: adsorption and reproduction (Mc Grath and Sinder, 2007). Adsorption consists in a collision between phage and bacteria resulting in a group, called infected bacteria, constituted by the bacteria and the phage attached to its membrane. The second step begins when the phage inoculates its genetic material into the host bacteria and begins to replicate it. From this time onwards the number of new viruses increases inside the bacteria, stopping when the bacteria bursts at the end of the latent period. Basically, the main di\ufb00erence between this situation and those explored in the previous Sections is that for phages the eclipse phase \ufb01nishes with a lytic process that involves the death of the infected cell. In terms of the model presented here, this idea can be introduced simply by choosing the appropriate form for the function \u03c6(t). 16 \fHere we deal with the reproduction process, which is known to produce a characteristic one-step growth curve V (t) for virulent phages. Let us consider that at t = 0 the phage inoculates its genome and all the bacteria become infected instantaneously, with Y (t = 0) = Y (0). Then, we can de\ufb01ne JV (t) = Y (0)\u03c6(t) as the rate of viruses released at time t, following the same notation as in the Appendix (see Equation (26) and the comments below). As all the cells are assumed to be already infected at t = 0, the infection process for t > 0 can be obviated. We can thus take \u2126V = 1 in Equation (26) to obtain V (t) = V (0)\u03a6V (t) + Z t 0 Y (0)\u03c6(t \u2212t\u2032)\u03a6V (t\u2032)dt\u2032, (17) which constitutes our theoretical model for the osg curve. If the osg is known from experiments, the function \u03c6(t) can be determined by \ufb01tting that curve to some function and applying \u03c6(t) = 1 Y (0) dV dt + Z t 0 V (t \u2212t\u2032)\u03a8V (t\u2032)dt\u2032 ! osg (18) which comes directly from the solution of (17). However, the result (18) can only be applied if we know the function \u03a8V , which is related to the mortality distribution \u03d5V according to (3). At practice, the probability of death for the viruses is usually considered very small in the time scale of the experiments, so it can be neglected. In that case, \u03a8V \u22480 and then we \ufb01nd that \u03c6(t) becomes proportional to the derivative of the one-step growth (osg) curve \u03c6(t) = 1 Y (0) dV dt ! osg . (19) For \ufb01tting the one-step growth V (t), some authors have considered before a piecewise function composed by three segments (You et al., 2002; Hadas et al., 1997). Continuous functions have been proposed too, for example error functions (Rabinovitch et al., 1999) or logistic-like functions (Fort and M\u00b4 endez, 17 \f2002; Alvarez et al., 2007). 
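Purely as an illustration of how Eq. (19) is used in practice, the sketch below differentiates a logistic-like curve numerically and integrates the resulting release function. The functional form and every parameter value are hypothetical stand-ins; the actual fitted expressions and values are those reported in Table 1, Table 2 and Figure 4.

```python
import numpy as np

# Illustrative sketch of Eq. (19): given a fitted one-step growth curve V_osg(t),
# the release function is phi(t) = (1/Y(0)) dV_osg/dt when virus death is
# neglected. The logistic-like form and all numbers below are hypothetical.
V0, V_inf, Y0 = 1.0, 150.0, 1.0        # initial/final phage counts, infected cells
r, t_mid = 0.6, 15.0                   # rise-rate parameter and curve midpoint (min)

def V_osg(t):
    return V0 + (V_inf - V0) / (1.0 + np.exp(-r * (t - t_mid)))

t = np.linspace(0.0, 60.0, 1201)
phi = np.gradient(V_osg(t), t) / Y0    # numerical derivative, Eq. (19)

burst_size = np.trapz(phi, t)          # should be close to (V_inf - V0)/Y0
print("estimated burst size:", burst_size, " expected:", (V_inf - V0) / Y0)
print("phi peaks at t =", t[np.argmax(phi)], "min, peak value", phi.max())
```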
For these three cases one \ufb01nds that the corresponding expressions for \u03c6(t) are those shown in Table 1. We have written there the functions in terms of the parameters r, \u03c4 and V\u221e. For the sake of completeness, we also show the relation between these parameters and the eclipse time, the rise rate and the burst size, which are commonly used in experimental works to characterize the osg curve (a proper de\ufb01nition of these is provided in Figure 3). In Figure 4 we show with symbols the experimental results for one-step growth of phage T7 on E. coli BL21 grown at di\ufb00erent rates (You et al., 2002), while the speci\ufb01c values obtained from the adjustment in each case are detailed in Table 2. The solid curves in Figure 4 represent the \ufb01tting of the experimental results to the logistic-like function, exhibiting a good agreement. The segments (dotted lines) and the error function (dashed lines) \ufb01ttings are also showed in the plot; in the latter, the coincidence with the logistic-like case is so high that both curves are almost indistinguishable. From each one of the \ufb01ttings the corresponding expression for \u03c6(t) has been estimated. The comparison between them is shown in Figure 5, where we plot for simplicity only one of the three cases presented in Figure 4 (the two cases non-shown exhibit a very similar behavior). We observe that for the \u2019error\u2019 and the \u2019logistic-like\u2019 cases, peaked \u03c6(t) functions with very similar characteristics are obtained. The \u2019segments\u2019 case, in turn, leads to a discontinuous expression for \u03c6(t) which slightly di\ufb00ers from the other two. So, we can conclude that the \u2019segments\u2019 \ufb01tting gives a poorer estimate for the behavior of \u03c6(t) and this can in\ufb02uence the \ufb01nal value for R0. We note that in this speci\ufb01c application for phages a new de\ufb01nition of R0 18 \fis necessary, as can be seen by inspecting (6). To this end, we must \ufb01nd the equilibrium states of the system dX dt = \u2212\u03b2X(t)V (t) dV dt = \u2212\u03b2X(t)V (t) + Z t 0 \u03b2X(t \u2212t\u2032)V (t \u2212t\u2032)k(t\u2032)\u03c6(t\u2032)dt\u2032 (20) and their stability. Introducing X(t) = Xeq + \u03b4X(t) and V (t) = Veq + \u03b4V (t) and linearizing about the equilibrium states (Xeq, 0) and (0, Veq) one can check that the basic reproductive ratio R0 \u2261 Z \u221e 0 k(t)\u03c6(t)dt (21) must be higher than 1 for a successful phage growth. Making use of (19) R0 = 1 Y (0) Z \u221e 0 dV dt ! osg dt = [V\u221e\u2212V (0)]osg Y (0) (22) which is the burst size. This result simply demonstrates that in the case of phage-bacteria interactions the burst size plays the role of a basic reproductive ratio (the infection is successful only for R0 > 1). 7" + }, + { + "url": "http://arxiv.org/abs/0804.3485v1", + "title": "Limited resources and evolutionary learning may help to understand the mistimed reproduction in birds caused by climate change", + "abstract": "We present an agent-based model inspired by the Evolutionary Minority Game\n(EMG), albeit strongly adapted to the case of competition for limited resources\nin ecology. The agents in this game become able, after some time, to predict\nthe a priori best option as a result of an evolution-driven learning process.\nWe show that a self-segregated social structure can emerge from this process,\ni.e., extreme learning strategies are always favoured while intermediate\nlearning strategies tend to die out. 
This result may contribute to\nunderstanding some levels of organization and cooperative behaviour in\necological and social systems. We use the ideas and results reported here to\ndiscuss an issue of current interest in ecology: the mistimings in egg laying\nobserved for some species of bird as a consequence of their slower rate of\nadaptation to climate change in comparison with that shown by their prey. Our\nmodel supports the hypothesis that habitat-specific constraints could explain\nwhy different populations are adapting differently to this situation, in\nagreement with recent experiments.", + "authors": "Daniel Campos, Josep E. Llebot, Vicen\u00e7 M\u00e9ndez", + "published": "2008-04-22", + "updated": "2008-04-22", + "primary_cat": "q-bio.PE", + "cats": [ + "q-bio.PE" + ], + "main_content": "Introduction Minority games (Challet and Zhang, 1998), and more recently Evolutionary Minority Games (EMG) (Johnson et. al., 1999a; Johnson et. al., 2000; de Cara et. al., 2000; Johnson et. al., 2003; Hod and Nakar, 2002; Hod, 2003; Sysi-Aho et. al., 2003; Johnson et. al., 1999b; Lo et. al., 2000), have received widespread attention in recent years as a useful model to describe competition for highly limited resources in complex systems, especially in economics. These games are essentially based on a minority rule (Challet and Zhang, 1998) according to which N agents compete repeatedly for some resources by choosing between two options A or B. Each agent makes its choice, and those agents belonging to the less (most) frequently chosen option are considered the winners (losers), so they are rewarded (\ufb01ned). So, the idea behind this game is that the agents must always try to be in the minority: few individuals choosing the same option as yourself means less competitors, and so it should be easier to obtain the resource. The decisions taken by the agents are chosen according to a pool of strategies available, and these strategies are based on the m previous outcomes in the game, as that information is assumed to be accessible to all of the agents. To give a simple example, a speci\ufb01c strategy in a minority game with m = 2 has the form S = {(A, A) \u2192A, (A, B) \u2192B, (B, A) \u2192A, (B, B) \u2192A}. This means that if the two previous winning options in the game were (A,A), an agent following strategy S will choose option A the next time; if the last winning options were (A,B), that agent will choose B, and so on. At the beginning of the game several strategies are assigned to each agent, and the agent tends to choose from among them the strategy that gave better results 3 \fin the past; however, many di\ufb00erent versions of the minority game exist, where the rules that determine the strategies chosen by the agents are di\ufb00erent. Here, we will skip the minor details on the mechanisms of the minority game, since that is outside the scope of the current work; an exhaustive compilation of works on minority games can be found in http://www.unifr.ch/econophysics. In the evolutionary version (EMG) of the game (Johnson et. al., 1999a), all the agents are assigned the same strategies but they can i) follow that given strategy with probability pk or ii) do exactly the opposite with probability 1 \u2212pk, where pk is di\ufb00erent for each agent (the subindex k denotes the kth agent). Those agents performing the worst (losing many times) are forced to change their value of pk; so, in the EMG there is an implicit learning process based on trial and error. 
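To make the bookkeeping of the basic minority rule and the memory-m strategy tables concrete, the following is a small Python sketch, not taken from the original references; the evolutionary ingredient of the EMG is deliberately omitted here, and the population size, the number of strategies per agent and the virtual-scoring rule are illustrative conventions.

```python
import random

# Toy implementation of the basic minority game with memory-m strategy tables
# (cf. the example strategy S with m = 2 given above). Numbers are arbitrary.
random.seed(1)
N, m, n_strategies = 101, 2, 2
options = ("A", "B")

def random_strategy():
    # one prescribed choice for each of the 2**m possible length-m histories
    histories = [tuple(options[(h >> i) & 1] for i in range(m)) for h in range(2 ** m)]
    return {hist: random.choice(options) for hist in histories}

agents = [[random_strategy() for _ in range(n_strategies)] for _ in range(N)]
scores = [[0] * n_strategies for _ in range(N)]
history = tuple(random.choice(options) for _ in range(m))

for step in range(50):
    choices = []
    for idx in range(N):
        best = max(range(n_strategies), key=lambda i: scores[idx][i])
        choices.append(agents[idx][best][history])       # play best-scoring strategy
    n_A = choices.count("A")
    winner = "A" if n_A < N / 2 else "B"                 # the minority side wins
    for idx in range(N):                                 # update virtual scores
        for i in range(n_strategies):
            scores[idx][i] += 1 if agents[idx][i][history] == winner else -1
    history = history[1:] + (winner,)

print("last winning option:", winner, " agents choosing A in the last round:", n_A)
```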
As a consequence, the system tends towards an optimal distribution of pk values for which the number of winners is as close to N/2 as possible (note that, by definition, in a minority game the number of winners cannot be higher than N/2). As reported in (Johnson et. al., 1999a), the most striking result arising from the EMG is the natural emergence of segregated behaviour: those agents that behave in an extreme way (pk \u21920 and pk \u21921) perform better than those with intermediate behaviour, so that the individuals tend to segregate into two groups: those who always follow the given strategy and those who never follow the strategy. From the point of view of complex systems, it has been claimed that this result may help to understand some levels of organization such as crowding (Johnson et. al., 2000; Cont and Bouchaud, 2000) and cooperation (de Cara, 2000), which are common in many social and biological systems. Specifically, within the context of the EMG some authors have coined the term unintentional or indirect cooperation to illustrate the behaviour observed (Quan et. al., 2003; Hod and Nakar, 2004).
By a priori best option we mean that option which would be the winning one in the case where half of the agents choose A and the other half choose B. In the basic minority game described above we have considered that the agents choose between two identical options A or B, so there is no a priori best option. However, it is easy (and more realistic) to consider a game where A and B are intrinsically di\ufb00erent. For example, in the case of habitat selection, individuals usually need to choose between di\ufb00erent options with di\ufb00erent habitat qualities. Some individuals may be able, from past experience, to know in advance which the best choice is e.g. that where the availability of food is higher. But if all the individuals are able to do this, then all of them will choose the same option and the availability of food will decrease there; in that case the a priori best option is not necessarily the winning option. Those individuals that are not able to determine what the a priori best option is will probably behave randomly or persistently (always choosing the same option). The role of evolution and natural selection is thus expected to be crucial in these processes, as stated in (Heesch and Little, 2006). We note that these decision-making mechanisms are also common in human 6 \fbehaviour. For instance, drivers who have to choose between two alternative routes in order to avoid tra\ufb03c jams do not analyze every past experience and make a decision according to a pool of strategies (contrary to what is suggested by some authors (Hod, 2003)), but mainly use simpler strategies like persistent behaviour (they always choose the same route because they do not like to take risks) or they may simply listen to the tra\ufb03c news to \ufb01nd out what the a priori best route is. According to these arguments, some essential elements which are absent in the EMG must be considered in order to get a realistic implementation of minority games in ecology. So, the aim of this work is to present a new game where competition for resources is also introduced by means of a minority rule, but the dynamics and strategies followed by the agents aim to capture the dynamics of some ecological systems. In what follows, we will refer to this new model as the Evolutionary Learning Game (ELG). 3 Mistiming in predator-prey systems caused by climate change We now introduce a speci\ufb01c problem that has attracted the interest of ecologists in recent years (van Noordwijk et. al., 1995; Visser et. al., 1998; Grieco et. al., 2002; Visser et. al., 2004; Gienapp and Visser, 2006) and has strongly motivated our approach. In many species of bird, individuals must face the problem of choosing the correct time for egg laying. This choice becomes dramatic if the availability of food is restricted to a very short period of time. So, for survival in breeding, the correct timing of egg laying is necessary, so that the feeding period matches the food peak. This process has been studied in recent decades for some species, such as great tits (Parus major) and 7 \fblue tits (Parus caeruleus), whose main prey (caterpillar) is only available for two or three weeks in the late spring (Visser et. al., 2004). At the moment of egg laying (approximately one month before), the birds do not know when the food peak will happen. 
The problem is partially overcome by the way many of these birds develop with age the ability to follow some cues (based on climate and other environmental parameters) to predict the right time for laying (van Noordwijk et. al., 1995; Grieco et. al., 2002; Gienapp and Visser, 2006). In general, this capacity of an individual to adapt its behaviour to the environmental conditions is known as phenotypic plasticity, and is usually a heritable trait. Speci\ufb01cally, it has been demonstrated (Nussey et. al., 2005) that plasticity in egg laying for birds is heritable. The e\ufb00ects of global climate change, however, have put many biological species to the test (Parmesan, 2006). As a consequence of warmer springs, caterpillars have advanced their hatching date in many habitats (Visser et. al., 1998; Visser et. al., 2004), so those birds with a higher plasticity in laying are expected to adapt better to the new situation. According to the observational data, some bird populations have become adapted, but in some other cases a very weak response to the new situation has been observed (Visser et. al., 2004; Gienapp and Visser, 2006). In the latter case, the mismatching between the feeding period and the food peak will probably lead to a decline in the number of individuals (Both et. al., 2006) or the habitat \ufb01tness (Visser, 2007). Although di\ufb00erent explanations have been provided, there is no clear understanding of why di\ufb00erent populations show di\ufb00erent responses to the changing conditions (Gienapp and Visser, 2006). As we discuss below, our model provides some arguments that support the idea that resource constraints from each speci\ufb01c habitat may be responsible for these di\ufb00erences. 8 \f4 Rules of the Evolutionary Learning Game We need to introduce two basic ideas that are missing from the original formulation of the EMG, in order to reach a more realistic description of ecological systems: i) First, reproduction and death processes must play a fundamental role in the dynamics of the system. In the EMG the agents continue to play inde\ufb01nitely, but in ecology the consequences of choosing a wrong option can obviously be dramatic. If we want to explore the dynamics of systems over representative time scales, it is necessary to assume that the individuals may disappear (die) and/or be replaced by new individuals (newborns) with some probability. Moreover, the outcome obtained from any decision taken by the agents must a\ufb00ect in some way their reproduction/survival probabilities. ii) Secondly, in the EMG all of the agents have access to the same information, and so all of them may use the same strategy. In ecology, however, as long as an individual grows up it gains experience and, in consequence, it is expected to choose better options. So, an individual learning process must be considered somehow. In fact, the situation where the strategies chosen by the agents are based on their individual histories has already been studied for the EMG (see (de Cara, 2000) and the references therein), but here we will explore the concept of learning from a di\ufb00erent perspective. In our model, each of the N agents competing in the game must repeatedly choose between options A or B. Whether or not the decision taken by the agent is the good one will be determined by a minority rule with an arbitrary cuto\ufb00, as de\ufb01ned in (Johnson et. al., 1999b). 
This means that we assign a resource 9 \fcapacity L to one of the two options (we consider 0 < L < N/2 without loss of generality) and a resource capacity N \u2212L to the other option. If the real number of agents choosing the \ufb01rst option is below L, then the resources per capita in that option are higher than in the other one, so those agents are the winners and the agents choosing the other option are the losers. If the number of agents choosing the \ufb01rst option is above L, then the contrary arguments hold. We will consider that the option with capacity L is not always the same, but is chosen randomly every time step in order to incorporate the e\ufb00ects of a \ufb02uctuating environment, so sometimes option A will be the a priori best option (that with a higher capacity) and sometimes not. At each time step, the winners are rewarded with the possibility of reproductive success. Every winner is given the possibility of producing a newborn agent with a probability r. The newborn will replace one of the agents in the game (to keep N constant) chosen randomly, so we assume that all of the agents are equally likely to die. The agents choose option A or B according to the following rules. Younger agents act persistently: they make their \ufb01rst choice randomly and, after that, they continue to choose the same option. However, after each time step all the persistent agents are given the possibility of learning with probability ppk (here pp stands for phenotypic plasticity and the subindex k denotes the kth agent). If they learn, it means that they give up persistent behaviour; from then on, they always choose the option with a higher capacity, so we will say that they become wise agents. Phenotypic plasticity in our model is thus considered equivalent to a learning 10 \fcapacity. This capacity can be inherited as follows: when a newborn appears, its characteristic probability ppk is chosen randomly from an interval of width w centred on the value of ppk\u2032 from its father, with re\ufb02ecting boundary conditions at ppk = 0 and ppk = 1. So, we introduce an evolutionary dynamics for the probabilities ppk into the model in a similar way as in the original formulation of the EMG (Johnson et. al., 1999a). But note that in this case the meaning of the width w is extremely important for the dynamics of the system, as it measures the heritability of ppk, so the best strategies ppk will be transmitted to the newborns only if w is not too high. These are all the rules for our ELG. All of the agents will become wise sooner or later unless they die \ufb01rst, but if there are too many wise agents then the a priori best option will be crowded and will probably be the wrong one. A complex dynamic thus emerges where learning as fast as possible is not necessarily the best strategy, which may seem counterintuitive at \ufb01rst. As some of the rules presented could be considered too simple or unrealistic from a biological point of view, we tried to implement many di\ufb00erent models with increasingly complex rules in order to compare their performances: i) We tried to introduce explicit reproduction and death algorithms in many di\ufb00erent ways (for example, by using exponential or logistic growth), so that the number of agents N was allowed to change with time. ii) We tried to replace the minority rule with some other competition rules, even rules that allowed all the agents to be winners (or losers) at the same time. 
For instance, we considered two independent capacities LA and LB for the two possible options, so if the number of individuals choosing option A is above (below) LA those agents are considered losers (winners). 11 \fiii) We tried to reward and/or \ufb01ne agents on their reproductive success and/or their probability of survival. Of course, it is not necessary for the reproductive success to be completely suppressed for the losers as in the simpli\ufb01ed version we have described; we could consider two reproductive rates rw and rl for winners and losers respectively, with rw > rl. iv) We tried to consider that the switching from persistent to wise behaviour is not so radical, but the agents learn progressively according to a rate given by ppk. After these and many other trials, we have found that the qualitative behaviour exhibited by the ELG (which is shown in the following Section) is highly robust. According to our results, it seems that there are only two elements which are strictly necessary in order to obtain that behaviour: i) a learning process regulated by the probabilities ppk and ii) that the number of agents rewarded (\ufb01ned) is proportional to the number of winners (losers). The version of the ELG we have presented here is one of the simplest possible, and so it o\ufb00ers the advantage that some analytical treatment is possible, as we will show below. 5 Results The greatest interest of our model lies in the form of the distribution of phenotypic plasticities P(ppk) that is reached in the steady state. In Figure 1 we summarize the behaviour of P(ppk) as a function of the three parameters of the model: L, r and w. All the results shown here were obtained by computing the form of P(ppk) for N = 2001 after 10000 time steps (which is far enough to 12 \freach the steady state), and carried out an average of 25 di\ufb00erent realizations. Initially all the agents were considered newborns and the values of ppk were assigned randomly; anyway, we have checked that our results are independent of the initial conditions chosen. The series of plots from 1.a to 1.d shows how P(ppk) changes when the value of L is modi\ufb01ed. Figures 1.a and 1.d correspond to extreme situations that are clearly predictable. In the \ufb01rst case, when L \u2192N/2 both options A and B have similar resource capacities. Therefore, performing as a wise agent does not represent an advantage, because the resource of choice (i.e. the larger one) reaches its smallest possible size leading to overcrowding among wise agents. As a consequence, learning is avoided and there is a tendency ppk \u21920. In the regime L \u21920 one of the two options is much better than the other one. In this situation, performing as a wise agent is a strong advantage, and so the tendency ppk \u21921 should be expected. But, surprisingly, there is a wide range of intermediate values of L where segregated (obviously asymmetric) behaviour is found. This means that in intermediate situations the dynamics of the system tends to favour individuals which either learn as fast as possible or avoid learning as much as possible. Although segregated behaviour was also found for the EMG, the situation reported here is clearly di\ufb00erent. In the case of the EMG (Johnson et. al., 1999a) the segregated behaviour in the steady state was independent of the initial conditions and the values of the parameters introduced. 
However, Hod and Nakar (2002) proved later that the model is extremely sensitive to the prize-to-\ufb01ne ratio, so for some parameters one observes a sharp transition where self-segregation is destroyed. On the contrary, we have not noticed such e\ufb00ects in our model, but the form of P(ppk) always changes smoothly for 13 \fany region of parameters considered. The other main di\ufb00erence between the EMG and the ELG is that the results obtained here show an asymmetric distribution of P(ppk). The reason for this is that in the ELG the persistent and wise agents do not necessarily choose di\ufb00erent options (while in the EMG pk \u21920 and pk \u21921 represent opposite behaviours). This, together with the di\ufb00erent backgrounds considered and some ideas discussed below, shows that the general dynamics of the EMG and the ELG are di\ufb00erent, although there are some major similarities between both. One can observe in the series 1.e to 1.h the role of the reproduction probability r on P(ppk) while keeping the other parameters constant. A high value of r involves the appearance of many newborns and, according to the discussion above, a wise strategy will then perform better that in a situation with few newborns. For this reason, the model shows a tendency ppk \u21920 for low r and a tendency ppk \u21921 for high r. In intermediate situations, a segregated distribution is found again. Finally, the role of w is shown in plots from 1.i to 1.l. As discussed above, the value of w determines the heritability of the phenotypic plasticity. Low values of w represent a high level of heritability and so the best strategies persist, while a high value of w means that best strategies are not well transmitted to breeding; so, for the latter P(ppk) is expected to tend to be uniform. The value of w also indirectly a\ufb00ects the number of winners and losers; if w is too high the best strategies do not persist and then the average number of winners decreases. For this reason it is di\ufb03cult to predict the exact form of P(ppk) as a function of w. Actually, the speci\ufb01c role of w in the game is fairly complicated, so this point will be addressed in detail in a further study. Note that this is another important di\ufb00erence from the case of the EMG, where the 14 \f\ufb01nal distribution P(pk) is almost independent of w (Johnson et. al., 1999a). We can give some analytical support to the results shown in Figure 1 by means of a mean-\ufb01eld-like approach as proposed before for the EMG (Lo et. al., 2000). Here we keep as much as possible to the notation used there in order to facilitate understanding. First of all, note that we know the option that the wise agents will choose. We also know that every persistent agent made its \ufb01rst choice randomly, so we should expect on average for half of them to choose A and the other half to choose B. Therefore, the whole problem is reduced to \ufb01nding out how many persistents are in the game. We denote FN(n) as the probability of n of the N agents in the game being persistent. Similarly, we de\ufb01ne Gk N\u22121(n) as the probability of n of the agents being persistent, given that the kth agent is the only one that has not made its choice yet. Then, the following relation holds: FN(n) = \u0393ppkGk N\u22121(n \u22121) + (1 \u2212\u0393ppk) Gk N\u22121(n), (1) where \u0393ppk is the probability of the kth agent being persistent, given that its phenotypic plasticity is ppk. 
In the following, we will use the simplified notation $\Gamma \equiv \Gamma_{pp_k}$. On the other hand, the winning probability $\tau_{pp_k}$ of an agent that has a plasticity $pp_k$ can be written as $$\tau_{pp_k} = \frac{\Gamma}{2} \sum_{n=0}^{\alpha-1} G^{k}_{N-1}(n) + \left(1 - \frac{\Gamma}{2}\right) \sum_{n=\alpha+1}^{N-1} G^{k}_{N-1}(n), \qquad (2)$$ where $\alpha = \mathrm{Int}(2L)$ denotes the integer part of $2L$. Following now the same treatment as in (Lo et al., 2000), we get from (1) and (2) the expression $$\tau_{pp_k} = \frac{\Gamma}{2} \sum_{n=0}^{\alpha} F_N(n) + \left(1 - \frac{\Gamma}{2}\right) \sum_{n=\alpha+1}^{N-1} F_N(n) + \Gamma(\Gamma - 3/2)\, G^{k}_{N-1}(\alpha). \qquad (3)$$ As stated in (Lo et al., 2000), the first two terms correspond to the winning probability of a kth agent whose action does not modify the result of the game, once the other N-1 agents have made their choice. Hence, the third term is the essential one, as it measures the influence of the kth agent on the final result. For example, imagine that after the first N-1 agents have made their choice, one half of them decide to follow option A (so one half choose B); then, the last agent's decision will determine which the winning option is; the influence of this final decision on the probability $\tau_{pp_k}$ is what the last term in (3) measures. In order to analyze this term, we need to find the explicit expression of $\Gamma$ as a function of $pp_k$. This is easy to do, because after each time step the persistents have a probability $pp_k$ of becoming wise agents and an average survival probability $s$ (according to the rules of our ELG, only the range $0.5 < s < 1$ holds here). Then, the probability of the kth agent being persistent is $$\Gamma_{pp_k} = \frac{\sum_{i=0}^{\infty} s^{i}(1 - pp_k)^{i}}{\sum_{i=0}^{\infty} s^{i}} = \frac{1-s}{1 - s(1 - pp_k)}. \qquad (4)$$ Now, we can give an explicit expression for the term $\Gamma(\Gamma - 3/2)$ from Equation (3). It can be seen that $\Gamma(\Gamma - 3/2)$ has the same appearance throughout the whole range $0 < pp_k < 1$ and for the proper range of survival probabilities $0.5 < s < 1$. It is always a negative convex function with a relative minimum at $pp_k = (1-s)/3s$. This means that the third term in (3) always tends to favour extreme values of $pp_k$, which facilitates the emergence of segregated behaviour; so, this gives some justification to our numerical results. We stress, however, that stationary analytical approaches such as the one used here have some limitations, as the EMG and similar models never reach a true stationary distribution (Hod, 2002). In fact, we have found for the ELG that the number of persistents oscillates periodically over time, in accordance with similar results found for the EMG (Hod, 2002). In that work, it was also argued that when the amplitude of these oscillations increases, we observe in the EMG a transition from self-segregated behaviour to clustering (where clustering is characterized by a single-peaked distribution of $p_k$ values around $p_k = 1/2$). It is interesting to note that, on the contrary, in the ELG the amplitude of the oscillations increases in the region where self-segregated behaviour is found; so, the oscillations in our model seem to enforce self-segregation rather than destroying it. This situation is shown in Figure 2, where the number of persistents in the steady regime is plotted as a function of time for different situations; dotted, dash-dotted and solid lines correspond to the cases 1.a, 1.c and 1.d reported in Figure 1, respectively.
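As a quick numerical cross-check of the analytical claim above, the prefactor $\Gamma(\Gamma - 3/2)$ of the third term in Eq. (3) can be evaluated directly from Eq. (4). The following short sketch is ours, with an arbitrary illustrative survival probability s = 0.8; any value in 0.5 < s < 1 behaves the same way.

```python
import numpy as np

def gamma(pp, s):
    # Probability that an agent with plasticity pp is still persistent, Eq. (4).
    return (1 - s) / (1 - s * (1 - pp))

s = 0.8                                    # illustrative survival probability
pp = np.linspace(0.0, 1.0, 100001)
prefactor = gamma(pp, s) * (gamma(pp, s) - 1.5)

print(prefactor.max() < 0)                 # True: the term is negative on the whole range
print(pp[np.argmin(prefactor)])            # numerical location of the minimum, ~0.0833
print((1 - s) / (3 * s))                   # analytical minimum (1 - s)/(3 s)
```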
From Figure 2, it is also clear that the mean number of persistents in the game increases as L decreases, in accordance with our discussion above. 6 Discussion We have presented a model that sets the ideas of minority games, which are considerably popular tools for describing competition for resources in economics, into an ecological context. The model presented here shows how social segregation emerges from an evolutionary learning process (determined by the distribution P(ppk)) in a group of individuals competing for strongly limited resources. Note that if the learning process were not introduced to the model, 17 \fthen the dynamics of the system would be trivial. Neither persistent nor wise behaviour on its own is an e\ufb03cient strategy; evolutionary learning is the key ingredient here for \ufb01nding e\ufb03cient cooperation between both. This idea, together with the robustness shown by our model (many other implementations with more realistic rules led to similar qualitative results) seems to suggest that our model could be of interest for understanding social organization in complex evolutionary systems. The results obtained here show that in the situation described by our model intermediate learning strategies cannot persist for long times; the tendency is always towards improvement (ppk \u21921), suppression (ppk \u21920) or the coexistence of both (segregated distribution). It also contradicts the intuitive idea that those individuals with a higher learning capacity must always be favoured by selection. This is a consequence of the strong competition process which is assumed in our ELG and in minority games in general: sometimes the a priori worst option can be the best because many agents tend to choose the a priori best option and then competition for the latter is higher. Finally, we come back to the problem of egg laying in birds described before, which can now be addressed using the ideas about learning and phenotypic plasticity discussed here. In order to provide an analogy with our model, we could imagine that option A means laying early and option B means laying later. Those individuals with a higher plasticity very quickly become able to follow some environmental cues in order to predict the right option A or B. However, if the penalty for choosing a wrong option is not high (because there are some other food resources available, or there are some other environmental constraints on laying...) then selection will not favour individuals with higher plasticity. In that case, when the individuals must face sudden environmen18 \ftal changes their capacity to respond will be weak. In those habitats where phenotypic plasticity is strongly rewarded (i.e., for L small in our model), individuals are expected to follow e\ufb03cient learning strategies, so they will be able to respond better to environmental changes. Therefore, the results of our model provide an evolutionary basis to the idea that environmental constraints from each speci\ufb01c habitat could explain why di\ufb00erent bird populations are responding di\ufb00erently to climate-driven changes in the behaviour of their prey, as recent experiments have suggested (Gienapp and Visser, 2006). However, empirical evidence supporting our ideas about learning and phenotypic plasticity is still lacking. We believe that it would be of major interest if experimentalists were to attempt to check the predictions made by our model in real ecological systems. 
Acknowledgements This research has been partially supported by the Generalitat de Catalunya through grant 2006-BP-A-10060 (DC), by the project CGL 2007-60797 (JELl) and by grants FIS 2006-12296-C02-01, SGR 2005-00087 (VM)." + } + ], + "Ellen M. Voorhees": [ + { + "url": "http://arxiv.org/abs/2201.11086v1", + "title": "Can Old TREC Collections Reliably Evaluate Modern Neural Retrieval Models?", + "abstract": "Neural retrieval models are generally regarded as fundamentally different\nfrom the retrieval techniques used in the late 1990's when the TREC ad hoc test\ncollections were constructed. They thus provide the opportunity to empirically\ntest the claim that pooling-built test collections can reliably evaluate\nretrieval systems that did not contribute to the construction of the collection\n(in other words, that such collections can be reusable). To test the\nreusability claim, we asked TREC assessors to judge new pools created from new\nsearch results for the TREC-8 ad hoc collection. These new search results\nconsisted of five new runs (one each from three transformer-based models and\ntwo baseline runs that use BM25) plus the set of TREC-8 submissions that did\nnot previously contribute to pools. The new runs did retrieve previously unseen\ndocuments, but the vast majority of those documents were not relevant. The\nranking of all runs by mean evaluation score when evaluated using the official\nTREC-8 relevance judgment set and the newly expanded relevance set are almost\nidentical, with Kendall's tau correlations greater than 0.99. Correlations for\nindividual topics are also high. The TREC-8 ad hoc collection was originally\nconstructed using deep pools over a diverse set of runs, including several\neffective manual runs. Its judgment budget, and hence construction cost, was\nrelatively large. However, it does appear that the expense was well-spent: even\nwith the advent of neural techniques, the collection has stood the test of time\nand remains a reliable evaluation instrument as retrieval techniques have\nadvanced.", + "authors": "Ellen M. Voorhees, Ian Soboroff, Jimmy Lin", + "published": "2022-01-26", + "updated": "2022-01-26", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "main_content": "INTRODUCTION A primary motivation for the Text REtrieval Conferences (TRECs) is to build large test collections for the information retrieval research community [23]. The goal is for these collections to be reusable\u2014meaning that retrieval systems that did not participate in the collection-building process could still be evaluated fairly using them\u2014and, in particular, to be useful for evaluating systems that did not exist at the time the collection was built. Examination of the TREC ad hoc collections shortly after they were built supported the conclusion that the collections are indeed reusable [6, 28]. The TREC-8 ad hoc collection has been considered an especially reliable collection given both the high-quality (manual) runs that contributed to its construction and the large set of relevance judgments made for it [21]. Parts of the community, however, have been skeptical of the viability of using TREC ad hoc collections to research new neural retrieval models. The primary argument has been that since neural methods \u201cwork very differently\u201d from \u201ctraditional\u201d retrieval models, they would retrieve many previously unjudged documents that are relevant. 
Neural methods would not be properly rewarded for actually \u201cbeing better\u201d, and thus it would be misleading to assess progress based on these incomplete evaluation instruments. This line of reasoning is, for example, explicitly stated in Yilmaz et al. [26], and the TREC Common Core track was started in large part so that the newer retrieval models could contribute to the construction of additional ad hoc collections.1 The TREC-8 collection, the last of the TREC ad hoc collections, was created in 1999, long before the emergence of the current neural models. We can thus use these models to test the reusability claims of the original TREC ad hoc collections. This paper reports on one such test, a look at how the TREC-8 ad hoc collection evaluates representative runs from three transformer-based retrieval models (a reranker, a dense retrieval model, and a sparse retrieval model) and two new baselines (BM25-based). New pools created from these five runs and a set of TREC-8 submissions that did not previously contribute to the pools were judged by TREC assessors. Some new relevant documents were found, as expected, but most of the newly retrieved (and previously unjudged) documents were judged not relevant. The ranking of systems by mean score when evaluated using the official TREC-8 relevance judgment set and the newly expanded relevance judgment set are almost identical with Kendall\u2019s \ud835\udf0fcorrelations greater than 0.99. Correlations for individual topics are also high. Thus, the answer to the question posed in the title appears to be, yes, at least for the TREC-8 collection examined in our experiments. The contribution of this paper is, to our knowledge, the first time this question has be rigorously tackled and answered. While there are additional nuances to this high-level finding (see Section 5), it does appear that this well-built test collection has stood the test of time and remains a reliable evaluation instrument, even as retrieval techniques have advanced significantly. 2 BACKGROUND AND RELATED WORK A retrieval test collection consists of a set of documents, a set of information needs called topics that can be met by those documents, and a set of relevance judgments that say which documents should be retrieved for which topics. The set of judgments in a collection is often referred to as the qrels (short for query-relevance), a convention we will follow in this paper. Given a test collection, the retrieval output of a search engine (a ranked list of documents retrieved for each topic and called a run) can be evaluated using a variety of measures that are functions of the ranks at which relevant documents are retrieved. The very first retrieval test collections had complete judgments; that is, every document was judged by a human for every topic. However, complete judgments are only practical for small test collections, and small test collections are not representative of the challenges operational search systems encounter. To build larger 1See https://trec-core.github.io/2018/. arXiv:2201.11086v1 [cs.IR] 26 Jan 2022 \ftest collections, some sort of sampling procedure is needed so that for each topic a human judge looks at only a tiny portion of the entire document set. TREC was the first to implement a process called pooling [16] to sample the document corpus and build much larger test collections than were previously available. 
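Since the measures used later (MAP and P@10) are simply functions of the ranks at which judged-relevant documents appear, a minimal sketch of average precision for one topic may make this concrete. The code is our own illustration, not taken from any particular evaluation package; any document outside the relevant set counts as not relevant.

```python
def average_precision(ranking, relevant):
    """AP for one topic: `ranking` is the retrieved doc ids in rank order,
    `relevant` is the set of doc ids judged relevant; everything else
    (including unjudged documents) is treated as not relevant."""
    hits, total = 0, 0.0
    for rank, docid in enumerate(ranking, start=1):
        if docid in relevant:
            hits += 1
            total += hits / rank
    return total / len(relevant) if relevant else 0.0

# MAP is then simply the mean of average_precision over all topics in the collection.
```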
In pooling, the set of documents to be judged for a topic, the pool, is the union of the documents retrieved in the top \ud835\udf06ranks over a given set of runs (such as the runs submitted to a particular evaluation, for example). Larger values of \ud835\udf06lead to more documents in the pools and produce deeper pools than smaller values of \ud835\udf06. The assessor for the topic assigns a relevance judgment to each document in the pool. Any document that was not in the pool is thus not judged; any such unjudged document encountered in the evaluation of a run is assumed to be not relevant. The rationale of pooling is the belief that taking sufficiently many top-ranked documents from a diverse set of effective runs will capture most relevant documents such that treating all other documents as not relevant can still yield a reliable evaluation. All of the early TREC collections, including the TREC-8 ad hoc collection, were built using pooling. Every year, each TREC participant submitted a handful of different runs. Pools were constructed using a subset of the submitted runs from each participant, with the total number of runs contributing to the pools determined by the judgment budget. Runs that contributed to the pools are called \u201cjudged runs\u201d and the remainder are \u201cunjudged runs\u201d. Furthermore, each run is also designated as being either automatic or manual. An automatic run is a run in which there was no manual intervention of any kind to produce the ranked lists of documents from the topic statements; a manual run is anything else, which may encompass simple tweaks to the topic statement to intensive interaction with a retrieval system (including manual and possibly iterative query formulation, relevance assessment for feedback, etc.). Zobel showed that the quality of a collection built through pooling depends on both the diversity of the runs and the depth (\ud835\udf06) to which the pools were constructed, but found the TREC collections of the day to be reliable in that they evaluated unjudged runs fairly [28]. Analysis of the TREC-8 collection immediately after its construction using a variant of the process Zobel used found it, too, to be reliable [22]. The process simulated the evaluation of \u201cnew\u201d retrieval methods by removing from the qrels those relevant documents that only a single participant contributed to the pools and comparing the evaluation of that participant\u2019s runs when using either the full or the reduced qrels. For TREC-8, manual runs contributed most of the unique relevant documents and were consequently most affected by the removal of their uniquely retrieved relevant documents. The change in evaluation scores for TREC-8 automatic runs with and without their own uniques was negligible, probably because the manual runs had retrieved so many relevant documents. The quality of the pools is known to be significantly enhanced by the presence of recall-oriented manual runs such that the organizers of the NTCIR workshops performed their own manual runs to supplement the pools when building their first collections [8]. Unfortunately, pooling has its own size dependency and cannot be used to create reliable collections for arbitrarily large document corpora without arbitrarily large judgment budgets [3]. New ways of building test collections and new evaluation measures that accommodate missing judgments continue to be active research areas. 
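Both the pool construction and the uniques test just described reduce to simple set operations over the run files and the qrels. The following is a schematic Python sketch with our own hypothetical data structures, not code from TREC or the cited studies.

```python
def build_pool(runs, depth):
    """Depth-`depth` pool for one topic: union of each run's top-ranked documents."""
    pool = set()
    for ranking in runs.values():          # runs: {run_id: ranked list of doc ids}
        pool.update(ranking[:depth])
    return pool

def remove_uniques(qrels, pool_contributions, participant):
    """Uniques test for one topic: drop relevant documents that only `participant`
    contributed to the pool; that participant's runs are then re-scored elsewhere.
    qrels: {doc_id: 0/1}; pool_contributions: {participant_id: set of pooled doc ids}."""
    others = set()
    for p, docs in pool_contributions.items():
        if p != participant:
            others |= docs
    return {d: rel for d, rel in qrels.items() if rel == 0 or d in others}
```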
The available tools for gauging test collection quality, such as the uniques test, all rely on runs available during collection construction, and thus indicate true problems when they detect a problem with a collection but may not detect problems that nonetheless exist (the "unknown unknowns"). This paper does not offer new methods for gauging collection quality, but our results do confirm that the TREC-8 collection scored our new neural runs fairly. 3 METHODS This section first describes the process used to obtain new judgments for the TREC-8 collection and then describes the new runs themselves. 3.1 Pooling The TREC-8 ad hoc collection contains approximately 525,000 full-text documents drawn from the Financial Times, the Los Angeles Times, the Foreign Broadcast Information Service, and the Federal Register and 50 topics (numbers 401–450). The TREC task from which the collection was created received a total of 129 runs from 41 participants. Pools were created using λ = 100 over 71 of these runs, resulting in a total of 86,830 judged documents across all 50 topics, with the smallest pool containing 1046 documents and the largest 2992. Thirteen of the submitted runs were manual runs and the rest were automatic runs. See the TREC-8 overview paper for more details about the collection [22]. The five new runs that are the subject of this analysis consist of two BM25-based baselines and representative samples of three transformer-based retrieval models. These five runs are described in detail in Section 3.2. For the collection to be unfair to these runs, the runs would have to retrieve unjudged documents that are in fact relevant, and determining that requires additional human relevance judgments. But adding new judgments to an existing test collection is fraught with complications. Relevance is known to be idiosyncratic to the individual assessor making the judgment [19], and cherry-picking documents from a small set of runs risks biasing the qrels in favor of those runs. We used the following procedure to obtain new judgments to control for these factors as much as possible. We constructed depth-50 pools using the five new runs and 52 of the 58 unjudged runs submitted to TREC-8. The six unjudged runs from TREC-8 that were again not pooled are ineffective runs (MAP scores less than 0.1 as evaluated on the original qrels) that contain disproportionately many unjudged documents. For each topic, any previously-judged document in the new pool was removed, and a TREC assessor judged the remainder. In keeping with the original TREC-8 judgment protocol, assessors assigned binary judgments of not relevant or relevant to each document in the (remainder) pool, and were instructed to judge a document as relevant if any part of it was relevant. The TREC assessor for a topic for the new pools was not the same assessor for that topic as in TREC-8. (The TREC-8 assessors were not available, and after 20+ years since they last assessed the topics they would have been essentially different assessors, anyway.) Figure 1: The number of documents in the remainder pool and the number of relevant documents found in it, conditioned on the number of relevant documents in the original qrels for each of the 50 topics. No new relevant documents were found for 18 of the topics. The current assessor was given access to the previous qrels and asked
to review those judgments to get a sense of how the original assessor judged the topic before beginning their own judgments. The combined set of judgments may well be less internally-consistent than the original set, but any such conflicts are unlikely to matter for this experiment. The TREC assessor has no knowledge of which system retrieved which documents and so cannot be systematically biased for or against particular systems. Historically, assessors like to find relevant documents so they are unlikely to arbitrarily declare documents to be not relevant. Further, assessors tend to disagree on \u201cedge case\u201d documents and our main concern is new runs retrieving clearly relevant but previously unseen documents. The total number of documents in the remainder pools was 3842 with the smallest pool containing 9 documents and the largest 359. The total number of relevant documents found is 158, with 17/359 new relevant documents found for topic 417 and no new relevant documents found for 18 topics. Figure 1 shows the number of newly judged documents and the corresponding number of relevant documents found per topic (on the \ud835\udc66-axis) conditioned on the number of relevant documents for the topic in the original qrels (on the \ud835\udc65-axis). Contrary to Zobel\u2019s [28] and Harman\u2019s [6] findings that topics with large relevant set sizes have even more relevant in the unjudged documents, no such correlation between the number of existing and newly found relevant documents is apparent in this case. Consistent with their findings, though, the newly found relevant documents are not concentrated in a small set of runs (see Figure 2). 3.2 Retrieval Runs We began with two bag-of-words baselines produced by the Anserini IR toolkit, which is built on the open-source Lucene search library to support reproducible research [24]: \u2022 Anserini BM25: Lucene\u2019s implementation of BM25 [14], which can be viewed as a BM25 variant (see detailed discussions in Kamphuis et al. [7]). \u2022 Anserini BM25+RM3: BM25 with the RM3 [1] pseudo relevance feedback technique, as described in Yang et al. [25]. This provides a competitive baseline, especially with respect to pre-BERT neural models. In addition, we also generated three new runs with neural models: \u2022 monoBERT + MaxP: a reranking model based on monoBERT [13] that takes advantage of the MaxP technique [4] to overcome the length limitations associated with transformers. Here, we rerank the output of BM25 from Anserini (see above). Our implementation is described in Zhang et al. [27] and trained on the MS MARCO (V1) passage data.2 \u2022 TCT-ColBERT (v2) [12]: a representative example of the class of so-called dense retrieval models that takes advantage of transformers to convert documents into dense vectors. Retrieval is then recast as a nearest neighbor search problem in vector space. To address the length limitations associated with transformers, documents are first segmented into passages, and each passage is encoded independently. At retrieval time, the highest-scoring passage score is taken as the score of the document it came from to generate a document ranking for evaluation. The encoder models are trained on the MS MARCO (V1) passage data. \u2022 uniCOIL (with doc2query\u2013T5 expansions) [10]: a representative example of the class of so-called sparse retrieval models. 
These models likewise take advantage of transformers to generate vector representations from documents and queries, but the main difference here is that these models retain the vocabulary space as the basis of the vectors, and thus they can be viewed as bag-ofwords weighting functions that are learned from large amounts of data. As with TCT-ColBERT, documents are segmented into passages and independently encoded, and retrieval (which can be performed with standard inverted indexes) likewise takes the highest passage score as the document score. The encoder models are trained on the MS MARCO (V1) passage data. Together, these models cover the three main ways that transformers are used today for retrieval [11]: reranking bag-of-words candidates, dense retrieval models, and sparse retrieval models. At a high level, while none of the three would be considered \u201cstate of the art\u201d (SOTA) in terms of standard benchmark datasets, they can be fairly characterized as competitive models against which putative SOTA models would be evaluated. Note that for simplicity, these models have all been trained on the MS MARCO (V1) passage test collection and applied for retrieval (inference) in a zero-shot manner. For reranking approaches (e.g., monoBERT), there is substantial evidence that they are able to maintain high levels of effectiveness even when applied to texts beyond the domain on which it is trained [2, 9, 11]. That is, rerankers exhibit good cross-domain generalizations with respect to relevance. In contrast, there is evidence that dense retrieval models in general have difficulty with cross-domain generalization, with evidence from multi-domain datasets such as BIER [18]. There appears to be some evidence that sparse retrieval models may generalize across 2https://github.com/microsoft/msmarco \fdomains better [5], but evidence here is more scant. The crossdomain generalization deficiencies of dense and sparse retrieval models (and how to rectify the situation) is the subject of ongoing research, but to our knowledge there have not emerged best practices that we can simply \u201cdrop in\u201d for these experiments. Thus, we decided on zero-shot inference so as to not conflate aspects of modeling approaches not germane to our research question. We acknowledge that this is a weakness in our design, as we further discuss in Section 5. 4 RESULTS Our research question is whether the five new runs described in Section 3.2 are evaluated fairly by the original TREC-8 test collection. Put differently, would a researcher using the original test collection to compare the effectiveness of one of the new runs to a TREC-8 submission (that contributed to the pools) reach the same conclusion had the new run also contributed to the pools? To answer this question, we simply evaluate all 134 runs (129 TREC-8 submissions plus 5 new runs) using both the original qrels and an expanded qrels that is the union of the original plus new judgments and rank the runs by mean evaluation score. If the two rankings of runs are almost the same, this suggests that the new runs can indeed be fairly evaluated, and that the original collection is reliable. We use Kendall\u2019s \ud835\udf0fmeasure of association [17] as the similarity measure of system rankings to operationalize \u201calmost the same\u201d. Kendall\u2019s \ud835\udf0fcomputes a normalized count of the number of pairwise swaps it takes to turn one ranking into the other. 
The τ ranges from -1.0 to 1.0, where 1.0 indicates the rankings are identical, -1.0 indicates the rankings are exact opposites of one another, and 0.0 indicates the rankings are uncorrelated. Kendall's τ is not an ideal similarity measure [15]. Its values depend on the number of items being ranked, so they are coarse-grained when only a few items are ranked. The values are also sensitive to the average difference in mean scores, so small differences in average scores that are not meaningful in practice may still change the order of systems, making rankings look less similar than they actually are. But for our purposes, where there are 134 runs in the ranking and the rankings are generally stable, τ is less problematic. The implementation of Kendall's τ used here handles tied scores in the rankings by omitting the tied run pair from the computation. We used both mean average precision (MAP) and mean precision at ten documents retrieved (P@10) as evaluation measures. The Kendall's τ between system rankings for MAP is 0.9933 and for P@10 is 0.9991, indicating very consistent ranking of systems by the two different qrels. Table 1 reports the MAP scores and corresponding ranks over all evaluated runs for the best run overall (a manual run), the best automatic run, the median run, and the five new runs as computed using the original and expanded qrels. As the τ indicates, the ranks of the runs change minimally. The absolute value of the MAP scores decreases when computed using the expanded qrels: the expanded recall base decreases the runs' scores more than retrieving additional relevant documents helps, since each run retrieves at most only a few additional relevant documents.
Run | MAP (orig) | rank | MAP (exp) | rank
Top manual run | 0.4692 | 1 | 0.4587 | 1
Top automatic run | 0.3303 | 11 | 0.3262 | 11
Median run | 0.2602 | 66 | 0.2568 | 67
BM25 | 0.2515 | 76 | 0.2497 | 74
BM25 + RM3 | 0.2750 | 50 | 0.2721 | 50
monoBERT + MaxP | 0.2728 | 52 | 0.2721 | 51
TCT-ColBERT | 0.2209 | 96 | 0.2198 | 96
uniCOIL | 0.2343 | 85 | 0.2325 | 84
Table 1: The effectiveness of a few selected runs and our additional runs, using the original qrels (orig) and the expanded qrels (exp). Ranks are out of 134 total runs (129 TREC-8 submissions plus 5 new runs).
Qualitatively, the effectiveness of the runs and their rank positions are generally within expectations, although there are a few surprises. The bag-of-words BM25 baseline is roughly "middle of the pack", which makes sense since BM25 "of today" is likely not very different from BM25 and comparable bag-of-words models from two decades ago in terms of effectiveness. Pseudo relevance feedback (RM3) improves over bag-of-words BM25, once again as expected, and the improvements are consistent with the literature. It was also expected that we see improvements from monoBERT + MaxP compared to BM25 (which the reranker uses as a source of candidates), but the amount of improvement is somewhat disappointing: transformer-based reranking only achieves effectiveness comparable to pseudo relevance feedback.
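The ranking comparison methodology above (ordering all runs under each qrels set and correlating the two orderings) can be reproduced in a few lines with SciPy. This sketch uses our own variable names; note that SciPy's default tau-b treats ties slightly differently from the pair-omission strategy described in the text.

```python
from scipy.stats import kendalltau

def ranking_correlation(scores_a, scores_b):
    """Kendall's tau between two system orderings.
    scores_*: {run_id: mean score (e.g., MAP) under one qrels set}."""
    runs = sorted(scores_a)
    tau, _ = kendalltau([scores_a[r] for r in runs],
                        [scores_b[r] for r in runs])
    return tau
```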
That is, a lot of "effort" with neural models was expended to achieve only what can be obtained with a technique that is over a decade old.3 This result appears inconsistent with work on the TREC 2004 Robust Track (which uses the same document corpus as TREC-8 but with additional topics), where researchers have reported quite impressive scores, even besting the most effective run from the participants [2, 9]. The fact that TCT-ColBERT and uniCOIL underperform the BM25 baseline is consistent with previous work, given that the model is trained on another collection and applied in a zero-shot manner [18]. It is worth noting that in our experiments, these neural runs are the first and only runs that we generated, with no tuning (of, for example, inference-time hyperparameters) or consultation of the evaluation scores whatsoever. Examination of the number of newly found relevant documents per run, shown in Figure 2, explains why the rankings are as consistent as they are. The figure plots the number of newly found relevant documents against the number of previously-unjudged documents retrieved by a run in the top 100 ranks over all topics for each of the 57 runs that contributed to the new pools. The TCT-ColBERT run returned both more previously unjudged documents and more newly found relevant documents than any of the TREC-8 submissions, and monoBERT + MaxP and TCT-ColBERT each retrieved the maximum of 23 newly found relevant documents. But 23 additional relevant documents in a run is an average of slightly less than one additional relevant document for every two topics, and most runs added fewer than 10 additional relevant (an average of one additional relevant for every five topics). 3We did not experiment with reranking BM25 + RM3. Figure 2: Number of newly found relevant documents vs. number of previously unjudged documents found in the top 100 ranks totaled over all topics. Unjudged TREC-8 submissions are plotted with a dot; new runs are plotted with the first character of their run name (B for both BM25 baselines, U for uniCOIL, T for TCT-ColBERT, and M for monoBERT). Since averages can often hide significant variance among individual topics, we computed per-topic τ scores to check for topics that were impacted by newly found relevant documents. For each of the 50 topics, we ranked the systems by their scores on that topic as computed using each of the qrels and computed the τ between the two rankings. For P@10, topic 401 had a τ of 0.9988 and all other topics had a τ of 1.0. There was more variability for MAP, with individual topic τ values ranging from 0.8852 to 1.0 (the 18 topics with no newly found relevant had τ values of 1.0). The one topic with τ < 0.9 is topic 432, which has 28 relevant documents in the original qrels and for which an additional four relevant documents were found. Most automatic runs have very poor effectiveness for topic 432; the several runs that retrieved one or two of the newly found relevant documents had very large changes in rank even though the average precision score did not change much in absolute terms. The run with the largest change in ranks was the monoBERT run, which improved 53 ranks with a change in AP score from 0.0022 to 0.0188.
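The per-run quantities plotted in Figure 2 can be tallied directly from the run files and the two judgment sets. The sketch below is ours, with hypothetical data structures standing in for the actual TREC file formats.

```python
def per_run_counts(run, orig_judged, new_relevant, k=100):
    """Totals for one run over all topics: previously-unjudged documents retrieved
    in the top k ranks, and newly found relevant documents among them.
    run: {topic: ranked doc ids}; orig_judged: {topic: set of doc ids judged in TREC-8};
    new_relevant: {topic: set of doc ids newly judged relevant}."""
    unjudged = new_rel = 0
    for topic, ranking in run.items():
        for doc in ranking[:k]:
            if doc not in orig_judged.get(topic, set()):
                unjudged += 1
                if doc in new_relevant.get(topic, set()):
                    new_rel += 1
    return unjudged, new_rel
```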
5 DISCUSSION The ranking of runs by the expanded qrels is nearly identical to the one when ranked by the original qrels, so we conclude that the TREC-8 collection is reusable. This conclusion appears to contradict the warning by Yilmaz et al. that the evaluation of neural methods on collections built solely from traditional methods may be unfair [26], but the source of disagreement is in the quality of the respective collections. The quality of a pooled collection is strongly dependent on the effectiveness of the runs from which the pools were taken, the diversity of those runs, and the pool depth [20, 28]. In contrast, Yilmaz et al. were explicitly interested in shallow pools, and so the resulting evaluation using them can be expected to be noisy. Our claim of reusability is specific to the TREC-8 collection, though the results are likely to apply to other similarly-constructed collections such as the TREC-6 and TREC-7 ad hoc collections. Our experiments also do not prove that there cannot be some other retrieval method that would produce a run\u2014we\u2019ll call magical\u2014 that would in fact be evaluated unfairly. The only way to prove that is to judge the entire document set for each topic.4 However, we believe that the existence of run magical is unlikely. For magical to be evaluated unfairly it would have to both discover a sufficient number of new relevant documents and also rank those new relevant before the majority of the known relevant for sufficiently many topics (otherwise, the existing qrels will score it correctly). Such a result is made even more unlikely by the fact that TREC topic development process, in which the TREC assessor who authors a topic performs a few manual searches to estimate the likely number of relevant, was designed to create topics that are expected to have fairly limited relevant sets [6]. The argument against the existence of run magical also addresses a possible objection to our experimental methodology: that the dense retrieval model (TCT-ColBERT) and the sparse retrieval model (uniCOIL) were used in a zero-shot manner. Due to challenges in cross-domain inference, the effectiveness of both approaches is low, even worse than bag-of-words BM25 with Anserini (which is unsurprising). Had we performed appropriate domain adaptation on the dense and sparse retrieval models, they would have been more effective overall, and perhaps would have uncovered more previously unseen but relevant documents. As already argued, such a result is possible, but unlikely. Furthermore, there has not, to our knowledge, emerged a consensus on domain adaptation best practices for such models, and thus any technique we apply runs the risk of being idiosyncratic. To appropriately use in-domain training data would necessitate some type of cross-validation setup, which would render our experimental setup needlessly complicated. We feel that we have made the appropriate design choices in this first attempt to answer our core research question, and leave more nuanced examination of these additional factors for future work. The quality of the TREC-8 collection has long been attributed to the effective manual runs that contributed to the pools during its construction. In principle, it is the effectiveness of a run and not the type of the method that creates it that matters, but to date only manual enrichment of the pools has been sufficiently effective. 
Since neural methods, especially dense retrieval models, attempt to overcome the kinds of semantic mismatch that traditional bagof-words methods are susceptible to and humans handle with ease, we examined whether the original TREC-8 collection would have been as good had it been built using the original TREC-8 automatic runs and the new neural runs but no manual runs. 4Including the new judgments, on average each topic has 1800 judged documents (or 0.3% of the document corpus). \fTo accomplish this, we removed all the relevant documents that had been contributed to the original pools by manual runs only, and merged those modified pools with depth-100 pools built from the five new runs and the originally unjudged TREC-8 submissions (minus the three manual submissions from that set). Of the 1131 relevant documents that only manual runs contributed to the original pools across all 50 topics, fewer than 100 were recovered by the new pools. The TCT-ColBERT run found 50 of these documents, the monoBERT run found 22, and the uniCOIL run 17 (some of the same documents were found by more than one run). Each of the remaining runs found substantially fewer documents; the BM25+RM3 run found just two, for example. So there is some support for the claim that the neural methods are finding different documents than traditional (automatic) methods. There is a total of 4728 relevant documents in the original qrels, so the loss of 1000 relevant documents is almost a fifth of the known relevant. Nonetheless, the Kendall\u2019s \ud835\udf0fcorrelation between rankings of runs evaluated with the original qrels and evaluated with the minus-manual-runs qrels is still very high (0.9964 for P@10 and 0.9818 for MAP). How can this be so? The manual runs are so much better than the other runs that they still evaluate as better using only the documents they retrieved in common with the automatic runs; the automatic runs did not retrieve the lost documents in the top 100 ranks (by definition of them being manual-only), so were mostly unaffected by their loss; and there are only three neural runs (which did have modest movement in the system rankings). 6" + } + ], + "Jimmy Lin": [ + { + "url": "http://arxiv.org/abs/2404.15279v1", + "title": "Jointly Modeling Spatio-Temporal Features of Tactile Signals for Action Classification", + "abstract": "Tactile signals collected by wearable electronics are essential in modeling\nand understanding human behavior. One of the main applications of tactile\nsignals is action classification, especially in healthcare and robotics.\nHowever, existing tactile classification methods fail to capture the spatial\nand temporal features of tactile signals simultaneously, which results in\nsub-optimal performances. In this paper, we design Spatio-Temporal Aware\ntactility Transformer (STAT) to utilize continuous tactile signals for action\nclassification. We propose spatial and temporal embeddings along with a new\ntemporal pretraining task in our model, which aims to enhance the transformer\nin modeling the spatio-temporal features of tactile signals. Specially, the\ndesigned temporal pretraining task is to differentiate the time order of\ntubelet inputs to model the temporal properties explicitly. 
Experimental\nresults on a public action classification dataset demonstrate that our model\noutperforms state-of-the-art methods in all metrics.", + "authors": "Jimmy Lin, Junkai Li, Jiasi Gao, Weizhi Ma, Yang Liu", + "published": "2024-01-21", + "updated": "2024-01-21", + "primary_cat": "eess.SP", + "cats": [ + "eess.SP", + "cs.AI" + ], + "main_content": "Introduction Similar to visual and acoustic signals, tactile signals are important for modeling and understanding humans. In recent years, various wearable electronics have been designed to collect tactile signals, which are widely used in multiple scenarios, especially in healthcare and robotics (Zhu et al. 2019; Fan et al. 2020; Lou et al. 2020; Okunevich et al. 2021). The collected tactile signals can be utilized for different purposes, and one of their main applications is the action classification task. Sundaram et al. (2019) propose to identify hand actions by tactile signals with sensors in gloves. Luo et al. (2021) and Wicaksono et al. (2022) use wearable electronic socks to collect tactile signals for feet action classification. Figure 1 is an example, where the continuous tactile signals are collected by e-textile sensors in socks, and then used to classify the action (e.g., walking, etc.). Tactile signals are spatially and temporally sensitive, hence utilizing their spatio-temporal features is important for action classification. Firstly, tactile signals are spatially sensitive as they are not translation invariant. The same signals in different positions (i.e., collected by various sensors) *Corresponding authors. Copyright \u00a9 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Figure 1: An overview of action classification based on tactile signals collected by wearable electronic socks. indicate distinct actions. For example, the same signals collected by sensors located in different positions should be classified as standing on toes or heels, respectively. Secondly, tactile signals are temporally sensitive as they are collected regularly with high frequency, e.g., in 10HZ (10 data points per second), and the time order of these signals is informative. For example, if ignoring the order of collected signals, two signal sequences collected by the same sensor from distinct actions may be seen as identical actions (i.e., same elements with different orders), which becomes useless in classification. Furthermore, we want to point out that jointly modeling spatial and temporal features is essential for tactile signals in action classifications. We conduct an empirical study on a real-scenario dataset (Luo et al. 2021), and draw the spatial and temporal features of different actions in Figure 2. The heatmap of each action shows the averaged results of all samples, which indicates spatial features. The temporal change of a specific sensor shows the averaged sequence data of all samples collected by this sensor, which indicates temporal features. As shown in Figure 2(a), two actions, stand on toes and lean left, have similar temporal features but different spatial features. However, in Figure 2(b), two actions, upstairs and walk fast, have similar spatial features but different temporal features. These observations verify tactile signals\u2019 spatio-temporal features, and further indicate that only using one of them is inadequate for classification. 
However, existing tactile methods lack the ability to capture the mentioned temporal and spatial nature of tactile signals simultaneously. On the one hand, most previous tactile-related studies adopt CNN-based methods to model arXiv:2404.15279v1 [eess.SP] 21 Jan 2024 \fFigure 2: Empirical study of actions in a tactile dataset. Heatmaps are the averaged results of all samples collected by sensors in the left foot, and the tactile sensor of Figure 2(a) and 2(b) is located at positions (5,20) and (28,19) of the left foot, respectively. the tactile signal frames and then combine them by concatenating or sequential models, which fail to jointly capture their translation variance and temporal properties (Luo et al. 2021; Sundaram et al. 2019; Gao et al. 2020). On the other hand, various transformer models have been designed to handle different continuous signals. But most of them (Zerveas et al. 2021; Tong et al. 2022; Amiridi, Darnell, and Jewell 2022) focus on temporal features, which are inadequate to model tactile signals\u2019 spatial nature, especially the translation variant property. In this paper, we design a Spatio-Temporal Aware tactility Transformer (STAT) to utilize tactile signals for action classification, which utilizes their temporal and spatial features simultaneously. We design spatial and temporal embeddings to explicitly model the translation variant and sequential features of tactile signals, respectively. Additionally, we introduce a temporal pretraining task to enhance the modeling of temporal features by distinguishing the time order of signal tubelets. After pretraining the STAT transformer, the embedding of the [CLS] token is utilized for action classification. Experimental results on tactile show that our model outperforms all baseline methods in all metrics, including state-of-the-art multivariate and video classification models. Further analyses verify the effectiveness of the proposed pretraining task and embeddings. To the best of our knowledge, this is the first transformer model designed for tactile signals by jointly modeling spatio-temporal features, which can be applied to various tactile-related scenarios. Related Work Action Classification with Tactile Signals In recent years, various wearable electronics have been designed to model user actions based on tactile signals in different scenarios. Luo et al. (2021) design wearable electronic socks to classify user walking actions. Noh et al. (2021) use tactile signals in healthcare scenarios, which predict the fall risk of users. Robotic studies also point out that modeling tactile signals are important in understanding humans (Kragic et al. 2018; Negre et al. 2018). Despite the importance of tactile signals in various scenarios, we find that previous tactile classification models are unable to capture the spatial and temporal properties of tactile signals simultaneously. Sundaram et al. (2019) use CNN to capture the embedding of each frame and simply concatenate them for action classification. Recent studies enhance this method by adopting a GRU/LSTM model rather than concatenation to model the sequential features (Luo et al. 2021; Okunevich et al. 2021; Gao et al. 2020). Cao et al. (2020) introduces temporal attention operation combined with spatial features in separate phases. However, as CNNs are designed to utilize the translation invariance features, they fail to capture tactile signals\u2019 translation variance. 
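To make the contrast with these prior pipelines concrete, the frame-wise CNN followed by a GRU/LSTM aggregator can be sketched roughly as follows. This is our own schematic PyTorch example; the layer sizes are arbitrary and are not taken from the cited papers.

```python
import torch
import torch.nn as nn

class CNNGRUBaseline(nn.Module):
    """Schematic frame-CNN + GRU classifier in the spirit of prior tactile work."""
    def __init__(self, num_classes, hidden=128):
        super().__init__()
        self.frame_cnn = nn.Sequential(              # one embedding per tactile frame
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(16 * 4 * 4, hidden),
        )
        self.gru = nn.GRU(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, x):                            # x: (batch, T, H, W)
        b, t, h, w = x.shape
        feats = self.frame_cnn(x.reshape(b * t, 1, h, w)).reshape(b, t, -1)
        _, last = self.gru(feats)                    # aggregate the frame sequence over time
        return self.head(last[-1])
```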
Different from previous studies, we design a new transformer model to jointly capture the spatio-temporal features of tactile signals for action classification. Transformers for Continuous Signals Transformer models (Vaswani et al. 2017) have achieved great success in continuous signal classification tasks, e.g., videos and multivariate continuous signals (Tong et al. 2022; Zerveas et al. 2021; Zhao et al. 2022). We briefly review related transformers here, especially video transformers, as the input shape of videos is similar to tactile signals. Existing transformer models fail to utilize the spatial and temporal features of tactile signals simultaneously. On the one hand, most video transformers use the visual transformer (Dosovitskiy et al. 2021) as a backbone model, and further propose new masking or input strategies (Arnab et al. 2021a; Bertasius, Wang, and Torresani 2021; Yan et al. 2022a; Liu et al. 2022; Yan et al. 2022b). Recent models propose to capture the spatio-temporal features of videos, e.g., VideoMAE (Tong et al. 2022), SSTSA (Alfasly et al. 2022). However, as video transformers aim to model the translation invariant of videos, they only use position embeddings in encoding, which fail to model the translation variant spatial property of tactile signals. On the other hand, most transformer methods proposed for multivariate continuous signals focus on modeling their temporal features, while ignoring the spatial relations among different signals (i.e., where the signals are collected), such as TST (Zerveas et al. 2021) and the transformer proposed by Hannan et al (2021). Furthermore, most transformer models rely on the masking and reconstruction pretraining task (Devlin et al. 2019; Bao et al. 2022; Tong et al. 2022), which cannot explicitly capture the temporal/spatial features of continuous signals. Although these transformers are not designed for tactile signals, we will use them to verify the effectiveness of STAT. \fFigure 3: An overview of STAT model. Spatial and temporal embeddings are designed to jointly capture both properties. Approach Problem Statement Our goal is to utilize tactile signals to classify user actions, where the data can be collected by various wearable electronics. The wearable devices often arrange sensors as a matrix, so we define the data matrix collected in a specific time point as a frame. Then, the task is defined as follows. Given a tactile signal tensor X \u2208RC\u00d7T \u00d7H\u00d7W , where C represents the number of wearable devices, T represents the length of signal sequences (i.e., T frames), H and W mean the number of sensors in each column/row (i.e., the shape of frames), respectively. An example of tactile signals is shown in Figure 4(a). Xci,tj,hk,wl represents the value collected by the sensor of device ci in position (hk, wl) at time tj. Each tactile segment X has an activity label y, and the total number of activity types is M. Our target is to accurately classify the given tactile signal X to its label y. Overview We propose a spatio-temporal aware tactility transformer for the action classification task based on tactile signals, which is named STAT. A new pretraining task and two extra embeddings are designed to capture the temporal and spatial features of tactile signals jointly in STAT. Firstly, the designed spatio-temporal aware transformer encoder is introduced. We convert the tactile signal tensor to a tubelet sequence. 
Besides the widely used tubelet and position embeddings, we propose to add spatial and temporal embeddings to capture each tubelet\u2019s temporal and spatial features, respectively. Then, multi-layer transformer encoders are adopted to calculate the representations of tactile signals. Then, the adopted pretraining tasks are defined. Aside from the common masking and reconstruction loss, we designed a temporal pretraining task to explicitly discriminate the time order of tubelet pairs. Finally, we show how to adopt our model for action classifications. Spatio-Temporal Aware Transformer Encoder We will introduce the designed spatio-temporal aware transformer encoder shown in Figure 3. To simplify the notations, we only show the process for handling tactile tensor collected from one wearable device (i.e., X \u2208RT \u00d7H\u00d7W ), as we can easily expand our model to C-channel transformers to utilize signals collected by C devices. Tubelet Inputs As the spatial and temporal dimensions of the tactile signals can be redundant, directly adopting the whole data in classification may result in reduced efficiency. Motivated by previous video transformer models that convert the video clip into tubelets to alleviate the spatiotemporal redundancy, we follow these studies by transferring the tactile signals into a tubelet sequence (Arnab et al. 2021b; Liu et al. 2021; Fan et al. 2021; Tu et al. 2022). We define a tubelet as Q \u2208RL\u00d7P \u00d7P , where L represents its sequence length (i.e., the number of frames) and P represents the patch size (i.e., height and width). Figure 4(b) shows some examples of the converted tubelets, and the total number of tubelets for a tactile signal tensor X is Ntube = THW/(LP 2). Figure 4: (a) Visualization of tactile signal X \u2208RT \u00d7H\u00d7W . (b) Tubelet inputs, where each tubelet Q \u2208RL\u00d7P \u00d7P . \fSpatio-Temporal Enhanced Tubelet Embeddings Most video transformer models adopt the tubelet embeddings and position embeddings as the input of transformer encoders (Dosovitskiy et al. 2021; Tong et al. 2022). However, due to the fact that tactile signals do not have the translation invariance property as images/videos, simply adopting these settings cannot capture the spatial features of tactile signals. Additionally, jointly modeling spatial and temporal features are also essential in distinguishing actions, as shown in Figure 2. Thus, we propose to add spatial embeddings and temporal embeddings for each tubelet to capture the spatiotemporal features of tactile signals jointly. Spatial Embeddings. Each tubelet is collected from a patch of sensors, and the sensors are located in certain positions. We use a spatial embedding espatial k to represent where the tubelet signal is collected from, so that the spatial features will be encoded to explicitly model the translation variance. Tubelets collected by the same batch of sensors will get the same spatial embeddings, and the number of spatial embedding types is Nspace = HW/P 2. Following the traditional calculation of position embeddings (Vaswani et al. 2017), we utilize the sinusoidal positional encoding table to calculate the spatial embedding espatial k , where k represents the spatial position and k \u2208 {1, 2, .., Nspace}. The calculation is defined in Equation (1): espatial (k,2d) = sin( k 10000 2d D ) espatial (k,2d+1) = cos( k 10000 2d D ) (1) Where D represents the embedding dimensions, and espatial (k,d) refers to the d-th dimension of espatial k (d \u2208 {0, 1, 2, 3..., D}). 
Through this encoding process, the spatial embeddings can provide the transformer encoder with spatial knowledge of the tactile signals, which contributes to modeling the translation variant features. Temporal Embeddings. For tactile signal tubelets, their temporal features are important in distinguishing various actions. We propose temporal embeddings to represent the location of the tubelet in the time sequence, which refers to when the tubelet is collected. Similar to the spatial embeddings, we use the sinusoidal positional encoding Equation (1) to generate the temporal embedding etemporal k . The number of temporal embedding types is Ntemp = T/L, and tubelets collected in the same frames have the same etemporal k , where k \u2208 {1, 2, ..., Ntemp}. These new embeddings are used to enhance the model with additional information about the spatial and temporal properties of tactile signals jointly, and Figure 3 shows an example. Then, we aggregated the proposed two embeddings with tubelet and position embeddings to calculate the input matrix Einput of transformer encoders by Equation (2). Ultimately, the aggregation of embeddings allows for the simultaneous embedding of spatial and temporal properties. Einput = Etubelet + Eposition + Espatial + Etemporal (2) As shown in Figure 3, we append a [CLS] token at the beginning of the tubelet sequence, which is often used to represent the whole embedding sequence in transformer models. The tubelet, position, temporal, and spatial embeddings of this token are randomly initialized and optimized during training. Hence, we have Einput = [Einput [CLS], Einput Q1 , ..., Einput QNtube] \u2208R(Ntube+1)\u00d7D. Transformer Encoders We utilize the classical transformer encoder (Vaswani et al. 2017) as the backbone network, whose effectiveness has been verified in various domains and tasks (Devlin et al. 2019; Arnab et al. 2021b; Dosovitskiy et al. 2021). Our transformer encoder takes in Einput defined in the previous subsection. As the transformer encoder often consists of K transformer layers, we note the primary input Einput as E(0), and E(k) = Transformerk(E(k\u22121)), where k \u2208{1, 2, ..., K}. The output of the final layer TransformerK is the encoded representation of input tokens, and E(K) [CLS] is the final embedding of tactile signals. Pretraining Tasks Pretraining has been verified to be an effective technique to enhance transformer models in various scenarios, e.g., BERT for text (Devlin et al. 2019), BEIT for image (Bao et al. 2022), and VideoMAE for video (Tong et al. 2022). To achieve better classification performances, we choose to pretrain our STAT model before applying it to action classifications. We propose to use two pretraining tasks here, as shown in Figure 5. The first one is the masked tubelet reconstruction (MTR) task, which aims to reconstruct the masked input tubelets, which is also used in previous video transformers (Tong et al. 2022). The other one is our designed temporal pretraining task to explicitly model the temporal features of tactile signal tubelets. Although temporal embeddings are helpful in capturing temporal properties, we prefer to add a specific pretraining task due to the importance of temporal features in distinguishing different actions. Figure 5: Illustrations of the adopted two pretraining tasks. \fMasked Tubelet Reconstruction As shown in Figure 5(a), we use a masked auto-encoder to reconstruct the input signals, which is the MTR task in previous studies (Tong et al. 2022; Arnab et al. 2021a). 
MTR randomly masks tubelets from the input and reconstructs them during pretraining; its loss function is defined as follows:

$\mathcal{L}_{MTR} = \frac{1}{|M|} \sum_{t \in M} |V_t - \hat{V}_t|^2$  (3)

where $M$ is the set of masked tubelet indexes, $V$ is the input signal tensor, and $\hat{V}$ is its reconstruction. In particular, we adopt spatial-based random masking instead of randomly masking all tubelets. In this strategy, we randomly select sensor groups from the $N_{space}$ types for masking; all signals collected by the chosen sensors (i.e., tubelets with the same spatial embeddings) are masked. The motivation is that this masking strategy better utilizes the spatial relations among sensors. We leave the mask ratio as a hyper-parameter study.

Temporal Pretraining. Our self-supervised temporal pretraining task enhances the transformer encoder by training it to distinguish the time order of two randomly selected tubelets, so that temporal features are maintained in the model, as shown in Figure 5(b). First, two tubelets $Q_i$ and $Q_j$ are randomly selected from the whole set. We ensure that $Q_i$ and $Q_j$ are collected at different times, so that their temporal embeddings differ (i.e., $e^{temporal}_{Q_i} \neq e^{temporal}_{Q_j}$). Then, we use the encoded embeddings $E^{(K)}_{Q_i}$ and $E^{(K)}_{Q_j}$ of tubelets $Q_i$ and $Q_j$ to identify their time order: if tubelet $Q_i$ is collected earlier than $Q_j$, the label is $y^{temp}_{i,j} = 1$, otherwise 0. We choose a simple but effective way to optimize this task: we concatenate the two embeddings $E^{(K)}_{Q_i}$ and $E^{(K)}_{Q_j}$ and use a linear layer with a sigmoid activation function to predict the time order $\hat{y}^{temp}_{i,j}$, optimized with a binary cross-entropy loss. Moreover, for each tactile signal tensor, selecting only one pair of tubelets for pretraining is not enough, so $N_{comp}$ tubelet pairs are randomly selected and used in pretraining, i.e., $(Q_{i_1}, Q_{j_1}), \ldots, (Q_{i_{N_{comp}}}, Q_{j_{N_{comp}}})$. The loss function is formally defined in Equation (4):

$\hat{y}^{temp}_{i_n,j_n} = \sigma\big(W_{frame}\,(E^{(K)}_{Q_{i_n}} \oplus E^{(K)}_{Q_{j_n}})^\top\big)$,
$\mathcal{L}_{temp} = -\big(y^{temp}_{i_n,j_n} \log \hat{y}^{temp}_{i_n,j_n} + (1 - y^{temp}_{i_n,j_n}) \log(1 - \hat{y}^{temp}_{i_n,j_n})\big)$  (4)

where $\oplus$ denotes vector concatenation, $\sigma$ is the sigmoid activation function, $n \in \{1, 2, \ldots, N_{comp}\}$, and $W_{frame} \in \mathbb{R}^{1 \times 2D}$.

To utilize both tasks in pretraining simultaneously, we add the constraint that only unmasked tubelets are selected for the temporal task, so the two objectives can be optimized together. Specifically, we aggregate the MTR loss with our temporal loss through a weight coefficient $\beta$, which is a hyper-parameter. The final pretraining loss is defined as follows:

$\mathcal{L}_{pretrain} = \mathcal{L}_{MTR} + \beta \mathcal{L}_{temp}$  (5)

With the spatial-based random masking strategy in the MTR task and the designed temporal pretraining task, we enhance the representation ability of the transformer encoder to better capture the spatial and temporal properties of tactile signals jointly. Similar to the pretraining of video transformers, we only use the training set of tactile signals for pretraining, as large-scale open tactile datasets are lacking.

Table 1: Statistics of each type of action.
Downstairs 4,942 | Jump 3,090 | Lean left 5,047 | Lean right 5,011 | Stand 5,024 | Stand toes 3,978 | Upstairs 5,025 | Walk 6,078 | Walk fast 5,360

Fine-Tuning for Action Classification. Having introduced the STAT model and its pretraining tasks, we now show how STAT is trained for action classification.
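Before turning to the fine-tuning head, here is a minimal sketch of the combined pretraining objective in Equations (3)-(5), assuming the encoder returns per-tubelet embeddings and a decoder that reconstructs masked tubelets; all names are illustrative, not the authors' implementation, and the explicit sigmoid plus binary cross-entropy of Equation (4) is implemented with the numerically stable logits variant.

```python
import torch
import torch.nn.functional as F

def pretraining_loss(recon, target, masked_idx, emb, pairs, order_labels, w_frame, beta):
    """Combine masked tubelet reconstruction (Eq. 3) with temporal order
    prediction (Eq. 4) into the final pretraining loss (Eq. 5).

    recon, target : (N_tube, L*P*P) reconstructed and original tubelets
    masked_idx    : indices of the masked tubelets (set M)
    emb           : (N_tube, D) encoder outputs E^(K) for each tubelet
    pairs         : (N_comp, 2) indices (i_n, j_n) of unmasked tubelet pairs
    order_labels  : (N_comp,) 1 if Q_{i_n} was collected earlier than Q_{j_n}, else 0
    w_frame       : torch.nn.Linear(2 * D, 1), i.e., W_frame in Eq. (4)
    beta          : weight of the temporal loss in Eq. (5)
    """
    # Eq. (3): mean squared reconstruction error over the masked tubelets only.
    l_mtr = F.mse_loss(recon[masked_idx], target[masked_idx])

    # Eq. (4): predict the time order of each pair from the concatenated embeddings.
    pair_emb = torch.cat([emb[pairs[:, 0]], emb[pairs[:, 1]]], dim=-1)   # (N_comp, 2D)
    logits = w_frame(pair_emb).squeeze(-1)                               # (N_comp,)
    l_temp = F.binary_cross_entropy_with_logits(logits, order_labels.float())

    # Eq. (5): weighted sum of the two objectives.
    return l_mtr + beta * l_temp
```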
We follow the approach of other transformer models by using the embedding of [CLS] token to represent the entire signal sequence. Firstly, we take the embedding of [CLS] token from the last transformer layer block (i.e., E(K) [CLS]), which represents the whole input signal. Then, we add a linear layer on the top of this embedding to classify it into action types (shown in Figure 1). The loss function is: \u02c6 yi =\u03b4(Wc(E(K) [CLS])\u22a4+ bc) L =CrossEntropy(\u02c6 yi, yi) (6) Where \u03b4 is the softmax activation function, Wc \u2208RM\u00d72D and bc \u2208R1\u00d7M, and yi is a one-hot vector where only the index of the true label is 1. Experiments Experimental Settings Dataset As tactile action classification is a promising new application scenario that is under development, there is only one large-scale open dataset by far. So our experiments are conducted on the public tactile signal dataset1, which is collected by individuals with two wearable electronic socks to perform specific actions. The dataset consists of tactile signals with 9 labeled actions, namely walking, leaning on the left foot, leaning on the right foot, climbing downstairs, climbing upstairs, jumping, standing on toes, fast walking, and standing upright. The statistics are shown in Table 1. T, H, and W are set to 45, 32, and 32, respectively. As the sampling frequency is 15HZ, each piece of data is collected in 3 seconds. Following the providers\u2019 settings, 500 and 1,000 samples of each action are used in validation and testing, respectively, and the other samples are used in training (each action type will be sampled to 4,000 samples). Only the training set will be adopted for model pretraining to avoid data leakage. 1http://senstextile.csail.mit.edu/ \fModels ACC@1 ACC@3 Macro-F1 CNN&GRU (Luo et al. 2021) 0.8794\u00b10.0280 0.9497\u00b10.0183 0.8743\u00b10.0319 TST (Zerveas et al. 2021) 0.8701\u00b10.0252 0.9637\u00b10.0147 0.8660\u00b10.0272 VideoMAE (Tong et al. 2022) 0.7705\u00b10.0906 0.9287\u00b10.0177 0.7521\u00b10.1027 STAT w/o pretraining 0.8050\u00b10.0549 0.9528\u00b10.0225 0.7946\u00b10.0652 STAT 0.9033\u00b10.0098 0.9830\u00b10.0081 0.9015\u00b10.0104 Table 2: Overall performances of all models. 
Figure 6: Confusion matrices of all models on the test set (predicted label vs. true label over the nine action classes): (a) CNN&GRU, (b) TST, (c) VideoMAE, (d) STAT.

Table 3: Summary of tuned hyper-parameters.
#Comparison pairs $N_{comp}$: 10, 20, 30, 40, 50
Loss weight $\beta$: 0.5, 0.75, 1, 1.5, 2, 2.5
Masking ratio: 0.1 to 0.9 with step length 0.1
Adam learning rate: 1e-3, 5e-3, 1e-2
Transformer layers $K$: 3, 6, 9, 12

Metrics. We use accuracy and macro-F1 as evaluation metrics. As there are multiple classes, we report both top-1 and top-3 accuracy, following previous studies (Luo et al. 2021). In addition, we report macro-F1 to show the comprehensive performance of all models on the imbalanced dataset.

Baselines. To demonstrate the effectiveness of our model, we compare against several state-of-the-art baselines:
• CNN&GRU (Luo et al. 2021): this method adopts convolutional and recurrent networks for action classification with tactile signals;
• TST (Zerveas et al. 2021): a state-of-the-art transformer-based model for continuous multivariate signal classification with pretraining;
• VideoMAE (Tong et al. 2022): a state-of-the-art video classification model with masked auto-encoders.

Implementation Details. We tune the hyper-parameters listed in Table 3. In addition, the tubelet parameters L and P are set to 5 and 4, and both pretraining and fine-tuning run for 60 epochs. The embedding dimension D is set to 768, the batch size is 64, and the weight decay is 1e-4. For baselines, we employ their public implementations and tune them with the hyper-parameters suggested by their authors. All experiments are implemented in PyTorch 1.7 and executed on 4 Tesla V100 or GeForce RTX 3090 GPUs. Note that only the training data is used for the pretraining of TST, VideoMAE, and STAT to avoid data leakage. Experiments are repeated 5 times with different random seeds. The total training time of STAT is similar to that of VideoMAE (about 10 hours). The code is available at https://github.com/Aressfull/sock classification.
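The metrics described above (top-1/top-3 accuracy and macro-F1) can be computed as in the following sketch; it uses scikit-learn's f1_score for macro-F1 and is illustrative rather than the authors' evaluation script.

```python
import numpy as np
from sklearn.metrics import f1_score

def top_k_accuracy(scores: np.ndarray, labels: np.ndarray, k: int) -> float:
    """scores: (N, M) class scores per sample; labels: (N,) true class indices."""
    topk = np.argsort(-scores, axis=1)[:, :k]                 # k highest-scoring classes
    return float(np.mean([labels[i] in topk[i] for i in range(len(labels))]))

def evaluate(scores: np.ndarray, labels: np.ndarray) -> dict:
    preds = scores.argmax(axis=1)
    return {
        "ACC@1": top_k_accuracy(scores, labels, k=1),
        "ACC@3": top_k_accuracy(scores, labels, k=3),
        "Macro-F1": f1_score(labels, preds, average="macro"),
    }

# Toy example with M = 9 action classes.
rng = np.random.default_rng(0)
scores = rng.random((16, 9))
labels = rng.integers(0, 9, size=16)
print(evaluate(scores, labels))
```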
Overall Performances Experimental results of our STAT and baselines are reported in Table 2. TST, VideoMAE, and STAT models are pretrained with the training set, and STAT w/o pretraining is directly trained for the classification task. Firstly, our pretrained STAT outperforms all baseline models in all metrics, showing that jointly modeling spatial and temporal features contributes to better action classification results. STAT achieves 2.7%, 2.0%, and 3.1% improvements than the best baseline in ACC@1, ACC@3, and Macro-F1, respectively. Secondly, STAT without pretraining performs worse than most baselines, showing that our pretraining provides significant improvements for STAT in the action classification task. Thirdly, for the baseline models, the widely used tactile CNN&GRU model achieves comparable results as TST, showing that modeling spatial features and temporal features are both important in action classification. However, VideoMAE model performs the worst, which indicates that simply reusing the video transformer will get worse performance. The reason should be videoMAE cannot \f# TE SE TPT ACC@1 ACC@3 Macro-F1 1 \u2713 0.8764 0.9678 0.8715 2 \u2713 0.8417 0.9506 0.8337 3 \u2713 \u2713 0.8947 0.9854 0.8957 4 \u2713 0.8299 0.9507 0.8247 5 \u2713 \u2713 \u2713 0.9033 0.9830 0.9015 Table 4: Experimental results of various ablation strategies. TE: Temporal Embeddings, SE: Spatial Embeddings, and TPT: Temporal Pretraining Task. capture the translation variant of tactile signals, and tactile signals are more dense than videos (VideoMAE only uses 8 frames but uses 45 frames here). To further analyze the performances of different models in various classes, we show the confusion matrices of all models on the test set in Figure 6. From the figures, we have the following observations: Firstly, CNN&GRU performs worse in temporal and spatial sensitive classes, i.e., upstairs and lean left, showing the weaknesses of current tactile classification models. Specifically, CNN&GRU classifies many upstairs samples as walk fast due to their similar spatial features (as shown in Figure 2(b)), while our STAT can distinguish these actions more accurately due to the modeling of temporal features. Secondly, TST performs even worse than CNN&GRU in many actions, indicating that focusing on modeling temporal features is not enough for tactile signals. For example, TST mistakes a number of lean left samples as stand on toes because they have similar temporal features (as shown in Figure 2(a)). Our STAT rarely makes mistakes on these actions as they are distinct in spatial features. Thirdly, our STAT model performs the best in most classes, as we jointly capture both the spatial and temporal properties of tactile signals. Meanwhile, due to the translation variant property of tactile signals, VideoMAE, which is designed for video classifications, is unsuitable for this task. Analyses Ablation Study To verify the effectiveness of the designed pretraining task and embeddings, we conduct ablation studies. Table 4 shows our ablation strategies and their performances. Note that the MTR pretraining task, position, and tubelet embeddings are used in all experiments, as we focus on analyzing the newly designed models here. We have the following observations from the results: Firstly, all designed modules contribute to the classification task, as STAT (Strategy 5) achieves the best performance with all modules in ACC@1 & Macro-F1, and comparable results in ACC@3. 
Secondly, by comparing Strategies 1,2,3 in pairs, we find that removing any one of the two designed embeddings will result in a large drop in performance. Besides, temporal embeddings are more important than spatial embeddings, as Strategy 1 performs better. Thirdly, STAT with both embeddings (Strategy 3) outperforms STAT with only the temporal task (Strategy 4) in all metrics. This indicates that only adopting the proposed pretraining task cannot make full use of its ability. 10 20 30 40 50 Number of comparison pairs: Ncomp 0.75 0.80 0.85 0.90 0.95 Value ACC@1 ACC@3 Macro F1 Figure 7: Effect of the number of comparison pairs. 0.5 1.0 1.5 2.0 2.5 Loss weight: 0.75 0.80 0.85 0.90 0.95 Value ACC@1 ACC@3 Macro F1 Figure 8: Effect of the weight of temporal pretraining loss. Hyper-parameter Analyses Due to the space limit, we only show two conducted hyper-parameter experiments. Effect of the Number of Comparison Pairs Ncomp. To verify the effect of Ncomp in the temporal pretraining task, we conduct analyses experiments and summarize the results in Figure 7. The best performance is achieved when Ncomp = 30. Fewer comparison pairs perform worse may be caused by insufficient training, while more pairs will not contribute to better results either. Effect of the Loss Weight \u03b2. We adjust the weight \u03b2 for our temporal pretraining task in Equation (5) in different values, and the results are shown in Figure 8. It indicates that a too-low or too-high value of \u03b2 will hurt the performance of our STAT model, and \u03b2 = 1 performs the best." + }, + { + "url": "http://arxiv.org/abs/2312.01556v1", + "title": "Searching Dense Representations with Inverted Indexes", + "abstract": "Nearly all implementations of top-$k$ retrieval with dense vector\nrepresentations today take advantage of hierarchical navigable small-world\nnetwork (HNSW) indexes. However, the generation of vector representations and\nefficiently searching large collections of vectors are distinct challenges that\ncan be decoupled. In this work, we explore the contrarian approach of\nperforming top-$k$ retrieval on dense vector representations using inverted\nindexes. We present experiments on the MS MARCO passage ranking dataset,\nevaluating three dimensions of interest: output quality, speed, and index size.\nResults show that searching dense representations using inverted indexes is\npossible. Our approach exhibits reasonable effectiveness with compact indexes,\nbut is impractically slow. Thus, while workable, our solution does not provide\na compelling tradeoff and is perhaps best characterized today as a \"technical\ncuriosity\".", + "authors": "Jimmy Lin, Tommaso Teofili", + "published": "2023-12-04", + "updated": "2023-12-04", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "main_content": "Introduction In so-called dense retrieval models [Karpukhin et al., 2020], queries and passages are both encoded as dense vector representations (often called embeddings), and top-k retrieval is formulated as a nearest neighbour search problem. That is, given a query vector, the system\u2019s task is to retrieve the top-k most similar passage vectors with respect to a simple comparison operation, typically the inner (dot) product. Today, these dense representation vectors are typically derived from transformer-based models fine-tuned on a dataset of relevant query\u2013passage pairs; a common configuration involves models that generate vectors of 768 dimensions. 
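As a minimal illustration of this formulation, top-k retrieval over dense vectors reduces to finding the documents whose embeddings have the largest inner product with the query embedding. The brute-force NumPy sketch below is illustrative only (real systems use approximate indexes such as HNSW), and the names are assumptions.

```python
import numpy as np

def topk_inner_product(query_emb: np.ndarray, doc_embs: np.ndarray, k: int = 10):
    """Exact top-k retrieval by inner product.

    query_emb: (d,) query embedding, e.g., d = 768
    doc_embs:  (n, d) matrix of passage embeddings
    returns:   (indices, scores) of the k highest-scoring passages
    """
    scores = doc_embs @ query_emb                 # inner product with every passage
    top = np.argpartition(-scores, k)[:k]         # unordered top-k candidates
    top = top[np.argsort(-scores[top])]           # sort the k candidates by score
    return top, scores[top]

# Toy example: 1,000 passages with 768-dimensional embeddings.
rng = np.random.default_rng(42)
docs = rng.normal(size=(1000, 768)).astype(np.float32)
query = rng.normal(size=768).astype(np.float32)
idx, scr = topk_inner_product(query, docs, k=5)
print(idx, scr)
```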
Despite the observation that dense retrieval models and sparse bag-of-words lexical models such as BM25 capture parametric variations of a bi-encoder architecture [Lin, 2021], implementations of top-k retrieval are quite different for the two classes of models. For sparse bag-of-words vectors, the venerable inverted index has served as the workhorse for top-k retrieval dating back many decades. For dense vectors, current best practices take advantage of hierarchical navigable small-world network (HNSW) indexes [Malkov and Yashunin, 2020] to perform approximate nearest neighbour search; the Faiss [Johnson et al., 2019] library provides an implementation that is widely used today. Lin [2021] pointed out that the core retrieval problem can be decomposed into two independent components, what he refers to as the logical scoring model and the physical retrieval model. That is, the generation of vector representations from content is distinct from efficient solutions to the top-k retrieval problem. Of course, there exists a strong affinity between sparse representations and inverted indexes, on the one hand, and dense representations and HNSW indexes, on the other. However, this tight coupling does not necessarily need to be the case. In this work, we explore the contrarian approach of searching dense representations with inverted indexes. Building on previous work [Teofili and Lin, 2019], we apply two types of transformations\u2014 \u201cfake words\u201d and \u201clexical LSH\u201d\u2014that enable dense representations to be captured in standard inverted indexes and we empirically evaluate top-k retrieval using these two techniques. Such a solution is arXiv:2312.01556v1 [cs.IR] 4 Dec 2023 \fpotentially interesting because it enables dense and sparse retrieval using a single infrastructural component, obviating the need to maintain and coordinate different types of indexes. We present experiments on the MS MARCO passage ranking dataset, evaluating three dimensions of interest: output quality, speed, and index size. Results show that it is possible to perform top-k retrieval on dense representations using only inverted indexes: Compared to HNSW indexes, we can achieve reasonable effectiveness with much smaller indexes, but unfortunately, search is impractically slow. Thus, while our proposed techniques are \u201cworkable\u201d, they do not appear to provide a compelling tradeoff in the overall design space. We would characterize them as a \u201ctechnical curiosity\u201d, but still worthwhile for the community to be aware of. Perhaps our efforts will become a part of further breakthroughs that will yield a practical solution. 2 Methods In this work, we examined two techniques for top-k retrieval on dense vectors using inverted indexes. Both techniques were originally implemented in the Anserini toolkit [Yang et al., 2018] using \u201cstock parts\u201d from the open-source Lucene search library, as part of the work described in Teofili and Lin [2019]. However, this previous work pre-dated the advent of dense retrieval models and focused on similarity comparisons with word embeddings, which lacked a concrete task. Here, we applied the same techniques, but to an actual real-world retrieval application. \u201cFake words\u201d. We implemented the approach described in Amato et al. [2016], which encodes the features of a vector as a number of \u201cfake\u201d terms proportional to the feature value according to the following scheme: Given a vector w = (w1, . . . 
, wm), each feature wi is associated with a unique alphanumeric term \u03c4i such that the document corresponding to the vector w is represented by \u201cfake words\u201d generated by \u222am i=1 \u222a\u230aQ\u00b7wi\u230b j=1 \u03c4i, where Q > 1 is a quantization factor. Thus, the fake words encoding maintains direct proportionality between the float value of a feature and the term frequency of the corresponding fake index term. Feature-level matching for retrieval is achieved by matching on these fake words with scores computed by Lucene\u2019s ClassicSimilarity, which is a tf\u2014idf variant. Finally, for this approach to be effective, vector inner products have to be equivalent to cosine similarity, which can be achieved by normalizing the vectors to unit length. \u201cLexical LSH\u201d. We implemented an approach that lexically quantizes vector components for easy indexing and search in Lucene using LSH [Gionis et al., 1999]. While LSH is, of course, not new, to our knowledge, Teofili [2018] was the first to devise an implementation that directly integrates with inverted indexes inside Lucene. Given a vector w = (w1, . . . , wm), each feature wi is rounded to the d-th decimal place and tagged with its feature index i as a term prefix. For example, consider w = {0.12, 0.43, 0.74}. If d = 1, w is converted into the tokens 1_0.1, 2_0.4, and 3_0.7. In our implementation, tokens are aggregated into n-grams and finally passed to an LSH function, which is implemented in Lucene as MinHashFilter, to hash the n-grams into a configurable number of buckets b. Thus, the vector w is represented as a set of LSH-generated text signatures for tagged and quantized feature n-grams. 3 Experiments While our techniques are agnostic with respect to the actual dense retrieval model, for fair comparisons to HNSW indexes in Lucene, we needed vector representations that are normalized to unit length because Lucene\u2019s implementation is restricted to top-k retrieval using cosine similarity (as opposed to general inner products). Many dense retrieval models generate representations that do not perform this normalization. For their HNSW experiments in Lucene, Ma et al. [2023] had to fine-tune a new embedding model from scratch, which they called cosDPR-distil. To facilitate comparisons to this work, we used the same model. 
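To make the "fake words" encoding of Section 2 concrete before turning to results, here is a minimal sketch of the transformation for a single unit-normalized vector; the token naming scheme and helper are illustrative assumptions, not Anserini's implementation, and the lexical LSH variant (quantize, tag, n-gram, then MinHash) follows the same spirit but is omitted for brevity.

```python
import math

def fake_words(vector, quantization_factor: int = 40):
    """Encode a dense vector as 'fake word' tokens: feature i contributes
    floor(Q * w_i) copies of the token 'f_i', so the term frequency of 'f_i'
    in the resulting document is proportional to the feature value
    (negative or near-zero features contribute nothing)."""
    tokens = []
    for i, w in enumerate(vector):
        count = math.floor(quantization_factor * w)
        tokens.extend([f"f_{i}"] * max(count, 0))
    return tokens

# Toy 4-dimensional example with Q = 40; the token list would then be indexed
# as an ordinary document in a standard inverted index.
print(fake_words([0.12, 0.43, 0.74, 0.05]))
```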
We evaluated top-k retrieval using standard evaluation methodology on the MS MARCO passage ranking test collection [Craswell et al., 2021], comprising three separate sets of queries: the 6980 2 \fdev DL19 DL20 Index Size RR@10 R@1k QPS nDCG@10 R@1k nDCG@10 R@1k (GB) BM25 0.1840 0.8526 426.49 0.5058 0.7501 0.4796 0.7863 2.5 FW (Q = 10) 0.0045 0.0440 940.36 0.0063 0.0167 0.0241 0.0408 0.2 FW (Q = 20) 0.2937 0.9142 18.65 0.5795 0.7238 0.5912 0.7327 1.3 FW (Q = 30) 0.3498 0.9580 5.31 0.6488 0.7788 0.6483 0.7980 2.8 FW (Q = 40) 0.3605 0.9668 2.80 0.6857 0.7902 0.6666 0.8194 4.2 FW (Q = 50) 0.3627 0.9669 1.91 0.6930 0.7957 0.6724 0.8193 5.7 FW (Q = 60) 0.3681 0.9707 1.56 0.6849 0.8005 0.6832 0.8261 7.1 FW (Q = 70) 0.3657 0.9695 1.42 0.6933 0.7979 0.6823 0.8223 8.4 FW (Q = 80) 0.3642 0.9708 1.25 0.6833 0.8015 0.6706 0.8267 9.7 FW (Q = 90) 0.3668 0.9733 1.19 0.7013 0.8006 0.6750 0.8271 11 LexLSH (b = 100) 0.2284 0.8365 6.26 0.4233 0.5614 0.4880 0.6391 0.9 LexLSH (b = 200) 0.2959 0.9267 3.10 0.5810 0.6610 0.5863 0.7485 1.4 LexLSH (b = 300) 0.3180 0.9457 2.06 0.6167 0.7272 0.6265 0.7866 2.1 LexLSH (b = 400) 0.3309 0.9538 1.41 0.6398 0.7450 0.6505 0.7924 2.7 LexLSH (b = 500) 0.3397 0.9569 1.09 0.6443 0.7556 0.6548 0.7992 3.3 LexLSH (b = 600) 0.3457 0.9596 0.83 0.6716 0.7610 0.6569 0.8131 3.9 LexLSH (b = 700) 0.3474 0.9609 0.74 0.6843 0.7735 0.6558 0.8157 4.5 LexLSH (b = 800) 0.3496 0.9611 0.67 0.6778 0.7784 0.6669 0.8181 4.9 LexLSH (b = 900) 0.3496 0.9611 0.66 0.6778 0.7784 0.6669 0.8181 4.9 HNSW (default) 0.3881 0.9732 47.78 0.7159 0.8101 0.6967 0.8391 26 HNSW (optimized) 0.3885 0.9747 387.29 0.7250 0.8222 0.7025 0.8520 26 Table 1: Performance of our proposed fake words and lexical LSH techniques on the MS MARCO passage corpus. queries from the development (dev) set, as well as queries from the TREC 2019 and 2020 Deep Learning Tracks [Craswell et al., 2019, 2020]. Our experiments were performed with Anserini at commit e99c73d (11/25/2023) on a Mac Studio with an M1 Ultra processor containing 20 cores (16 performance and 4 efficiency) and 128 GB memory, running macOS Sonoma 14.1.1 and OpenJDK 11.0.13. Unless otherwise specified, all runs used 16 threads. Results are presented in Table 1, covering the aspects of performance that we are interested in: output quality as measured in standard IR effectiveness metrics, speed in terms of query throughput (measured in queries per second or QPS), and index size (measured with the du -h command). The rows capture results either with the fake words technique (FW), parameterized by Q, or the lexical LSH technique, parameterized by the number of buckets b (with d = 1). Query throughput is measured only on the dev set, which has a sufficient number of queries (6980) to obtain reliable measurements; we observe only small variations from run to run. We report the average of three trials. In all cases, experimental runs used pre-encoded queries\u2014that is, cached representations from neural inference applied to the queries. To better understand the overhead associated with query inference, we refer readers to evaluations in Chen et al. [2023]. For reference, evaluation of BM25 is presented in the top row, and evaluation of HNSW indexes for cosDPR-distil is presented in the final two rows. The \u201cdefault\u201d HNSW condition characterizes performance using Anserini \u201cout of the box\u201d with default parameters (M set to 16, efC set to 100, 16 indexing threads). 
The \u201coptimized\u201d index was constructed with efC set to 1000 (all other parameters being the same), but optimized down to a single index segment (which is a very time-consuming operation); this is the same exact index instance used in Chen et al. [2023]. This optimization greatly increases search performance, but unless the document collection is static, this step is unrealistic for real-world use. Note that since HNSW indexing is non-deterministic, different index instances (i.e., from running the indexer multiple times) may exhibit small effectiveness variations. We report scores from our specific index instances, which may differ slightly from the official Anserini reproducibility documentation. From Table 1, looking at the fake words technique, it appears that the sweet spot is around Q = 40. A bit higher effectiveness comes at a roughly 30% decrease in QPS, and the index size of 2.8 GB remains quite modest. For the lexical LSH technique, the sweet spot appears to be around b = 400, 3 \fwith a 2.7 GB index. At a high level, it appears that the fake words technique provides better tradeoffs than the lexical LSH technique. Overall, for both techniques we would characterize the effectiveness as \u201cacceptable\u201d, with compact inverted indexes that are much smaller than the HNSW indexes. However, search is impractically slow compared to retrieval based on HNSW indexes. While it would be possible to perform more exhaustive parameter tuning, better parameter selection alone is unlikely to close the performance gap between either technique and HNSW indexes. The much smaller index sizes offered by our techniques definitely present an advantage over HNSW indexes, but we do not see a compelling use case for either of these techniques. Thus, we would characterize the techniques presented here as \u201ctechnical curiosities\u201d, but impractical overall. Although we can imagine a number of further explorations, such as hybrid dense\u2013sparse models with inverted indexes, further pursuit of these avenues does not seem particularly promising at present. It is clear that more breakthroughs are needed, and perhaps this work will represent a step along the way. 4" + }, + { + "url": "http://arxiv.org/abs/2308.14963v1", + "title": "Vector Search with OpenAI Embeddings: Lucene Is All You Need", + "abstract": "We provide a reproducible, end-to-end demonstration of vector search with\nOpenAI embeddings using Lucene on the popular MS MARCO passage ranking test\ncollection. The main goal of our work is to challenge the prevailing narrative\nthat a dedicated vector store is necessary to take advantage of recent advances\nin deep neural networks as applied to search. Quite the contrary, we show that\nhierarchical navigable small-world network (HNSW) indexes in Lucene are\nadequate to provide vector search capabilities in a standard bi-encoder\narchitecture. 
This suggests that, from a simple cost-benefit analysis, there\ndoes not appear to be a compelling reason to introduce a dedicated vector store\ninto a modern \"AI stack\" for search, since such applications have already\nreceived substantial investments in existing, widely deployed infrastructure.", + "authors": "Jimmy Lin, Ronak Pradeep, Tommaso Teofili, Jasper Xian", + "published": "2023-08-29", + "updated": "2023-08-29", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "main_content": "Introduction Recent advances in the application of deep neural networks to search have focused on representation learning in the context of the so-called bi-encoder architecture, where content (queries, passages, and even images and other multimedia content) is represented by dense vectors (so-called \u201cembeddings\u201d). Dense retrieval models using this architecture form the foundation of retrieval augmentation in large language models (LLMs), a popular and productive approach to improving LLM capabilities in the broader context of generative AI (Mialon et al., 2023; Asai et al., 2023). The dominant narrative today is that since dense retrieval requires the management of a potentially large number of dense vectors, enterprises require a dedicated \u201cvector store\u201d or \u201cvector database\u201d as part of their \u201cAI stack\u201d. There is a cottage industry of startups that are pitching vector stores as novel, must-have components in a modern enterprise architecture; examples include Pinecone, Weaviate, Chroma, Milvus, Qdrant, just to name a few. Some have even argued that these vector databases will replace the venerable relational database.1 The goal of this paper is to provide a counterpoint to this narrative. Our arguments center around a simple cost\u2013benefit analysis: since search is a brownfield application, many organizations have already made substantial investments in these capabilities. Today, production infrastructure is dominated by the broad ecosystem centered around the open-source Lucene search library, most notably driven by platforms such as Elasticsearch, OpenSearch, and Solr. While the Lucene ecosystem has admittedly been slow to adapt to recent trends in representation learning, there are strong signals that serious investments are being made in this space. Thus, we see no compelling reason why separate, dedicated vector stores are necessary in a modern enterprise. In short, the benefits do not appear to justify the cost of additional architectural complexity. It is important to separate the need for capabilities from the need for distinct software components. While hierarchical navigable small-world network (HNSW) indexes (Malkov and Yashunin, 2020) 1https://twitter.com/andy_pavlo/status/1659740200266870787 arXiv:2308.14963v1 [cs.IR] 29 Aug 2023 \frepresent the state of the art today in approximate nearest neighbor search\u2014the most important operation for vector search using embeddings\u2014it is not clear that providing operations around HNSW indexes requires a separate and distinct vector store. Indeed, the most recent major release of Lucene (version 9, from December 2021) includes HNSW indexing and vector search, and these capabilities have steadily improved over time. The open-source nature of the Lucene ecosystem means that advances in the core library itself will be rapidly adopted and integrated into other software platforms within the broader ecosystem. 
The growing popularity of so-called embedding APIs (Kamalloo et al., 2023) further strengthens our arguments. These APIs encapsulate perhaps the most complex and resource-intensive aspect of vector search\u2014the generation of dense vectors from pieces of content. Embedding APIs hide model training, deployment, and inference behind the well-known benefits of service-based computing, much to the delight of practitioners. To support our arguments, we demonstrate vector search with OpenAI embeddings (Neelakantan et al., 2022) using the popular MS MARCO passage ranking test collection (Bajaj et al., 2018). Specifically, we have encoded the entire corpus and indexed the embedding vectors using Lucene. Evaluation on the MS MARCO development set queries and queries from the TREC Deep Learning Tracks (Craswell et al., 2019, 2020) show that OpenAI embeddings are able to achieve a respectable level of effectiveness. And as Devins et al. (2022) have shown, anything doable in Lucene is relatively straightforward to replicate in Elasticsearch (and any other platform built on Lucene). Thus, we expect the ideas behind our demonstration to become pervasive in the near future. We make available everything needed to reproduce the experiments described in this paper, starting with the actual OpenAI embeddings, which we make freely downloadable.2 At a high-level, our demonstration shows how easy it is to take advantage of state-of-the-art AI techniques today without any AI-specific implementations per se: embeddings can be computed with simple API calls, and indexing and searching dense vectors is conceptually identical to indexing and searching text with bag-of-words models that have been available for decades. 2 From Architecture to Implementation The central idea behind the bi-encoder architecture (see Figure 1) is to encode queries and passages into dense vectors\u2014commonly referred to as \u201cembeddings\u201d\u2014such that relevant query\u2013passage pairs receive high scores, computed as the dot product of their embeddings. In this manner, search can be reformulated as a nearest neighbor search problem in vector space: given the query embedding, the system\u2019s task is to rapidly retrieve the top-k passage embeddings with the largest dot products (Lin, 2021). Typically, \u201cencoders\u201d for generating the vector representations are implemented using transformers, which are usually fine-tuned in a supervised manner using a large dataset of relevant query\u2013passage pairs (Karpukhin et al., 2020; Xiong et al., 2021). This formulation of search, in terms of comparisons between dense vectors, differs from \u201ctraditional\u201d bag-of-words sparse representations that rely on inverted indexes for low-latency query evaluation. Instead, nearest neighbor search in vector space requires entirely different techniques: indexes based on hierarchical navigable small-world networks (HNSW) (Malkov and Yashunin, 2020) are commonly acknowledged as representing the state of the art. The Faiss library (Johnson et al., 2019) provides a popular implementation of HNSW indexes that is broadly adopted today and serves as a standard baseline. Despite conceptual similarities (Lin, 2021), it is clear that top-k retrieval on sparse vectors and dense vectors require quite different and distinct \u201csoftware stacks\u201d. 
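For reference, the sketch below shows how an HNSW index over passage embeddings might be built and queried with the Faiss library mentioned above; the parameter values and random data are illustrative, and vectors are unit-normalized so that ranking by L2 distance matches ranking by inner product.

```python
import numpy as np
import faiss

d = 768                                    # embedding dimensionality
rng = np.random.default_rng(0)
passage_embs = rng.normal(size=(10000, d)).astype("float32")
query_embs = rng.normal(size=(4, d)).astype("float32")

# With unit-length vectors, L2 ranking is equivalent to inner-product (cosine) ranking.
faiss.normalize_L2(passage_embs)
faiss.normalize_L2(query_embs)

index = faiss.IndexHNSWFlat(d, 16)         # M = 16 links per graph node
index.hnsw.efConstruction = 100            # construction-time beam width
index.add(passage_embs)

index.hnsw.efSearch = 1000                 # query-time beam width (speed vs. quality)
distances, ids = index.search(query_embs, 10)
print(ids.shape)                           # (4, 10): top-10 passage ids per query
```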
Since hybrid approaches that combine both dense and sparse representations have been shown to be more effective than either alone (Ma et al., 2022b; Lin and Lin, 2023), many modern systems combine separate retrieval components to achieve hybrid retrieval. For example, the Pyserini IR toolkit (Lin et al., 2021a) integrates Lucene and Faiss for sparse and dense retrieval, respectively. Recognizing the need for managing both sparse and dense retrieval models, the dominant narrative today is that the modern enterprise \u201cAI stack\u201d requires a dedicated vector store or vector database, alongside existing fixtures such as relational databases, NoSQL stores, event stores, etc. A vector store would handle, for example, standard CRUD (create, read, update, delete) operations as well as nearest neighbor search. Many startups today are built on this premise; examples include Pinecone, Weaviate, Chroma, Milvus, Qdrant, just to name a few. This is the narrative that our work challenges. 2https://github.com/castorini/anserini/blob/master/docs/experiments-msmarco-passage-openai-ada2.md 2 \fRanked List \u201cDocuments\u201d Query Doc Encoder Query Encoder Top-k Retrieval Figure 1: A standard bi-encoder architecture, where encoders generate dense vector representations (embeddings) from queries and documents (passages). Retrieval is framed as k-nearest neighbor search in vector space. Modern enterprise architectures are already exceedingly complex, and the addition of another software component (i.e., a distinct vector store) requires carefully weighing costs as well as benefits. The cost is obvious: increased complexity, not only from the introduction of a new component, but also from interactions with existing components. What about the benefits? While vector stores no doubt introduce new capabilities, the critical question is whether these capabilities can be provided via alternative means. Search is a brownfield application. Wikipedia defines this as \u201ca term commonly used in the information technology industry to describe problem spaces needing the development and deployment of new software systems in the immediate presence of existing (legacy) software applications/systems.\u201d Additionally, \u201cthis implies that any new software architecture must take into account and coexist with live software already in situ.\u201d Specifically, many organizations have already made substantial investments in search within the Lucene ecosystem. While most organizations do not directly use the open-source Lucene search library in production, the search application landscape is dominated by platforms that are built on top of Lucene such as Elasticsearch, OpenSearch, and Solr. For example, Elastic, the publicly traded company behind Elasticsearch, reports approximately 20,000 subscriptions to its cloud service as of Q4 FY2023.3 Similarly, in the category of search engines, Lucene dominates DB-Engines Ranking, a site that tracks the popularity of various database management systems.4 There\u2019s a paucity of concrete usage data, but it would not be an exaggeration to say that Lucene has an immense install base. The most recent major release of Lucene (version 9), dating back to December 2021, includes HNSW indexing and search capabilities, which have steadily improved over the past couple of years. This means that differences in capabilities between Lucene and dedicated vector stores are primarily in terms of performance, not the availability of must-have features. 
Thus, from a simple cost\u2013benefit calculus, it is not clear that vector search requires introducing a dedicated vector store into an already complex enterprise \u201cAI stack\u201d. Our thesis: Lucene is all you need. We empirically demonstrate our claims on the MS MARCO passage ranking test collection, a standard benchmark dataset used by researchers today. We have encoded the entire corpus using OpenAI\u2019s ada2 embedding endpoint, and then indexed the dense vectors with Lucene. Experimental results show that this combination achieves effectiveness comparable to the state of the art on the development queries as well as queries from the TREC 2019 and 2020 Deep Learning Tracks. 3https://ir.elastic.co/news-events/press-releases/press-releases-details/2023/ Elastic-Reports-Fourth-Quarter-and-Fiscal-2023-Financial-Results/default.aspx 4https://db-engines.com/en/ranking/search+engine 3 \fOur experiments are conducted with Anserini (Yang et al., 2018), a Lucene-based IR toolkit that aims to support reproducible information retrieval research. By building on Lucene, Anserini aims to bridge the gap between academic information retrieval research and the practice of building real-world search applications. Devins et al. (2022) showed that capabilities implemented by researchers in Anserini using Lucene can be straightforwardly translated into Elasticsearch (or any other platform in the Lucene ecosystem), thus simplifying the path from prototypes to production deployments. Our demonstration further shows the ease with which state-of-the-art vector search can be implemented by simply \u201cplugging together\u201d readily available components. In the context of the bi-encoder architecture, Lin (2021) identified the logical scoring model and the physical retrieval model as distinct conceptual components. In our experiments, the logical scoring model maps to the OpenAI embedding API\u2014whose operations are no different from any other API endpoint. What Lin calls the physical retrieval model focuses on the top-k retrieval capability, which is handled by Lucene. In Anserini, vector indexing and search is exposed in a manner that is analogous to indexing and retrieval using bag-of-words models such as BM25. Thus, the implementation of the state of the art in vector search using generative AI does not require any AI-specific implementations, which increases the accessibility of these technologies to a wider audience. 3 Experiments Experiments in this paper are relatively straightforward. We focused on the MS MARCO passage ranking test collection (Bajaj et al., 2018), which is built on a corpus comprising approximately 8.8 million passages extracted from the web. Note that since the embedding vectors are generated by OpenAI\u2019s API endpoint, no model training was performed. For evaluation, we used the standard development queries as well as queries from the TREC 2019 and TREC 2020 Deep Learning Tracks. In our experimental setup, we utilized the OpenAI ada2 model (Neelakantan et al., 2022) for generating both query and passage embeddings. This model is characterized by an input limit of 8191 tokens and an output embedding size of 1536 dimensions. However, to maintain consistency with the existing literature (Pradeep et al., 2021; Ma et al., 2022a), we truncated all passages in the corpus to 512 tokens. 
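The encoding setup just described (truncating passages to 512 tokens and requesting ada2 embeddings) can be sketched as follows. This assumes the openai (version 1.x client) and tiktoken packages and omits the batching, rate limiting, and error handling needed at corpus scale; exact client interfaces may differ across package versions.

```python
import tiktoken
from openai import OpenAI   # assumes openai>=1.0; reads OPENAI_API_KEY from the environment

MODEL = "text-embedding-ada-002"
enc = tiktoken.get_encoding("cl100k_base")   # tokenizer used by the ada2 model
client = OpenAI()

def truncate(text: str, max_tokens: int = 512) -> str:
    """Truncate a passage to at most max_tokens tokens, as done for the corpus."""
    tokens = enc.encode(text)
    return enc.decode(tokens[:max_tokens])

def embed(texts: list[str]) -> list[list[float]]:
    """Request 1536-dimensional ada2 embeddings for a small batch of texts."""
    batch = [truncate(t) for t in texts]
    response = client.embeddings.create(model=MODEL, input=batch)
    return [item.embedding for item in response.data]

vectors = embed(["what is a bi-encoder architecture?"])
print(len(vectors[0]))   # 1536
```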
It is unknown whether OpenAI leveraged the MS MARCO passage corpus during model development, but in general, accounting for data leakage is extremely challenging for large models, especially those from OpenAI that lack transparency. Using tiktoken, OpenAI\u2019s official tokenizer, we computed the average token count per passage in our corpus to be 75.2, resulting in a total of approximately 660 million tokens. In order to generate the embeddings efficiently, we queried the API in parallel while respecting the rate limit of 3500 calls per minute. We had to incorporate logic for error handling in our code, given the high-volume nature of our API calls. Ultimately, we were able to encode both the corpus and the queries, the latter of which are negligible in comparison, in a span of two days. As previously mentioned, all our retrieval experiments were conducted with the Anserini IR toolkit (Yang et al., 2018). The primary advantage of Anserini is that it provides direct access to underlying Lucene features in a \u201cresearcher-friendly\u201d manner that better comports with modern evaluation workflows. Our experiments were based on Lucene 9.5.0, but indexing was a bit tricky because the HNSW implementation in Lucene restricts vectors to 1024 dimensions, which was not sufficient for OpenAI\u2019s 1536-dimensional embeddings.5 Although the resolution of this issue, which is to make vector dimensions configurable on a per codec basis, has been merged to the Lucene source trunk,6 this feature has not been folded into a Lucene release (yet) as of early August 2023. Thus, there is no public release of Lucene that can directly index OpenAI\u2019s ada2 embedding vectors. Fortunately, we were able to hack around this limitation in an incredibly janky way.7 Experimental results are shown in Table 1, where we report effectiveness in terms of standard metrics: reciprocal rank at 10 (RR@10), average precision (AP), nDCG at a rank cutoff of 10 (nDCG@10), and recall at a rank cutoff of 1000 (R@1k). The effectiveness of the ada2 embeddings is shown in the 5https://github.com/apache/lucene/issues/11507 6https://github.com/apache/lucene/pull/12436 7The sketch of the solution is as follows: We copy relevant source files from the Lucene source trunk directly into our source tree and patch the vector size settings directly. When we build our fatjar, the class files of our \u201clocal versions\u201d take precedence, and hence override the vector size limitations. 4 \fdev DL19 DL20 RR@10 R@1k AP nDCG@10 R@1k AP nDCG@10 R@1k Unsupervised Sparse Representations BM25 (Ma et al., 2022a)\u2217 0.184 0.853 0.301 0.506 0.750 0.286 0.480 0.786 BM25+RM3 (Ma et al., 2022a)\u2217 0.157 0.861 0.342 0.522 0.814 0.301 0.490 0.824 Learned Sparse Representations uniCOIL (Ma et al., 2022a)\u2217 0.352 0.958 0.461 0.702 0.829 0.443 0.675 0.843 SPLADE++ ED (Formal et al., 2022)\u2217 0.383 0.983 0.505 0.731 0.873 0.500 0.720 0.900 Learned Dense Representations TAS-B (Hofst\u00e4tter et al., 2021) 0.340 0.975 0.712 0.845 0.693 0.865 TCT-ColBERTv2 (Lin et al., 2021b)\u2217 0.358 0.970 0.447 0.720 0.826 0.475 0.688 0.843 ColBERT-v2 (Santhanam et al., 2022) 0.397 0.984 Aggretriever (Lin et al., 2023)\u2217 0.362 0.974 0.435 0.684 0.808 0.471 0.697 0.856 OpenAI ada2 0.343 0.984 0.479 0.704 0.863 0.477 0.676 0.871 Table 1: Effectiveness of OpenAI ada2 embeddings on the MS MARCO development set queries (dev) and queries from the TREC 2019/2020 Deep Learning Tracks (DL19/DL20), compared to a selection of other models. 
\u2217indicates results from Pyserini\u2019s two-click reproductions (Lin, 2022) available at https://castorini.github.io/pyserini/2cr/msmarco-v1-passage.html, which may differ slightly from the original papers. All other results are copied from their original papers. last row of the table. Note that due to the non-deterministic nature of HNSW indexing, effectiveness figures may vary slightly from run to run. For comparison, we present results from a few select points of reference, classified according to the taxonomy proposed by Lin (2021); OpenAI\u2019s embedding models belong in the class of learned dense representations. Notable omissions in the results table include the following: the original OpenAI paper that describes the embedding model (Neelakantan et al., 2022) does not report comparable results; neither does Izacard et al. (2021) for Contriever, another popular learned dense representation model. Recently, Kamalloo et al. (2023) also evaluated OpenAI\u2019s ada2 embeddings, but they did not examine any of the test collections we do here. Looking at the results table, our main point is that we can achieve effectiveness comparable to the state of the art using a production-grade, completely off-the-shelf embedding API coupled with Lucene for indexing and retrieval. To complete our experimental results, we provide performance figures on a server with two Intel Xeon Platinum 8160 processors (33M Cache, 2.10 GHz, 24 cores each) with 1 TB RAM, running Ubuntu 18.04 with ZFS. This particular processor was launched in Q3 of 2017 and is no longer commercially available; we can characterize this server as \u201chigh end\u201d, but dated. Indexing took around three hours with 16 threads, with the parameters M set to 16 and efC set to 100, without final segment optimization. Using 32-bit floats, the raw 1536-dimensional vectors should consume 54 GB on disk, but for convenience we used an inefficient JSON text-based representation. Therefore, our collection of vectors takes up 109 GB as compressed text files (using gzip). For vector search, using 16 threads, we were able to achieve 9.8 queries per second (QPS), fetching 1000 hits per query with the efSearch parameter set to 1000. These results were obtained on the MS MARCO development queries, averaged over four separate trials after a warmup run. 4 Discussion Our demonstration shows that it is possible today to build a vector search prototype using OpenAI embeddings directly with Lucene. Nevertheless, there are a number of issues worth discussing, which we cover below. Jank. We concede that getting our demonstration to work required a bit of janky implementation tricks. Even though all the required features have been merged to Lucene\u2019s source trunk, no official release has been cut that incorporates all the patches (at least at the time we performed our experiments in early August, 2023). Quite simply, the complete feature set necessary for production deployment is not, as they say, ready for prime time. However, to use another clich\u00e9, this is a small matter of programming (SMOP). We see no major roadblocks in the near future: the next official release of 5 \fLucene will incorporate the necessary features, and after that, all downstream consumers will begin to incorporate the capabilities that we demonstrate here. Nevertheless, Lucene has been a relative laggard in dense retrieval. Despite this, we believe that recent developments point to substantial and sustained investments in the Lucene ecosystem moving forward. 
For example, in its Q4 FY 2023 report, Elastic announced the Elasticsearch Relevance Engine, \u201cpowered by built-in vector search and transformer models, designed specifically to bring the power of AI innovation to proprietary enterprise data.\u201d A recent blog post8 from Amazon Web Services explained vector database capabilities in OpenSearch, providing many details and reference architectures. These are just two examples of commitments that help bolster the case for Lucene that we have articulated here. Overall, we are optimistic about the future of the ecosystem. Performance. Lucene still lags alternatives in terms of indexing speed, query latency and throughput, and related metrics. For example, Ma et al. (2023) recently benchmarked Lucene 9.5.0 against Faiss (Johnson et al., 2019). Experiments suggest that Lucene achieves only around half the query throughput of Faiss under comparable settings, but appears to scale better when using multiple threads. Although these results only capture a snapshot in time, it would be fair to characterize Lucene as unequivocally slower. However, Faiss is relatively mature and hence its headroom for performance improvements is rather limited. In contrast, we see many more opportunities for gains in Lucene. Coupled with signs of strong commitment (discussed above), we believe that the performance gap between Lucene and dedicated vector stores will decrease over time. Alternatives. We acknowledge a number of competing alternatives that deserve consideration. Note that the core argument we forward is about cost\u2013benefit tradeoffs: In our view, it is not clear that the benefits offered by a dedicated vector store outweigh the increased architectural complexity of introducing a new software component within an enterprise. From this perspective, we can identify two potentially appealing alternatives: \u2022 Fully managed services. One simple way to reduce architectural complexity is to make it someone else\u2019s problem. Vespa9 is perhaps the best example of this solution, providing both dense retrieval and sparse retrieval capabilities in a fully managed environment, eliminating the need for users to explicitly worry about implementation details involving inverted indexes, HNSW indexes, etc. Vepsa provides a query language that supports a combination of vector search, full-text search, as well as search over structured data. Our main question here concerns traction and adoption: as a brownfield application, we\u2019re not convinced that enterprises will make the (single, large) leap from an existing solution to a fully managed service. \u2022 Vector search capabilities in relational databases. In the same way that vector search grows naturally out of an already deployed and mature text search platform (e.g., Elasticsearch), we can see similar arguments being made from the perspective of relational databases. Despite numerous attempts (spanning decades) at toppling its lofty perch (Stonebraker and Hellerstein, 2005; Pavlo et al., 2009), relational databases remain a permanent fixture in enterprise \u201cdata stacks\u201d. This means that by building vector search capabilities into relational databases, enterprises gain entr\u00e9e into the world of dense retrieval (essentially) for free. A great example of this approach is pgvector,10 which provides open-source vector similarity search for Postgres. 
We find the case compelling: if your enterprise is already running Postgres, pgvector adds vector search capabilities with minimal additional complexity. It\u2019s basically a free lunch. 5" + }, + { + "url": "http://arxiv.org/abs/2304.01019v1", + "title": "Simple Yet Effective Neural Ranking and Reranking Baselines for Cross-Lingual Information Retrieval", + "abstract": "The advent of multilingual language models has generated a resurgence of\ninterest in cross-lingual information retrieval (CLIR), which is the task of\nsearching documents in one language with queries from another. However, the\nrapid pace of progress has led to a confusing panoply of methods and\nreproducibility has lagged behind the state of the art. In this context, our\nwork makes two important contributions: First, we provide a conceptual\nframework for organizing different approaches to cross-lingual retrieval using\nmulti-stage architectures for mono-lingual retrieval as a scaffold. Second, we\nimplement simple yet effective reproducible baselines in the Anserini and\nPyserini IR toolkits for test collections from the TREC 2022 NeuCLIR Track, in\nPersian, Russian, and Chinese. Our efforts are built on a collaboration of the\ntwo teams that submitted the most effective runs to the TREC evaluation. These\ncontributions provide a firm foundation for future advances.", + "authors": "Jimmy Lin, David Alfonso-Hermelo, Vitor Jeronymo, Ehsan Kamalloo, Carlos Lassance, Rodrigo Nogueira, Odunayo Ogundepo, Mehdi Rezagholizadeh, Nandan Thakur, Jheng-Hong Yang, Xinyu Zhang", + "published": "2023-04-03", + "updated": "2023-04-03", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.CL" + ], + "main_content": "INTRODUCTION Cross-lingual information retrieval (CLIR) is the task of searching documents in one language with queries from a different language\u2014 for example, retrieving Russian documents using English queries. Typically, a CLIR system exists as part of an overall pipeline involving machine translation, related human language technologies, and sometimes human experts, that together help users satisfy information needs with content in languages they may not be able to read. Research on cross-lingual information retrieval dates back many decades [11, 16, 31, 43], but there has been a recent revival of interest in this challenge [13, 49], primarily due to the advent of multilingual pretrained transformer models such as mBERT [10] and XLM-R [6]. A nexus of recent research activity for cross-lingual information retrieval is the TREC NeuCLIR Track, which ran for the first time at TREC 2022 but has plans for continuing in 2023 and perhaps beyond. The track provides a forum for a community-wide evaluation of CLIR systems in the context of modern collections and systems, dominated today by neural methods. NeuCLIR topics (i.e., information needs) are expressed in English, and systems are tasked with retrieving relevant documents from corpora in Chinese, Persian, and Russian. Perhaps as a side effect of the breakneck pace at which the field is advancing, we feel that there remains a lack of clarity in the IR community about the relationship between different retrieval methods (e.g., dense vs. sparse representations, \u201clearned\u201d vs. \u201cheuristic\u201d vs. \u201cunsupervised\u201d, etc.) and how they should be applied in different retrieval settings. 
Furthermore, the increasing sophistication of today\u2019s retrieval models and the growing complexity of modern software stacks create serious challenges for reproducibility efforts. This not only makes it difficult for researchers and practitioners to compare alternative approaches in a fair manner, but also creates barriers to entry for newcomers. These issues already exist for mono-lingual retrieval, where documents and queries are in the same language. With the added complexity of cross-lingual demands, the design choices multiply (choice of models, training regimes, application of translation systems, etc.), further muddling conceptual clarity and experimental reproducibility. Contributions. Our work tackles these challenges, specifically focused on helping both researchers and practitioners sort through the panoply of CLIR methods in the context of modern neural retrieval techniques dominated by deep learning. Our contributions can be divided into a \u201cconceptual\u201d and a \u201cpractical\u201d component: Conceptually, we provide a framework for organizing different approaches to cross-lingual retrieval based on the general design of multi-stage ranking for mono-lingual retrieval. These architectures comprise first-stage retrievers that directly perform top-\ud835\udc58retrieval over an arbitrarily large collection of documents, followed by one or more reranking stages that refine the rank order of candidates generated by the first stage. Recently, Lin [23] proposed that retrieval techniques can be characterized by the representations that they manipulate\u2014whether dense semantic vectors or sparse lexical vectors\u2014and how the weights are assigned\u2014whether heuristically, as in the case of BM25, or by a neural network that has been trained with labeled data. Translated into the cross-lingual case, this leads naturally to three main approaches to first-stage retrieval: document translation, query translation, and use of language-independent representations. While these approaches date back many decades, there are \u201cmodern twists\u201d based on learned representations that take advantage of powerful pretrained transformer models. Figure 1: Different retrieval architectures: (a) a mono-lingual bi-encoder architecture that captures both dense and sparse retrieval methods; (b) bi-encoder adapted for document translation, where all documents are translated into \ud835\udc52and queries remain in \ud835\udc52; (c) bi-encoder adapted for query translation, where query \ud835\udc52is translated into \ud835\udc53and issued against documents in \ud835\udc53; (d) bi-encoder where the encoders can project content from multiple languages into the same representation space. For mono-lingual retrieval, a standard multi-stage architecture applies rerankers to the output of first-stage retrievers, like those discussed above. In a cross-lingual context, we describe how cross-lingual rerankers can be designed and built using existing multilingual models. Results fusion forms the final component of our conceptual framework.
Within a multi-stage architecture, there arises a natural question of when fusion should be performed: this manifests in the early vs. late fusion techniques that we examine. Practically, we provide a number of reproducible baselines in the context of the above conceptual framework for the TREC 2022 NeuCLIR test collection, including variants of the highest-scoring runs that were submitted to the evaluation. These reproducible baselines have been incorporated into the Anserini and Pyserini IR toolkits. Our efforts are built on a collaboration of the two teams that submitted the most effective runs to the TREC evaluation. We hope that this work provides a solid foundation for future work, both in terms of offering a conceptual framework and reference implementations that the community can further build on. 2 MONO-LINGUAL RETRIEVAL OVERVIEW Since mono-lingual retrieval architectures provide the starting point for cross-lingual retrieval, it makes sense to begin with an overview of modern mono-lingual methods. Here, we adopt the standard formulation of the (mono-lingual) retrieval task (also called ad hoc retrieval). From a finite but arbitrarily large collection of documents C = {\ud835\udc511,\ud835\udc512 . . . ,\ud835\udc51\ud835\udc5b}, the system\u2019s task, given query \ud835\udc5e, is to return a top-\ud835\udc58ranking of documents that maximizes some metric of quality such as nDCG or average precision. Rerankers. The earliest applications of neural networks to tackle ad hoc retrieval in a data-driven manner date back to the mid 2000s in the context of learning to rank [5]. Since then, search engine design has been dominated by multi-stage ranking architectures [30, 44], where a first-stage retriever (often, just BM25 retrieval) generates candidate documents that are then reranked by one or more stages, typically by machine-learned models. In the \u201ctransformer era\u201d, for example, BERT [32, 34] and T5 [33] can be used in exactly this manner. Use of pretrained transformers for reranking requires feeding the model both the query and the candidate text, and this style of model application is known as a cross-encoder. Bi-encoder architectures. An important recent innovation for passage retrieval was the introduction of so-called dense retrieval models that take advantage of a bi-encoder design (contrasted with the cross-encoder design discussed above): DPR [19] and ANCE [45] are two early examples. With sufficient labeled data, we can learn encoders (typically, transformer-based models) that project queries and documents into a dense (semantic) representation space (e.g., 768 dimensions) where relevance ranking can be recast as nearestneighbor search over representation vectors. After the introduction of dense retrieval models, researchers soon realized that transformer-based encoders could also be coaxed to generate sparse representations, where the vector basis, for example, spans the input vocabulary space. Another way to view these so-called sparse retrieval models is to contrast them with BM25: whereas BM25 term weights are assigned using a heuristic scoring function, sparse retrieval models assign term weights that are learned using pretrained transformers such as BERT. Examples of these learned sparse retrieval models include DeepImpact [29], uniCOIL [24, 53], SPLADE [12], as well as many others. 
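Seen abstractly, dense retrieval models, sparse retrieval models, and bag-of-words models differ mainly in the vectors their encoders produce; scoring in every case reduces to an inner product between query and document representations. The toy sketch below illustrates this view; the \u201cencoders\u201d are stand-ins, not the actual learned models discussed here.

```python
# Toy illustration (not from the paper): dense and sparse bi-encoders both score
# query-document pairs with an inner product; only the representation basis differs.
import numpy as np
from collections import Counter

def dense_encode(text, dim=768):
    # Stand-in for a learned transformer encoder producing a dense semantic vector.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(dim)

def sparse_encode(text, vocab):
    # Stand-in for a learned sparse encoder (or BM25-style weighting):
    # a |V|-dimensional vector grounded in the lexical space.
    counts = Counter(text.lower().split())
    return np.array([float(counts[t]) for t in vocab])

query = "neural cross-lingual retrieval"
doc = "cross-lingual retrieval with neural models"
vocab = sorted(set((query + " " + doc).lower().split()))

dense_score = float(np.dot(dense_encode(query), dense_encode(doc)))
sparse_score = float(np.dot(sparse_encode(query, vocab), sparse_encode(doc, vocab)))
print(dense_score, sparse_score)  # both are inner products over (different) bases
```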
Recently, Lin [23] made the observation that dense retrieval models, sparse retrieval models, and traditional bag-of-words models (e.g., BM25) are all parametric variations of a bi-encoder architecture, which is shown in Figure 1(a). In all three classes of models, \u201cencoders\u201d take queries or documents and generate vector representations. There are two major axes of differences, the first of which lies in the basis of the representation vector: dense retrieval models \fgenerate dense (semantic) representations whereas sparse retrieval models and bag-of-words model ground their representation vectors in lexical space. The other major axis of variation is whether these representations are learned: yes in the case of dense and sparse retrieval models, but no in the case of traditional bag-of-words models. The conceptual framework for mono-lingual retrieval provides us with a basis for organizing cross-lingual retrieval approaches, which we discuss next. 3 CROSS-LINGUAL RETRIEVAL METHODS The cross-lingual information retrieval task is formalized in a similar manner as the mono-lingual retrieval task. We assume a collection of documents C\ud835\udc53in language \ud835\udc53comprised of {\ud835\udc511,\ud835\udc512 . . . ,\ud835\udc51\ud835\udc5b}. The system is given a query \ud835\udc5ein language \ud835\udc52, which we denote \ud835\udc5e\ud835\udc52 for clarity, and its task is to return a top-\ud835\udc58ranking of documents from C\ud835\udc53that maximizes some metric of quality such as nDCG or average precision. Throughout this work, \ud835\udc52refers to English and \ud835\udc53 refers to some non-English language (e.g., Russian), but this need not be the case in general. Building from the design of the mono-lingual retrieval architecture presented in the previous section, our discussions begin with three possible designs for first-stage retrieval: document translation, query translation, and the use of language-independent representations. We then overview cross-encoders for reranking the output of first-stage retrievers and finally conclude with some thoughts about fusion techniques. To further ground cross-lingual retrieval techniques, we provide some details about the TREC 2022 NeuCLIR evaluation. Given English queries, participants are tasked with retrieving from three separate corpora comprising Persian, Russian, and Chinese newswire documents curated from the Common Crawl between August 1, 2016 and July 31, 2021. The corpora are modest in size, with 2.23 million documents in Persian, 4.63 million documents in Russian, and 3.18 million documents in Chinese. Information needs (i.e., topics, in TREC parlance) were developed following a standard process for building retrieval test collections [15, 42]. The organizers released 114 topics, originally developed in English, which were then translated into Persian, Russian, and Chinese\u2014both by humans and automatically by Google Translate. The topics comprise \u201ctitle\u201d and \u201cdescription\u201d fields, where the former are akin to keyword queries and the latter are roughly sentence-long articulations of the information need. By design, all topics are aligned, in the sense that for each topic, we have translations in all three languages. However, it was not the case that all topics were evaluated for all languages: In total, the organizers released relevance judgments for 46 topics in Persian, 45 topics in Russian, and 49 topics in Chinese. 
3.1 Document Translation A very simple approach to cross-lingual information retrieval is known as document translation: Given \ud835\udc5e\ud835\udc52in language \ud835\udc52and the corpus C\ud835\udc53in language \ud835\udc53, we can translate the entire corpus into language \ud835\udc52, i.e., {Translate(\ud835\udc51\ud835\udc56)}, and then perform mono-lingual retrieval in language \ud835\udc52. This design is shown in Figure 1(b), where the primary addition is a document translation phase that feeds into the document side of the bi-encoder architecture. While translating the entire corpus can be time-consuming, it only needs to be performed once and can be viewed as an expensive pre-processing step, like other computationally demanding document expansion techniques such as doc2query [35]. Any translation technique can be used, including off-the-shelf MT systems. Generally, since documents are comprised of well-formed sentences, automatic translation output can be quite fluent, depending on the quality of the underlying system. This stands in contrast to query translation (see below), where quality often suffers because queries are usually much shorter (hence lacking context) and systems are not usually trained on such inputs. Once C\ud835\udc53has been translated into C\ud835\udc52, we now have a monolingual retrieval task since queries are also in \ud835\udc52. In our case, the three corpora are in Persian, Russian, and Chinese, and we used the English translations provided by the NeuCLIR Track organizers, generated by the SockEye MT system. From the NeuCLIR topics, we extracted three types of English queries: only the \u201ctitle\u201d field, only the \u201cdescription\u201d field, and both. Our experiments used two retrieval models and pseudo-relevance feedback: BM25. Despite the advent of numerous neural ranking models, this traditional \u201cbag-of-words\u201d model remains a robust baseline. SPLADE. We chose SPLADE++ Ensemble Distil [12] due to its zero-shot capabilities. The SPLADE family of models is a sparse neural retrieval model that learns both document and query expansion controlled by a regularization term. Pseudo-relevance feedback (PRF). On top of results from both BM25 and SPLADE, we apply pseudo-relevance feedback. While RM3 is a popular choice and has been well studied in the context of neural methods [48], in this work we instead apply Rocchio feedback, for two reasons: First, Rocchio feedback has been demonstrated to be an effective pseudo-relevance feedback approach for dense vector representations, and applying Rocchio to lexical representations provides conceptual unity. In contrast, there is no equivalent RM3 variant for dense vectors, which makes comparing sparse and dense PRF more difficult. Second, previous work has shown that Rocchio is at least as effective as RM3 [26], so we gain simplicity and consistency without sacrificing effectiveness. 3.2 Query Translation The flip side of document translation is known as query translation: Given \ud835\udc5e\ud835\udc52in language \ud835\udc52and the corpus C\ud835\udc53in language \ud835\udc53, we can translate the query into language \ud835\udc53, i.e., Translate(\ud835\udc5e\ud835\udc52) = \ud835\udc5e\ud835\udc53, and then perform mono-lingual retrieval in language \ud835\udc53. This design is shown in Figure 1(c), where we add a query translation component that feeds the query side of the bi-encoder architecture. 
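Once the corpus has been translated, the document translation condition (and, symmetrically, the query translation condition just introduced) reduces to a mono-lingual retrieval run. The following sketch uses Pyserini for BM25 with optional Rocchio feedback; the index path and query are placeholders, and the Rocchio call assumes a sufficiently recent Pyserini release.

```python
# Sketch (paths and query are placeholders): mono-lingual BM25 retrieval with Pyserini
# over a translated corpus, as in the document translation condition.
from pyserini.search.lucene import LuceneSearcher

searcher = LuceneSearcher("indexes/neuclir22-translated-en")  # hypothetical index path
searcher.set_bm25(k1=0.9, b=0.4)   # Pyserini's default BM25 parameters
# searcher.set_rocchio()           # optionally enable Rocchio pseudo-relevance feedback

hits = searcher.search("economic sanctions impact", k=1000)
for rank, hit in enumerate(hits[:10], start=1):
    print(rank, hit.docid, round(hit.score, 4))
```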
Query translation is much more computationally efficient than document translation, but has the disadvantages already discussed\u2014 queries may be more difficult to translate given that they may not be well-formed sentences. However, this approach enables more rapid experimentation since the introduction of a new translation model does not require re-translation of the entire corpus. One challenge of query translation is that we need a good monolingual retrieval model in \ud835\udc53, which by definition is non-English. While BM25 can provide a baseline (in the bag-of-words space of language \ud835\udc53), effective learned retrieval models are more difficult \fto come by since less manually labeled data are available in nonEnglish languages. Our experiments consider both human and machine translations of the topics provided by the track organizers. From each type of translation, we can create three types of queries: \u201ctitle\u201d, \u201cdescription\u201d, and \u201cboth\u201d (similar to the document translation case above). Thus, we have a total of six variations: {human translation, machine translation} \u00d7 {title, description, both}. With these conditions, we experimented with two different retrieval models as well as pseudo-relevance feedback: BM25. Again, this traditional \u201cbag-of-words\u201d model remains a robust baseline. SPLADE. To build SPLADE models in non-English languages, we first need to start with a good pretrained language model for that language. Thus, the models used here are first trained from scratch with the MLM+FLOPS loss [20] using a corpus concatenation of (i) the NeuCLIR corpus of the target language, (ii) the MS MARCO translations [4] for the target language, and (iii) the Mr. TyDi [51] corpus of the target language (if available). Finally, we fine-tuned on the target language version of MS MARCO, expecting to have similar zero-shot properties as similar experiments in English. A separate model was created for each language.1 Pseudo-relevance feedback. As in the document translation case, we can apply pseudo-relevance feedback on top of either BM25 or SPLADE. For the same reasons discussed above, Rocchio was chosen as the feedback method. 3.3 Language-Independent Representations Starting from the bi-encoder design for mono-lingual retrieval shown in Figure 1(a), one might wonder if it were possible for the document and query encoders to generate some sort of languageindependent semantic representation that would support direct relevance matching across languages. With the advent of pretrained multilingual transformers, this is indeed possible. For example, we can apply the document encoder to documents in C\ud835\udc53(in language \ud835\udc53), and apply the query encoder to a query in \ud835\udc52, and directly conduct relevance ranking on the representations. Thus, we can perform cross-lingual retrieval without explicit query or document translation. This is shown in Figure 1(d). The most straightforward implementation of this approach is to train a DPR model [19], but starting from a multilingual transformer backbone such as mBERT. To our knowledge, Asai et al. [1] was the first to propose such an approach. More recently, Zhang et al. [52] built on this basic design and introduced different approaches to exploit cross-lingual transfer by \u201cpre\u2013fine-tuning\u201d on English data before further fine-tuning on the target languages using non-English data. Although Zhang et al. 
focused on mono-lingual retrieval in non-English languages, many of the lessons learned are applicable to the cross-lingual case as well. Specifically, for this work, we pre\u2013fine-tuned a multilingual DPR model initialized from an XLM-R [6] backbone,2 dubbed xDPR. The model was trained on the MS MARCO passage dataset [2], where both query and passage encoders share parameters. (Footnotes: 1 SPLADE and pretrained models are made available at https://huggingface.co/naver/modelname with modelname = neuclir22-{pretrained,splade}-{fa,ru,zh}; 2 https://huggingface.co/xlm-roberta-large.) With this trained model, we separately encoded the corpora in Persian, Russian, and Chinese. It is perhaps worth emphasizing that the same model was used in all three cases. For query encoding, we have a number of design choices. Similar to document translation and query translation, we can use \u201ctitle\u201d, \u201cdescription\u201d, or \u201cboth\u201d. Furthermore, we can encode queries either in \ud835\udc52or \ud835\udc53. In the first case, we are asking the encoder to directly project \ud835\udc52queries into the semantic space occupied by the \ud835\udc53documents. In the second case, the query starts off in \ud835\udc53, so the model is encoding a sequence in \ud835\udc53 into the semantic space occupied by \ud835\udc53documents. Thus, for each language, we arrive at a total of nine variations: {original query, human translation, machine translation} \u00d7 {title, description, both}. Finally, on top of xDPR retrieved results, we can apply pseudo-relevance feedback using Rocchio\u2019s method, following the work of Li et al. [21, 22]. Thus, combined with Liu [26], we are able to implement Rocchio feedback consistently across both dense and sparse retrieval models. 3.4 Reranking In a standard multi-stage ranking architecture, the first-stage retriever generates a ranked list of candidates that are then processed by one or more reranking stages that aim to improve the ranking. Reranking is also applicable in the cross-lingual case, but depending on the first-stage retriever, the candidate query/document pairs may either be in \ud835\udc52or \ud835\udc53. In cases where both the queries and documents are in \ud835\udc52, we can use a mono-lingual English reranker. For the first-stage retrievers based on document translation, our experiments used monoT5, which is based on T5 [37]. Reranking is performed in English with the following prompt: \u201cQuery: {query_text} Document: {doc_text} Relevant:\u201d The model is asked to generate either the \u201ctrue\u201d or \u201cfalse\u201d token, from which we can extract the probability of relevance used to sort the candidates. When the monoT5 model is fine-tuned on the MS MARCO passage dataset, it achieves state-of-the-art results on the TREC Deep Learning Tracks [8, 9], as well as impressive zero-shot effectiveness on BEIR [17] and many other datasets [38\u201340, 50]. For reranking first-stage retrievers based on query translation, we used a variant based on the multilingual version of T5 called mT5, which was pretrained on the multilingual mC4 dataset [46]; otherwise, we use the same reranking approach. To fine-tune mT5 for reranking, we employed a similar strategy as Bonifacio et al. [4] using mMARCO, the multilingual version of the MS MARCO dataset. For our experiments, we used the XXL model with 13B parameters. 3.5 Fusion Researchers have known for many decades that fusion techniques, which combine evidence from multiple individual runs, can improve effectiveness [3, 41].
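Before turning to fusion, the monoT5-style scoring described above can be sketched as follows. The checkpoint name is an assumption (a publicly released monoT5 model), not necessarily the exact model or size used in these experiments, and the input handling is simplified.

```python
# Sketch of monoT5-style relevance scoring: feed "Query: ... Document: ... Relevant:"
# and read the probability mass on the "true" vs. "false" tokens at the first decoding step.
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

name = "castorini/monot5-base-msmarco"  # assumed public checkpoint, not the paper's exact model
tok = T5Tokenizer.from_pretrained(name)
model = T5ForConditionalGeneration.from_pretrained(name).eval()

TRUE_ID = tok.encode("true")[0]    # first sentencepiece token of "true"
FALSE_ID = tok.encode("false")[0]  # first sentencepiece token of "false"

def relevance(query: str, doc: str) -> float:
    prompt = f"Query: {query} Document: {doc} Relevant:"
    enc = tok(prompt, return_tensors="pt", truncation=True, max_length=512)
    start = torch.full((1, 1), model.config.decoder_start_token_id, dtype=torch.long)
    with torch.no_grad():
        logits = model(**enc, decoder_input_ids=start).logits[0, 0]
    # Probability of "true", used to sort candidates.
    return torch.softmax(logits[[TRUE_ID, FALSE_ID]], dim=0)[0].item()

# Rerank candidates from any first-stage retriever by P("true").
candidates = ["first candidate document text", "second candidate document text"]
reranked = sorted(candidates, key=lambda d: relevance("sample query", d), reverse=True)
```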
Fusion works particularly well when the individual runs are based on different underlying techniques, such as in the case of dense vs. sparse retrieval models [14, 27]. Given that our first-stage retrievers are all based on very different approaches, \fwe would expect fusion to yield substantial boosts in effectiveness, although this does not appear to be borne out experimentally. Within a multi-stage architecture, there arises a natural question of when fusion should be performed. One possible approach is to independently rerank the output of each first-stage retriever, and then fuse those results; we call this late fusion. Another possible approach is to first fuse the output of the first-stage retrievers, and then rerank the combined results; we call this early fusion. The effectiveness difference between the two approaches is an empirical question, but late fusion is more computationally intensive because it requires more reranking. 4 IMPLEMENTATION DETAILS All the first-stage and fusion retrieval conditions described in this paper are implemented in Anserini [47] and Pyserini [25]. Anserini is a Java-based toolkit built around the open-source Lucene search library to support reproducible information retrieval research. Pyserini provides a Python interface to Anserini and further augments its capabilities by including support for dense retrieval models. Together, the toolkits are widely adopted by researchers in the IR and NLP communities. For document translation using BM25, our implementation uses Lucene\u2019s default analyzer for English, which performs tokenization, stemming, etc. Retrieval is performed with Pyserini\u2019s default BM25 parameters (\ud835\udc581 = 0.9, \ud835\udc4f= 0.4). For query translation, note that since we are indexing non-English text, analyzers in \ud835\udc53are required. Fortunately, Lucene already has analyzers implemented for all three languages, which we used out of the box. The same BM25 parameters were used. All SPLADE models were implemented in Lucene using the standard \u201cfake documents\u201d trick [28]. Token weights were used to generate synthetic documents where the token was repeated a number of times equal to its weight (after quantizing into integers). For example, if \u201ccar\u201d receives a weight of ten from the encoder, we simply repeat the token ten times. These fake documents are then indexed with Anserini as usual, where the weight is stored in the term frequency position of the postings in the inverted index. Top-\ud835\udc58 retrieval is implemented by using a \u201csum of term frequency\u201d scoring function in Lucene, which produces exactly the same output as ranking by the inner product between query and document vectors. Anserini provides the appropriate abstractions that hide all these implementation details. Support for dense retrieval is provided in Pyserini with the Faiss toolkit [18]; all xDPR runs were conducted with flat indexes. For both BM25 and SPLADE models, Anserini exposes the appropriate bindings for performing retrieval in Python, and Pyserini provides appropriate interfaces that abstract over and unify retrieval using dense and sparse models (i.e., they are merely parametric variations in the command-line arguments). Pyserini additionally provides implementations of reciprocal rank fusion, and thus the entire infrastructure makes mixing-and-matching different experimental conditions quite easy. 5 RESULTS Our results are organized into following progression: first-stage retrievers, reranking, and fusion. 
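The \u201cfake documents\u201d trick is simple enough to sketch directly. The quantization scale below is an assumption; implementations differ in how real-valued weights are mapped to integer term frequencies before indexing with Anserini.

```python
# Sketch of impact indexing via "fake documents": each token is repeated a number of
# times equal to its quantized weight, so the term frequency stored in the inverted
# index carries the learned impact and a sum-of-tf scorer reproduces the inner product.
def to_fake_document(token_weights, scale=100):
    # scale is an assumed quantization factor, not the exact value used in Anserini.
    tokens = []
    for token, weight in token_weights.items():
        tf = max(int(round(weight * scale)), 0)
        tokens.extend([token] * tf)
    return " ".join(tokens)

print(to_fake_document({"car": 0.10, "vehicle": 0.03}))
# -> "car" repeated 10 times, "vehicle" repeated 3 times
```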
We report retrieval effectiveness in terms of nDCG@20, the official metric of the NeuCLIR evaluation, and recall at a cutoff of 1000 hits (recall@1000), which quantifies the effectiveness upper bound of reranking. The organizers also measured mean average precision (MAP) as a supplemental metric; we followed this procedure as well. Overall, the findings from nDCG@20 and MAP were consistent, and so for brevity we omit the MAP results in our presentation. In Section 3, we describe a vast design space for first-stage variants that can feed many reranking and fusion approaches. It is not practical to exhaustively examine all possible combinations, and thus our experiments were guided by progressive culling of \u201cuninteresting\u201d settings, as we\u2019ll describe. Finally, a word on significance testing: We are of course cognizant of its importance, but we are equally aware of the dangers of multiple hypothesis testing. Due to the large number of conditions we examine, a standard technique such as the Bonferroni correction is likely too conservative to detect significant differences, especially given the relatively small topic size of NeuCLIR. For most of our experiments, we did not perform significance testing and instead focused on general trends that are apparent from our large numbers of experimental conditions. We applied significance testing more judiciously, to answer targeted research questions. To be clear, the results we report are the only tests we conducted\u2014that is, we did not cherry-pick the most promising results. In all cases, we used paired \ud835\udc61-tests (\ud835\udc5d\u22640.05) with the Bonferroni correction. 5.1 First-Stage Retrievers We begin by examining the output of individual first-stage retrievers. Tables 1 and 2 present results in terms of nDCG@20 and recall@1000, respectively. Each block of rows is organized by the general approach. The columns show metrics grouped by language, and within each block, we report the results of using queries comprised of the \u201ctitle\u201d field, the \u201cdescription\u201d field, and both. Document translation. Recall that in the document translation condition, we are indexing the machine-translated documents provided by the NeuCLIR organizers, which are in English. The BM25 conditions in rows (1ab) and the SPLADE conditions in rows (2ab) differ only in the retrieval model applied to the translated corpus. For BM25, we see that \u201ctitle\u201d and \u201cboth\u201d query conditions yield about the same effectiveness (both metrics) on Persian and Chinese, but \u201cboth\u201d is worse on Russian. For all languages, it appears that \u201cdescription\u201d queries perform worse. For SPLADE, interestingly, for Persian and Chinese, there does not appear to be much of an effectiveness gap between the three types of queries for both metrics. This is likely because the retrieval model includes query expansion, and so the benefits from having richer descriptions of the information need diminish. The comparisons between (a) vs. (b) rows highlight the impact of pseudo-relevance feedback. We see that, at best, PRF yields a small improvement for BM25 in terms of nDCG@20, and for SPLADE, PRF actually decreases effectiveness. However, looking at the recall figures in Table 2, it does appear that PRF boosts recall. This behavior is expected, as PRF is primarily a recall-enhancing device. Query translation. 
With BM25, shown in rows (3a)\u2013(3d), we see that \u201ctitle\u201d and \u201cboth\u201d conditions are generally on par for Russian \fnDCG@20 Persian Russian Chinese PRF title desc both title desc both title desc both document translation \u2014 BM25 (1a) official Sockeye translation \u2717 0.3665 0.2889 0.3670 0.3693 0.2060 0.3080 0.3705 0.3070 0.3723 (1b) official Sockeye translation \u2713 0.3532 0.3127 0.3720 0.3589 0.2627 0.3188 0.3802 0.3206 0.3806 document translation \u2014 SPLADE (2a) official Sockeye translation \u2717 0.4627 0.4618 0.4802 0.4865 0.4193 0.4573 0.4233 0.4299 0.4236 (2b) official Sockeye translation \u2713 0.4438 0.4675 0.4645 0.4836 0.4243 0.4604 0.4204 0.4142 0.4206 query translation \u2014 BM25 (3a) human translation (HT) \u2717 0.3428 0.2843 0.3429 0.3668 0.3138 0.3665 0.2478 0.2068 0.2572 (3b) machine translation (MT) \u2717 0.3331 0.2974 0.3700 0.3564 0.2972 0.3605 0.1830 0.1498 0.1754 (3c) human translation (HT) \u2713 0.3356 0.2885 0.3408 0.3572 0.3366 0.3630 0.2544 0.1985 0.2734 (3d) machine translation (MT) \u2713 0.3374 0.3300 0.3612 0.3426 0.3257 0.3764 0.1861 0.1464 0.1785 query translation \u2014 SPLADE (4a) human translation (HT) \u2717 0.4301 0.4413 0.4788 0.4594 0.3922 0.4214 0.3110 0.2935 0.3143 (4b) machine translation (MT) \u2717 0.4437 0.4300 0.4728 0.4452 0.3792 0.4156 0.2843 0.2527 0.2929 (4c) human translation (HT) \u2713 0.4348 0.4232 0.4146 0.4322 0.4133 0.4316 0.3198 0.2926 0.3077 (4d) machine translation (MT) \u2713 0.4193 0.4121 0.4444 0.4337 0.3965 0.4075 0.2920 0.2562 0.3029 language-independent representations \u2014 xDPR (5a) \u27e8d: original corpus, q: English\u27e9 \u2717 0.1522 0.1847 0.1804 0.2967 0.2913 0.2866 0.2200 0.2192 0.2185 (5b) \u27e8d: original corpus, q: HT\u27e9 \u2717 0.2776 0.2900 0.2953 0.3350 0.3276 0.3307 0.3197 0.3129 0.3035 (5c) \u27e8d: original corpus, q: MT\u27e9 \u2717 0.2721 0.2968 0.3055 0.3619 0.3348 0.3542 0.3025 0.2785 0.3013 (5d) \u27e8d: original corpus, q: English\u27e9 \u2713 0.1694 0.1996 0.1993 0.3116 0.3085 0.3045 0.2442 0.2343 0.2312 (5e) \u27e8d: original corpus, q: HT\u27e9 \u2713 0.3083 0.2988 0.3197 0.3349 0.3544 0.3578 0.3376 0.3463 0.3380 (5f) \u27e8d: original corpus, q: MT\u27e9 \u2713 0.3136 0.3012 0.3181 0.3727 0.3690 0.3793 0.3268 0.3041 0.3345 Table 1: Main results table reporting nDCG@20 for various first-stage retrievers. and Chinese for both metrics. For SPLADE, shown in rows (4a)\u2013 (4d), there does not appear to be a consistent finding: in some cases, \u201cboth\u201d beats \u201ctitle\u201d, and the opposite in other cases. However, it does appear that \u201cdescription\u201d alone is generally less effective in terms of nDCG@20. With query translation, there is a natural comparison between human translations and machine translations. In rows (3) and (4), these are the (a) and (c) conditions versus the (b) and (d) conditions. It does not appear that for Persian and Russian, machine-translated queries are consistently less effective than human translations, for both BM25 and SPLADE. In some cases, we actually observe machine-translated queries outperforming their human-translation counterparts. For BM25, note that since the queries are bags of words, the fluency of the translations is not important, so long as the correct content terms are present. For SPLADE, the model appears to be robust to possibly disfluent translations. 
In Chinese, however, there does seem to be a noticeable gap between human and machine translations, with the human translations generally yielding better results. Finally, consistent with the document translation case, pseudorelevance feedback does not appear to improve nDCG@20, but does improve recall. Once again, this is expected. Language-Independent Representations. The final blocks in Tables 1 and 2 show the effectiveness of xDPR. Recall our experimental design: on the document end, the original corpus in \ud835\udc53is encoded with the model. On the query end, there are three options: directly encode the English query, encode the human-translated (HT) query, or encode the machine-translated (MT) query. These are shown in rows (5a), (5b), and (5c), respectively. We see quite a big difference in effectiveness between row (5a) and row (5b), which indicates that there is a big loss in trying to encode queries in \ud835\udc52directly into the semantic space occupied by documents in \ud835\udc53, compared to encoding queries in \ud835\udc53. Clearly, the model is not able to adequately encode text with the same meaning in different languages (the query translations) into the same semantic space. Regardless of configuration, the dense retrieval models appear to be far less effective than the BM25 and SPLADE models, for both translation types, across both metrics. However, we see that pseudo-relevance feedback does appear to increase effectiveness, which is consistent with previous work [21, 22] on vector PRF. 5.2 Reranking In the previous section, we examined first-stage retrieval settings for 18 \u00d7 3 = 54 different conditions, for each language. It is impractical to report reranking results for every single condition, and thus we made a few choices to focus our attention: We considered only conditions that take advantage of both title and description fields, which appear to be more robust than title-only queries. 
We \fRecall@1000 Persian Russian Chinese PRF title desc both title desc both title desc both document translation \u2014 BM25 (1a) official Sockeye translation \u2717 0.7335 0.6319 0.7652 0.7409 0.5780 0.7255 0.7567 0.6639 0.7567 (1b) official Sockeye translation \u2713 0.8111 0.7638 0.8248 0.7908 0.6780 0.7798 0.8129 0.7404 0.8011 document translation \u2014 SPLADE (2a) official Sockeye translation \u2717 0.8478 0.8796 0.8860 0.8538 0.8376 0.8513 0.7997 0.7597 0.7922 (2b) official Sockeye translation \u2713 0.8592 0.8735 0.8703 0.8686 0.8238 0.8544 0.8038 0.7623 0.8067 query translation \u2014 BM25 (3a) human translation (HT) \u2717 0.7128 0.7027 0.7373 0.7125 0.6655 0.7421 0.4759 0.4577 0.4940 (3b) machine translation (MT) \u2717 0.7254 0.6815 0.7424 0.7332 0.6210 0.7373 0.3829 0.2989 0.4028 (3c) human translation (HT) \u2713 0.7691 0.7520 0.8092 0.7381 0.7276 0.7770 0.5230 0.5113 0.5327 (3d) machine translation (MT) \u2713 0.7672 0.7033 0.7829 0.7439 0.7136 0.7959 0.4361 0.3748 0.4341 query translation \u2014 SPLADE (4a) human translation (HT) \u2717 0.7652 0.8173 0.8239 0.7739 0.7200 0.7612 0.6803 0.6602 0.6551 (4b) machine translation (MT) \u2717 0.8045 0.8172 0.8437 0.7725 0.7150 0.7669 0.6424 0.5919 0.6312 (4c) human translation (HT) \u2713 0.7897 0.8175 0.8245 0.7946 0.7209 0.7776 0.7100 0.7205 0.7029 (4d) machine translation (MT) \u2713 0.8099 0.8117 0.8350 0.7918 0.7090 0.7590 0.6861 0.6096 0.6535 language-independent representations \u2014 xDPR (5a) \u27e8d: original corpus, q: English\u27e9 \u2717 0.4910 0.5445 0.5393 0.5704 0.5627 0.5834 0.4161 0.4359 0.4386 (5b) \u27e8d: original corpus, q: HT\u27e9 \u2717 0.6288 0.6780 0.7088 0.6196 0.5825 0.6368 0.5773 0.5841 0.6031 (5c) \u27e8d: original corpus, q: MT\u27e9 \u2717 0.6333 0.6453 0.6850 0.6285 0.5649 0.6300 0.5420 0.5382 0.5873 (5d) \u27e8d: original corpus, q: English\u27e9 \u2713 0.4702 0.4981 0.5347 0.6251 0.5971 0.6212 0.4330 0.4714 0.4593 (5e) \u27e8d: original corpus, q: HT\u27e9 \u2713 0.6409 0.6612 0.7212 0.6541 0.5915 0.6346 0.6088 0.5939 0.6310 (5f) \u27e8d: original corpus, q: MT\u27e9 \u2713 0.6686 0.6516 0.7071 0.6784 0.6032 0.6475 0.5744 0.5375 0.6109 Table 2: Main results table reporting recall@1000 for various first-stage retrievers. also focused on runs without PRF, since PRF represents additional computational costs (both latency and index size). For each language, this reduces the number of first-stage retrievers under consideration to nine. We applied reranking on these runs, including the title and description fields in the input template to the reranking models. We informally, but not exhaustively, examined other conditions, but they did not appear to alter our overall findings. For example, we tried reranking the first-stage retrieval results with pseudo-relevance feedback, but the results were not noticeably better (even though they exhibited higher recall). Reranking results are shown in Table 3. Under the effectiveness of the first-stage retriever (\u201c1st\u201d columns), we report (nDCG@20, recall@1000): the first quantifies candidate ranking quality and the second quantifies the upper bound effectiveness of a reranker. We see that reranking improves effectiveness by large margins, but this is expected as the effectiveness of cross-encoders in various settings is well known (see Section 3.4). One interesting observation, however, is that reranking reduces the effectiveness gap between the best and worst first-stage retrievers. 
For example, starting with BM25, which is clearly less effective than SPLADE, the reranker is able to \u201cmake up\u201d for the lower quality candidates, such that the end-to-end effectiveness is relatively close to reranking SPLADE results (at least in terms of nDCG). In fact, in some cases, reranking xDPR results yields scores that are even higher than reranking BM25 results. While \u201ccoupling effects\u201d between the first-stage retriever and reranker have been previously noted in the literature [14, 36], this finding affirms the need for further explorations. 5.3 Fusion With fusion, the design space of possible combinations is immense and impractical to exhaustively explore. To provide continuity, we focus only on the first-stage retrievers in the reranking experiments. In the space of fusion techniques, we settled on reciprocal rank fusion, which is a simple, effective, and robust approach [7]. With these considerations, we experimented with the following fusion conditions in Table 4: (6a) document translation combining BM25 and SPLADE; (6b) query translation combining BM25 and SPLADE; (6c) combining document and query translation with BM25; (6d) combining SPLADE document and query translation; (6e) combining all lexical approaches; (6f) combining both dense approaches; (6g) combining everything. The top block of Table 4 repeats the effectiveness of the first-stage retrievers for convenience. In the bottom block of the table, cases in which the fusion results are worse than the best input are shown in red. In these cases, fusion provides no value over just selecting the best individual run. \fnDCG@20 Persian Russian Chinese 1st rerank 1st rerank 1st rerank document translation \u2014 BM25 (1a) official Sockeye translation (0.3670, 0.7652) 0.5350 (0.3080, 0.7255) 0.5662 (0.3723, 0.7567) 0.4955 document translation \u2014 SPLADE (2a) official Sockeye translation (0.4802, 0.8860) 0.5545 (0.4573, 0.8513) 0.5714 (0.4236, 0.7922) 0.5026 query translation \u2014 BM25 (3a) human translation (HT) (0.3429, 0.7373) 0.5346 (0.3665, 0.7421) 0.5745 (0.2572, 0.4940) 0.4300 (3b) machine translation (MT) (0.3700, 0.7424) 0.5551 (0.3605, 0.7373) 0.5742 (0.1754, 0.4028) 0.3831 query translation \u2014 SPLADE (4a) human translation (HT) (0.4788, 0.8239) 0.5722 (0.4214, 0.7612) 0.5823 (0.3143, 0.6551) 0.4980 (4b) machine translation (MT) (0.4728, 0.8437) 0.5932 (0.4156, 0.7669) 0.5767 (0.2929, 0.6312) 0.5132 language-independent representations \u2014 xDPR (5a) \u27e8d: original corpus, q: English\u27e9 (0.1804, 0.5393) 0.4630 (0.2866, 0.5834) 0.5305 (0.2185, 0.4386) 0.4440 (5b) \u27e8d: original corpus, q: HT\u27e9 (0.2953, 0.7088) 0.5614 (0.3307, 0.6368) 0.5617 (0.3035, 0.6031) 0.5008 (5c) \u27e8d: original corpus, q: MT\u27e9 (0.3055, 0.6850) 0.5644 (0.3542, 0.6300) 0.5337 (0.3013, 0.5873) 0.5087 Table 3: Results of reranking various first-stage retrievers (nDCG@20). Under the column \u201c1st\u201d we repeat the (nDCG@20, Recall@1000) metrics from the first-stage retriever for convenience. In all cases we used both titles and descriptions as queries in first-stage retrieval (with no pseudo-relevance feedback) and reranking. 
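For reference, reciprocal rank fusion itself is only a few lines; the sketch below uses the conventional k = 60 constant, which is an assumption rather than the exact setting behind the fused runs reported here.

```python
# Minimal reciprocal rank fusion: each run contributes 1 / (k + rank) per document.
from collections import defaultdict

def reciprocal_rank_fusion(runs, k=60):
    # runs: list of ranked docid lists (best first); k=60 is the conventional constant.
    scores = defaultdict(float)
    for run in runs:
        for rank, docid in enumerate(run, start=1):
            scores[docid] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

fused = reciprocal_rank_fusion([["d3", "d1", "d7"], ["d1", "d9", "d3"]])
print(fused)  # documents ordered by summed RRF score
```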
nDCG@20 Recall@1000 Persian Russian Chinese Persian Russian Chinese (1a) DT\u2013BM25 0.3670 0.3080 0.3723 0.7652 0.7255 0.7567 (2a) DT\u2013SPLADE 0.4802 0.4573 0.4236 0.8860 0.8513 0.7922 (3b) QT\u2013BM25 0.3700 0.3605 0.1754 0.7424 0.7373 0.4028 (4b) QT\u2013SPLADE 0.4728 0.4156 0.2929 0.8437 0.7669 0.6312 (5a) dense\u2013e 0.1804 0.2866 0.2185 0.5393 0.5834 0.4386 (5c) dense\u2013f 0.3055 0.3542 0.3013 0.6850 0.6300 0.5873 (6a) RRF(1a, 2a): DT\u2013BM25, DT\u2013SPLADE 0.4462 0.4180 0.4189 0.8936 0.8670 0.8536 (6b) RRF(3b, 4b): QT\u2013BM25, QT\u2013SPLADE 0.4610 0.4598 0.2981 0.8703 0.8368 0.6692 (6c) RRF(1a, 3b): DT\u2013BM25, QT\u2013BM25 0.3795 0.3635 0.2736 0.7901 0.7686 0.7366 (6d) RRF(2a, 4b): DT\u2013SPLADE, QT\u2013SPLADE 0.5165 0.4921 0.4178 0.9009 0.8508 0.7938 (6e) RRF(1a, 2a, 3b, 4b): DT, QT 0.4897 0.4857 0.4397 0.9285\u2020 0.8880 0.8637\u2020 (6f) RRF(5a, 5c): dense 0.2640 0.3469 0.2731 0.6814 0.6493 0.5693 (6g) RRF(1a, 2a, 3b, 4b, 5a, 5c): DT, QT, dense 0.4926 0.5142\u2020 0.4541 0.9291\u2020 0.8818 0.8704\u2020 Table 4: Results of different fusion combinations. Scores of individual first-stage retrievers are repeated for convenience. In all cases we used both titles and descriptions as queries, with no pseudo-relevance feedback. Red shows cases where fusion performed worse than the best single input run. \u2020 represents a significant improvement over (2a). From these results, it appears that for Persian and Russian, the best effectiveness can be achieved by fusing both document translation and query translation SPLADE models, row (6d), although for Chinese, the same fusion is a bit worse than just document translation SPLADE. Fusing all the lexical runs, row (6e), is a bit worse than fusing just SPLADE runs in Persian and Russian, but it improves Chinese. Finally, incorporating evidence from the languageindependent dense retrieval techniques appears to provide value over simply fusing the lexical results, as we see comparing (6g) and (6e). This is surprising given that by themselves, the dense retrieval runs are quite poor. Overall, we were somewhat surprised by the finding that fusion did not improve effectiveness as robustly as we had hoped. In Table 4, the figures in red represent all the cases in which fusion actually hurt effectiveness, i.e., fusion performed worse than the best single input run. We attribute this finding to the large differences in effectiveness between the runs, in that RRF does not work as well if one of the fusion inputs is much better than the others. \fPersian Russian Chinese 1st early late 1st early late 1st early late (4a) QT\u2013SPLADE = best single 0.4728 0.5932 0.4156 0.5767 0.2929 0.5132 (6c) RRF(1a, 3b): DT\u2013BM25, QT\u2013BM25 0.3795 0.5869 0.5723 0.3635 0.5788 0.5890 0.2736 0.5257\u2020 0.4150 (6d) RRF(2a, 4b): DT\u2013SPLADE, QT\u2013SPLADE 0.5165 0.5823 0.6122 0.4921 0.5729 0.5915 0.4178 0.5379 0.5272 (6e) RRF(1a, 2a, 3b, 4b): DT, QT 0.4897 0.5901 0.5911 0.4857 0.5728 0.5853 0.4397 0.5394 0.5058 (6f) RRF(5a, 5c): dense 0.2640 0.5621\u2020 0.4573 0.3469 0.5438 0.5162 0.2731 0.5077\u2020 0.4470 (6g) RRF(1a, 2a, 3b, 4b, 5a, 5c): DT, QT, dense 0.4926 0.5893 0.5626 0.5142 0.5676 0.5840 0.4541 0.5340 0.5295 Table 5: Comparisons between early and late fusion. \u2020 represents a significant improvement over late fusion. 
To more rigorously test this observation, we performed significance testing comparing the document translation SPLADE model, row (2a) in Table 4, against fusion of SPLADE models, row (6d), fusion of all lexical models, row (6e), and fusion of all lexical and dense models, row (6g). These comparisons answer the following questions, starting from the single best first-stage retriever: Does SPLADE fusion provide any additional value? What about BM25? Dense retrieval? The conclusion, reported in Table 4 with the symbol \u2020, is that most of the fusion combinations are not statistically significantly better than document translation with SPLADE, the single best first-stage retriever. For nDCG@20, the largest ensemble is significantly better than DT\u2013SPLADE only on Russian; for recall@1000 we see more significant improvements, but only on Persian and Chinese. Notably, combining evidence from both document and query translation with SPLADE, row (6d), is not significantly better than DT\u2013SPLADE alone. In our final set of experiments, we compared the effectiveness between early and late fusion for a subset of the conditions in Table 4. These results are reported in Table 5. In this case, we use QT\u2013SPLADE as the point of comparison, which appears to provide the best single-stage retriever and reranking combination. For Persian, late fusion appears to be either about the same or slightly better, with the exception of (6f); this appears to be the case for Russian also, although the late fusion margin of improvement seems to be smaller. Chinese results are a bit more mixed, with early beating late in some cases. To more rigorously compare early vs. late fusion, we performed significance tests comparing all pairs. Only a few of these differences are significant, and they only happen for cases where early fusion is better than late fusion. Two of the three cases, however, occurred for the dense models, which are less effective to begin with. Overall, these experiments are inconclusive with respect to the question of which fusion strategy is better. To provide additional context, the best runs from the NeuCLIR 2022 evaluation were from members of our group, but were generated under the time pressure of deadlines and thus it was not possible to carefully consider all configurations as we did in Table 5. The best runs were (nDCG@20 scores): (i) Persian: p2.fa.rerank, 0.588; (ii) Russian: p3.ru.mono, 0.567; (iii) Chinese: p2.zh.rerank, 0.516. Comparing those runs to the best conditions reported here, we verify that just by carefully studying the various effects of different system components, improvements are possible across all languages, achieving new state-of-the-art effectiveness with (i) Persian: 6d late-fusion 0.612 (+0.024); (ii) Russian: 6d late-fusion 0.592 (+0.025); (iii) Chinese: 6e early-fusion 0.539 (+0.023). 6" + }, + { + "url": "http://arxiv.org/abs/2212.13534v1", + "title": "Building a Culture of Reproducibility in Academic Research", + "abstract": "Reproducibility is an ideal that no researcher would dispute \"in the\nabstract\", but when aspirations meet the cold hard reality of the academic\ngrind, reproducibility often \"loses out\". In this essay, I share some personal\nexperiences grappling with how to operationalize reproducibility while\nbalancing its demands against other priorities. 
My research group has had some\nsuccess building a \"culture of reproducibility\" over the past few years, which\nI attempt to distill into lessons learned and actionable advice, organized\naround answering three questions: why, what, and how. I believe that\nreproducibility efforts should yield easy-to-use, well-packaged, and\nself-contained software artifacts that allow others to reproduce and generalize\nresearch findings. At the core, my approach centers on self interest: I argue\nthat the primary beneficiaries of reproducibility efforts are, in fact, those\nmaking the investments. I believe that (unashamedly) appealing to self\ninterest, augmented with expectations of reciprocity, increases the chances of\nsuccess. Building from repeatability, social processes and standardized tools\ncomprise the two important additional ingredients that help achieve\naspirational ideals. The dogfood principle nicely ties these ideas together.", + "authors": "Jimmy Lin", + "published": "2022-12-27", + "updated": "2022-12-27", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.CY" + ], + "main_content": "Introduction I am passionate about making research reproducible by building and sharing, together with members of my research group and our collaborators around the world, software artifacts that others can use to recreate both our own work and the work of other researchers. In my experience, reproducibility is an ideal that no researcher would dispute \u201cin the abstract\u201d: like puppies, kittens, and rainbows, who could argue against it? So why aren\u2019t we \u201cdoing better\u201d when it comes to reproducibility? To be fair, the state of affairs has much improved, compared to, say, a decade ago. Today, it\u2019s fairly standard practice (and even expected?) in arti\ufb01cial intelligence and deep learning research that each paper is accompanied by a code repository that could (in theory) be used to reproduce the results. Many researchers make model checkpoints publicly available (e.g., on the Huggingface model hub). Yet, we are still falling short. For example, Voorhees et al. (2016) examined 79 so-called \u201cOpen Runs\u201d to TREC 2015\u2014de\ufb01ned as a TREC submission self-identi\ufb01ed as being backed by a software repository that can be used to recreate the exact run\u2014and found that none of them were reproducible. With improved tooling, one might expect the situation to be better today. For example, computational notebooks are often touted as a solution to reproducibility because of their shareable and self-documenting nature. However, in a large-scale study encompassing over 800k executions of publicly accessible valid Python notebooks on GitHub, Pimentel et al. (2019) found that only 24% executed without errors and only 4% produced the same results. The truth is that while reproducibility is a noble goal, it is aspirational, not obligatory. Unless, for example, publication venues enforce reproducibility (which would be very dif\ufb01cult to operationalize) competing priorities will take precedence. Work on a new paper or create a reproduction package arXiv:2212.13534v1 [cs.IR] 27 Dec 2022 \ffor an already published paper? The choice is usually clear for most. When aspirations meet the cold hard reality of the academic grind, it\u2019s almost inevitable that they lose. Hence the (sad) state of reproducibility today. The goal of this essay is to offer a possible path forward towards building a culture of reproducibility. 
My approach can be summarized in these two high-level bits of advice: \u2022 Appeal to self interest instead of altruism. \u2022 Engineer social processes to promote virtuous cycles and build standardized tools to reduce technical barriers. The \ufb01rst point tackles the incentive structure of \u201cwhy should I do this?\u201d Once that\u2019s been addressed, efforts should focus on promoting virtuous cycles and reducing technical barriers to make it easier to put reproducibility into practice. In a way, both these points can be subsumed under the software development principle of \u201ceat your own dog food\u201d. Before proceeding any further, it is necessary to circumscribe the scope (and possible limitations) of the advice I\u2019m offering. At a high level, my research has been driven by the quest to develop techniques and build tools that connect users to relevant information. Most of my focus has been on text, and so, cast into academic silos, my work lies at the intersection of natural language processing (NLP) and information retrieval (IR). Nearly all the work from my research group can be characterized as applied and empirical in nature. My latest interests lie in using pretrained transformer models to tackle text ranking and related challenges (Lin et al., 2021b), particularly from the perspective of representation learning (Lin, 2021). Despite differences in research agendas, I do believe that much of my advice is applicable to empirical research across many sub-disciplines in computer science beyond NLP and IR, for example, computer vision and data mining. Finally, it is worth noting that this essay represents an academic perspective: speci\ufb01cally, one of my most important roles is to mentor students. Although much of what I write doesn\u2019t really apply to employees in a corporate context, some lessons can still be adapted. This essay is organized around three main questions: \u2022 Why? (Section 2) Why should I care and why should I do it?1 \u2022 What? (Section 3) Okay, you\u2019ve convinced me. But what does it actually mean to make research reproducible? \u2022 How? (Section 4) Okay, you\u2019ve shown me the end goal. But how do I get there? Before concluding, Section 5 discusses a sm\u00f6rg\u00e5sbord of related issues. One \ufb01nal preamble before getting underway. To be precise, I use the term reproducibility in the sense articulated by the ACM in its Artifact Review and Badging Policy,2 characterized as \u201cdifferent team, same experimental setup\u201d. Speci\ufb01cally, \u201cthis means that an independent group can obtain the same result using the author\u2019s own artifacts.\u201d In contrast, replicability can be characterized as \u201cdifferent team, different experimental setup\u201d, and operationally, \u201cthis means that an independent group can obtain the same result using artifacts which they develop completely independently.\u201d Finally for completeness, repeatability can be characterized as \u201csame team, same experimental setup\u201d, i.e., \u201ca researcher can reliably repeat her own computation.\u201d I use these terms in the way prescribed by this ACM policy.3 2 Why? Broadly speaking, there are two main categories of arguments for why reproducibility is important and a worthwhile goal: the \ufb01rst is \u201cgood science\u201d and the second is \u201cgood citizenship\u201d. Both appeal to one\u2019s sense of altruism. 1The \u201cI\u201d in this case mostly refers to students. 
The real challenge is: How does an advisor convince students to actually do these things? 2https://www.acm.org/publications/policies/artifact-review-and-badging-current 3A confusing detail: A previous version of the same ACM policy swapped the meaning of reproducibility and replicability. 2 \fThe \ufb01rst broad category of arguments, \u201cgood science\u201d, goes something like this: Science represents a systematic attempt to accumulate and organize knowledge about the world in the form of testable explanations and predictions (paraphrased from Wikipedia). In the computational sciences, reproducibility and replicability are the mechanisms by which researchers can build on each other\u2019s results to accumulate knowledge. Reproducible and replicable \ufb01ndings increase the veracity of the underlying scienti\ufb01c claims. Conversely, the inability to reproduce or replicate a \ufb01nding casts doubts over whether it can be reliably built on or extended. If science is metaphorically the process of standing on the shoulder of giants in order to see further, reproducibility and replicability are the processes by which we test the stability of those shoulders we\u2019re attempting to climb on. This is the general sentiment expressed in Sonnenburg et al. (2007), but see Drummond (2009) for counterpoints. The second broad category of arguments, \u201cgood citizenship\u201d, goes something like this: A large part of the funding for research is provided by various governments via tax dollars, and thus, it behooves researchers to share their results in the broadest way possible. This usually entails making publications and associated research data publicly accessible\u2014because, after all, it belongs to the people who ultimately supported the research (i.e., taxpayers). In Canada, this is enshrined in policy: the tri-agencies, which are federal granting agencies that promote and support research, hold the of\ufb01cial position \u201cthat research data collected through the use of public funds should be responsibly and securely managed and be, where ethical, legal and commercial obligations allow, available for reuse by others.\u201d4 For research data management, the agencies support the guiding principles commonly known as FAIR, which stands for Findable, Accessible, Interoperable, and Reusable (Wilkinson et al., 2016). The U.S. National Science Foundation5 and the European Commission6 hold similar positions. While the original policies were formulated speci\ufb01cally with research data in mind, increasingly, software artifacts are given similar treatments (Lamprecht et al., 2020; Barker et al., 2022), and hence there is an increasing emphasis by funders (and by extension the researchers they support) on reproducibility within the broad umbrella of data management and stewardship. While both categories of arguments are undeniably persuasive, the unfortunate downside is that they appeal primarily to the researcher\u2019s altruism. Even in the relatively narrow cases where there are clear mandates and directives (e.g., from funding agencies), unless there is alignment with self interest, researchers will tend to do the minimum required. When \u201cnoble aspirations\u201d come into competition with the daily grind of the academic existence with its numerous other priorities, the (sad) truth is that reproducibility often takes a back seat. 
When faced with the choice of working on a new paper or cleaning up a paper that\u2019s already been published for reproducibility purposes, what do you think a student would do? To better help researchers prioritize competing demands, I offer another motive for reproducibility that instead appeals to self interest: \u2022 I say to a student: you\u2019re not doing reproducibility for others; you\u2019re doing it for yourself. \u2022 I say to myself: effort invested in reproducibility will help my research group iterate more rapidly and thus become more productive overall. To be precise, according to the ACM Artifact Review and Badging Policy, \u201cself-reproducibility\u201d is more accurately called repeatability, so I will use this term in the discussions below. In my nearly two decades as a faculty, I have had the following scenario happen countless times: A student is no longer able to recreate the experimental results obtained a short while ago.7 That is, the results are not repeatable. Perhaps there was a bug in the original implementation? A parameter setting that \u201cleaked\u201d the test data? An \u201coracle\u201d assumption that was later removed? There are numerous reasons why this could be so. If the results have not been made public (i.e., part of ongoing work that has not yet been published), then the student can (and should) \u201cstart over\u201d. A common case is a rejected paper where reviewers suggested new experimental conditions (e.g., ablation study). If the old experiments are not repeatable, then the implementation should be checked again carefully and all experiments should be run anew. 4https://www.science.gc.ca/eic/site/063.nsf/eng/h_97610.html 5https://www.nsf.gov/bfa/dias/policy/dmp.jsp 6https://open-research-europe.ec.europa.eu/for-authors/data-guidelines 7Here, I\u2019m not talking about \u201cnoise\u201d that can be attributed to, for example, non-determinism in GPU execution. These are cases where something clearly \u201cworked\u201d, but now doesn\u2019t. 3 \fAnd if the previously observed gains have now disappeared, it just means that the innovation was never real to begin with. However, more vexing is the situation where the results have already been reported in a published paper. That is, a student cannot repeat an experiment that has already been \u201censhrined\u201d in the literature. What to do then? A common scenario is that the student is trying to repeat the previously published experiment in order to perform follow-up work or to use those results as the basis of new research. Another common scenario is when the student is integrating several threads of already published work into a thesis, and the additional experiments are critical to knitting the otherwise disparate threads together. Whatever the case, proceeding without attempting to rectify the repeatability failure is scienti\ufb01cally dubious. In empirical research, experimental results always need points of reference for meaningful comparison (e.g., baselines, ablations, etc.), and referencing results that cannot be recreated is just bad science. At this point, I typically urge in the strongest possible way that the student \ufb01rst resolve the repeatability failure, which can, unfortunately, involve quite a bit of effort. 
The original experiments may have been conducted months ago, and in the frantic dash to a paper deadline, the student may not have been meticulous taking notes in a lab notebook.8 So, the starting point of the repeatability effort may be a directory containing \ufb01les with names like config3-bugfix-trial2.yaml, or worse yet, just a bunch of poorly named result \ufb01les (created by complex command-line invocations that weren\u2019t properly recorded). Thus, I explain to students the importance of repeatability today to prevent future frustration: the consequences can range from a missed opportunity for another paper to a graduation roadblock. As the saying goes, \u201cfuture you will thank you!\u201d Note that, critically, the motivation for the student is self interest, not altruism. Repeatability is a good \ufb01rst step towards reproducibility. In fact, I would characterize the combination along these lines: repeatability + social processes + standardized tools = reproducibility (1) Given repeatability as a starting point, we can \u201cget to\u201d reproducibility by engineering social processes to promote virtuous cycles and building standardized tools to reduce technical barriers. These I argue are the key ingredients to building a culture of reproducibility. Section 4 explains these two elements in much more detail. 3 What? Having covered the \u201cwhy\u201d, I\u2019ll move on to cover the \u201cwhat\u201d. Speci\ufb01cally, I\u2019ll try to answer the question: What does \u201cgood reproducibility\u201d look like? That is, I\u2019ll start at the end by describing my \u201caspirational ideal\u201d of what the end result of a reproducibility effort should be. The broader context of this discussion is what information retrieval researchers call the \u201ccore\u201d ranking problem (also called ad hoc retrieval). I\u2019ll just wholesale lift the de\ufb01nition from Lin (2021): Given an information need expressed as a query q, the task is to return a ranked list of k documents9 {d1, d2 . . . dk} from an arbitrarily large but \ufb01nite collection of documents D = {di} that maximizes a metric of interest, for example, nDCG, AP, etc. These metrics vary, but they all aim to quantify the \u201cgoodness\u201d of the results with respect to the information need. The retrieval task is also called top-k retrieval (or ranking), where k is the length of the ranked list. Generically, a retrieval model is a software artifact that addresses the core ranking problem, and one main focus of many researchers is to build better such models. This is primarily an empirical endeavor, as the dominant way of demonstrating (i.e., in academic publications) that one model is better than another is based on measurements using test collections. In the context of retrieval (or ranking) models, I believe that reproducibility efforts should yield easy-to-use, well-packaged, and self-contained software artifacts with clearly de\ufb01ned scope that allow the broadest possible audience to reproduce research \ufb01ndings by recreating experimental results. Ideally, the artifact should support what I call \u201ctwo-click reproductions\u201d, where a user can reproduce a reported result (for example, from a paper) with only two clicks: one click to copy a command-line 8To students: You do have a lab notebook, right? 
9Consistent with parlance in information retrieval, I use \u201cdocument\u201d in a generic sense to refer to the unit of retrieved text, even though in truth it may be a passage, a web page, a PDF, or some arbitrary span of text. 4 \finvocation from a source (for example, a documentation page) and another click to paste the command into a shell. How to ensure that the two-click reproductions \u201cwork as advertised\u201d will be discussed in Section 4. I believe that my research group\u2014with the help of external collaborators\u2014has achieved this \u201caspirational ideal\u201d in many parts of our two IR toolkits, Anserini (Yang et al., 2017, 2018) and Pyserini (Lin et al., 2021a). Anserini is built around the open-source Lucene search library, the most widely adopted solution for tackling search problems in deployed real-world applications (typically, via platforms such as Elasticsearch). Our goal in building a research toolkit around Lucene is to facilitate a two-way exchange between academia and industry (Devins et al., 2022). Pyserini provides Python bindings to the capabilities offered in Anserini and integration with neural retrieval models built on industry-standard packages such as Huggingface Transformers, PyTorch, and Faiss. Pyserini includes many dense and sparse retrieval models built on transformer-based encoders as well as traditional \u201cbag-of-words\u201d models such as BM25 and relevance feedback techniques. To provide a speci\ufb01c example with Pyserini, experiments applying our uniCOIL model (Lin and Ma, 2021) to the development queries of the MS MARCO passage ranking task (Bajaj et al., 2018) can be accomplished with the following command: python -m pyserini.search.lucene \\ --index msmarco-passage-unicoil-d2q \\ --topics msmarco-passage-dev-subset \\ --encoder castorini/unicoil-msmarco-passage \\ --output runs/run.msmarco-passage.unicoil.tsv \\ --output-format msmarco \\ --batch 36 --threads 12 \\ --hits 1000 \\ --impact In terms of ease of use, a researcher who wishes to reproduce the results reported in our paper can do so with a single command, via two clicks: copy and paste (i.e., \u201ctwo-click reproduction\u201d). The above command calls the main driver program pyserini.search.lucene in a package that is published in the Python Package Index (PyPI),10 thus making the software artifact well-packaged, since it can be installed with standard tools such as pip. The two-click reproduction described above is self-contained because it has no other dependencies\u2014 many details are handled \u201cbehind the scenes\u201d. For example: \u2022 The option --index speci\ufb01es an inverted index of a commonly used corpus in IR research (the MS MARCO passage corpus) that Pyserini already \u201cknows about\u201d, along with dozens of other common corpora. On \ufb01rst invocation, Pyserini downloads a copy of the index from a known location (servers at the University of Waterloo) and caches it locally. \u2022 The option --topics speci\ufb01es a standard set of queries that is already included in Pyserini, so the user doesn\u2019t need to visit a separate website to download them. \u2022 The option --encoder refers to a transformer model for encoding the queries, which is hosted on the Huggingface model hub; Pyserini downloads and caches the model locally. 
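For readers who prefer to poke at these artifacts interactively rather than through the command line, the same prebuilt-index machinery can also be driven from Pyserini's Python API. The following is a minimal sketch, not taken from the original text; the prebuilt index label and the exact module path are assumptions that may vary across Pyserini releases.

# Minimal sketch (illustrative, not from the original text) of Pyserini's Python API.
# The prebuilt index label below is an assumption and may differ across versions.
from pyserini.search.lucene import LuceneSearcher

# Downloads and caches the prebuilt index on first use, mirroring the --index option above.
searcher = LuceneSearcher.from_prebuilt_index('msmarco-v1-passage')
hits = searcher.search('what is a lobster roll?', k=10)
for i, hit in enumerate(hits):
    print(f'{i + 1:2} {hit.docid:15} {hit.score:.4f}')

Either way, the point is the same: indexes, topics, and models are fetched and cached automatically, so nothing beyond the package itself needs to be installed or located by hand.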
The execution of the above command yields a run \ufb01le in the MS MARCO document format that can then be fed into the of\ufb01cial MS MARCO scoring script to arrive at the of\ufb01cial evaluation metric, reciprocal rank at cutoff 10. This scoring script is also conveniently packaged in Pyserini: python -m pyserini.eval.msmarco_passage_eval \\ msmarco-passage-dev-subset \\ runs/run.msmarco-passage.unicoil.tsv The output should be the \ufb01gure that appears in Table 2 of Lin and Ma (2021). Very small differences are sometimes observed due to the inherent non-determinism associated with neural inference (e.g., CPU vs. GPU inference, and even across different GPUs). Let me try to further unpack this ideal of \u201ctwo-click reproduction\u201d. The high-level goal is to reduce friction for users who wish to reproduce a particular result, for example, a \ufb01gure that is reported in 10https://pypi.org/project/pyserini/ 5 \fthe results table of a paper. Even if code associated with the paper is available, there\u2019s no easy way to separate different phases of the experiment, e.g., training the model from scratch vs. inference using a publicly shared model checkpoint. Even focusing on inference (i.e., ranking), the complexity of modern IR evaluation methodology means that there are a gazillion tiny details to keep track: Where do I get the index? Where do I get the topics? Where do I get the model itself? What versions of each, exactly? Is this the right version that goes with that? Sorting through these details is not intellectually challenging, but can be confusing for a novice (e.g., a student just getting into IR) or even a seasoned IR researcher who\u2019s never worked with this speci\ufb01c collection before. As a simple example, there are often different versions of a particular set of queries: what everyone calls the 6,980 queries in the MS MARCO passage ranking development set is actually only a subset of the \u201creal\u201d full development set. My other favorite example is TREC-COVID (Roberts et al., 2020), which has no less than a dozen different sets of relevance judgments. All of them are useful, but for answering different research questions. Which one do you use? Quite simply, the goal of \u201ctwo-click reproduction\u201d is to relieve the user of all these burdens via a simple, self-contained command that can be copied and pasted into a shell to reproduce an experimental result of interest. Providing a fully self-contained mechanism with a well-packaged artifact reduces friction; this is about as \u201ceasy to use\u201d as you can get. We even provide two-click reproductions for work by others. For example, the following command reproduces the results of DPR (Karpukhin et al., 2020) on the Natural Questions (NQ) dataset: python -m pyserini.search.faiss \\ --topics dpr-nq-test \\ --index wikipedia-dpr-multi-bf \\ --encoded-queries dpr_multi-nq-test \\ --output runs/run.dpr.nq-test.multi.bf.trec \\ --batch-size 36 --threads 12 See Ma et al. (2022b) for additional explorations. Perhaps the best testament to our efforts is that Pyserini is referenced by Karpukhin et al. (2020) in their of\ufb01cial repo11 as the preferred implementation to replicate their work. The software artifact should have a clearly de\ufb01ned scope, in terms of what it does and, just as importantly, what it doesn\u2019t do. In this case, Pyserini allows a researcher to reproduce a run on the MS MARCO passage corpus using a speci\ufb01c ranking model. 
Retrieval is performed using a speci\ufb01c model checkpoint: training a model from scratch is out of scope (although we\u2019ve shared other code to enable reproducible model training). Retrieval uses a pre-built index: building an index from scratch or searching another corpus is also out of scope in this speci\ufb01c instance (although Pyserini does provide tools for indexing and searching arbitrary corpora). However, this reproduction command does provide generalizability to different queries, since the encoder model can be applied to arbitrary queries for retrieval. The scope of a reproducibility effort can often be de\ufb01ned in terms of abstractions encoded in software artifacts. The application of neural networks can be divided into the training of the retrieval models and inference using those models. Quite explicitly, Pyserini does not provide any code for training neural models; it is focused on neural inference at search time. Related to the issue of scope is the explicit acknowledgement in this two-click reproduction ideal that the software artifact and the reproduction commands comprise an abstraction barrier. The contract is simply that, \u201cif you run this command, you\u2019ll be able to reproduce these results.\u201d No promises are made about the quality of the code behind the scenes, which may be a pile of spaghetti. This, I believe, is a feature, not a bug. Internally, it would be desirable that, once the cover is lifted, the internals of the artifact are beautifully engineered, but this should not be a barrier to reproducibility. I can\u2019t count the number of times I\u2019ve heard something along the lines of \u201cthe code is really messy, I want to clean it up \ufb01rst before I open source it.\u201d Mentally, that translates in my mind into \u201cit\u2019ll never happen\u201d, and usually I\u2019m right. It is dif\ufb01cult to tell if a researcher is using this line as an excuse or if it\u2019s uttered in good faith. In the latter case, other priorities usually intervene, and the net effect is the same. Code never sees the light of day. 11https://github.com/facebookresearch/DPR 6 \fThe high-level point is that messy code should not be an impediment to reproducibility, as long as the right abstractions are established\u2014in this case, a PyPI artifact. Of course, clean internal implementations will make the packaging easier, but janky code that generates the correct behavior is much preferred to elegant code that doesn\u2019t work or not having any open-source code at all. With a messy but functional implementation, there exists a starting point for refactoring down the road if so desired. There are, literally, entire tomes written about best practices for doing so in a sane manner; for example, I recommend the recent book by Riccomini and Ryaboy (2021) for practical advice and an entry point into this vast literature. 4 How? Having covered \u201cwhy\u201d and \u201cwhat\u201d, I move on to cover \u201chow\u201d. Repeating from the introduction, my approach can be summarized in two high-level bits of advice: (1) motivate the importance of reproducibility by appealing to self interest instead of altruism, and (2) engineer social processes to promote virtuous cycles and build standardized tools to reduce technical barriers. The implementation of these two points should be guided by the dogfood principle, or the directive of \u201ceat your own dog food\u201d, which refers to the colloquialism of using one\u2019s own \u201cproduct\u201d. 
In the context of a research group, it means that members of the group should be actively using the software artifacts developed by the group. Quite simply, software artifacts that are used tend to become re\ufb01ned over time, or at the very least, bugs get \ufb01xed, because otherwise research would grind to a standstill. My group uses Pyserini, Anserini, and a few other packages we\u2019ve developed as the foundation for ongoing work. Many new research ideas build on, hook into, or otherwise depend on Pyserini. In turn, improved capabilities in Pyserini spur further advances. To provide an example, the two-click reproductions described in the previous section solve a number of problems for the community, including ourselves (invoking self interest again). Speci\ufb01cally, they provide competitive baselines for comparisons and a solid foundation for \ufb01rst-stage retrieval in a multi-stage ranking architecture. For example, students focused on building better rerankers need not waste time worrying about the proper setup of the \ufb01rst-stage ranker. They simply follow the prescriptions in our two-click reproductions as the starting point. Across the research group, this ensures consistency and reduces the possibilities of bugs: We can be con\ufb01dent that every reranker implementation is consuming exactly the same set of candidates and thus the comparisons are fair. More generally, Pyserini makes it easy to run experiments on standard IR test collections; the toolkit handles much of the boilerplate, such as bindings to query sets, relevance judgments, and evaluation scripts. Many capabilities come for \u201cfree\u201d, for example, general techniques such as rank fusion and pseudo-relevance feedback can be applied with little effort. This means that (a) the student writes less code (appealing to self interest again) and (b) the veracity of results increases due to greater consistency in experimental design. Thus, with the dogfood principle, it\u2019s clear to see that reproducibility is driven by self interest. It allows my students and collaborators to more easily build on each other\u2019s results and iterate more rapidly, enhancing their productivity and thus leading to more publications. We are the primarily bene\ufb01ciaries, and the community bene\ufb01ts as a nice side effect. Here\u2019s a sketch of how all these ideas \u201ctie together\u201d, with repeatability as a starting point. I\u2019ll \ufb01rst describe the social processes that I\u2019ve engineered to promote virtuous cycles and then move on to discuss infrastructure that reduces the technical barriers to reproducibility. 4.1 Social Processes: From Repeatability to Reproducibility As a starting point, a student is motivated to make experiments repeatable, for all the reasons already discussed in Section 2. This involves documenting all the steps necessary to produce experimental results, including con\ufb01guration settings, command-line invocations, etc. The documentation that describes how to repeat an experiment (written by the student who initially ran the experiment), when shared, becomes what I call a reproducibility guide (or a \u201crepro guide\u201d for short). In many cases, these guides are just markdown \ufb01les in the docs/ directory of the GitHub repository that contains the code. They contain, at a minimum, the sequence of command-line invocations that are necessary to reproduce a particular set of experimental results, with accompanying descriptions in prose. 
The goal 7 \fis that copying and pasting commands from the guide into a shell should succeed in reproducing the same experimental results (modulo issues like non-determinism in GPU execution). The \ufb01nal step is to actually get another person to \u201ctry out\u201d the guide, i.e., follow exactly the prescribed steps and make sure that they work as expected. Who does this and why would they?12 I have two tools: again, appealing to self interest, but augmented this time with reciprocity, and a new trick\u2014providing an onboarding path to new students. For students who are already actively contributing to shared code, there are multiple incentives. Assisting with a reproduction gives the student \ufb01rst access to a new feature, one that could potentially serve as the basis of follow-up work. Additionally, the social pressures of reciprocity can be an effective motivation: students are the bene\ufb01ciaries of previous group members who \u201cpaved the way\u201d and thus it behooves them to write good documentation to support future students. Self interest and reciprocity intersect as well, because students know that at some future point in time, I will demand a check on their reproduction guide. It\u2019s nice to be able to say, \u201cPlease help me out here, since I did the same for you the last time.\u201d The other appeal is that reproduction guides provide onboarding paths. For prospective students who wish to become involved in our group\u2019s research, performing reproductions offers exposure to our work and an introduction to our codebase. These exercises are particularly suitable for undergraduates as their \ufb01rst step in learning about research. Students are quite motivated (out of self interest) and the group bene\ufb01ts from more people checking the quality of our work. Sometimes, I try out the reproduction guides myself, and this, too, is motivated by self interest. The typical scenario is documentation written by a graduating student as a \ufb01nal \u201cwrap up\u201d of the work. From experience, I know that if I ever want another student to build on this work, I had better make sure it works, because once a student graduates, broken code becomes much harder to \ufb01x. I can\u2019t emphasize how important it is to actually have someone try out and follow the reproducibility guide, as opposed to just passively reading the document. A very common scenario: A: \u201cThis doesn\u2019t work, I get the following error...\u201d B: \u201cOh, sorry about that, I forget to check in this \ufb01le.\u201d There is simply no substitute for a hands-on attempt to catch bugs, spot missing details, and uncover hidden assumptions. Many of these reproduction guides are associated with a \u201creproduction log\u201d at the bottom of the page, which contains a record of individuals who have successfully reproduced the results and the commit id of the code version used. With these reproduction logs, if some functionality breaks, it becomes much easier to debug, by rewinding the code commits back to the previous point where it last worked. The social processes that I have described promote and sustain a virtuous cycle, and here we have made the jump from repeatability to reproducibility. Students begin by recognizing the value of \u201cpackaging\u201d up their research so that they can repeat their own experiments. These reproduction guides are made publicly accessible, and are independently con\ufb01rmed to be functional. 
Code that works provides a solid foundation to build on\u2014by both the original authors as well as others. This in turn accelerates experimental iterations and facilitates rapid explorations of novel ideas built using existing components, ultimately leading to greater productivity. Students see the payoff of reproducibility efforts and are inclined to sustain their contributions. And around and around we go. 4.2 Standardized Tools: From Reproducibility to Two-Click Reproductions At a high-level, we can divide reproducibility into the social and technical aspects. I believe the \ufb01rst is much more important, because any tool to support reproducibility will either be ignored or circumvented unless there are social processes to promote their usage. The previous section focused on exactly these, and that gets us from repeatability to reproducibility. Here, I discuss tooling to further lower barriers. If social processes stimulate \u201cthe will\u201d, standardized tools provide \u201cthe way\u201d. The core idea is to make investments in tooling to automate the process of \u201cgoing through\u201d the reproduction guides. In Anserini and Pyserini, we have built extensive regression tests with elaborate 12Faculty can offer carrots or wave sticks; generally, the former is far more effective. 8 \ftest harnesses (sometimes called jigs). These regressions are tightly integrated with the two-click reproductions discussed in the previous section: in fact, in many cases, the regression tests simply wrap the execution of the two-click reproduction commands and verify that the outputs match stored speci\ufb01cations. The periodic execution of these regressions ensures that the two-click reproductions continue to \u201cwork as advertised\u201d. In Anserini, we have implemented a test harness called run_regression.py that takes as input a YAML con\ufb01guration \ufb01le for a set of experimental conditions on a standard IR test collection, for example, the MS MARCO V2 passage test collection (Craswell et al., 2021; Ma et al., 2022a).13 Here\u2019s a sample invocation: python src/main/python/run_regression.py \\ --index --verify --search --regression msmarco-v2-passage The script runs through the following steps: It builds the index from scratch (i.e., the raw corpus), veri\ufb01es index statistics (e.g., number of documents and terms processed), performs retrieval runs using different retrieval models (e.g., BM25 with Rocchio feedback), evaluates the outputs (e.g., with trec_eval), and checks effectiveness \ufb01gures against expected results (stored in the con\ufb01guration). That is, the execution of this script veri\ufb01es the reproducibility of a set of experimental conditions in a fully automatic manner. Upon each successful execution, the regression script generates a documentation page from an associated template, populating the results (e.g., average precision) from the trial.14 All of this happens without any human intervention. The script depends on having access to the raw corpus, which is stored on our group\u2019s servers at known \ufb01le system locations. However, the corpus path is a con\ufb01gurable parameter, so anyone can run the same regression test if they have a copy of the corpus. There are currently around three hundred such tests, which take several days to run end to end (orchestrated by yet another script). The largest of these builds a 10 TB index on all 733 million pages of the ClueWeb12 collection. 
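To make the shape of such a harness concrete, here is a heavily simplified, hypothetical sketch in the spirit of run_regression.py. The field names, the placeholder expected score, and the output parsing are illustrative assumptions; the two commands themselves are the ones shown earlier in this essay.

# Hypothetical, simplified regression harness; structure and field names are
# illustrative assumptions, not the real run_regression.py.
import re
import subprocess

condition = {
    'name': 'msmarco-passage.unicoil-d2q',
    # Two-click retrieval and evaluation commands quoted earlier in this essay.
    'run_cmd': 'python -m pyserini.search.lucene --index msmarco-passage-unicoil-d2q '
               '--topics msmarco-passage-dev-subset '
               '--encoder castorini/unicoil-msmarco-passage '
               '--output runs/run.msmarco-passage.unicoil.tsv '
               '--output-format msmarco --batch 36 --threads 12 --hits 1000 --impact',
    'eval_cmd': 'python -m pyserini.eval.msmarco_passage_eval msmarco-passage-dev-subset '
                'runs/run.msmarco-passage.unicoil.tsv',
    'expected': 0.35,     # placeholder; substitute the figure from Table 2 of Lin and Ma (2021)
    'tolerance': 0.001,   # allow for small non-determinism in neural inference
}

def verify(cond):
    subprocess.run(cond['run_cmd'], shell=True, check=True)      # produce the run file
    out = subprocess.run(cond['eval_cmd'], shell=True, check=True,
                         capture_output=True, text=True).stdout
    score = float(re.findall(r'\d+\.\d+', out)[-1])              # assume the metric is the last number printed
    assert abs(score - cond['expected']) <= cond['tolerance'], \
        f"{cond['name']}: got {score}, expected {cond['expected']}"

verify(condition)

The real harness does considerably more (building indexes from raw corpora, verifying index statistics, generating documentation pages), but the core contract is the same: run the two-click commands and assert that the measured effectiveness matches the stored expectation.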
Although it is not practical to run these regression tests for each code change, we try to run them as often as possible, resources permitting, to catch new commits that break existing functionalities early so they are easier to debug. These regression tests are always run before a formal release of the toolkit, prior to publishing an artifact on Maven central, to ensure that released jars produce the expected experimental results. On top of the regression framework in Anserini, further tests in Pyserini compare its output against Anserini\u2019s output to verify that the Python interface does not introduce any bugs. These are written as Python unit tests and, for example, check different parameter settings from the command line, ensure that single-threaded and multi-threaded execution yield identical results, that pre-built indexes can be successfully downloaded. In Pyserini, experimental conditions are gathered together and organized into what we call a reproduction matrix, an example of which is shown in Figure 1 for the MS MARCO V2 passage corpus. Each row illustrates a particular experimental condition, while the columns show evaluation metrics with respect to different sets of queries. A row can be expanded to reveal the commands necessary to generate those evaluation \ufb01gures. These are exactly the two-click reproductions described in Section 3, organized in an easy-to-consume format. Finally, the reproduction matrix is backed by another script that programmatically iterates through all rows (experimental conditions), performs retrieval with the speci\ufb01ed invocations, and veri\ufb01es that the evaluation scores are as expected (i.e., checks each cell in the table). In this case, the command is: python scripts/repro_matrix/run_all_msmarco.py --collection v2-passage In fact, the reproduction matrix webpage is automatically generated by the above script upon successful completion. Once again, these regression experiments are quite computationally expensive, and collectively they take several days to run. Nevertheless, these checks are performed before every artifact release on the Python Package Index. Thus, we are con\ufb01dent of the reproducibility of various retrieval models implemented in Pyserini, to the extent that they are covered by these tests. 13https://github.com/castorini/anserini/blob/master/src/main/resources/regression/ msmarco-v2-passage.yaml 14https://github.com/castorini/anserini/blob/master/docs/regressions-msmarco-v2-passage. md 9 \fFigure 1: The reproduction matrix for the MS MARCO V2 passage corpus, which is available at https://castorini.github.io/pyserini/2cr/msmarco-v2-passage.html. Each row represents an experimental condition and is associated with two-click reproduction commands. As I\u2019ve previously argued, reproducibility is a continual process, not a \u201cone and done\u201d deal (Lin and Zhang, 2020). The testing infrastructure described here ensures that our two-click reproductions continue to work even as the entire codebase evolves and gains new features. What I\u2019ve described here can be characterized as a custom continuous integration/continuous delivery (CI/CD) framework, adapted to the unique characteristics of research. This might all sound like a lot of work to set up initially, and indeed it was. However, all the upfront engineering costs have already been \u201cpaid for\u201d. 
For a student building a test case for a new experimental condition, the effort is relatively modest, and the process consists mainly of writing con\ufb01guration \ufb01les and hooking into the test infrastructure. Once connected, the student can be con\ufb01dent that the code will continue to generate the expected retrieval results. Down the road, when it is time to write up the thesis, there\u2019s no need to \u201cdust off\u201d the code to make sure it \u201cstill works\u201d. The regressions tests ensure that it never stopped working. In summary, reproducibility has become ingrained as a shared norm in our group, operationalized in social processes and facilitated by technical infrastructure. I think that this has allowed us to nicely balance the demands of reproducibility with the ability to iterate rapidly. 5 Other Considerations As promised in the introduction, this section discusses a sm\u00f6rg\u00e5sbord of issues that don\u2019t \ufb01t neatly into the \u201cwhat\u201d, \u201cwhy\u201d, and \u201chow\u201d narrative. 5.1 Scoping and Timing of Reproducibility Efforts In truth, written reproduction guides and automated regression testing lie along a spectrum of \u201creproduction rigor\u201d, with different cost/bene\ufb01t tradeoffs. Although we aim to have reproduction guides for every paper, only a relatively small fraction of our group\u2019s research becomes \u201censhrined\u201d in automated regressions that are maintained over time. 10 \fWe currently do not have clear-cut criteria as to which retrieval models or experimental results receive the regression treatment, but as a rough heuristic, we use the following question as a guide: Is this work we\u2019d like to extend further? If so, then we would go about building an appropriate regression. Similarly, if we \ufb01nd a paper by others that we\u2019d like to build on (as in the case of DPR), we would make the investment to replicate the work and to build regressions into our codebase. As already mentioned in Section 3, scoping the effort is an important part of the reproducibility discussion. Consider the common case of a modeling advance that is described in a paper, i.e., we proposed a novel retrieval model that appears to be better than previous work, and the contribution represents a fruitful line of inquiry that the group hopes to push further. In this case, building an appropriate regression makes sense. However, to balance cost and reward, we do not construct regression tests for every experimental result reported in the paper. Instead, we are guided by the question: In a follow-up paper, which of the existing experimental conditions from this paper would serve as the baseline for comparison? That becomes the target for integration into our regression framework. We \ufb01nd that building tests for ineffective contrastive settings or ablation conditions provides little value relative to the amount of effort required. Part of the scoping exercise is to determine what aspects of the proposed model should be included in which codebase. If the original experiments were performed with Pyserini to begin with, then the answer is straightforward: Model checkpoints are made public (e.g., on the Huggingface model hub) and two-click reproductions are directly integrated into Pyserini. However, since our toolkit (by design) does not include code for model training, reproduction guides for that aspect of the work must go elsewhere, typically in another code repository. 
In some cases, a novel retrieval model does not neatly \ufb01t into the design of Pyserini. These model implementations (both training and inference) usually begin their lives in a separate repository, but as part of the reproducibility planning exercise, we debate whether it is worthwhile to import the model inference code into Pyserini so that end-to-end retrieval experiments can be conducted alongside all the other available models in a seamless manner (as part of a reproduction matrix, see Section 4.2). These decisions are made on a case-by-case basis. It is worth explicitly noting that any inclusion to the constantly growing test suites in Pyserini and Anserini represents an open-ended maintenance commitment for the life of the project. Any addition, in essence, incurs a permanent liability on the group (and as I\u2019ll discuss in Section 5.3, this burden usually falls on me). There\u2019s no point in adding a model to the regression framework unless there\u2019s the intention of keeping the code functional in the long term. Once added, we almost never abandon a regression test, except in very rare circumstances, for example, a failure due to changes in underlying code that we depend on but have no control over. Operationally, continual expansion of test suites means that the complete set of regressions takes longer and longer to run, which has the practical effect of slowing down release iterations (e.g., on PyPI). However, I don\u2019t think this has impacted the iteration speed of individual students since components in the codebase are largely decoupled. Nevertheless, servers continue to get faster and more powerful, so I think our current operations are sustainable. I\u2019ve found that the best time to make investments in long-term reproducibility is the window between the acceptance noti\ufb01cation of a paper and the \ufb01nal camera-ready deadline. This provides an opportunity to perform a \u201c\ufb01nal check\u201d on the results and to plan for the long-term maintenance of the model. Work during this time window also ensures that the evaluation results reported in the \ufb01nal paper version match the \ufb01gures that can be recreated with our two-click reproductions. In some cases, the journey from reproduction guides to automated tests is circuitous. For example, we might not have found a particular thread of work suf\ufb01ciently promising to have integrated it into our regression framework, but subsequent developments changed our minds. It is never too late, but I have encountered cases of \u201creproducibility debt\u201d, much like the notion of \u201ctechnical debt\u201d in software engineering. The complexities of modern software stacks create hidden dependencies that often break retrieval models in subtle ways as code evolves. Especially if someone has not tried out a reproduction guide in a while, it might be discovered later that the results have changed. Repeatability is a \ufb01ckle beast. 11 \f5.2 Bootstrapping Reproducibility What about the cold start process? The foregoing discussions describe the operational aspects of reproducibility in my group in \u201csteady state\u201d, where virtuous cycles have already been established. Existing software artifacts are already functional and the bene\ufb01ts of using them are evident. Processes and shared norms are in place, and tools to simplify the routine have been built. 
Once again, motivated by self interest, the value that can be extracted by participating in, for example, our Pyserini reproducibility ecosystem is greater than the costs. What if this isn\u2019t the case? How can a research group start the \ufb02ywheel spinning from a standstill?15 Before addressing this point, it is important to recognize that the reproducibility narrative I\u2019ve articulated here does not work for everyone. Only certain \u201cstyles\u201d of systems-oriented research organized around software artifacts are conducive to the treatment described in this essay. However, to anyone who wishes to replicate a similar culture of reproducibility: I admit that getting the \ufb02ywheel spinning is hard, and the truthful answer is: I don\u2019t really know how, at least in a replicable manner. I began my academic career as an assistant professor in 2004 and have started countless research projects that involve building and sharing software artifacts. Only recently have I successfully pulled together the elements that sustain a culture of reproducibility. Of course, I could construct a story of our success, but it would merely be a post-hoc narrative. The casual factors are too complex and the training examples are too few to build an explanatory model. Nevertheless, I will share some ideas that are independently worthwhile, regardless of their actual contributions to reproducibility. First, adopt software engineering best practices. A research environment is of course different from a production shop geared towards delivering products and services to customers, but this doesn\u2019t mean that research groups should ignore mature and well-established processes. Pyserini and Anserini both adopt standard best practices in open-source software development. The code is available on GitHub, issues are used to describe proposed feature enhancements and bugs, and code changes are mediated via pull requests that are code reviewed. Second, look for opportunities where a long-term research agenda aligns with the construction of software artifacts that can be generalized into a toolkit, library, or platform. Reproducibility efforts have substantial upfront costs that need to be amortized over several years to achieve a net gain. The (planned) software toolkit should ideally provide the basis for several related research projects that yield multiple publications. Without this long-term vision and commitment to a shared codebase, the group might never reap the rewards of the initial investment. With a plan in place, it is possible to make progress incrementally. For example, the multi-layered regression frameworks in Anserini and Pyserini evolved over many years. However, the commitment to build a toolkit for tackling the core ranking problem in information retrieval was made on day one. How does one identify these opportunities? For junior faculty, their own research statements provide the inspiration! As an integral part of the application process for academic positions, the research statement should contain a coherent, multi-year research agenda. Look there for possible alignment, as the vision should have already been articulated clearly. Third, the richness of the modern software ecosystem means that opportunities for contributing software artifacts can happen at many different layers in the stack, and specialized niches abound. For example, Pyserini relies on PyTorch and Anserini relies on Lucene. 
It\u2019d make little sense for our group to try and build the equivalent of either PyTorch or Lucene from scratch. Similarly, it might not make sense for another research group to build an independent IR toolkit from scratch. Instead, join us and build on top of our toolkits to handle a niche that is not currently well served. We\u2019d welcome your contributions! Finally, leadership is critical and deserves a dedicated section. I turn to this next. 5.3 The Critical Role of Leadership I am the overall architect of Pyserini and Anserini. I am the person who usually runs the regression tests, shepherds the release of software artifacts, and generally keeps tabs on everything that is going on. In software development terms, I am not only the engineering manager but also the tech lead. 15To use an analogy attributed to Jeff Bezos: Virtuous cycles are like \ufb02ywheels; they hold a lot of energy and are dif\ufb01cult to slow down, but they\u2019re even harder to spin up initially. 12 \fThis makes sense from multiple perspectives, as I am in the best position to serve as the long-term institutional memory of the research group. Students come and go, but my presence (hopefully) remains constant. Many of the retrieval models in Pyserini were built before the arrival of my most recent cohort of students, and some models will continue to be re\ufb01ned even after they graduate. I have the most complete view of what everyone is working on, and this allows me to coordinate multiple overlapping research projects. For example, I regularly introduce students to existing features in Anserini and Pyserini that can expedite their research. In many cases, showing students how to use a model is as simple as pointing them to the corresponding reproduction guide and asking them to go through it, or even better, directing them to the documentation that provides the two-click reproduction commands. Another common pattern is that I arrange \u201chandoffs\u201d from a graduating student to a new student who wishes to continue pursing a related line of work. If the practices described here are faithfully executed, this is a relatively seamless process. It is important for the group leader to assume the roles described above\u2014in simple terms, serving as both the engineering manager and the tech lead. Having this mindset in my opinion is one key to sustaining a culture of reproducibility. For example, I have written most of the test harness code in Pyserini and Anserini, and in many cases, I end up writing unit tests for students. This can be characterized as a \u201cservant leadership\u201d style: writing testing frameworks certainly isn\u2019t glamorous, but it\u2019s critically important. Working on these bits of code is the best use of my time from the perspective of bene\ufb01ting the entire group\u2014as investments in reproducibility pay dividends for everyone using the codebase\u2014and ful\ufb01lls my personal desire to stay technically engaged with students. Starting an academic research group has been analogized to running a startup, with the faculty member as the CEO. In the context of the North American academic system, this analogy is apt, as faculty members generally lead their own groups. In the beginning, they must do everything, including assuming the roles of an engineering manager (e.g., hiring and mentoring students) as well as the tech lead (e.g., guiding students\u2019 technical progress and examining their implementations). 
However, as a faculty rises through the academic ranks and grows a research group, a common organizational pattern is to cede the role of the tech lead to a senior graduate student. While certainly workable, this comes with associated risks. Students (eventually) graduate (hopefully!) and the next stage of their careers may no longer have room for the role. Arranging for succession \u201chandoffs\u201d become more dif\ufb01cult if the group leader is not intimately involved. This is not unlike what happens in a corporate environment: When a tech lead departs, management needs to \ufb01nd a replacement, typically elevating another member from the same project. In the academic context, this is likely another senior graduate student working on similar research problems, provided one exists. Needless to say, sustaining research momentum is much easier if there is continuity, as in the case where the group leader also serves as the tech lead. 6" + }, + { + "url": "http://arxiv.org/abs/2211.00734v1", + "title": "On the Interaction Between Differential Privacy and Gradient Compression in Deep Learning", + "abstract": "While differential privacy and gradient compression are separately\nwell-researched topics in machine learning, the study of interaction between\nthese two topics is still relatively new. We perform a detailed empirical study\non how the Gaussian mechanism for differential privacy and gradient compression\njointly impact test accuracy in deep learning. The existing literature in\ngradient compression mostly evaluates compression in the absence of\ndifferential privacy guarantees, and demonstrate that sufficiently high\ncompression rates reduce accuracy. Similarly, existing literature in\ndifferential privacy evaluates privacy mechanisms in the absence of\ncompression, and demonstrates that sufficiently strong privacy guarantees\nreduce accuracy. In this work, we observe while gradient compression generally\nhas a negative impact on test accuracy in non-private training, it can\nsometimes improve test accuracy in differentially private training.\nSpecifically, we observe that when employing aggressive sparsification or rank\nreduction to the gradients, test accuracy is less affected by the Gaussian\nnoise added for differential privacy. These observations are explained through\nan analysis how differential privacy and compression effects the bias and\nvariance in estimating the average gradient. We follow this study with a\nrecommendation on how to improve test accuracy under the context of\ndifferentially private deep learning and gradient compression. We evaluate this\nproposal and find that it can reduce the negative impact of noise added by\ndifferential privacy mechanisms on test accuracy by up to 24.6%, and reduce the\nnegative impact of gradient sparsification on test accuracy by up to 15.1%.", + "authors": "Jimmy Lin", + "published": "2022-11-01", + "updated": "2022-11-01", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CR" + ], + "main_content": "INTRODUCTION 2 e\ufb00ect is more pronounced in image-classi\ufb01cation models than text classi\ufb01cation models. \u2022 In the presence of noise, gradient compression in smaller models can sometimes recover some of the accuracy lost to noise. \u2022 The use of aggressive gradient compression when training smaller models can result in a reduced sensitivity to gradient noise. We explain these observations by analyzing the error which DPSGD and compression introduces to the average gradient estimation. 
In particular, we observe the following: \u2022 Reduced accuracy in training can be explained through mean-squared error estimating the average gradient. \u2022 The error is mostly made up of variance, and a small amount of bias. \u2022 The variance component of the error is mostly introduced by the noise addition in the Gaussian mechanism as part of implementing di\ufb00erential privacy guarantees. \u2022 Both sparsi\ufb01cation and rank reduction leads to a large reduction of variance in exchange for a small amount of bias, but leading to an overall decrease in mean-squared error, hence it can lessen the reduction in test accuracy in private training. 1.2 Background In this section, we describe the de\ufb01nition of di\ufb00erential privacy mechanisms and gradient compression algorithms used in our study. 1.2.1 Di\ufb00erential Privacy To test the e\ufb00ects of di\ufb00erential privacy, we adopt the Opacus [25] library. It implements a combination of the Gaussian Mechanism [15] for (\u03f5, \u03b4)-di\ufb00erentially private queries, DPSGD [26] for di\ufb00erentially private deep learning, and Renyi di\ufb00erential privacy accountant [11] for privacy accounting. (\u03f5, \u03b4)-di\ufb00erential privacy [15] is de\ufb01ned so: Let function M : X \u2192Y be a mapping from domain X to range Y. De\ufb01ne the adjacency of to any two sets of data d, d\u2032 \u2208X to mean that max(|d \u2212d\u2032|, |d\u2032 \u2212d|) \u22641 (i.e. d and d\u2032 di\ufb00er by at most a single sample.) M is de\ufb01ned to be (\u03f5, \u03b4)-di\ufb00erential privacy if it follows the following statement \u2200d, d\u2032 \u2208X s.t. max(|d \u2212d\u2032|, |d\u2032 \u2212d|) \u22641 \u2200S \u2286Y Pr[M(d) \u2208S] \u2264e\u03f5Pr[M(d\u2032) \u2208S] + \u03b4 (1.1) An intuitive understanding of this de\ufb01nition is: any observation made about the output M(d) \u2208S can be alternatively explained by M(d\u2032) \u2208S with a lower-bounded probability. Giving d (and by symmetry, d\u2032) plausible deniability when an output in S is observed. The likelihood of any alternative \fCHAPTER 1. INTRODUCTION 3 explanation Pr[M(d\u2032) \u2208S] is lower-bounded relative to original explanation Pr[M(d) \u2208S] the bounded by a factor e\u03f5 and constant \u03b4. As \u03f5 and \u03b4 approaches 0, the likelihood of alternative explanations become just as likely the original explanation and perfect anonymity is guaranteed. To guarantee that any arbitrary mapping M satis\ufb01es such an inequality, the most widely accepted implementation is to replace M with a new function M\u2032 that applies the following post-processing steps to the outputs of M M\u2032(x) = projectC(M(x)) + N(0, \u03c3C) (1.2) The \ufb01rst step is to project all outputs of M into a spherical region centered at the origin with a bounded radius C \u22650. All projections are then perturbed by adding independently sampled noise from a Gaussian distribution N(0, \u03c3C) with a standard deviation \u03c3C proportional to C. The proportionality factor \u03c3 controls the level of privacy guarantee available per observation, with larger values guaranteeing higher privacy. The resulting M\u2032 satis\ufb01es an in\ufb01nite set of (\u03f5, \u03b4)-di\ufb00erential privacy where \u03b4 \u22654 5e \u2212(\u03c3\u03f5)2 2 and \u03f5 < 1. This particular post-processing is referred to as a Gaussian Mechanism in the di\ufb00erential privacy literature. 
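As a concrete illustration of Equation 1.2 in the per-sample-gradient setting used by DPSGD (described below): each per-sample gradient is projected into the ball of radius C, the clipped gradients are summed, Gaussian noise with standard deviation sigma*C is added, and the result is averaged. The following is a didactic sketch of this mechanism, not the Opacus implementation.

# Simplified sketch of the Gaussian mechanism (Equation 1.2) applied to per-sample
# gradients in the style of DPSGD; illustrative only, not the Opacus implementation.
import torch

def gaussian_mechanism(per_sample_grads, C, sigma):
    # per_sample_grads: tensor of shape (batch_size, num_params)
    norms = per_sample_grads.norm(dim=1, keepdim=True)                 # per-sample L2 norms
    scale = torch.clamp(C / (norms + 1e-12), max=1.0)                  # project into the ball of radius C
    clipped = per_sample_grads * scale
    summed = clipped.sum(dim=0)
    noised = summed + torch.normal(0.0, sigma * C, size=summed.shape)  # add N(0, (sigma*C)^2) noise
    return noised / per_sample_grads.shape[0]                          # average over the batch

# Example: 32 per-sample gradients over 10 parameters, clipping radius 5.0, noise multiplier 0.4
grads = torch.randn(32, 10)
private_avg = gaussian_mechanism(grads, C=5.0, sigma=0.4)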
The choice of C doesn\u2019t impact the privacy guarantee, however it does e\ufb00ect the accuracy of training. In our work we tried a few di\ufb00erent values for each task and picked the one that gives the best accuracy after a single epoch. More advanced strategies exist such as adaptive clipping [14] work which proposes the an adaptive clipping radius C dynamically set to a di\ufb00erentially private estimate of a \ufb01xed quantile of the gradient norms. Traditionally, the Gaussian mechanism is applied to queries on a data base. DPSGD [26] introduced this method to deep learning by applying it to each gradient computation with respect to each individual data sample in the training data set. This guarantees a quanti\ufb01able amount of privacy between participating data samples and, by extension, the individuals supplying those data samples. To account for the accumulating privacy cost during training, they implement a privacy accountant to keep track of the increasing values of \u03f5. The de\ufb01nition of di\ufb00erential privacy can be changed in many ways. For example, de\ufb01ning the set of data belonging to an individual user as a units to anonymize [21], as opposed to individual samples being treated as units. 1.2.2 Gradient Compression In this work we focus on two di\ufb00erent approaches to gradient compression: Deep Gradient Compression [3] and PowerSGD [7] Deep Gradient Compression is an algorithm that produces a layer-wise sparse representation of the gradient vector by representing only the elements with relatively larger magnitudes in each layer. The unrepresented elements are not communicated, with the receiver interpreting them as zeros. This algorithm further compresses all represented elements by removing the low-order bits in their \ufb02oating point representation. The receiver also interprets the removed bits to be zero. This algorithm works under the assumption that most elements of a gradient vector are close to zero, and that the loss function is smooth. Under this assumption, approximating many near-zero values as zero produces a permissibly small change to the model update which produces a permissibly small change in loss for a smooth loss function. There exists other variants of sparsi\ufb01cation such as one using an entropy-based criteria [24] for selecting coordinates to approximate as zero. However, \fCHAPTER 1. INTRODUCTION 4 we focus on using Deep Gradient Compression as an example of sparsi\ufb01cation. PowerSGD is an algorithm that reshapes the layer-wise gradient vector into a square matrices and learns a low-rank factorization of these matrices. The resulting factors are communicated in lieu of the original matrices when it would result in a lower bandwidth usage. This algorithm works under the assumption that the rows of each square matrix are coordinates that span a much smaller set of dimensions than the number of columns in the square matrix. Under this assumption, an approximate low-rank factorization of the square matrix can be produced and used to reconstruct the square matrix without large amounts of error. Notably, for this assumption to be true, there would have to be high correlation between the coordinates of the original gradient vector, which implies an approximate low-rank representation for all gradient vectors. This is similar to the assumption of near-zero values in gradient vectors, however the assumption is generalized to assume near-zero projections along a large set of directions in the parameter space. 
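To make the two styles of compression concrete, the following bare-bones sketch shows a DGC-style top-k sparsifier and a PowerSGD-style rank-1 approximation of a gradient reshaped into a matrix. It is illustrative only: it omits DGC's momentum correction and low-order-bit truncation as well as PowerSGD's warm-started power iteration and error feedback.

# Bare-bones sketches of the two compression styles discussed above; both omit
# important details (momentum correction, bit truncation, warm starts, error feedback).
import torch

def topk_sparsify(grad, compression_rate):
    # Keep only the largest-magnitude entries (DGC-style); the rest are treated as zero.
    k = max(1, grad.numel() // compression_rate)
    _, indices = torch.topk(grad.abs().flatten(), k)
    sparse = torch.zeros_like(grad).flatten()
    sparse[indices] = grad.flatten()[indices]
    return sparse.view_as(grad)

def rank1_approx(grad_matrix):
    # One power-iteration step toward a rank-1 factorization (PowerSGD-style).
    q = torch.randn(grad_matrix.shape[1], 1)
    q = q / (q.norm() + 1e-12)
    p = grad_matrix @ q                          # left factor (would be communicated)
    p = p / (p.norm() + 1e-12)
    q = grad_matrix.t() @ p                      # right factor (would be communicated)
    return p @ q.t()                             # decoded low-rank approximation

g = torch.randn(1024)
g_sparse = topk_sparsify(g, compression_rate=16)
G = torch.randn(64, 64)
G_lowrank = rank1_approx(G)

In both sketches, the components with small magnitude, whether individual coordinates or entire directions, are the ones discarded.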
This reveals that rank reduction can be viewed as a non-axis-aligned generalization of sparsi\ufb01cation. Due to this connection between sparsi\ufb01cation and rank reduction, both sparsi\ufb01cation and rank reduction tend to bias the gradients towards the origin and reduce their variance.. Coordinate-wise quantization is another approach to gradient compression however, it is usually not used in isolation since its compression rate is upper-bounded by 64 (\ufb02oating point values are generally represented in 64 bits, and the minimum coordinate-wise representation is 1 bit.) For this reason, coordinate-wise quantization is not the most competitive approach in literature. While it is possible quantization may interact with di\ufb00erential privacy di\ufb00erently than sparsi\ufb01cation and rank reduction, we leave this direction as an area for future study. 1.3 Related Work In this section, we discuss related works which involve both di\ufb00erential privacy and gradient compression. There exists a growing body of work that is focused on improving the e\ufb03ciency of di\ufb00erential privacy and compression. These include testing the e\ufb00ectiveness of various compression algorithms in di\ufb00erentially private training. DP-SCAFFOLD [4] applies the work of SCAFFOLD [20] to DPSGD, and \ufb01nd that the control variates designed to reduce the impact of non-IID data partitions can also reduce the variance introduced by di\ufb00erential privacy mechanisms. Q-DPSGD [12] explores the e\ufb00ectiveness of gradient quantization applied before and after the Gaussian mechanism and benchmarks it to be computationally faster than SDM-DSGD [27] which applies a randomized unbiased sparsi\ufb01cation after the Gaussian mechanism. FL-CS-DP [2] explores the use of compressive sensing, where they view the gradient vector as a time series that can be transformed into frequency space, keeping only the low-frequency values. They propose a novel formulation of the compression optimization to improve upon traditional DCT (Discrete Cosine Transform) compression. The works above attempt to \ufb01nd combinations of di\ufb00erential privacy and compression mechanisms that achieve the greatest resource e\ufb03ciency, with the resource being time, bandwidth, or privacy budget. Our work\u2019s main focus is to o\ufb00er insight on the relationship between compression, di\ufb00erential privacy, and accuracy. We hope that these insights inspire novel ideas that result in greater resource e\ufb03ciency. \fCHAPTER 1. INTRODUCTION 5 There also exists a number of works that explore compression mechanisms which already introduce noise. These mechanisms can be modi\ufb01ed to provide di\ufb00erential privacy guarantees on top of the pre-existing compression capabilities. Count-Sketch [17, 8] is one such mechanism that inherently introduces randomness through random hash functions. Dithered quantization [5] is another approach which adds noise before quantization. MVU [13] adds this noise after quantization by sampling from discrete distribution. They also formulation an optimization that minimizes the distribution variance while satisfying di\ufb00erential privacy guarantees. These work explore the potential of re-purposing pre-existing randomness in compression algorithms towards di\ufb00erential privacy. Similarly, their goal is to prove the e\ufb00ectiveness of this approach against a baseline, less so to provide a deep analysis of how their compression interacts with di\ufb00erential privacy mechanisms. 
Additionally, there is research on differential privacy and compression in contexts other than deep learning, such as database queries [23]. While the same privacy and compression mechanisms can often be used across many contexts, their interaction can depend on the type of information being protected and compressed. The assumptions one can make regarding gradient vectors in deep learning cannot generally be made about arbitrary database queries. We study the specific context of deep learning in hopes of finding unique insights that would otherwise be hidden in a more general setting.

Chapter 2 Methods

In this section, we describe the tasks, models, and hyperparameter settings used to conduct our experiments. We also define some metrics used in our results. Refer to the following repository for an example of how to run these experiments: https://github.com/Jimmy-Lin/privacy-ml-systems

2.1 Tasks and Models

To evaluate the generality of our insight across different tasks in deep learning, we train 4 models on 4 different tasks: Surnames, CIFAR-10, SNLI, and CIFAR-100.

2.1.1 Surnames Task

The goal of this task is to classify the language associated with an alphabetic surname, given a choice of 18 different languages. We train with a learning rate of 2.0 and a batch size of 32 for 100 epochs. Refer to the following URL to find a copy of the data set: https://github.com/spro/practical-pytorch/tree/master/data/names. The model we train is a 256-character-set LSTM model with a single LSTM layer of 64 embedding dimensions and 128 output dimensions, followed by a fully connected layer of 18 classes and a softmax activation. Refer to [16] for details on the LSTM cell architecture.

2.1.2 CIFAR-10 Task

The goal of this task is to classify the object at the centre of a 32x32 coloured image, given a choice of 10 different object classes. We train with a learning rate of 0.1 and a batch size of 128 for 100 epochs. Refer to [9] for more information on this data set. The model we train is a 3-block CNN, each block containing a biased convolution layer of kernel size 3 and stride 1. Each convolution is followed by an instance normalization with a momentum value of 0.1, a ReLU activation, an average pooling with a pool size of 2, and a spatial dropout with probability 0.1. The 3 blocks differ only in their number of output filters: 32, 64, and 128. After the 3 blocks, we follow with 2 biased hidden layers of 256 and 512 units respectively, each using a ReLU activation and dropout with probability 0.25. Lastly, the model finishes with a fully connected layer into 10 classes and a softmax activation.

2.1.3 SNLI Task

The goal of this task is to classify the logical relation between a pair of English sentences; the relation can be either "entailment", "contradiction", or "neutral". We train with a learning rate of 0.05 and a batch size of 32 for 1 epoch. Refer to [10] for more information on this data set. We fine-tune a pre-trained "bert-base-cased" model, which can be found at https://huggingface.co/bert-base-cased. We freeze all parameters except for the classifier, pooling, and final layer of the encoder. Refer to [22] for details on the BERT architecture.

2.1.4 CIFAR-100

The goal of this task is to classify the object at the centre of a 32x32 coloured image, given a choice of 100 different object classes.
We train with a learning rate of 1.0 and a batch size of 64 for 100 epochs. Refer to [9] for more information on this data set. We train a modified version of ResNet-18. Specifically, we set the global average pooling that follows the stack of residual blocks to output 2x2 channels instead of 1x1. We find this modification results in much better accuracy on this data set. Refer to [19] for details on the ResNet architecture.

2.2 Differential Privacy and Compression Hyperparameter Settings

In this section, we describe how we configure the differential privacy mechanism and the compression algorithms.

2.2.1 Differential Privacy Mechanism Settings

For the Surnames task, we use a clipping radius of 3.0 and a δ value of 0.00008. For the CIFAR-10 task, we use a clipping radius of 5.0 and a δ value of 0.00001. For the SNLI task, we use a clipping radius of 21.0 and a δ value of 1/549361. For the CIFAR-100 task, we use a clipping radius of 1000.0 and a δ value of 0.00001. To vary the privacy level, we use noise multiplier values of 0.0, 0.4, and 0.8. Note that we clip the gradients even in non-private training so that changes in the privacy guarantee are attributed solely to the noise addition. Gradient clipping in non-private training is common practice for the purpose of limiting the impact of exploding gradients.

The clipping radius is selected based on an approximate median of the gradient norms at the first iteration. This is based on the observation from adaptive clipping [14] that clipping approximately 50% of the gradients in a batch appears to work well. However, we keep a fixed value instead of adjusting it over the course of training. While this selection may be unlikely in practice, it serves as a good way to standardize across tasks that exhibit different gradient norms. We select δ by picking a value roughly on the order of 1/n, where n is the number of samples in the training data set. This is the recommended upper bound on δ [15] for (ε, δ)-differential privacy in the literature.

2.2.2 Compression Algorithm Settings

To vary the compression rate of Deep Gradient Compression (DGC), we configure the DGC algorithm to compression rates of 1, 16, and 256. To vary the compression rate of PowerSGD, we configure the PowerSGD algorithm to use approximation ranks of 1 and 16. In practice, PowerSGD does not offer low compression rates, as approximation ranks above 16 tend to incur severely large compute overhead. Due to this computational overhead and the large size of layers in ResNet, we set the approximation rank to 1 only (instead of 1 and 16) for the CIFAR-100 task.

2.3 Metric Definitions

2.3.1 Accuracy

We measure the test accuracy of a model at the end of every epoch and use an average over the last 10 epochs to represent the final model accuracy. In the case of tasks that train for only 1 epoch, we simply take the single measurement of test accuracy.

2.3.2 Bandwidth Usage

Since upstream network capacity is generally far more scarce in wide-area networks than downstream network capacity, we measure only the upstream network usage, which consists mainly of the gradient vectors uploaded to parameter servers per client. We assume all vectors are transmitted in COO format when estimating the number of bytes sent.
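As a concrete sketch of how such an estimate can be computed, the snippet below counts bytes for one COO-encoded gradient; the 4-byte index and value sizes and the function name are assumptions for illustration, not necessarily the exact accounting used in our measurements.

```python
import numpy as np

def coo_upload_bytes(grad, value_bytes=4, index_bytes=4):
    """Estimate upstream bytes for one gradient vector sent in COO format:
    each non-zero element contributes one index and one value."""
    nnz = int(np.count_nonzero(grad))
    return nnz * (index_bytes + value_bytes)

# Example: a sparsified gradient with 1,000 non-zeros out of 1M elements.
rng = np.random.default_rng(0)
g = np.zeros(1_000_000, dtype=np.float32)
g[rng.choice(g.size, 1000, replace=False)] = 1.0
print(coo_upload_bytes(g))  # 8000 bytes under these assumptions
```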
2.3.3 Privacy Bound

For measurements of the privacy bound, we defer to the Opacus library's implementation of a Rényi differential privacy accountant to track the ε value. We fix δ as a hyperparameter and quantify differences in the privacy guarantee solely through ε. Smaller values of ε indicate a stronger differential privacy guarantee.

Chapter 3 Results

In this section, we discuss the results of training each task at different combinations of differential privacy guarantees, compression algorithms, and compression rates. We acknowledge that higher test accuracy may be achievable through state-of-the-art architectural designs. Since the goal of this study is to characterize the relationship between different configurations and well-studied architectures, it is unnecessary to find the most optimal architecture and configuration for each data set.

3.1 Effects on Test Accuracy

In this section, we focus on the effects we observe on test accuracy.

3.1.1 Surnames Task

Figure 3.1 (top) shows the accuracy measurements grouped by their differential privacy bound (ε). While most groups experience a small decrease in accuracy of a few percent, we observe that the group with ε = 50.92 experiences a slight increase in accuracy of 5.1%. Figure 3.1 (bottom) shows the accuracy measurements grouped by their upstream network usage (Gb). Within each group we see a large decrease in accuracy when the noise multiplier is increased. The uncompressed group, which used 44.22Gb, experiences an accuracy drop of 40.4%. The compressed group experiences an accuracy drop of 34.0%. In Figure 3.2, we observe very similar patterns to Figure 3.1. Remarkably, the accuracy increase in the group with ε = 50.92 is even larger, at a 10.4% increase.

Figure 3.1: Test accuracy averaged over the last 10 epochs after 100 epochs of training the Surnames task using DGC. (Top) grouped by differential privacy bound (ε). (Bottom) grouped by upstream network usage (Gb).

Overall we observe that this task is relatively robust to gradient compression, losing only a few percent in accuracy. Surprisingly, an increase in accuracy is sometimes observed with increasing compression rate. This observation was more noticeable when using the PowerSGD compression algorithm.

3.1.2 CIFAR-10 Task

Figure 3.3 (top) shows the accuracy measurements grouped by their differential privacy bound (ε). We observe that this task is noticeably more sensitive to compression than the Surnames task. Compression can decrease the accuracy by as much as 25.0%. Once again, we observe an increase in accuracy with a higher compression rate, but this time it occurs within the group with ε = 4.86 and the increase in accuracy is 11.6%. Figure 3.3 (bottom) shows the accuracy measurements grouped by their upstream network usage (Gb). We notice that the in-group range of accuracy decreases as the amount of bandwidth used decreases, starting at a range of 54.5% (bandwidth = 105.25Gb) and shrinking to a range of 17.9% (bandwidth = 1.22Gb). In Figure 3.4, we observe very similar patterns to Figure 3.3. Overall we observe that this task is relatively more sensitive to compression than the Surnames task. The increase in accuracy in private training when the compression rate is increased is observed again, similar to what we observe in the Surnames task.
This time, the increase is similar between the two compression algorithms.

3.1.3 SNLI Task

Figure A.1 (top) shows the accuracy measurements grouped by their differential privacy bound (ε). We observe that this task, in the non-private ε = ∞ case, is very robust to gradient compression. It is inconclusive whether this can be said about the private training cases, since the models produced are of similar accuracy to a random prediction. This is due to this task being very sensitive to the noise added by the differential privacy mechanism. We do see a very slight increase in accuracy of 1.9% when ε is 0.89; however, this is not a very significant amount. Figure A.1 (bottom) shows the accuracy measurements grouped by their upstream network usage (Gb). In every group, the accuracy lost due to noise is the dominant factor in changes to accuracy. We do see a slight decrease in sensitivity to noise from 48.9% to 44.7%, but this amount is not very conclusive. In Figure A.2, we observe very similar patterns to Figure A.1.

Figure 3.2: Test accuracy averaged over the last 10 epochs after 100 epochs of training the Surnames task using PowerSGD. (Top) grouped by differential privacy bound (ε). (Bottom) grouped by upstream network usage (Gb).

We observe that this task is very sensitive to noise, with noise being the dominant factor in the observed loss of accuracy. We do observe the increase in accuracy correlated with compression, and a reduction in noise sensitivity when compression is added. However, the amount is much smaller this time, and it is hard to use this as conclusive evidence. We attribute this to the fact that noise is so dominant in its effect on accuracy for this task.

3.1.4 CIFAR-100 Task

Figure A.3 (top) shows the accuracy measurements grouped by their differential privacy bound (ε). We observe that this task is very sensitive to both noise and compression. Similar to the SNLI task, the model's accuracy is no better than random prediction when noise is added by the differential privacy mechanism. In the non-private case, we see a 22.2% decrease in accuracy after compression. Figure A.3 (bottom) shows the accuracy measurements grouped by their upstream network usage (Gb). We observe that noise dominates the decrease in accuracy in the non-compressed group with bandwidth = 417.5Gb. However, compression also plays a role in decreasing the accuracy in non-private training. In Figure A.4, we observe very similar patterns to Figure A.3. We observe that this task is very sensitive to noise, but also to compression. No interesting pattern can be observed from the experiments run on this task, due to most trials resulting in minimal accuracy.

Figure 3.3: Test accuracy averaged over the last 10 epochs after 100 epochs of training the CIFAR-10 task using DGC. (Top) grouped by differential privacy bound (ε). (Bottom) grouped by upstream network usage (Gb).

3.1.5 General Observations

We observe that the larger models used in the SNLI and CIFAR-100 tasks are more sensitive to noise than the smaller models. While smaller models such as the LSTM and CNN do experience a loss of accuracy due to noise, they are not immediately rendered on par with random predictions. Additionally, the image classification tasks are noticeably more sensitive to compression than the text classification tasks.
This could be attributed to the data type being classified, or potentially to the architectural components common in image classification vs. text classification (e.g., convolution, normalization, and pooling vs. embedding, LSTM, and self-attention). We observe that in the tasks involving smaller models (Surnames and CIFAR-10), at some levels of differential privacy guarantee (finite ε), increasing the compression rate can increase the model accuracy. We also observe that compressing the gradient appears to reduce the sensitivity of accuracy to noise. We hypothesize that compression has a way of reducing the negative impact of noise. In the tasks involving larger models (SNLI and CIFAR-100), we observe either a very weak form of this trend or no such trend at all. We attribute this to their relatively higher sensitivity to noise.

Figure 3.4: Test accuracy averaged over the last 10 epochs after 100 epochs of training the CIFAR-10 task using PowerSGD. (Top) grouped by differential privacy bound (ε). (Bottom) grouped by upstream network usage (Gb).

3.2 Convergence Analysis

In this section, we analyze the changes in test accuracy over the course of training, measured after every epoch. We omit the SNLI task from this analysis since it is trained for only 1 epoch, and thus has no further information to show when visualized as a time series.

3.2.1 Surnames Task

Figure B.1 shows the progression of test accuracy over the course of training the Surnames task. We observe that all trials actually reach their plateau within the first 10 epochs. The accuracy of the private training trials exhibits very large variability over time, but their compressed counterparts appear to reduce this variability. Finally, the non-compressed, non-private training trial loses test accuracy after an initial peak within the first 10 epochs. Its compressed counterpart does not exhibit this behaviour, but it also does not exceed it in final accuracy. Figure B.2 provides a smoothed view of the time series for better comparison of the private training trials. To achieve this smoothing, we use a mean convolution over the time axis with width 20 and no padding at the end points. The same observations can be made in Figures B.4 and B.3 when using the PowerSGD compression algorithm.

3.2.2 CIFAR-10

Figure 3.5: Test accuracy over 100 epochs of the CIFAR-10 task.

Figure 3.5 shows the progression of test accuracy over the course of training the CIFAR-10 task. We observe that the variability in test accuracy over time is consistently small for all trials. This time, non-compressed trials in private training are the ones that decrease in test accuracy after an initial peak within the first 10 epochs. Furthermore, we find that this drop in accuracy appears to explain why the non-compressed trials show lower accuracy than their compressed counterparts in Figure 3.3. When we look at the accuracy in the first 10 epochs, the accuracy is higher when the compression is lower. When we look at the last 10 epochs, the accuracy is higher when the compression is higher. We see a similar effect when applying the PowerSGD compression algorithm in Figures B.5 and 3.4.

3.2.3 CIFAR-100

Figures B.6 and B.7 show the progression of test accuracy over the course of training the CIFAR-100 task. We observe that the variability over time is very small for this task as well, and accuracy is mostly increasing steadily over the course of training (if it is increasing at all).
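The smoothing used for the smoothed convergence plots can be reproduced with a simple mean convolution; the sketch below uses the window width of 20 from Section 3.2.1, while the function name is our own.

```python
import numpy as np

def smooth_accuracy(acc_per_epoch, width=20):
    """Mean convolution over the time axis with no padding at the end points,
    as used for the smoothed convergence views (e.g., Figure B.2)."""
    kernel = np.ones(width) / width
    return np.convolve(np.asarray(acc_per_epoch, dtype=float), kernel, mode="valid")
```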
3.2.4 General Observations

We observe that the Surnames task shows much higher variability in accuracy over time when noise is added. Additionally, trials not using compression sometimes experience a peak early in training, followed by a gradual loss of accuracy. This effect is lessened by compression. In some instances, this leads to the non-compressed trial finishing with lower accuracy than the compressed trial.

3.3 Gradient Error

In this section, we analyze the correlation between gradient error and test accuracy and break down the error to better understand what contributes to our observed decrease in accuracy. We define gradient error as the mean squared error between the average gradient vector prior to the application of the differential privacy mechanism and gradient compression and its counterpart after the differential privacy mechanism and gradient compression. Specifically, we measure the gradient error at the beginning of training. We target the empirical average gradient as follows: Let \{X_i, y_i\}_{i=1}^{B} denote a set of B input-output pairs (X_i, y_i) randomly sampled from the training data samples. Let L denote a loss function we wish to optimize with respect to \theta, an m-dimensional vector of parameters. We define the m-dimensional vector g as the empirical average gradient over the B training data samples, a target we wish to estimate through possibly noisy and/or biased samples:

g = \frac{1}{B} \sum_{i=1}^{B} \nabla_\theta L(X_i, y_i)    (3.1)

We define a mechanism F : \mathbb{R}^{B \times m} \rightarrow \mathbb{R}^{m} as a function that takes as input the set of B gradient samples and outputs an estimate \hat{g}. This mechanism is allowed to be stochastic. We view the composition of our differential privacy mechanism and gradient compression as one such mechanism:

\hat{g} = F(\{\nabla_\theta L(X_i, y_i)\}_{i=1}^{B})    (3.2)

Since F can be stochastic, it may produce a different estimate each time. For this reason, we measure the mean-squared difference between an estimate \hat{g} and the target g over n independent instances of \hat{g}. We measure the mean-squared error of the average gradient estimate as follows:

MSE(F, g) = \frac{1}{n} \sum_{i=1}^{n} \| \hat{g}_i - g \|_2^2    (3.3)

In the results that follow, we use n = 100 as the sample size for estimating the gradient error.

3.3.1 Correlation between Gradient Error and Test Accuracy

In this section, we show scatter plots of test accuracy against gradient error for each task and compression algorithm. In Figure 3.6, we observe a trend that a decrease in test accuracy coincides with an increase in gradient error. The gradient error has been plotted on a log scale to better illustrate this. This correlation exists in all other tasks, which we demonstrate in Figures C.1 to C.7.

Figure 3.6: Test accuracy vs. gradient error (log base-10 scale) for the CIFAR-10 task with DGC.

3.3.2 Effects on Gradient Error

In this section, we analyze how gradient error relates to the level of noise added by the differential privacy mechanism and to the gradient compression algorithms. In Figure 3.7, we observe that the gradient error is mostly contributed by the differential privacy mechanism's addition of noise (up to 353.0 units). While the compression algorithm can contribute to gradient error (up to 0.2 units), it often reduces the gradient error already contributed by the differential privacy mechanism (from 353.0 units down to 14.0 units).
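The measurement defined in Eqs. (3.1)-(3.3) can be sketched as follows. The particular mechanism composed here (per-sample clipping, Gaussian noising, then top-k sparsification of the average) and all function names are illustrative simplifications, not the DGC or PowerSGD pipelines we actually benchmark.

```python
import numpy as np

def clip_noise_topk(per_sample_grads, C, sigma, k, rng):
    """One illustrative draw of a mechanism F: clip each per-sample gradient
    to radius C, add Gaussian noise with std sigma * C to the sum, average,
    then top-k sparsify the averaged estimate."""
    norms = np.linalg.norm(per_sample_grads, axis=1, keepdims=True)
    clipped = per_sample_grads * np.minimum(1.0, C / np.maximum(norms, 1e-12))
    noisy = clipped.sum(axis=0) + rng.normal(0.0, sigma * C, size=clipped.shape[1])
    g_hat = noisy / per_sample_grads.shape[0]
    out = np.zeros_like(g_hat)
    idx = np.argpartition(np.abs(g_hat), -k)[-k:]
    out[idx] = g_hat[idx]
    return out

def gradient_error(per_sample_grads, C, sigma, k, n=100, seed=0):
    """Empirical MSE(F, g) as in Eq. (3.3): compare n draws of F against the
    plain empirical average gradient g of Eq. (3.1)."""
    rng = np.random.default_rng(seed)
    g = per_sample_grads.mean(axis=0)
    draws = [clip_noise_topk(per_sample_grads, C, sigma, k, rng) for _ in range(n)]
    return float(np.mean([np.sum((g_hat - g) ** 2) for g_hat in draws]))
```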
We believe this reduction to be related to the correlation between compression rate and test accuracy in the private training of small models; this can also be observed in the other tasks in Figures D.1 to D.7. Specifically, if compression reduces gradient error, and lower gradient error correlates with higher test accuracy, then it is not unreasonable that compression correlates with higher test accuracy through the lowering of gradient error. We do see the same error reduction in the large models, but no increase in test accuracy. It is possible that the error reduction is simply not strong enough to overcome the effect of noise.

3.3.3 Gradient Error Breakdown: Bias vs Variance

In this section, we analyze a breakdown of the gradient error into bias and variance. It is well known that the expected squared error of an estimator, in this case the estimated average gradient, can be decomposed into a bias component and a variance component. With sufficiently large n, this is also true of the empirical average of the squared error:

E[ \| \hat{g} - g \|_2^2 ] = \| E[\hat{g}] - g \|_2^2 + E[ \| \hat{g} - E[\hat{g}] \|_2^2 ]    (3.4)

The bias component is simply the deviation caused by the mechanism F, and the variance is the variability across different instances of estimates due to the stochasticity of F.

Figure 3.7: Gradient error vs. privacy bound (ε) and bandwidth (Gb) for the CIFAR-10 task with DGC.

Through Figures 3.8 and 3.9 we observe the following: Clipping (left bar in each subplot) at the current configuration introduces relatively minimal gradient error, and when it does, it tends to introduce bias (orange) rather than variance (blue). Noising (middle bar in each subplot) introduces a very significant amount of error, and this error is overwhelmingly variance (blue) rather than bias (orange). Compression (right bar in each subplot) introduces some amount of bias (orange) but not variance (blue), and it is relatively small compared to the variance introduced by noising. Furthermore, compression reduces the variance introduced by noising. This is also supported in the other tasks shown in Figures E.1 to E.6. A high-level way of interpreting this is that the two compression algorithms we tested introduce a bias towards the origin, but in doing so they reduce the variance of our average gradient estimate. In the context of differentially private training, where noise contributes a large amount of variance, trading a large amount of variance for a small amount of bias can reduce the overall error when estimating the average gradient. It can be said that compression has a regularizing effect on our estimation of the average gradient.

Figure 3.8: Gradient error vs. privacy bound (ε) and bandwidth (Gb) for the Surnames task with DGC.

3.4 Optimizing the Bias-Variance Trade-Off

In this section, we show the suboptimality of selecting clipping values based on the 50th percentile of gradient norms. We demonstrate why it makes sense to drastically reduce the clipping value for larger models, and that the optimal clipping value is related to the shrinkage coefficient of the James-Stein estimator [18].

3.4.1 Minimal-Error Clipping

In this section, we empirically test the clipping value that minimizes gradient error, and we provide an approximate theoretical model for guessing the optimal clipping value.
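The empirical part of this test amounts to a simple sweep over candidate clipping radii. The sketch below restricts the mechanism to clipping plus Gaussian noise only (no compression), and the function names and defaults are our own assumptions rather than the exact procedure used for the figures.

```python
import numpy as np

def clip_noise_error(per_sample_grads, C, sigma, n=50, seed=0):
    """Empirical gradient error (Eq. 3.3) for a clip-and-noise mechanism,
    used here to evaluate one candidate clipping radius C."""
    rng = np.random.default_rng(seed)
    B, m = per_sample_grads.shape
    g = per_sample_grads.mean(axis=0)
    norms = np.linalg.norm(per_sample_grads, axis=1, keepdims=True)
    scale = np.minimum(1.0, C / np.maximum(norms, 1e-12))
    clipped_mean = (per_sample_grads * scale).mean(axis=0)
    errs = []
    for _ in range(n):
        g_hat = clipped_mean + rng.normal(0.0, sigma * C, size=m) / B
        errs.append(np.sum((g_hat - g) ** 2))
    return float(np.mean(errs))

def best_clipping_radius(per_sample_grads, sigma, candidates):
    """Grid search for the clipping radius with minimal empirical error."""
    return min(candidates, key=lambda C: clip_noise_error(per_sample_grads, C, sigma))
```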
Figures 3.10 and 3.11 demonstrate that the strategy of setting the clipping radius to be the median of the gradient norms [14] does not minimize the gradient error. We propose the following theoretical model of the gradient error (more examples of this are shown in Figures F.1 and F.2):

Approximate Error = \max(0, \|g\|_2 - C)^2 + m C^2 \sigma^2    (3.5)

We further simplify this model into a convex, differentiable form, which allows us to solve for the minimum directly by differentiating with respect to C:

Differentiable Approximate Error = (\|g\|_2 - C)^2 + m C^2 \sigma^2    (3.6)

C^* = \arg\min_C \left( (\|g\|_2 - C)^2 + m C^2 \sigma^2 \right) = \frac{\|g\|_2}{1 + m \sigma^2}    (3.7)

Figure 3.9: Breakdown of gradient error into bias and variance at different stages (after clipping, after noising, and after compression) for the SNLI task with DGC.

As shown in Figures 3.10 and 3.11, the minimum of this model generally leads to at least a couple of orders of magnitude of decrease in gradient error. Of course, the quality of such a model depends on knowledge of the median gradient norm ||g||_2. The advantage this model provides is a better utilization of the knowledge of ||g||_2, should it be available either exactly or approximately with differential privacy guarantees, as in [14]. It is a better strategy than simply setting C = ||g||_2, as it takes into account the effect of the dimensionality m and the noise multiplier σ. More examples of this are shown in Figures F.1 and F.2.

Worth noting in Figure 3.11 (left) is that the assumed knowledge of the median gradient norm ||g||_2 appears to produce a bad theoretical model, with the result that the theoretical minimum is much larger than the empirical one. We believe this is due to a large difference between the norm of the average gradient and the median of the gradient norms, and that perhaps the norm of the average gradient would be better at informing the theoretical model.

Figure 3.10: Relationship between gradient error and clipping value for the SNLI task.

Another use for this model is providing efficient clipping values that are well-tuned across different levels of differential privacy. With the use of this model, it can be shown that our methodology of keeping the same clipping threshold for different levels of differential privacy is actually suboptimal. However, to the best of our knowledge, there is no work in the differential privacy literature that currently suggests the optimal clipping threshold depends on the dimensionality and the privacy level. While it is possible to manually tune the clipping parameter with many repeated trials, this can be infeasible in the context of differentially private training, because training a model to convergence, even with a suboptimal clipping value, can incur a large privacy cost. It is much more efficient to perform a differentially private query of the average gradient norm and compute a reasonable clipping radius directly. Figure 3.12 shows that there is some benefit to optimizing the clipping value in most cases, with gains of up to 6.5% accuracy. However, not all tasks benefit from this: the CIFAR-10 task experienced worse accuracy after the change in clipping.
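The closed form in Eq. (3.7) is straightforward to apply once an estimate of ||g||_2 is available (for example, from a differentially private query). A minimal sketch follows; the function name and the example values of the parameter count are our own illustrative assumptions.

```python
def optimal_clipping_radius(grad_norm_estimate, num_params, noise_multiplier):
    """Eq. (3.7): C* = ||g||_2 / (1 + m * sigma^2)."""
    return grad_norm_estimate / (1.0 + num_params * noise_multiplier ** 2)

# Example: with ||g||_2 ~ 21.0 (the SNLI median norm), sigma = 0.4, and an
# assumed m = 1e5 trainable parameters, the suggested radius is far smaller
# than the median-norm heuristic.
print(optimal_clipping_radius(21.0, 1e5, 0.4))
```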
3.4.2 Bias-Variance Optimization through Intermediate Processing

In this section, we investigate the effectiveness of reducing the error by processing the differentially private gradient estimates after the privacy mechanism, as opposed to directly adjusting the privacy mechanism. Our proposed mechanism requires no prior knowledge about the gradients before the privacy mechanism, so there is no need to perform differentially private queries for information such as the median of the gradient norms.

Figure 3.11: Relationship between gradient error and clipping value for the CIFAR-100 task.

Figure 3.12: Result of optimizing clipping based on the theoretical model.

We propose a new algorithm, Denoise, whose function is to reduce the error introduced by noise addition and compression. In this algorithm, both the sender and the receiver keep track of a velocity term, which is an exponential average of the past average gradients. At each iteration, the sender updates its velocity using the new average gradient. We define this change in velocity as the acceleration. The sender then performs a top-k sparsification of both the acceleration and the velocity vector, and compares the norm of the error produced by sparsification in each case. The sender sends to the receiver either the sparse velocity or the sparse acceleration, whichever results in the least compression error, along with a single-bit flag to signal whether the message contains a velocity vector or an acceleration vector. The sender accumulates a residual term to feed back into the next message, but the residual is decayed by some factor. The receiver updates its velocity by either replacing it with the new velocity vector or adding the acceleration vector. The key to our compression algorithm is the use of temporal averaging to reduce noise and the option to choose between either velocity or acceleration, one of which may incur less compression error at any given round. Details of this algorithm are written in Algorithm 1.

Figure 3.13: Effectiveness of introducing our algorithm to various combinations of differential privacy guarantees and gradient compression (sparsification).

Figure 3.14: Effectiveness of introducing our algorithm to various combinations of differential privacy guarantees and gradient compression (sparsification).

Figures 3.13 to 3.16 show the impact of differentially private training with sparsification (DGC) compared to differentially private training with model-wise sparsification and Denoise applied in between. In the Surnames, CIFAR-10, and SNLI tasks, the addition of Denoise improves accuracy for every combination of privacy and compression except for CIFAR-10 with ε = 62.09 and bandwidth = 1.23Gb. In the non-private cases, we see that allowing the compression algorithm to choose between two different messages allows us to compress more aggressively with less reduction in accuracy (3.5% less in Surnames, 15.1% in CIFAR-10, 1.4% in SNLI, and 12.4% in CIFAR-100). In the non-compressed cases, we see that the second clipping reduces the variance enough to enable stronger differential privacy guarantees without as much reduction in accuracy (23.2% in Surnames, 24.6% in CIFAR-10, 14.6% in SNLI, and 0.0% in CIFAR-100). Unfortunately, this effect was not observable in CIFAR-100, due to that task being particularly sensitive to noise.
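A simplified, sender-side sketch of the Denoise procedure described above is given below. The exponential-average update, the decay factor, and the error metric are our own assumptions made for illustration and may differ from Algorithm 1.

```python
import numpy as np

def topk(v, k):
    """Keep the k largest-magnitude coordinates; zero out the rest."""
    out = np.zeros_like(v)
    idx = np.argpartition(np.abs(v), -k)[-k:]
    out[idx] = v[idx]
    return out

class DenoiseSender:
    """Sketch of the sender: maintain a velocity (exponential average of past
    private gradients) and, each round, send whichever of the sparse velocity
    or sparse acceleration loses less to sparsification."""

    def __init__(self, dim, k, beta=0.9, residual_decay=0.5):
        self.velocity = np.zeros(dim)
        self.residual = np.zeros(dim)
        self.k, self.beta, self.residual_decay = k, beta, residual_decay

    def step(self, private_avg_grad):
        new_velocity = self.beta * self.velocity + (1 - self.beta) * private_avg_grad
        acceleration = new_velocity - self.velocity
        self.velocity = new_velocity

        # Feed the decayed residual from the previous round back in.
        vel_target = new_velocity + self.residual
        acc_target = acceleration + self.residual
        vel_msg, acc_msg = topk(vel_target, self.k), topk(acc_target, self.k)

        if np.linalg.norm(vel_target - vel_msg) <= np.linalg.norm(acc_target - acc_msg):
            flag, msg, err = "velocity", vel_msg, vel_target - vel_msg
        else:
            flag, msg, err = "acceleration", acc_msg, acc_target - acc_msg

        self.residual = self.residual_decay * err
        return flag, msg  # receiver replaces, or adds to, its own velocity
```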
In the CIFAR-100 task, we observe greater compressibility with denoise, but the noise sensitivity did not see observable di\ufb00erences. We show results of running with a lower clipping radius of 70.0 in \ufb01gure 3.16 (suggested as the empirical optimum from testing minimal-error clipping) but saw the same results. The function of our algorithm is orthogonal to di\ufb00erential privacy and compression. For this reason, it can be used in conjunction with any gradient noising mechanism and any gradient compression algorithm, and is not in direct competition with the prior in either area. 3.5" + }, + { + "url": "http://arxiv.org/abs/2110.01529v2", + "title": "A Proposed Conceptual Framework for a Representational Approach to Information Retrieval", + "abstract": "This paper outlines a conceptual framework for understanding recent\ndevelopments in information retrieval and natural language processing that\nattempts to integrate dense and sparse retrieval methods. I propose a\nrepresentational approach that breaks the core text retrieval problem into a\nlogical scoring model and a physical retrieval model. The scoring model is\ndefined in terms of encoders, which map queries and documents into a\nrepresentational space, and a comparison function that computes query-document\nscores. The physical retrieval model defines how a system produces the top-$k$\nscoring documents from an arbitrarily large corpus with respect to a query. The\nscoring model can be further analyzed along two dimensions: dense vs. sparse\nrepresentations and supervised (learned) vs. unsupervised approaches. I show\nthat many recently proposed retrieval methods, including multi-stage ranking\ndesigns, can be seen as different parameterizations in this framework, and that\na unified view suggests a number of open research questions, providing a\nroadmap for future work. As a bonus, this conceptual framework establishes\nconnections to sentence similarity tasks in natural language processing and\ninformation access \"technologies\" prior to the dawn of computing.", + "authors": "Jimmy Lin", + "published": "2021-10-04", + "updated": "2021-12-28", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.CL" + ], + "main_content": "Introduction For the past half a century, information retrieval has been dominated by bag-of-words exact-match scoring models such as BM25 executed at scale using inverted indexes and ef\ufb01cient query-at-atime retrieval algorithms. Even in the context of feature-based learning to rank and, more recently, neural models, these bag-of-words models remain of fundamental importance because they provide potentially relevant texts for downstream reranking in the context of multi-stage pipelines. This role is usually referred to as \ufb01rst-stage retrieval or candidate generation. Multi-stage ranking architectures have been studied extensively by academic researchers [Matveeva et al., 2006, Cambazoglu et al., 2010, Wang et al., 2011, Tonellotto et al., 2013, Asadi and Lin, 2013, Capannini et al., 2016, Clarke et al., 2016, Chen et al., 2017, Mackenzie et al., 2018] and there is substantial documentation that many commercial applications are designed in this manner [Pedersen, 2010, Liu et al., 2017, Huang et al., 2020, Zou et al., 2021]. There has, of late, been much interest and excitement surrounding so-called \u201cdense retrieval\u201d techniques, or ranking with learned dense representations. 
This general approach, often called a bi-encoder design [Humeau et al., 2020], is perhaps best exempli\ufb01ed by DPR [Karpukhin et al., 2020] and ANCE [Xiong et al., 2021], but other examples abound [Gao et al., 2021b, Hofst\u00e4tter et al., 2020, Qu et al., 2021, Hofst\u00e4tter et al., 2021, Qu et al., 2021, Zhan et al., 2021, Lin et al., 2021c]. Dense retrieval is formulated as a representational learning problem where the task is to learn (nowadays, transformer-based) encoders that map queries and documents into dense \ufb01xed-width vectors (768 dimensions is typical). The goal is to maximize inner products between queries and relevant documents and to minimize inner products between queries and non-relevant documents. This is framed as a supervised machine learning problem, with relevance signals coming from a large dataset such arXiv:2110.01529v2 [cs.IR] 28 Dec 2021 \fas the MS MARCO passage ranking test collection [Bajaj et al., 2018]. Lin et al. [2021b] provide a recent survey of this general approach within the broader context of text ranking using BERT and other pretrained transformer-based language models. Experiments have shown that dense retrieval methods outperform \u201csparse retrieval\u201d methods, usually referring to bag-of-words exact-match methods such as BM25.1 This appears to be a robust and widely replicated \ufb01nding, and dense retrieval models are known to have been deployed in real-world search applications, for example, by Bing [Xiong et al., 2021] and Facebook [Huang et al., 2020]. Scaling such methods requires infrastructure that is very different from sparse retrieval: instead of relying on inverted indexes for query evaluation, as BM25 does, dense retrieval typically relies on approximate nearest neighbor (ANN) search; one standard technique exploits hierarchical navigable small world graphs (HNSW) [Malkov and Yashunin, 2020]. Thus, recent literature appears to have established a contrast between dense retrieval and sparse retrieval. The standard portrayal is that they represent fundamentally different approaches, requiring different problem formulations, different models, and different software infrastructures for ef\ufb01cient execution at scale. I argue, however, that this is not the case. Aspects of the ideas and observations presented here were originally captured in two previous papers [Lin and Ma, 2021, Lin et al., 2021a]. I build on both, with additional analysis and synthesis. The goal of this paper is to provide a conceptual framework that unites dense and sparse retrieval by demonstrating that they, in fact, have the same functional form, just with different parameterizations. This framework adopts a representational approach and breaks the core text retrieval problem into a logical scoring model and a physical retrieval model, allowing a researcher to separate how document relevance scores are computed from how retrieval is performed at scale. In terms of scoring models, dense and sparse retrieval can be characterized along two dimensions: the contrast between dense vs. sparse vector representations, and the contrast between supervised (learned) vs. unsupervised approaches. The main contribution of this conceptual framework is that it provides abstractions to help researchers make sense of the panoply of recently proposed retrieval models that, at \ufb01rst glance, defy orderly categorization. 
The proposed framework suggests a number of open research questions, providing a roadmap for future research, potentially tying together multiple sub-\ufb01elds within information retrieval. As a bonus, this conceptual framework establishes interesting connections to sentence similarity tasks in natural language processing and information access \u201ctechnologies\u201d prior to the dawn of computing. 2 A Conceptual Framework The formulation of text retrieval (alternatively, text ranking)\u2014what information retrieval researchers more precisely call ad hoc retrieval\u2014is typically de\ufb01ned as follows: Given an information need expressed as a query q, the text retrieval task is to return a ranked list of k documents2 {d1, d2 . . . dk} from an arbitrarily large but \ufb01nite collection of documents D = {di} that maximizes a metric of interest, for example, nDCG, AP, etc. These metrics vary, but they all aim to quantify the \u201cgoodness\u201d of the results with respect to the information need; in some cases, metrics can be understood more formally in terms of the utility that a user would derive from consuming the results. The retrieval task is also called top-k retrieval (or ranking), where k is the length of the ranked list (also known as the retrieval or ranking depth). We can break the text retrieval problem down into two distinct components, as follows: Logical Scoring Model Let us de\ufb01ne \u03b7q(q) and \u03b7d(d) as two arbitrary functions that take a query and a document (both sequences of terms), respectively, and map each into a \ufb01xed-width vector representation. As will become clear below, I will call these two functions \u201cencoders\u201d. 1Referring to bag-of-words exact-match methods as \u201csparse retrieval\u201d is a relatively new invention, primarily to establish contrast with dense retrieval methods. Nevertheless, I will use this terminology throughout the paper. 2Consistent with parlance in information retrieval, I use \u201cdocument\u201d throughout this paper in a generic sense to refer to the unit of retrieved text, even though in truth it may be a passage, a web page, a PDF, or some arbitrary span of text. 2 \fLet us further de\ufb01ne a comparison function \u03c6 that takes these \ufb01xed-width vector representations and computes a score. We have: s(q, d) \u2206 = \u03c6(\u03b7q(q), \u03b7d(d)) (1) We can interpret the score s as quantifying the degree to which d is relevant to query q, i.e., the basis for ranking a set of documents with respect to a query. For example, we desire to maximize scores for queries and their relevant documents and minimize scores for queries and non-relevant documents (note how this statement can be straightforwardly operationalized into a loss function). For dense retrieval methods, this design is commonly called a bi-encoder [Humeau et al., 2020]. More intuitively, we can understand the score s as capturing the probability of relevance: P(Relevant = 1|d, q) \u2206 = s(q, d). (2) Note that the domain of \u03b7q comprises arbitrary sequences of terms, including sequences that have never been encountered before. In contrast, the domain of \u03b7d is typically D, since we are retrieving from a given collection of documents (i.e., the corpus). The logical scoring model, as de\ufb01ned in Eq. 
(1), nicely captures why I characterize this proposed conceptual framework as a \u201crepresentational approach\u201d, since it focuses on matching representations derived from queries (information needs) and documents (texts to be searched). In the context of bag-of-words representations, this formulation puts the vocabulary mismatch problem [Furnas et al., 1987]\u2014overcoming the fact that information seekers and authors use different words to express the same concepts\u2014front and center in the design of retrieval models. As I will discuss in detail later, neural models are simply the source of (better) representations\u2014the structure of the ad hoc retrieval problem remains the same. In fact, across many diverse formulations of retrieval models, \u03c6 is de\ufb01ned as the inner product. Physical Retrieval Model Given the setup above, top-k retrieval can be de\ufb01ned as: arg top-k d\u2208D \u03c6(\u03b7q(q), \u03b7d(d)) (3) That is, given q, we wish to identify from D the k documents d1 . . . dk that have the highest scores s1 . . . sk. These {(di, si)}k i=0 pairs are usually referred to as the ranked list of results (sometimes called the \u201chits\u201d). If s is interpreted as a probability of relevance, as per Eq. (2), then the physical retrieval model represents a direct realization of the Probability Ranking Principle [Robertson, 1977], which states that documents should be ranked in decreasing order of the estimated probability of relevance with respect to the query. We might think of the logical scoring model and the physical retrieval model as providing what I argue to be the \u201cright\u201d abstractions for the text retrieval problem. So far, however, nothing in the presentation above captures information that isn\u2019t already common knowledge. I have simply adopted notation that may seem slightly peculiar, compared to how the text retrieval problem is usually presented (for example, in standard textbooks). Nevertheless, I will attempt to convince the reader that this isn\u2019t a pointless symbol manipulation exercise, but rather this framing of the problem provides a conceptual framework that bridges dense and sparse retrieval methods. 2.1 Applications to Dense and Sparse Retrieval Let us consider DPR [Karpukhin et al., 2020], a popular representative dense retrieval model, and see how it can be understood within this conceptual framework. DPR uses separate transformerbased encoders for queries and documents, \u03b7q and \u03b7d, respectively. Both encoders take the [CLS] representation from BERT [Devlin et al., 2019] as its output representation. In other words, the DPR encoders project queries and documents into \ufb01xed-width vector representations in some latent semantic space (by default, 768 dimensions). Relevance between query representations and document representations\u2014the comparison function \u03c6\u2014is de\ufb01ned in terms of inner products: \u03c6(\u03b7q(q), \u03b7d(d)) = \u03b7q(q)\u22ba\u03b7d(d) (4) 3 \fThe model is trained as follows: let R = {\u27e8qi, d+ i , d\u2212 i,1, d\u2212 i,2, . . . d\u2212 i,n\u27e9}m i=1 be the training set comprising m instances. Each instance contains a query q, a relevant passage d+, and n non-relevant passages d\u2212 1 , d\u2212 2 , ...d\u2212 n . DPR is trained with the following loss function: L(q, d+, d\u2212 1 , d\u2212 2 , ...d\u2212 n ) = \u2212log exp [\u03c6(\u03b7q(q), \u03b7d(d+))] exp [\u03c6(\u03b7q(q), \u03b7d(d+))] + Pn j=1 exp \u0002 \u03c6(\u03b7q(q), \u03b7d(d\u2212 j )) \u0003. 
(5) Non-relevant passages for a query are selected via in-batch negative sampling [Henderson et al., 2017], from examples associated with other queries in the same training batch. However, this is a technical detail and other models select negative examples in different ways. For example, ANCE [Xiong et al., 2021] searches for \u201chard negatives\u201d based on an earlier version of the document encoder itself. I have just described DPR in terms of the proposed conceptual framework outlined above. Now let\u2019s try to recast BM25 [Robertson et al., 1995] in the same framework. In fact, the mapping is pretty straightforward: The query encoder \u03b7q and the document encoder \u03b7d both generate sparse bag-of-words vector representations of dimension |V |, where V is the vocabulary of the corpus. For the output of the document encoder \u03b7d, as with any bag-of-words representation, each dimension corresponds to a term in the vocabulary, and each term is assigned a weight according to the BM25 scoring function. The query encoder \u03b7q uses a multi-hot representation, with a weight of one if the term is present in the query, and zero otherwise.3 The comparison function \u03c6 is, like DPR, de\ufb01ned in terms of the inner product. Viewed in this manner, we can clearly see that BM25 and DPR have the same functional form, parameterized by \u03b7q, \u03b7d, and \u03c6, and in fact, \u03c6 is the inner product in both cases. Explained in terms of abstractions such as interfaces in programming languages, by analogy the logical scoring model de\ufb01nes the abstract methods (\u03b7q, \u03b7d, and \u03c6) that speci\ufb01c retrieval models override with custom implementations, and here I have demonstrated that the abstraction covers both BM25 and DPR. This framework can be applied to the recent panoply of proposed dense retrieval methods in the literature, as well as nearly all families of bag-of-words exact-match models beyond BM25\u2019s probabilistic formulation, e.g., tf\u2013idf, query likelihood, divergence from randomness, etc. This conceptual framework allows us to draw a direct connection between dense retrieval and sparse retrieval as parametric variations of the same underlying logical scoring model. Finally, what about cross-encoders? Typical of this design is the monoBERT model [Nogueira and Cho, 2019, Lin et al., 2021b], where a query and a document are fed into a pretrained transformer as part of an input template, and the contextual representation of the [CLS] token is used for relevance classi\ufb01cation. Here, we can say that the comparison function \u03c6 is de\ufb01ned in terms of the transformer, and thus cross-encoders are still captured by the logical scoring model de\ufb01ned in Eq. (1). \u201cHiding\u201d transformer inference in the comparison function \u03c6 might seem like a sleight of hand, but the PreTTR reranking model proposed by MacAvaney et al. [2020] connects a \u201cfull\u201d crossencoder like monoBERT on the one hand to \u03c6-as-inner-product methods like DPR on the other hand. MacAvaney et al. began with the simple observation that query\u2013document attention prevents document representations from being computed of\ufb02ine; recall that in DPR, \u03b7d(\u00b7) does not depend on the query. Yet, it is precisely query\u2013document attention that allows cross-encoders to obtain high levels of effectiveness. PreTTR was designed with this insight: What if we limited query\u2013document attention to only the upper layers of the transformer? 
In such a design, document representations in the lower layers could be precomputed (and hence cached to accelerate inference). At one extreme end of the PreTTR design space, if all query\u2013document attention is eliminated, then we have essentially \u201ccleaved\u201d monoBERT into two disconnected networks, and the result looks quite similar to DPR, where each of the disconnected networks serves as an encoder (and all document representations can be precomputed and indexed for low-latency retrieval). At the other extreme, if no query\u2013document attention is eliminated, we have monoBERT. Thus, PreTTR provides the conceptual linkage that allows us to understand bi-encoders and cross-encoders as the two extreme cases of a single underlying design: it\u2019s all in the de\ufb01nition of the comparison function \u03c6. 3This is a slight simpli\ufb01cation; the original formulation of BM25 [Robertson et al., 1995] included a query weighting component, but this term is usually omitted in modern implementations [Kamphuis et al., 2020]. 4 \fDense Sparse Supervised DPR, ANCE DeepImpact, uniCOIL Unsupervised LSI, LDA BM25, tf\u2013idf Table 1: A taxonomy of logical scoring models. 2.2 Generalization of Logical Scoring Models Dense retrieval models such as DPR are often compared against sparse retrieval models such as BM25 in experimental evaluations, as Karpukhin et al. [2020] did in their paper. Not surprisingly, results show that dense retrieval models obtain higher effectiveness. This, however, is not a fair comparison. Dense retrieval methods represent an instance of representational learning\u2014the key here is learning. The output of the encoders are learned representations that bene\ufb01t from (large amounts of) training data under a standard supervised machine learning paradigm. In contrast, BM25 is unsupervised.4 Comparing a supervised method to an unsupervised method is fundamentally an apples-to-oranges juxtaposition; it should not be surprising that a supervised technique is more effective. As previously argued in Lin and Ma [2021], the encoders \u03b7\u00b7 should be organized along two distinct dimensions or properties: The \ufb01rst dimension contrasts dense vs. sparse vector representations for queries and documents. The second dimension distinguishes between supervised (learned) and unsupervised representations. Table 1 illustrates this taxonomy. DPR (along with nearly all dense retrieval methods today) are instances of learned dense representations. BM25 is an instance of an unsupervised sparse representation. This taxonomy immediately points to the existence of two other classes of logical scoring models. In fact, they correspond to models described in the literature that we can now categorize and unify in a single conceptual framework: Learned sparse representations The existence of learned dense representations such as DPR and unsupervised sparse representations such as BM25 suggests that there should exist a class of learned sparse representations. Learning sparse representations is by no means a new idea. If we \ufb01x the dimensions of the output representation to be the vocabulary (i.e., retaining a bag-of-words assumption), models for learned sparse representations become term weighting models\u2014that is, a supervised machine learning approach to learning term weights. 
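To make the shared functional form concrete, the toy sketch below scores a query-document pair with the same inner-product comparison function under both a dense and a sparse parameterization. The random projections, the saturation-style term weighting, and the tiny vocabulary are invented for illustration only; they correspond to no particular trained model and are not the BM25 formula.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB = {w: i for i, w in enumerate("the black cat sat on a mat".split())}
W_q = rng.normal(size=(len(VOCAB), 8))   # stand-ins for learned query/document
W_d = rng.normal(size=(len(VOCAB), 8))   # encoders; nothing here is trained

def phi(q_vec, d_vec):
    """The comparison function: an inner product in both parameterizations."""
    return float(np.dot(q_vec, d_vec))

def bow(text):
    """Term-frequency bag-of-words vector over the toy vocabulary."""
    v = np.zeros(len(VOCAB))
    for w in text.split():
        if w in VOCAB:
            v[VOCAB[w]] += 1.0
    return v

def eta_dense(text, W):
    """Dense encoder: project the text into a small latent space."""
    return bow(text) @ W

def eta_d_sparse(text):
    """Sparse document encoder: |V|-dimensional term weights
    (an invented saturation-style weighting)."""
    tf = bow(text)
    return 2.5 * tf / (1.5 + tf.sum())

q, d = "black cat", "the black cat sat on a mat"
dense_score = phi(eta_dense(q, W_q), eta_dense(d, W_d))
sparse_score = phi(bow(q), eta_d_sparse(d))   # query side is multi-hot
print(dense_score, sparse_score)
```

In both cases the score is produced by the same signature: encode the query, encode the document, and take an inner product; only the choice of basis and the way the weights are obtained differ.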
The earliest example I am aware of is Gordon [1988], who applied (what we might today call) representational learning on boolean vectors of descriptors using genetic algorithms, based on a small set of relevance judgments. These experiments might today be characterized as \u201ctoy\u201d, but all the key elements of learned sparse retrieval models (quite amazingly!) are present. Another example along these lines is the work of Wilbur [2001], who attempted to learn global term weights using TREC data. A bit later, Trotman [2005] used genetic programming to discover better BM25-like scoring functions. Quite simply, there is plenty of evidence that learned sparse representations aren\u2019t new. The \ufb01rst example of learned sparse representations in the \u201cBERT era\u201d is DeepCT [Dai and Callan, 2019], which uses a transformer to learn term weights based on a regression model, with the supervision signal coming from the MS MARCO passage ranking test collection. DeepCT has an interesting \u201cquirk\u201d: in truth, it only learns the term frequency (tf) component of term weights, but still relies on the remaining parts of the BM25 scoring function via the generation of pseudodocuments. The method also has a weakness: it only assigns weights to terms that are already present in the document, which limits retrieval to exact match. More generally, if we retain a bag-of-words assumption, term weighting models cannot address the vocabulary mismatch problem (more below). Note that dense representations do not have this issue since the dimensions of the vector representation capture some latent semantic space, not speci\ufb01c terms in the corpus vocabulary, and thus are able to capture what researchers call \u201csemantic matching\u201d. The exact-match weakness of DeepCT discussed above was resolved by the DeepImpact model [Mallia et al., 2021], which brought together two key ideas: the use of document expansion to 4Leaving aside simple tuning of parameters such as k1 and b. 5 \fidentify dimensions in the sparse bag-of-words representation that should have non-zero weights and a term weighting model based on a pairwise loss between relevant and non-relevant documents with respect to a query. Expansion terms are identi\ufb01ed by doc2query\u2013T5 [Nogueira and Lin, 2019], a sequence-to-sequence model for document expansion that predicts queries for which a text would be relevant. Since DeepImpact directly predicts term weights that are then quantized, it would be more accurate to call these weights learned impacts, since query\u2013document scores are simply the sum of weights of document terms that are found in the query. Furthermore, calling these impact scores draws an explicit connection to a thread of research in information retrieval dating back two decades [Anh et al., 2001]. Many other retrieval models can also be understood as instances of learned sparse representations, which allow for different parameterizations. Lin and Ma [2021] argued that another recent model called COIL [Gao et al., 2021a] is an instance of learned sparse representations, where the scoring model assigns each term a vector \u201cweight\u201d, stored in standard inverted lists. Lin and Ma demonstrated this connection by introducing a degenerate version of COIL called uniCOIL, where the weight vectors are collapsed down into a single dimension, thus yielding scalar weights. 
In this proposed conceptual framework, we might implement document expansion differently: uniCOIL originally used doc2query\u2013T5 for document expansion, but this was replaced by Zhuang and Zuccon [2021a] with an alternative model based on TILDE [Zhuang and Zuccon, 2021b]. They demonstrated that expansion using TILDE achieves comparable effectiveness on the MS MARCO passage ranking task, but with substantially lower inference costs. As another interesting variation, note that the query and document encoders need not be based on transformers (e.g., Zamani et al. [2018]), or even neural networks at all! For example, the retrieval model of Boytsov and Nyberg [2020], which exploits translation probabilities learned from query\u2013passage pairs, can be considered a (non-neural) learned sparse model. Synthesizing recent literature, there are three important observations about retrieval using learned sparse representations, which were originally noted by Lin and Ma [2021]: \u2022 Choice of basis. When contrasting learned dense representations with learned sparse representations, we see that nearly all recent proposals take advantage of transformers (Boytsov and Nyberg [2020] being a notable exception), so that aspect of the design is not a salient distinction. The critical difference is the basis of the vector representations: In nearly all current sparse approaches, the basis of the vector space remains \ufb01xed to the corpus vocabulary, i.e., they retain the bag-of-words assumption, even though in principle one could imagine sparse representations that abandon this assumption. In dense approaches, the model is given the freedom to \u201cchoose\u201d a new basis derived from transformer representations. This change in basis allows the encoder to represent the \u201cmeaning\u201d of texts in relatively small \ufb01xed-width vectors (say, 768 dimensions, compared to sparse vectors that may have millions of dimensions). This leads us to the next important observation: \u2022 Expansions for sparse representations. Without some form of expansion, learned sparse representations remain limited to (better) exact matching between queries and documents. The nature of sparse representations means that it is computationally impractical to consider non-zero weights for all elements in the vector (i.e., the vocabulary space). Thus, document expansion serves the critical role of proposing a set of candidate terms that should receive non-zero weights; since the number of candidate terms is small compared to the vocabulary size, the resulting vector remains sparse. Without some form of expansion, learned sparse representations cannot address the vocabulary mismatch problem [Furnas et al., 1987], because document terms not present in the query cannot contribute any score. This leads us to the third important observation: \u2022 Expansion and Term Weighting. The upshot of the above analysis is that retrieval methods based on learned sparse representations can be decomposed into an expansion and a term weighting component. For example, DeepCT performs no expansion and uses a regression-based scoring model. DeepImpact performs document expansion with doc2query\u2013T5, and as discussed above, the doc2query\u2013T5 model can be replaced with the TILDE document expansion model [Zhuang and Zuccon, 2021a]. Although many learned sparse models today have distinct expansion and weighting components, one can certainly imagine an integrated end-to-end model that jointly performs both. 
Nevertheless, such models will still need to tackle these distinct challenges: overcoming vocabulary mismatch and predicting term importance. 6 \fI will examine the impact of different design decisions for learned sparse representations in Section 3, drawing on recent experimental results from the literature. Unsupervised dense representations. The juxtaposition of DPR and BM25 suggests the existence of learned sparse representations. Establishing dense vs. sparse and supervised (learned) vs. unsupervised as the relevant dimensions of contrast suggests a class of unsupervised dense methods. While there is little work in this space of late, this label does describe techniques such as LSI [Deerwester et al., 1990, Atreya and Elkan, 2010] and LDA [Wei and Croft, 2006], which have been previously explored. I don\u2019t have much to say here, except that perhaps this gap might highlight a research direction worth renewed investigation. Based on this discussion, we see that all quadrants in the taxonomy of logical scoring models shown in Table 1 are populated with known examples from the literature. Furthermore, I demonstrate (hopefully, in a convincing manner) that all of these methods can be viewed as different \u03b7q, \u03b7d, and \u03c6 parameterizations of the logical scoring model captured in Eq. (1). 2.3 Logical/Physical Separation The logical scoring model in Eq. (1) describes how query\u2013document scores are to be computed with respect to an arbitrary (query, document) pair. The text retrieval problem, however, requires a system to produce a top-k ranking from an arbitrarily large collection of documents; this is the goal of what I\u2019ve called the physical retrieval model, Eq. (3). In other words, the end-to-end problem requires the execution of the logical scoring model at scale. The simplest physical retrieval model is to brute-force compute, given a query, the query\u2013document score for every document in the collection. In fact, for research experiments, this remains a common approach for dense retrieval methods, for example, using so-called \u201c\ufb02at\u201d indexes in Facebook\u2019s Faiss library [Johnson et al., 2021]. For sparse retrieval, in the early days of information retrieval prior to the development of inverted indexes and associated query evaluation algorithms (see Perry and Willett [1983]), this was also a common approach. Obviously, a brute-force scan of sizeable collections is impractical for low-latency querying, with the exception of a few specialized cases [Lempel et al., 2007, Wang and Lin, 2015]. For dense vector representations, the top-k retrieval problem is often called nearest neighbor (NN) search, and for a small set of \u03c6 comparison functions (inner products, L1 distance, and a few others), there exist ef\ufb01cient, scalable solutions. This problem has been studied for over two decades, with early solutions relying on locality-sensitive hashing [Indyk and Motwani, 1998, Gionis et al., 1999]. Recently, approaches based on hierarchical navigable small-world graphs (HNSW) [Malkov and Yashunin, 2020] have emerged as the preferred solution, and are implemented in a variety of open-source libraries. Note that these techniques solve the approximate nearest neighbor (NN) search problem, which means that the top-k they generate are not exact; see, for example, Indyk and Motwani [1998] for how this approximation is typically formalized. 
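As a deliberately simplified illustration of the bi-encoder logical scoring model and its simplest physical realization, the sketch below fixes φ to the inner product and performs exact top-k retrieval by a brute-force scan over a matrix of document vectors, which is what a "flat" index does. The encoder here is a random stand-in rather than any trained model; libraries that implement flat or HNSW indexes execute the same logical model, trading exactness for speed in the latter case.

```python
import numpy as np

def encode(texts, dim=768, seed=0):
    # Stand-in for a trained encoder eta(.); random unit vectors keep the
    # example self-contained and runnable.
    rng = np.random.default_rng(seed)
    vecs = rng.normal(size=(len(texts), dim)).astype(np.float32)
    return vecs / np.linalg.norm(vecs, axis=1, keepdims=True)

docs = ["passage one ...", "passage two ...", "passage three ..."]
doc_vecs = encode(docs)                 # eta_d applied offline to the corpus

def top_k(query, k=2):
    q_vec = encode([query], seed=1)[0]  # eta_q applied at query time
    scores = doc_vecs @ q_vec           # phi = inner product, computed for every document
    order = np.argsort(-scores)[:k]     # exact top-k ("flat" index behaviour)
    return [(docs[i], float(scores[i])) for i in order]

print(top_k("some query"))
```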
For sparse retrieval, nearly all models adopt the inner product as the comparison function \u03c6, and the top-k retrieval problem is solved using ef\ufb01cient query evaluation algorithms (mostly document-ata-time techniques) operating over inverted indexes. There has, literally, been decades of work on ef\ufb01cient implementations; see Tonellotto et al. [2018] for a survey. With respect to the design of physical retrieval models, there are two important points worth explicitly discussing: \u2022 De\ufb01ning \u03c6 as inner products. Although the comparison function \u03c6 can be arbitrarily de\ufb01ned in the logical scoring model, for both dense and sparse representations, de\ufb01ning \u03c6 in terms of inner products (and a small number of other functions) leads to ef\ufb01cient scalable solutions for the top-k retrieval problem. That is, an inner product formulation of \u03c6 is privileged or \u201cspecial\u201d. If a researcher \ufb01xes \u03c6 to be the inner product and only rede\ufb01nes \u03b7q and \u03b7d to create a new logical scoring model, then existing software infrastructure for ef\ufb01cient top-k retrieval (implemented in various software libraries) can be reused. In the sparse retrieval space, the development of different scoring models such as tf\u2013idf, BM25, query-likelihood, divergence from randomness, etc., can be characterized as such, as well as most recent work in the dense retrieval space. In other words, ef\ufb01cient physical retrieval comes \u201cfor free\u201d. 7 \f\u2022 Tight logical/physical coupling. The current state of affairs can be characterized as follows: for sparse representations, top-k retrieval is almost always performed using inverted indexes, typically with document-at-a-time scoring. For dense representations, the same role is usually \ufb01lled by HNSW, implemented in Faiss or some other toolkit. In other words, we observe tight coupling between the logical scoring model and the physical retrieval model. Thus, dense and sparse representations use completely different \u201csoftware stacks\u201d. The separation of the physical retrieval model from the logical scoring model espoused in this paper represents an explicit attempt to move away from the tight coupling discussed above. Why can\u2019t we perform nearest neighbor search using inverted indexes? Similarly, why can\u2019t we perform BM25 retrieval using HNSW? There is no reason why not, and in fact, both have already been tried! Teo\ufb01li and Lin [2019] evaluated a number of \u201ctricks\u201d for performing top-k ranking on dense vectors with inverted indexes using the open-source Lucene search library. Tu et al. [2020] and Lin et al. [2021a] explored using HNSW for BM25 ranking. As it turns out, dense retrieval using inverted indexes doesn\u2019t work very well, and sparse retrieval using HNSW appears to be attractive only in limited settings. In terms of both ef\ufb01ciency and effectiveness, using the \u201cother\u201d physical technique to execute the logical scoring model is worse than its \u201cnatural\u201d counterpart. Thus, it might be fair to say that sparse representations have an af\ufb01nity with inverted indexes and dense representations with HNSW. While possible in principle, there doesn\u2019t seem to be a compelling case at present to adopt a decoupled approach. So what\u2019s the point? 
At a high level, tight coupling presents optimizations opportunities, while loose coupling promotes \ufb02exibility\u2014and I argue that this is exactly what\u2019s happened here. Over the course of many decades, researchers have devised numerous optimizations speci\ufb01cally targeted at ef\ufb01cient query evaluation using inverted indexes for sparse retrieval models [Tonellotto et al., 2018]. Thus, it is entirely believable (and perhaps even expected) that HNSW\u2014a much newer technique that has received far less attention\u2014cannot compete. However, it is also plausible that as HNSW receives more attention for different use cases and hence more optimization efforts over time, the performance gap closes. Explicitly promoting logical/physical separation in a loosely-coupled approach, I argue, increases the range of usage scenarios in which HNSW (and future techniques) may be applied, and thus might hasten these developments. Even more interesting to consider are representations that are not really dense, but not sparse either. For such a design, the ability to \u201cmix and match\u201d logical scoring models and physical retrieval models presents an interesting future direction. I come back to discuss this point in more detail in Section 4. The other major bene\ufb01t of the logical/physical separation is that it allows us to understand multistage ranking as practical physical realizations of expensive logical scoring models. For example, in Section 2.1, I argued that cross-encoders like monoBERT are covered by the functional form presented in Eq. (1), where the comparison function \u03c6 is de\ufb01ned in terms of transformers. Due to query\u2013document attention, the monoBERT logical scoring model can only be faithfully realized by computing the scores of all (q, d) pairs, \u2200d \u2208D. This is obviously impractical, and thus one solution to the physical retrieval problem is to adopt a multi-stage design with a \u201ccheap\u201d \ufb01rst-stage retrieval.5 It seems a bit silly to phrase as follows, given the obviousness and triviality of the observation, but de\ufb01ning \u03c6 in terms of transformers does not admit an ef\ufb01cient top-k retrieval solution over large corpora. The transformer is not one of those privileged functional forms of \u03c6 discussed above. Supporting evidence for this view comes from an experimental result presented in Lin et al. [2021b] (Section 3.2.2), who began with a standard BM25 + monoBERT reranking design [Nogueira and Cho, 2019] and successively increased the reranking depth. They performed experiments that applied monoBERT to rerank increasingly larger candidate sets from \ufb01rst-stage retrieval on the MS MARCO passage corpus. On the associated passage ranking task, Lin et al. discovered that effectiveness increases (and then plateaus) as the reranking depth increases, out to 50k hits per query. Given the resource requirements of such an experiment, the authors did not increase reranking depth any further. These results can be interpreted as follows: As the reranking depth increases, the \ufb01nal ranking becomes increasingly closer to a brute-force scan over the entire collection (and, critically, in this method, the \ufb01nal ranking score does not take into account the BM25 retrieval score). This interpretation is consistent with the arguments I made above. 
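The reranking-depth experiment described above can be expressed in a few lines: a cheap first-stage scorer prunes the collection to k candidates, an expensive scorer reranks them, and as k approaches the collection size the pipeline converges to a brute-force scan under the expensive model. The scoring functions below are placeholders (not BM25 or monoBERT themselves), included only to show the structure.

```python
def cheap_score(query, doc):
    # Placeholder for a first-stage model such as BM25.
    return len(set(query.split()) & set(doc.split()))

def expensive_score(query, doc):
    # Placeholder for a cross-encoder such as monoBERT; imagine a transformer
    # forward pass over the concatenated (query, doc) pair.
    return sum(doc.count(w) for w in query.split()) / (1 + len(doc.split()))

def multi_stage(query, collection, k):
    # Stage 1: top-k candidates under the cheap model.
    candidates = sorted(collection, key=lambda d: -cheap_score(query, d))[:k]
    # Stage 2: final ranking under the expensive model only
    # (the first-stage score is discarded, as in BM25 + monoBERT reranking).
    return sorted(candidates, key=lambda d: -expensive_score(query, d))

collection = ["a b c", "a a b", "c d e", "b b b a"]
# As k grows toward len(collection), the output approaches a brute-force
# ranking of the entire collection under expensive_score.
for k in (1, 2, len(collection)):
    print(k, multi_stage("a b", collection, k))
```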
To be more precise, multi-stage ranking is an approximate physical retrieval realization of the monoBERT logical scoring model, since empirically a smaller k in first-stage top-k retrieval degrades effectiveness. In the limit, if k = |D|, then we're back to a brute-force computation of query-document scores for all documents in the collection. So, in summary, decoupling the logical scoring model from the physical retrieval model offers two conceptual advances: unifying retrieval with dense and sparse representations, and providing a new perspective for understanding multi-stage ranking.

Footnote 5: Using bag-of-words (unsupervised) sparse retrieval, with φ defined in terms of the inner product, no less!

2.4 Connections to Natural Language Processing

Lin et al. [2021b] argued that relevance, semantic equivalence, paraphrase, entailment, and a host of other "sentence similarity" tasks are all closely related, even though the first is considered an IR problem and the remainder are considered to be problems in NLP. What's the connection? Cast in terms of the conceptual framework proposed in this paper, I argue that these problems all share in the formalization of the logical scoring model, but NLP researchers usually don't care about the physical retrieval model. For example, supervised paraphrase detection is typically formalized as a "pointwise" estimation task of the "paraphrase relation":

P(Paraphrase = 1 | s1, s2) ≜ r(s1, s2).    (6)

That is, the task is to induce some scoring function based on training data that provides an estimate of the likelihood that two texts (sentences in most cases) are paraphrases of each other. In the popular transformer-based Sentence-BERT model [Reimers and Gurevych, 2019], the solution is formulated in a bi-encoder design:

r(s1, s2) ≜ φ(η(s1), η(s2)),    (7)

which has exactly the same functional form as the logical scoring model in Eq. (1)! The main difference, I argue, is that paraphrase detection for the most part does not care where the texts come from. In other words, there isn't an explicitly defined physical retrieval model. In fact, comparing Sentence-BERT with DPR, we can see that although the former focuses on sentence similarity tasks and the latter on passage retrieval, the functional forms of the solutions are identical. Both are captured by the logical scoring model in Eq. (1); the definitions of the encoders are also quite similar, both based on BERT, but they extract the final representations in slightly different ways. Of course, since DPR was designed for a question answering task, the complete solution requires defining a physical retrieval model, which is not explicitly present in Sentence-BERT. Pursuing these connections further, note that there are usage scenarios in which a logical scoring model for paraphrase detection might require a physical retrieval model. Consider a community question answering application [Srba and Bielikova, 2016], where the task is to retrieve from a knowledge base of (question, answer) pairs the top-k questions that are the closest paraphrases of a user's question. Here, there would be few substantive differences between a solution based on Sentence-BERT and DPR, just slightly different definitions of the encoders. One immediate objection to this treatment is that relevance differs from semantic equivalence, paraphrase, entailment, and other sentence similarity tasks in fundamental ways.
For example, the relations captured by sentence similarity tasks are often symmetric (with entailment being an obvious exception), i.e., r(s1, s2) = r(s2, s1), while relevance clearly is not. Furthermore, queries are typically much shorter than their relevant documents (and may not be well-formed natural language sentences), whereas for sentence similarity tasks, the inputs are usually of comparable length and represent well-formed natural language. I argue that these differences are primarily features of the annotation process for the training data and are captured in parametric variations of the logical scoring model de\ufb01ned in Eq. (1). In practical terms, these task distinctions affect implementation design choices. Is the relation we\u2019re trying to model symmetric? In that case, let\u2019s just use the same encoder for both inputs. Otherwise, having separate encoders makes more sense. Interestingly, results from the dense retrieval model ANCE [Xiong et al., 2021], which uses the same encoder for both queries and documents (despite obvious differences between the inputs), has been shown to work well empirically. Maybe these design choices aren\u2019t so important anyway? The goal of this discussion is to illustrate that the conceptual framework proposed in this paper establishes connections between information retrieval and natural language processing, with the hope 9 \fthat these connections can lead to further synergies in the future. Lin et al. [2021b] (Chapter 5) argued that until relatively recently, solutions to the text retrieval problem and sentence similarity tasks have developed in relative isolation in the IR and NLP communities, respectively, despite the wealth of connections. In fact, both communities have converged on similar solutions in terms of neural architectures (in the pre-BERT days). The proposed conceptual framework here makes these connections explicit, hopefully facilitating a two-way dialogue between the communities that will bene\ufb01t both. 2.5 Historical Connections Civilizations have grappled with the challenges of accessing stored information shortly after the invention of writing, when humankind\u2019s collective knowledge outgrew the memory of its elders. We can imagine some ancient scribe, perhaps somewhere in Mesopotamia, scrawling on clay tablets, wondering where he6 put those records from last month. Libraries and archives, of course, have existed for millennia, created precisely to tackle this challenge. In contrast, our conceptualization of information retrieval using computers is less than a century old. Although the technologies have evolved over millennia, from clay tablets to scrolls to books, and now digital information, the underlying goals have changed little. Interestingly, it is possible to apply the conceptual framework proposed in this paper to describe information retrieval in the eras that pre-dated computers. For centuries, human librarians have been assigning content descriptors to information objects (books, scienti\ufb01c articles, etc.). These descriptors (also known as \u201cindex terms\u201d) were usually selected by human subject matter experts and drawn from thesauri, \u201csubject headings\u201d, or \u201ccontrolled vocabularies\u201d\u2014that is, a prede\ufb01ned vocabulary. 
This process was known as "indexing" or "abstracting"; the original sense of the activity involved humans, and thus, an indexer was a human who performed indexing, not unlike the earliest uses of computers to refer to humans who performed computations by hand! In other words, a human indexer served the role of the document encoder ηd, and the output can be viewed as a multi-hot vector where each of the dimensions represents a content descriptor. Searching required the assistance of librarians who "interviewed" the information seeker to understand the parameters of the request, to translate the information need into the same representational space of these content descriptors. Thus, librarians served the role of the query encoder ηq. What about φ? Since the query and document representations are best characterized as multi-hot vectors, representation matching occurs in a boolean fashion. In fact, the logical/physical separation applies to this human-mediated approach as well! To "execute" retrieval in the simplest case of one-hot representations of content descriptors, the librarian consults a guide that maps these content descriptors into physical shelf locations, and then walks with the information seeker directly over to that location. More sophisticated physical retrieval models include the use of card catalogues.7 In the early days of computing, φ was implemented via the processing of punch cards,8 each of which encoded the representation of an information object (i.e., the output of the document encoder ηd). Thus, as a bonus, the conceptual framework proposed in this paper can help us understand information retrieval through the ages, even prior to the advent of computing.

Footnote 6: As yes, very likely a male.
Footnote 7: Millennials and even younger readers ask, "What are those?"
Footnote 8: Anyone other than boomers asks, "What are those?"

3 Experimental Results

We can apply the conceptual framework proposed in this paper to organize various dense and sparse retrieval methods that have been proposed in the literature. This structure can facilitate comparisons across different classes of methods, and analyzing models in a common framework can perhaps help us better draw generalizations. Table 2 shows the effectiveness of various models on the development queries of the MS MARCO passage ranking test collection [Bajaj et al., 2018], which has emerged in recent years as the most prominent dataset for training and benchmarking retrieval models. As a baseline, row (1) shows the effectiveness of BM25, which can be characterized as an unsupervised sparse retrieval method. Learned sparse retrieval methods are shown in the second main block of Table 2, from row (2) to row (8c): per the discussion in Section 2.3, I break out term weighting and document expansion components.

Unsupervised Sparse Representations                      MRR@10   Source
(1)   BM25                                               0.184    Nogueira and Lin [2019]

Learned Sparse Representations
      Term Weighting        Expansion                    MRR@10   Source
(2)   BM25                  doc2query–T5                 0.277    Nogueira and Lin [2019]
(3)   DeepCT                None                         0.243    Dai and Callan [2019]
(4)   SparTerm              MLM-based                    0.279    Bai et al. [2020]
(5)   DeepImpact            doc2query–T5                 0.326    Mallia et al. [2021]
(6a)  COIL-tok (d = 32)     None                         0.341    Gao et al. [2021a]
(6b)  COIL-tok (d = 32)     doc2query–T5                 0.361    Lin and Ma [2021]
(7a)  uniCOIL               None                         0.315    Lin and Ma [2021]
(7b)  uniCOIL               doc2query–T5                 0.352    Lin and Ma [2021]
(7c)  uniCOIL               TILDE                        0.349    Zhuang and Zuccon [2021b]
(8a)  SparTerm/SPLADE       None                         0.290    Formal et al. [2021b]
(8b)  SPLADE                MLM-based                    0.322    Formal et al. [2021b]
(8c)  DistilSPLADE-max      MLM-based                    0.368    Formal et al. [2021a]

Learned Dense Representations                            MRR@10   Source
(9)   ColBERT                                            0.360    Khattab and Zaharia [2020]
(10)  ANCE                                               0.330    Xiong et al. [2021]
(11)  DistillBERT                                        0.323    Hofstätter et al. [2020]
(12)  RocketQA                                           0.370    Qu et al. [2021]
(13)  TAS-B                                              0.347    Hofstätter et al. [2021]
(14)  ADORE + STAR                                       0.347    Zhan et al. [2021]
(15)  TCT-ColBERTv2                                      0.359    Lin et al. [2021c]

Dense–Sparse Hybrids                                     MRR@10   Source
(16)  CLEAR                                              0.338    Gao et al. [2021b]
(17)  COIL-full                                          0.355    Gao et al. [2021a]
(18a) TCT-ColBERTv2 (15) + BM25 (1)                      0.369    Lin et al. [2021c]
(18b) TCT-ColBERTv2 (15) + doc2query–T5 (2)              0.375    Lin et al. [2021c]
(18c) TCT-ColBERTv2 (15) + DeepImpact (5)                0.378    Lin and Ma [2021]
(18d) TCT-ColBERTv2 (15) + uniCOIL (7b)                  0.378    Lin and Ma [2021]

Table 2: Results on the development queries of the MS MARCO passage ranking task.

BM25 with doc2query–T5 document expansions [Nogueira and Lin, 2019], row (2), can be understood as using a neural sequence-to-sequence model for expansion, but retaining the BM25 weighting scheme; thus, learning is only applied in the expansion component. DeepCT [Dai and Callan, 2019], row (3), uses a regression-based term weighting model without any expansion. SparTerm [Bai et al., 2020], row (4), uses the masked language model (MLM) layer of BERT to generate expansion terms on which term weights are learned. DeepImpact [Mallia et al., 2021], row (5), combines the use of doc2query–T5 for expansion with a term weighting model trained using pairwise loss. Rows (6a) and (6b) present a contrastive condition comparing the same term weighting model, COIL [Gao et al., 2021a], with and without an expansion model; adding document expansion yields a two-point gain in effectiveness. With uniCOIL [Lin and Ma, 2021], which builds on COIL, the literature reports three contrastive conditions: without expansion, row (7a), and with two different expansion methods, doc2query–T5 in row (7b) and TILDE [Zhuang and Zuccon, 2021b] in row (7c). These results affirm the importance of document expansion, but suggest that the exact choice of the model might not matter so much, at least in the uniCOIL design, since the expansion model simply provides a candidate list of terms for the term weighting model to consider during training. Finally, row group (8) reports the effectiveness of a family of models called SPLADE, v1 [Formal et al., 2021b] and v2 [Formal et al., 2021a], both of which build on SparTerm [Bai et al., 2020]. These results corroborate the importance of term expansions in learned sparse representations.

In the third main block of Table 2, I summarize the effectiveness of a number of learned dense retrieval models on the development queries of the MS MARCO passage ranking test collection. Note that ColBERT [Khattab and Zaharia, 2020] uses the more expressive MaxSim operator to compare query and document representations (more discussion in Section 4); all other models use inner products. Comparing dense vs. sparse learned representations, there does not appear to be any discernible pattern. Earlier proposals for learned sparse models under-perform learned dense models, but this is likely because researchers have been investigating learned dense representations for a longer period of time. From the perspective of effectiveness, the latest dense and sparse methods appear to be on par with each other.
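The last block of Table 2, discussed next, combines a dense run and a sparse run through a simple linear combination of scores. A minimal sketch of that kind of score-level fusion follows; the min-max normalization, the interpolation weight, and the toy scores are assumptions for illustration, not details taken from the cited systems.

```python
def normalize(run):
    # Min-max normalize a run of {doc_id: score} so the two score
    # distributions are comparable before interpolation (an assumption of
    # this sketch, not necessarily what the cited papers do).
    lo, hi = min(run.values()), max(run.values())
    return {d: (s - lo) / (hi - lo + 1e-9) for d, s in run.items()}

def fuse(dense_run, sparse_run, alpha=0.5):
    dense_run, sparse_run = normalize(dense_run), normalize(sparse_run)
    docs = set(dense_run) | set(sparse_run)
    fused = {d: alpha * dense_run.get(d, 0.0) + (1 - alpha) * sparse_run.get(d, 0.0)
             for d in docs}
    return sorted(fused.items(), key=lambda kv: -kv[1])

dense_run = {"d1": 71.2, "d2": 70.4, "d3": 65.0}   # toy scores from a dense retriever
sparse_run = {"d2": 14.1, "d4": 12.9, "d1": 9.3}   # toy scores from a sparse (impact) retriever
print(fuse(dense_run, sparse_run))
```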
The \ufb01nal block of Table 2 shows the results of dense\u2013sparse hybrids. In particular, rows (18a\u2013d) present results of the TCT-ColBERTv2 dense retrieval model [Lin et al., 2021c] with different learned sparse retrieval models using a simple linear combination of scores. The only point I wish to make here is that dense and sparse representations appear to offer complementary relevance signals, such that combining evidence from both sources yields further increases in effectiveness compared to ranking with each individually. However, it appears that hybrid fusion is less sensitive to the effectiveness of the individual models\u2014for example, DeepImpact is less effective than uniCOIL, but both achieve the same effectiveness in a fusion context, as shown in row (18c) vs. row (18d). Furthermore, fusion with doc2query\u2013T5 achieves nearly the same level of effectiveness, shown in row (18b), even though the method alone is far less effective. Overall, I believe that dense\u2013sparse hybrids represent the state of the art in single-stage retrieval models today (i.e., what can be achieved without reranking). 4 Discussion The conceptual framework described in this paper clari\ufb01es the relationship between recently proposed dense and sparse retrieval methods, and experimental results presented in the previous section begin to help us understand the impact of different design choices. Furthermore, this proposed framework suggests a number of open research questions, which provide a roadmap for future work. I discuss these below: Out-of-distribution inference In the logical scoring model, explicitly establishing a contrast between supervised (learned) vs. unsupervised representations makes it obvious why DPR is more effective than BM25. However, in a supervised machine-learning paradigm, we are immediately led to the obvious follow-up question: What happens if the trained models are applied to out-of-distribution data? Phrased differently, what is the effectiveness of learned representations in a zero-shot setting? Cast into the same parlance for comparison purposes, BM25 is always applied in a \u201czero-shot\u201d manner (although admittedly, such a statement sounds odd). In the information retrieval context, since training data typically comprise (query, relevant document) pairs, out of distribution could mean a number of different things: (1) the document encoder is fed text from a different domain, genre, register, etc. than the training documents, (2) the query encoder is fed queries that are different from the training queries, (3) the relationship between input query\u2013document pairs at inference time differs from the relationship captured in the training data (e.g., task variations), or (4) a combination of all of the above. In fact, we already know the answer, at least in part: learned representations often perform terribly in out-of-distribution settings when applied in a zero-shot manner. Evidence comes from the BEIR benchmark [Thakur et al., 2021], which aims to evaluate the effectiveness of dense retrieval models across diverse domains. Results show that, in many cases, directly applying a dense retrieval model trained on one dataset to another dataset sometimes yields effectiveness that is worse than BM25. Complementary evidence comes from Li et al. [2021], who found that for passage retrieval in question answering, training DPR on one dataset and testing on another can lead to poor results. 
In their experiments, the corpus was \ufb01xed (Wikipedia articles), but the questions are generated in different ways; the end result is that the trained encoders often generalize poorly across datasets. In contrast to BM25, which \u201cjust works\u201d regardless of the corpus and queries in a \u201czero-shot\u201d manner, learned representations may perform poorly in out-of-distribution settings. This immediately suggests one important research direction, to better cope with these issues. For example, Li et al. [2021] proposed model uncertainty fusion as a solution. The BEIR benchmark [Thakur et al., 2021] provides a resource to evaluate progress, and the latest results show that learned sparse representations are 12 \fable to outperform BM25 [Formal et al., 2021a]. At a high level, there are at least three intertwined research questions: 1. What are the different ways in which models can be applied in an out-of-distribution manner and what is the impact of each? The four ways I\u2019ve sketched above provide a starting point, but could be further re\ufb01ned with experimental support. For example, is effectiveness degradation more severe with out-of-distribution documents or queries? Can we more formally characterize \u201cout-of-distribution\u201d\u2013ness? 2. Given the answers to the above questions, how do we then detect when an input instance is out of distribution? 3. And once we identify a potentially \u201cproblematic\u201d instance, what mitigation techniques can we bring to bear? In other words, we must understand the scope of the problem, identify when the problem occurs, and then \ufb01nally mitigate the problem. Without addressing these challenges, the real-world deployment of learned representations will be hampered by their inability to generalize to arbitrary information retrieval scenarios, in the way that BM25 isn\u2019t.9 I am heartened to see that the community has already begun to explore these interesting and important research questions, but there remains much more work to be done. Quality\u2013Space\u2013Time\u2013Cost tradeoffs By situating dense and sparse retrieval models in a uni\ufb01ed conceptual framework, comparisons between different methods become more meaningful. There are four dimensions along which different retrieval models should be compared: quality (e.g., retrieval effectiveness), space (e.g., index size), time (e.g., query latency), and cost (e.g., dollars per query). Naturally, most papers today focus on output quality, but the space requirements of dense vector representations have drawn interest from researchers as well. Retrieval models that depend on dense vector representations consume a large amount of space, which often translates into large memory requirements since many approximate nearest neighbor search libraries require memory-resident index structures for ef\ufb01cient querying. For example, a minimal Lucene index in Anserini [Yang et al., 2018], suf\ufb01cient to support bag-of-words querying on the MS MARCO passage corpus (8.8M passages), takes up only around 660 MB. A comparable HNSW index with 768-dimensional vectors in Faiss occupies 42 GB (with typical parameter settings), which is many times larger. As another example, Ma et al. [2021a] reported that the size of the original DPR (\ufb02at) vector index on the Wikipedia corpus is about 61 GB,10 compared to 2.4 GB for a comparable Lucene inverted index. 
This 25\u00d7 increase in space only yields an average gain of 2.5% in top-100 accuracy across \ufb01ve datasets [Ma et al., 2021b]. While researchers have begun to explore different techniques for reducing the space requirements for dense representations, for example, via dimensionality reduction or quantization [Izacard et al., 2020, Yamada et al., 2021, Ma et al., 2021a], there is much more work to be done. I am optimistic that the community will make headway here because, as already mentioned above, the comparisons to sparse representations are \u201cnot fair\u201d because inverted indexes have bene\ufb01ted from many decades of optimizations, particularly in the coding of sparse integer sequences, whereas researchers have only begun to tackle the impractically large space requirements associated with dense retrieval models. Finally, speed (more generally, performance characterized in terms of query latency, throughput, etc.) and cost (of hardware, power consumption, amount of CO2 generated, etc.) are issues that have received comparatively little attention, but are obviously important in real-world applications. I mention these considerations in tandem because there are many examples where, holding everything else \ufb01xed, speed and cost can be traded off for each other. A simple example is GPU vs. CPU inference for retrieval models that require neural inference on queries, which must be performed at search time. Since queries are usually short, CPU inference, even with transformer models, can be tolerable, but obviously, GPU inference can reduce query latency but incur additional hardware costs. As another example, in many real-world search applications, query latency can be controlled 9Another way to say this: Suppose we\u2019re faced with a completely new retrieval task in a highly specialized and obscure domain. I think most researchers and practitioners would unequivocally suggest using BM25 as the baseline, and would be con\ufb01dent of obtaining \u201creasonable\u201d results. I don\u2019t think we have that same con\ufb01dence with any learned representations at present. 10An HNSW index suitable for low-latency querying would be even larger. 13 \fin partitioned architectures by adjusting the size of each partition (also called a shard): the smaller each partition, the lower the query latency, but at the cost of needing more hardware (and hence cost) for a given corpus size. While there have been some discussions of these issues in blog posts11 and on social media, these considerations have not attracted much attention from researchers. Moving forward, I believe that an accurate characterization of dense and sparse retrieval methods requires clearly evaluating quality\u2013space\u2013time\u2013cost tradeoffs. This to me is exciting because it provides an opportunity for collaborations between \u201cmodeling\u2013minded\u201d, \u201calgorithm\u2013minded\u201d, and \u201cef\ufb01ciency\u2013minded\u201d researchers.12 \u201cMixing and matching\u201d logical scoring models and physical retrieval models Dense and sparse representations are not discrete categories, but rather lie on a continuum with many variations. Currently, the size (in terms of the number of dimension) of (most) sparse representations equals the vocabulary size of the corpus, and dense representations typically have hundreds of dimensions (768 being a common setting). 
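Since the number of dimensions and the bytes per dimension directly drive the space side of these tradeoffs, the arithmetic is worth making explicit before asking what lies between dense and sparse. The sketch below is a back-of-envelope calculation using the corpus size quoted above; the 4-byte-per-float assumption and the quantization settings are illustrative choices, not measurements.

```python
# Back-of-envelope index sizes for the MS MARCO passage corpus, using the
# figures quoted above (8.8M passages, 768-dimensional vectors).
passages = 8_800_000
dims = 768

float32_bytes = passages * dims * 4   # raw vectors at full precision
float16_bytes = passages * dims * 2   # half precision
pq_bytes      = passages * 64         # e.g., 64-byte product-quantized codes

for label, size in [("float32 (flat)", float32_bytes),
                    ("float16", float16_bytes),
                    ("64-byte PQ codes", pq_bytes)]:
    print(f"{label:>18}: {size / 2**30:6.1f} GiB")
```

Raw float32 vectors alone come to roughly 25 GiB; the gap to the reported 42 GB HNSW index is presumably graph links and other overhead, all of it dwarfing the roughly 0.66 GB Lucene inverted index for the same corpus.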
What if we \u201cdensify\u201d sparse representations and \u201csparsify\u201d dense representations\u2014to yield, say, vectors that are on the order of a few thousand dimensions? We might characterize these vectors as \u201cnot really dense, but not sparse either\u201d. For such a logical scoring model, what physical retrieval model makes the most sense in terms of different tradeoffs? In Section 2.3, I advocated for the separation of the logical scoring model from the physical retrieval model. A loosely coupled approach provides \ufb02exibility and the ability to make progress independently on different aspects of the overall problem. Currently, there is an af\ufb01nity between sparse representations and query evaluation using inverted indexes on the one hand, and dense representations and HNSW on the other. But what happens when the representations move out of their respective \u201csweet spots\u201d? As we \u201cdensify\u201d sparse representations, the performance of inverted indexes is expected to degrade. As we \u201csparsify\u201d dense representations, the performance of HNSW is expected to degrade. Thus, we expect some crossover point in the middle? Perhaps for vectors that are \u201cnot really dense, but not sparse either\u201d, neither approach will work well. This suggests a need to build index structures coupled with algorithmic innovations for top-k retrieval on such vector representations. I believe that this is where a clean abstraction and the ability to \u201cmix and match\u201d different logical scoring models with physical retrieval models will really become bene\ufb01cial. We can imagine the development of different data structures and algorithms targeted to different types of representations\u2014 beyond the (basically, two) limited options we have today. Depending on the characteristics of the vector representations, for example, the number of dimensions, the entropy of the values, the degree of isotropy, etc., different physical retrieval models might be appropriate. This is taking a page out of the playbook of database researchers\u2014for example, it is precisely the logical/physical abstraction that has enabled the development of very different types of database engines such as row stores and column stores for different application scenarios [Stonebraker et al., 2005]. And who knows, maybe we can even learn physical retrieval models [Idreos et al., 2019]! Alternative comparison functions For both sparse and dense representations, the inner product holds a privileged position as the comparison function \u03c6 because ef\ufb01cient solutions already exist for the top-k retrieval problem. As I already explained in Section 2.3, \ufb01xing \u03c6 to be the inner product allows a researcher to focus on the logical scoring model in isolation (notwithstanding the issues discussed above). This is a good compromise because limiting \u03c6 to be the inner product still leaves open the entire space of neural architectures for designing the encoders\u2014and indeed, most dense retrieval research operates under this constraint. The framework does not, however, preclude alternative de\ufb01nitions of \u03c6\u2014rather, it just means that a \u201ccustom\u201d comparison function may need its own dedicated physical retrieval model (unless, that is, we solve the challenges discussed above). 
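To make the separation concrete, the sketch below (an illustration of the abstraction argued for here, not an existing library interface) fixes an inner-product logical scoring model and swaps physical retrieval models behind a common top-k interface; the "approximate" backend is a crude stand-in for HNSW or any other data structure.

```python
from abc import ABC, abstractmethod
import numpy as np

class PhysicalRetrieval(ABC):
    """Executes top-k retrieval for a fixed logical scoring model (phi = inner product)."""
    @abstractmethod
    def index(self, doc_vecs): ...
    @abstractmethod
    def search(self, query_vec, k): ...

class BruteForce(PhysicalRetrieval):
    # Exact: scores every document (a "flat" index).
    def index(self, doc_vecs):
        self.doc_vecs = doc_vecs
    def search(self, query_vec, k):
        scores = self.doc_vecs @ query_vec
        return np.argsort(-scores)[:k]

class ApproximateANN(PhysicalRetrieval):
    # Placeholder for HNSW or another approximate structure; here it merely
    # subsamples the corpus to mimic inexact candidate generation.
    def index(self, doc_vecs):
        self.doc_vecs = doc_vecs
    def search(self, query_vec, k):
        candidates = np.arange(0, len(self.doc_vecs), 2)
        scores = self.doc_vecs[candidates] @ query_vec
        return candidates[np.argsort(-scores)[:k]]

doc_vecs = np.random.default_rng(0).normal(size=(1000, 64)).astype(np.float32)
query_vec = np.random.default_rng(1).normal(size=64).astype(np.float32)
for backend in (BruteForce(), ApproximateANN()):
    backend.index(doc_vecs)
    print(type(backend).__name__, backend.search(query_vec, k=5))
```

The encoders and φ are decided once at the logical level, while the data structure that executes them can change underneath; a custom comparison function, by contrast, may force a bespoke physical realization.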
A good example is ColBERT [Khattab and Zaharia, 2020], which introduced a comparison function called \u201cMaxSim\u201d that computes query\u2013document similarity as the sum of the maximum cosine similarities between each query term and the \u201cbest\u201d matching document term; cf. Kusner et al. [2015]. To ef\ufb01ciently compute top-k rankings in terms of MaxSim, the authors \ufb01rst built an index for approximate nearest neighbor search over all tokens in the document collection, where each token retains a pointer back to its source document. Retrieval 11For example, the \u201cPretrained Transformer Language Models for Search\u201d series at https://blog.vespa.ai/. 12More colloquially, our colleagues who get their kicks reducing L1 cache misses and bits per posting can now get in on the neural action. 14 \fis performed by \ufb01rst fetching candidate documents using this index (by following the pointers) and then computing MaxSim for all query\u2013document candidates. In other words, the authors presented a two-stage physical retrieval model speci\ufb01cally for their novel comparison function. In fact, ColBERT offers a good example where many of the discussion threads above come together. Khattab and Zaharia described a design where the logical scoring model and the physical retrieval model are tightly coupled. Separating the two might accelerate future advances by enabling independent progress. On the one hand, researchers could rely on MaxSim as \u03c6 and explore different query or document encoders without worrying about retrieval ef\ufb01ciency. On the other hand, another group of researchers could focus on optimizing MaxSim calculations over large document collections without worrying about whether such optimizations would be useful. In this way, MaxSim might gain a \u201cprivileged\u201d status, alongside the inner product, in the selection of the comparison function \u03c6 for retrieval model design. In addition, ColBERT provides an illustrative case study for the need to characterize quality\u2013space\u2013 time\u2013cost tradeoffs in order to compare retrieval models in a \u201cfair\u201d manner. Khattab and Zaharia presented their innovation as a model that is just as effective as a retrieve-then-rerank approach using BERT-based cross-encoders [Nogueira and Cho, 2019], but is substantially faster. This, however, comes at the cost of huge index sizes\u2014154 GB for the MS MARCO passage corpus (compared to 660 MB for an inverted index). While the authors did discuss this limitation, when all four dimensions of evaluation are considered (quality, space, time, and cost), it is dif\ufb01cult to see ColBERT as a practical solution for real-world problems. Multi-stage ranking as physical optimizations In Section 2.3, I argued that multi-stage ranking architectures are simply practical implementations of expensive logical scoring models (based on brute-force scans). Here, I elaborate on this observation, which also bolsters the case for logical/physical separation. Any multi-stage ranking pipeline where the scores from each stage are additive can be converted into the functional form of Eq. (1) by \u201ccomposing\u201d the models at each stage (including \ufb01rst-stage retrieval). In a ranking pipeline where the later stages do not incorporate evidence from the earlier stages (that is, stages are used only to reduce the candidates under consideration), such as BM25 + monoBERT [Nogueira and Cho, 2019], the score of the \ufb01nal reranking stage is the logical scoring model. 
In either case, top-k retrieval can be performed using a brute-force scan through the entire document collection based on the logical scoring model. Thus, multi-stage pipelines can be viewed as hand-crafted optimizations in the physical retrieval model. In other words, with a clean logical/physical separation, researchers and practitioners can focus on developing the logical scoring model, leaving the realization of the physical retrieval model as a separate exercise. In the tightly coupled architectures of today, the logical scoring model and the physical retrieval model must be co-designed to produce the \u201cright\u201d multi-stage pipeline. This is inelegant, as designers are mixing elements from different levels of abstraction: what to compute with how to compute. However, this conceptual tangle need not be the only approach. For example, we might build automated processes that \u201ccompile\u201d the speci\ufb01cation of the logical scoring model into a physical realization, subjected to declaratively speci\ufb01ed constraints. These hypothetical logicalto-physical compilers can even be machine learned! The work of Wang et al. [2011] provides an example of how this could be accomplished in the context of feature-based learning to rank; perhaps these ideas from a decade ago could be dusted off for a fresh take? Unsupervised dense representations The conceptual framework proposed in this paper characterizes logical scoring models along two dimensions. The four-quadrant taxonomy illustrated in Table 1 highlights a space that has not received much attention of late. I don\u2019t have much to say here, except that perhaps this gap might suggest a research direction worth renewed investigation. Other odds and ends If the logical scoring model and the physical retrieval model represent abstractions that are helpful in advancing IR research, what other such abstractions might exist? And a related question: So far, the conceptual framework proposed here has been applied primarily to deepen our understanding of ad hoc retrieval. What, if any, implications does this framework hold for other areas of information seeking beyond the design of retrieval models? 15 \fAddressing the \ufb01rst question: An important abstraction that immediately comes to mind, although hardly novel, is that of a token stream as the input to an inverted indexer (and correspondingly, to a query processor prior to retrieval). That is, an inverted indexer merely requires a stream of discrete tokens on which to operate, and is agnostic with respect to how the tokens are generated from arbitrary natural language text. In the canonical case, these tokens correspond to \u201cwords\u201d in the language (however de\ufb01ned) after some amount of analysis (e.g., stemming), but researchers have discovered, that at least for some languages, character n-grams (which have no basis in linguistic reality) work well [Foo and Li, 2004, McNamee and May\ufb01eld, 2004]. Much along the same lines, Xue et al. [2021] recently explored pretrained neural sequence-to-sequence models based on byte sequences and showed that such models are competitive to token-based models, but more robust to noisy inputs. Perhaps it is worth reconsidering the information retrieval tokenization pipeline in light of these latest results? 
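The token-stream abstraction mentioned above is easy to make concrete: the indexer consumes whatever discrete tokens the analyzer emits and is indifferent to whether they are stemmed words, character n-grams, or bytes. The sketch below uses toy analyzers, not any particular system's analysis pipeline.

```python
from collections import defaultdict

def word_tokens(text):
    return text.lower().split()

def char_ngrams(text, n=4):
    s = text.lower().replace(" ", "_")
    return [s[i:i + n] for i in range(len(s) - n + 1)]

def build_inverted_index(docs, analyzer):
    # The indexer only sees a stream of tokens; the analyzer is pluggable.
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for tok in analyzer(text):
            index[tok].add(doc_id)
    return index

docs = {"d1": "Neural ranking models", "d2": "Learned sparse retrieval"}
print(sorted(build_inverted_index(docs, word_tokens)))      # word-level vocabulary
print(sorted(build_inverted_index(docs, char_ngrams))[:5])  # character n-gram vocabulary
```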
Addressing the second question on whether the conceptual framework presented in this paper has anything meaningful to say about other areas of information retrieval and information seeking more broadly: I can think of two answers. First, it has long been observed that information \ufb01ltering and ad hoc retrieval are intimately related, what Belkin and Croft [1992] have called \u201ctwo sides of the same coin\u201d. At a high level, ad hoc retrieval is concerned with a stream of queries posed against a (relatively) static collection of documents, whereas information \ufb01ltering is concerned with a stream of documents posed against a (relatively) static collection of queries. Filtering has a long history that dates back to the 1960s [Housman and Kaskela, 1970], which evolved into the TREC Filtering Tracks [Lewis, 1995] in the late 1990s and the general research program known as Topic Detection and Tracking (TDT) [Allan, 2002] in the early 2000s. The most recent incarnations of \ufb01ltering include the TREC Incident Streams Tracks [Buntain et al., 2020], which aim to automatically process social media streams during emergency situations to triage information and aid requests for emergency service operators. This evaluation series has its roots in the TREC Real-Time Summarization Tracks [Lin et al., 2016], where systems automatically monitor streams of social media posts to keep users up to date on topics of interest. I believe that a more succinct way to convey the connections between \ufb01ltering and ad hoc retrieval (cf. Belkin and Croft [1992]) is to say that they share logical scoring models\u2014at least in terms of Eq. (1), although the relevance criteria are often different\u2014but may require different physical retrieval models. Although information \ufb01ltering, in fact, can be physically implemented via inverted indexes, such a realization can be somewhat awkward (a side effect of the \u201ctight coupling\u201d approach). A clean separation between the logical and physical can help researchers focus on representations and scoring models without arti\ufb01cial constraints on execution. More clearly-de\ufb01ned sub-problems, I believe, will lead to accelerated progress in the \ufb01eld, with all the advantages I\u2019ve already discussed above. Second, I believe that the conceptual framework proposed here can capture relevance feedback (pseudoor based on human judgments), and more generally, interactive retrieval. The logical scoring model as currently de\ufb01ned computes the query representation from the query itself, i.e., \u03b7q(q). However, this formalism can be extended to take into account previous queries in a session, e.g., \u03b7q(qi; q