diff --git "a/abs_29K_G/test_abstract_long_2405.00982v1.json" "b/abs_29K_G/test_abstract_long_2405.00982v1.json" new file mode 100644--- /dev/null +++ "b/abs_29K_G/test_abstract_long_2405.00982v1.json" @@ -0,0 +1,847 @@ +{ + "url": "http://arxiv.org/abs/2405.00982v1", + "title": "On the Evaluation of Machine-Generated Reports", + "abstract": "Large Language Models (LLMs) have enabled new ways to satisfy information\nneeds. Although great strides have been made in applying them to settings like\ndocument ranking and short-form text generation, they still struggle to compose\ncomplete, accurate, and verifiable long-form reports. Reports with these\nqualities are necessary to satisfy the complex, nuanced, or multi-faceted\ninformation needs of users. In this perspective paper, we draw together\nopinions from industry and academia, and from a variety of related research\nareas, to present our vision for automatic report generation, and -- critically\n-- a flexible framework by which such reports can be evaluated. In contrast\nwith other summarization tasks, automatic report generation starts with a\ndetailed description of an information need, stating the necessary background,\nrequirements, and scope of the report. Further, the generated reports should be\ncomplete, accurate, and verifiable. These qualities, which are desirable -- if\nnot required -- in many analytic report-writing settings, require rethinking\nhow to build and evaluate systems that exhibit these qualities. To foster new\nefforts in building these systems, we present an evaluation framework that\ndraws on ideas found in various evaluations. To test completeness and accuracy,\nthe framework uses nuggets of information, expressed as questions and answers,\nthat need to be part of any high-quality generated report. Additionally,\nevaluation of citations that map claims made in the report to their source\ndocuments ensures verifiability.", + "authors": "James Mayfield, Eugene Yang, Dawn Lawrie, Sean MacAvaney, Paul McNamee, Douglas W. Oard, Luca Soldaini, Ian Soboroff, Orion Weller, Efsun Kayi, Kate Sanders, Marc Mason, Noah Hibbler", + "published": "2024-05-02", + "updated": "2024-05-02", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.IR" + ], + "label": "Original Paper", + "paper_cat": "LLM Fairness", + "gt": "Large Language Models (LLMs) have enabled new ways to satisfy information\nneeds. Although great strides have been made in applying them to settings like\ndocument ranking and short-form text generation, they still struggle to compose\ncomplete, accurate, and verifiable long-form reports. Reports with these\nqualities are necessary to satisfy the complex, nuanced, or multi-faceted\ninformation needs of users. In this perspective paper, we draw together\nopinions from industry and academia, and from a variety of related research\nareas, to present our vision for automatic report generation, and -- critically\n-- a flexible framework by which such reports can be evaluated. In contrast\nwith other summarization tasks, automatic report generation starts with a\ndetailed description of an information need, stating the necessary background,\nrequirements, and scope of the report. Further, the generated reports should be\ncomplete, accurate, and verifiable. These qualities, which are desirable -- if\nnot required -- in many analytic report-writing settings, require rethinking\nhow to build and evaluate systems that exhibit these qualities. 
To foster new\nefforts in building these systems, we present an evaluation framework that\ndraws on ideas found in various evaluations. To test completeness and accuracy,\nthe framework uses nuggets of information, expressed as questions and answers,\nthat need to be part of any high-quality generated report. Additionally,\nevaluation of citations that map claims made in the report to their source\ndocuments ensures verifiability.", + "main_content": "INTRODUCTION The emergence of generative Large Language Models (LLMs) has brought with it the ability to automatically generate all kinds of text. With it, a host of problems\u2014old and new\u2014have (re)emerged that affect these generated texts. The fields of Information Retrieval (IR) and Natural Language Processing (NLP) both have important roles in building new methods to improve text generation and in designing approaches to evaluate the quality of these methods. LLMs can enable new ways for people to satisfy various information needs. Simple information needs (e.g., factoids) can be answered with relatively short generated responses pointing to a single source. However, when information needs are complex, nuanced, or multifaceted, a suitable response must also be more complex. They need to draw together numerous facts gathered from potentially multiple sources to completely and faithfully respond to the information need. We refer to this longer-form answer generation as a \u201creport\u201d on a user-specified topic. More formally, we define a report as a text that attempts to satisfy an explicitly stated information need by finding documents arXiv:2405.00982v1 [cs.CL] 2 May 2024 \fSIGIR \u201924, July 14\u201318, 2024, Washington, DC, USA J. Mayfield et al. in a corpus (potentially a mixture of text, images, tables, etc.) that contain relevant information, expressing that information in the text, and providing appropriate citations from the report to the supporting documents. We envision a high-quality report as the ideal response to a user with a complex task in mind, since such a report would succinctly, coherently, and verifiably cover all the information in a corpus pertinent to their information need. Note that this definition makes the framework better suited to reports that inform an analyst than to reports that generate novel analyses. Report writing can be viewed as a natural downstream task of Retrieval Augmented Generation (RAG), where faithfulness has a focus of study [52, 73, 76]. In this view, an LLM generates the report using the report request as part of the prompt and searches the document collection for relevant information that can be added to the prompt to ensure the report\u2019s accuracy. Report generation can also be thought of as summarization. From the summarization viewpoint, a report is an attributed task-based informative abstractive multi-document summary (see Section 3.1.1 for a more detailed explanation of these categories). Such a report might also include portions that are not summaries at all, but are, for example, introductory material or comparisons of the summarized information. We posit that all of these viewpoints are valid, and each informs evaluation for report generation. This work describes an abstract framework for evaluating automated report generation, ARGUE (Automated Report Generation Under Evaluation), that is built on top of lessons learned from prior evaluation approaches in information retrieval, summarization and text generation. 
It will be used by the TREC track NeuCLIR in its report generation task.1 The ARGUE framework builds a foundation for a broader research agenda in evaluating automatically generated long-form text beyond reports. Some of ARGUE\u2019s most important features are: \u2022 We use the concept of information nuggets out of the summarization literature to capture the content a report should contain. We express each nugget as a question together with a list of acceptable answers to that question. \u2022 Citations are a key report component. A citation is a pointer from a source element in the report (typically a sentence) to a target element in a document (typically the entire document). \u2022 We propose that precision and recall serve as the basis for most content-based measures. ARGUE supports precision measures over the sentences of the report, and recall measures over the information nuggets. 2 REQUIREMENTS This section defines requirements of a report evaluation system. We first define the various actors (and one non-actor) in ARGUE: Report Requester: The person requesting the report. This is the person whose purpose the report should satisfy. Report Audience: The person who will be reading the report. This is often the same as the report requester. Report Writer: The automated system that takes a report request and a document collection as inputs and produces the report. Report Request: A detailed specification of the report to be written. The report request can include: 1https://neuclir.github.io/ \u2022 User story: explains the report requester\u2019s background, situation, and report-writing philosophy, as well as a description of the audience for the report. \u2022 Problem statement: indicates the content that the report is required to contain. \u2022 Background: describes what is already known about the topic that need not appear in the report. \u2022 Constraints: specifies restrictions such as the length of the report or a temporal window for sources. Assessor: Any person making judgments in producing evaluation materials or scoring submitted runs. Assessors include those selecting report topics, writing report requests, identifying nuggets, binding nuggets to documents in the collection, and making other judgments necessary to assign scores to reports. The evaluation we advocate has several key attributes. First, it must ensure that the report is responsive to the report request. It must ensure the report\u2019s key information presented is attested in the document collection, that the report properly cites those documents, and that the information they contain is faithfully captured by the report. It must score a report using evaluation data created by a person. While scoring may be automated, requiring the ground truth data to be human-generated helps to prevent circularity between report generation and report evaluation, thereby reducing the bias the evaluation might have toward e.g., a particular generative model. Finally, the evaluation must have the intention of reusability. Producing a reusable evaluation is challenging because of the level of interpretation required to make the required judgments. Reusability is thus often at odds with the other goals of an evaluation. The information retrieval community has thought through many of the issues underlying reusability, and we present ARGUE to try to take advantage of that experience. 
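Concretely, the actors, report request components, nuggets, and citations described above might be captured with data structures along the following lines. This is a minimal Python sketch; the class and field names are illustrative choices, not part of any ARGUE specification.

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class ReportRequest:
    """One evaluation topic: the statement of information need given to the report writer."""
    request_id: str
    user_story: str         # requester's background, situation, and intended audience
    problem_statement: str  # content the report is required to contain
    background: str         # what is already known and need not be repeated
    constraints: str        # e.g., length limit, temporal window for sources

@dataclass
class Nugget:
    """Assessor-authored question with acceptable answers, each tied to supporting documents."""
    question: str
    # answer text -> IDs of the target elements (documents) that attest that answer
    answers: Dict[str, List[str]] = field(default_factory=dict)

@dataclass
class Citation:
    """Pointer from a source element of the report to a target element of the collection."""
    segment_index: int   # which report sentence bears the citation
    target_element: str  # document (or passage) ID being cited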
While it is nearly impossible to accurately claim that any evaluation component is novel, there are points of emphasis in our proposed evaluation style that we think make it stand out from other extant text generation evaluations. First is the amount and detail of the background information provided in the report request. While other evaluations have provided additional information describing inclusion criteria, in practice systems have often focused only on brief specifications. For example, a narrative giving detailed information about what should and should not be considered relevant, long a part of TREC topics, has rarely been exploited. The arrival of large language models that can easily incorporate such materials makes now an opportune time to focus on including ancillary documentation in a report request, not just for this style of evaluation, but for any text generation evaluation. While we advocate that these ancillary details be made explicit in the evaluation, we acknowledge that in real report-writing applications implicit knowledge might be more practical and adequate for the task. Second, until recently hallucination in text generation system output was not a major focus, primarily because generative systems were not good enough to create convincing hallucinated text. With the rise of large generative LLMs hallucination has become a common part of text generation system output; the evaluation must account for this as well. Borrowing from an IR evaluation perspective, we promote the view of nuggets as opinion, not fact. In report evaluation, nuggets play the role that relevant documents play in IR. Were document \fOn the Evaluation of Machine-Generated Reports SIGIR \u201924, July 14\u201318, 2024, Washington, DC, USA relevance treated as fact rather than opinion, it would be virtually impossible to come to agreement on which documents were relevant to a given topic; inter-annotator agreement would be too low. Treating relevance as opinion avoids this problem. In exchange, relevance as opinion adds constraints to the evaluation, primarily that the author of the topic should be the relevance assessor. If relevance is not decided until after system submissions, that means that assessor continuity is important; assessors should be selected such that they can create topics at one time, and assess relevance at a later time, possibly months later. We advocate accepting this tradeoff for nuggets in report generation evaluation. For nuggets, the implication is that items reasonably seen by a report writer as nuggets might not be identified in advance by the assessor. A given evaluation might address this issue through a pyramid approach [61] to identify nugget importance if multiple reference reports are available. Or an evaluation might determine that nugget subjectivity will not change the preference order of meaningfully different systems and ignore it. In either case, we recommend that report sentences bearing and accurately reflecting a citation should not be penalized during scoring, precisely because they might be valid nuggets in someone\u2019s eyes. Constraints such as maximum document length can discourage intentional overgeneration of sentences that have a small chance of matching assessor nuggets. To meet these requirements, four broad questions should be asked about each report being evaluated: Q1 Does the report include the information contained in the document collection that the report requires? Q2 Does it accurately express all such information? 
Q3 Does it contain appropriate citations to the collection? Q4 Has the information been fitted together into a useful form? Q4 is a crucial part of any text generation evaluation. It covers such attributes as fluency [65], coherence [40, 50], consistency [32], and rhetorical structure [16, 21]. In light of this importance, it has a long history and has been studied in depth elsewhere. Thus, while we leave a place for this in the overall evaluation in ARGUE, we leave it to others to address it in light of the changing NLP landscape. 3 BACKGROUND Here we review related work on report writing and evaluation. 3.1 Report Writing Report writing involves text generation, for which prior work on summarization and RAG provides useful perspectives. 3.1.1 Summarization. In its most general form, a summary is a document whose substantive content is based entirely on the content of other target document(s), and that is more concise than simply presenting the other document(s) in their original form would have been [55]. Summaries have been defined along several axes: \u2022 Single-document or Multi-document [47]: Is the summary built from one document (single-document), or many (multi-document)? \u2022 Extractive or Abstractive [13]: Does the summary primarily draw language from the summarized documents (extractive), or does it generate new language (abstractive)? \u2022 Indicative or Informative [37]: Does the summary help the reader to decide whether to read the summarized document(s) (indicative), or does it include enough content to make it unnecessary to read those document(s) (informative)? \u2022 Generic or Task-Based [83]: Is the summary constructed with no particular task in mind (generic), or is there a specific task that the summary is designed to support (task-based)? \u2022 Attributed or Unattributed [70]: Does the summary include citations to the summarized documents (attributed), or does it lack citations (unattributed)? \u2022 Original or Update [56, 63]: Should the summary include all information (original), or only information that the reader does not already know (update)? \u2022 Closed or Open Domain [26, 92]: Are the documents to summarize supplied (closed domain), or must the system perform a search to identify the appropriate documents (open domain)? The reports in which we are interested are attributed task-based informative abstractive open-domain multi-document summaries that may call for either original or update summaries. 3.1.2 Retrieval-Augmented Generation. Following preliminary research on furnishing transformer architectures with external knowledge sources, Lewis et al. [41] introduce RAG models as a way to improving language model performance on knowledge-intensive tasks, using an encoded Wikipedia collection as a non-parametric memory system. RAG models have since been used to improve dialogue systems [38, 77], machine translation [7, 8], and text-style transfer [44] among other applications [43]. Various approaches have been proposed to incorporate RAG models into summarization [2, 64] and other document generation tasks. One use of retrieval has been to find an example summary, sometimes with retrieved summary reranking [9], to serve as a template for the summary of another document. Retrieval can also be used to improve language model factuality. By curating large, high quality collections, generation can be grounded in supporting documents [4]. This mechanism has been shown to be particularly beneficial for rarer entities and concepts [54]. 
Finally, RAG enables LLMs to access information that was not available at pre-training time, such as proprietary or copyrighted information [57]. Vision-language modeling [1, 3, 51] enables multimodal retrievalaugmented generation systems that benefit from rich non-textual data [33, 60]. Different modalities facilitate the completion of different tasks, including image understanding [12, 93], open-domain VQA [33, 49], translation [20], and multimodal generation [89]. 3.2 Evaluation As report generation includes elements of several prior tasks, including document retrieval, summarization, question answering, and retrieval-augmented generation, we briefly review salient work on those tasks that we see as related to ARGUE. 3.2.1 Information Retrieval. Evaluation of ad hoc retrieval is typically based on assessor-produced relevance judgments of documents that are selected by pooling system responses in a shared task, or sometimes based on active learning [29, 72]. Obtaining both good precision and good recall is important in real-world systems, so commonly used metrics combine both components (e.g., mean \fSIGIR \u201924, July 14\u201318, 2024, Washington, DC, USA J. Mayfield et al. average precision, nDCG [35]). Statistical significance testing can be performed, for example with Student\u2019s \ud835\udc61-test [78]. In a report-writing scenario, recall is important to allow assessment of how comprehensively the report responds to the report request. Precision is also important for automated report generation; reports are a type of multi-document synthesis, and incorporating content from non-pertinent documents can adversely affect the utility of the report. To create evaluation datasets for report writing, care must be taken to develop report requests that match information available in the document collection. If requests are too broadly scoped, or if too much salient information is present in the collection, it will be difficult (i.e., prohibitively expensive in human labor) to determine the full set of correct nuggets present in the collection. 3.2.2 Summarization. Evaluating automatic summarization can require significant manual effort. In 2001, NIST initiated the Document Understanding Conference (DUC) to develop evaluation methods for summarization. DUC continued until 2007 and then became the summarization track of the Text Analysis Conference (TAC) through 2014. The DUC/TAC summarization evaluations were notable for having people write summaries manually, and using those \u201cmodel\u201d summaries (or \u201creference texts\u201d) as the jumping-off point for metric development. The DUC evaluation procedure measured coverage (that is, recall) through a pairwise comparison between two summaries: the model summary and a \u201cpeer\u201d summary (which could be a generated summary or another model). The model was divided into Elementary Discourse Units (EDUs), essentially clauses [45, 80] while the peer was split on sentence boundaries. An assessor would match each EDU with the sentences in the peer that contained that information, and indicate how much of the meaning of the EDU was expressed in the corresponding matched peer units. Unmarked sentences in the peer were then marked for relevance. Harman and Over [31] found that model summaries from different authors were markedly different, and that assessors also did not agree on model unit coverage ratings. 
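Whatever aggregate metric an evaluation adopts, the per-topic significance testing mentioned in Section 3.2.1 applies to report scores just as it does to retrieval scores. A minimal sketch, assuming SciPy is available and using purely illustrative per-topic values:

from scipy.stats import ttest_rel

# Per-topic scores for two systems, aligned by topic (illustrative numbers only).
system_a = [0.42, 0.55, 0.31, 0.67, 0.48]
system_b = [0.39, 0.58, 0.35, 0.61, 0.52]

t_statistic, p_value = ttest_rel(system_a, system_b)
print(f"t = {t_statistic:.3f}, p = {p_value:.3f}")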
Work also began around DUC 2003 on automatic metrics, specifically comparing the model summary to the peer using word n-gram statistics. Lin and Hovy [48] looked at the BLEU measure developed for machine translation, and found that recall on word unigrams correlated better with the DUC assessments than full BLEU scoring, which incorporates longer n-grams. Following that, they developed ROUGE [46], a recall-oriented metric similar to BLEU. ROUGE has a number of variants depending on how tokens are parsed, how n-grams are selected and assembled, and how scores are aggregated across summaries to obtain a system score. A study by Graham [28] explored a large grid of ROUGE parameters in comparison with BLEU using data from DUC-2004, and found that BLEU and ROUGE-2 (2-grams, stemmed, stopwords removed, computing an average of precision scores) had the highest correlation with human assessment. ROUGE has been used to evaluate summarization [46], Long-Form Question Answering (LFQA) [39, 88] and RAG [41]. ROUGE has well-documented problems as an evaluation metric in e.g., summarization [28] or LFQA [39]. From our perspective, its main problems as an evaluation metric for report generation are its requirement for reference reports (making it expensive), its poor robustness to hallucination (making it inaccurate), and that it does not handle citations (making it incomplete). In 2004, Nenkova and Passonneau [61] proposed the \u201cPyramid Method\u201d for evaluation. Since comparing generated summaries against a model is subject to the inherent variation in model summaries, they propose to abstract the model summaries into Summary Content Units (SCUs). SCUs are clauses that appear (with more or less the same meaning) in multiple model summaries. They are weighted by the number of model summaries that express them. Figure 1 shows an example of two SCUs from parts of four model summaries. In informal usage, SCUs have been referred to as \u201cnuggets.\u201d Rather than being a clause, a nugget might be a description of a concept along with how it was expressed in the models.2 Subsequent research on the pyramid method has focused on automatic creation and alignment of SCUs. For example, Gao et al. [24] performs a dependency parse of the model summary, then represents individual clauses using vector embeddings. Nugget fuzziness can be addressed by using hoppers [59, 79] to bin together differing descriptions that refer to the same item. The main difficulties in using nuggets for report evaluation are that they treat hallucinations (contradictions and misinformation) exactly the same as content that has no matching nugget, and that they do not support citations. We have incorporated nugget-based evaluation into ARGUE, tying nuggets to reports not directly, but rather through cited documents. 3.2.3 Question Answering. Factoid Question Answering (QA) evaluation typically consists of using accuracy or \ud835\udc391 against a gold standard answer (or answer set) [15, 69, 84]. This type of evaluation has many advantages, as it can be easily automated and is simple to annotate. Long-form QA [19, 62] is evaluated similarly to summarization, typically with automated metrics like ROUGE, model-based metrics like BERTScore [91] or BLEURT [75], or human evaluation [39, 88]. 3.2.4 Retrieval-Augmented Generation. Early retrieval augmented generation systems have been evaluated using task-specific metrics on end-to-end tasks. 
For example, in the context of question answering, exact match and \ud835\udc391 metrics have been used [30, 41]. For summarization, ROUGE and BERTScore on reference summaries are common [26]. These approaches have two limitations: they only measure ability to complete end tasks, and thus cannot assess intermediate stages or evaluate generation across multiple dimensions; and they are not well-suited to capture failures that can be introduced by current generative models [27]. More recently, techniques have proposed to more holistically evaluate RAG systems. Gienapp et al. [25] introduce a theoretical framework for evaluating ad hoc generative retrieval. Chen et al. [11] focus on robustness of RAG systems against various perturbations. Thakur et al. [82] benchmark hallucinations and the ability of RAG systems to identify relevant information for 18 languages. Others have introduced benchmarks to measure the ability of RAG systems to provide citations [6, 23, 53, 90]. While not specifically 2See https://tac.nist.gov/publications/2010/presentations/TAC2010_Summ_Overview. pdf for an example of SCUs as nuggets. \fOn the Evaluation of Machine-Generated Reports SIGIR \u201924, July 14\u201318, 2024, Washington, DC, USA A1 In 1998 two Libyans indicted in 1991 for the Lockerbie bombing were still in Libya. B1 Two Libyans were indicted in 1991 for blowing up a Pan Am jumbo jet over Lockerbie, Scotland in 1988. C1 Two Libyans, accused by the United States and Britain of bombing a New York bound Pan Am jet over Lockerbie, Scotland in 1988, killing 270 people, for 10 years were harbored by Libya who claimed the suspects could not get a fair trail in America or Britain. D2 Two Libyan suspects were indicted in 1991. SCU1 (w=4): two Libyans were officially accused of the Lockerbie bombing A1 [two Libyans]1 [indicted]1 B1 [Two Libyans were indicted]1 C1 [Two Libyans,]1 [accused]1 D2 [Two Libyan suspects were indicted]1 SCU2 (w=3): the indictment of the two Lockerbie suspects was in 1991 A1 [in 1991]2 B1 [in 1991]2 D2 [in 1991.]2 Figure 1: A pair of example Summary Content Units. Four semantically similar sentences from four different model summaries are grouped into two SCUs highlighting the key facts from those sentences. From Nenkova and Passonneau [61]. designed for RAG applications, metrics designed to evaluate factuality (e.g., FactScore [58]) or faithful manipulation of long inputs (e.g., BooookScore [10]) can complement application-specific evaluation frameworks. Most approaches to automated evaluation aim to estimate the effectiveness of RAG systems across desirable dimensions (e.g., faithfulness, answer relevance, and context relevance). Techniques include prompting LLMs to evaluate generated summaries [76], and fine-tuning lightweight models on synthetic data [73]. Downstream applications, such as question answering, can also be used to evaluate the effectiveness of RAG systems [74]. 4 PROPOSED FRAMEWORK This section describes our conceptual evaluation framework for automated report generation. We name this abstract framework ARGUE (Automated Report Generation Under Evaluation) for convenience. We model the information need as a report request, which is analogous to the topics in TREC-style ad hoc retrieval evaluation. The report writer is required to respond with a verifiable report, with citations to its information sources. As in retrieval system evaluation, we restrict the system to citing documents in a pre-defined document collection instead of arbitrary information on the web. 
The framework is thus limited in the range of writing types it can evaluate. In particular, it does not currently support evaluation of reported information that is not explicitly supported by statements in the document collection. This restriction allows experiments that compare systems across research studies and over time. 4.1 Framework Overview In ARGUE, creating a report generation benchmark has three phases. The first phase creates evaluation data. We believe that systems should be evaluated over human-curated data so that they are ranked on effectiveness rather than alignment to machine output. System input comprises a document collection and report requests that describe information needs. The second phase distributes these inputs to participants. Generated reports are expected to be responsive to the information needs statements. A valid report will cite source documents that contain the reported information. Citations are a key attribute of this framework. Other report stylistic requirements might include, for example, a length limit to encourage systems to express information succinctly. If the document collection is in a language different from the report request, or is multilingual, the report may be required to be written in the language of the report request. We envision that the input data will be distributed as part of an evaluation campaign, but this is not required. Assuming an evaluation campaign, generated reports will be received and evaluated by assessors; however, to support reusability, key components will be replaced by automated mechanisms to allow future systems to be scored using the same evaluation data. The third phase scores reports. Since the goal of this framework is to evaluate systems, each system will need to generate multiple reports based on the various report requests. Report scores will be aggregated to assign system scores. Required information in reports will be expressed by assessors in the form of nugget questions and answers. Answers will be attested in the collection and tied to particular documents that attest those answers, thereby tying the nuggets to supporting documents. During scoring, report citations will be used to determine which nuggets are described in the report. Thus there will be a notion of recall over nuggets, which is a new feature in RAG evaluation. Citations will also be used to ensure that non-required information that is included in the report (facts that are not part of the necessary nuggets) is attested in the collection. A precision score over report segments measures how well the report adheres to information found in the collection. This allows hallucination to be addressed, whether it be false information or true information that is unattested. While traditional recall and precision are set measures, they can be modified to account for some nuggets having greater weight than others or to allow report segments to bear multiple citations. 4.2 Evaluation Inputs and Outputs 4.2.1 Evaluation Inputs. The first system input is the collection of items that will be used as source material for the retrieval task. While these items could be documents written in one or more languages, it is also possible for the items to be images, videos, audio files, or some combination. For the remainder of this paper, we will refer to the items as documents. 
Because of the importance of having citeable units, the document collection will be divided into target elements, which are typically documents, but can be smaller units of text such as passages, paragraphs, or sentences, depending on the needs of the evaluation. In this paper we will assume that an entire document has been selected as the target element. Segmentation into target elements should be done once and distributed with the collection to ensure that all systems are evaluated on an even footing. The document collection should include documents that contain sufficient information relevant to the desired report. Following most information retrieval-based evaluations, documents are assumed to be truthful; verifying the truthfulness of document contents is orthogonal to and beyond the scope of the framework. Instead, the framework focuses on citation, requiring that all reported information cites supporting documents from the evaluation document collection. Information that cites \fSIGIR \u201924, July 14\u201318, 2024, Washington, DC, USA J. Mayfield et al. a document incorrectly or that is missing a required citation is appropriately penalized. The second system input is a set of assessor-developed information needs referred to as report requests. A report will be generated for each report request. Report requests are more extensive and subtler than information needs for previous IR or summarization tasks. See Section 2 for the full report request description. Creation of report requests is a complex process that tries to satisfy multiple, sometimes conflicting goals. It bears many similarities to topic creation for a TREC-style IR evaluation [85]. In topic identification for ARGUE, the topic creator must be familiar both with information retrieval, and with any special requirements of the document collection. For example, a bilingual document collection would require that the topic creator be at least bilingual. A document collection on medical topics would require topic creators who were well-versed in the medical domain. In addition, an IR evaluation typically tries to control the number of documents that are relevant to the topic being developed, in part because doing so can improve reusability. An ARGUE evaluation must control not only the number of documents that contain relevant information, but also the number of nuggets and the number of target elements that align to each nugget. Having too many items in any of these categories leads to high assessment costs; having too few leads to higher score variance and lower ability to distinguish systems. That said, assessors need not capture all information that might satisfy the information need. It is up to the assessor to determine what, in their opinion, is the essential information. 4.2.2 Evaluation Output. The report will be generated by an automated report writer. Reports produced by the report writer should satisfy the constraints listed in Section 2. For the purposes of this framework, we make a convenience assumption that the report requester and the report audience are the same. As an example, the assessor could have the role of analyst, with the purpose of the report being to support the process of drawing analytic conclusions. The generated report will be segmented into report segments, either manually or automatically. For convenience, we will assume in this work that a report segment is a sentence, but it could be some other well-defined portion of report text. 
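As an illustration of automatic segmentation, a naive splitter along the following lines could produce report segments. The function and regular expression are ours, not part of the framework, and a real evaluation would publish its own segmentation guidelines.

import re

def naive_sentence_segments(report_text: str) -> list:
    # Very rough splitter: break after sentence-final punctuation followed by whitespace.
    pieces = re.split(r"(?<=[.!?])\s+", report_text.strip())
    return [p for p in pieces if p]

segments = naive_sentence_segments(
    "Avatar originally became the highest grossing film in 2010 [D1]. "
    "Avengers: Endgame replaced Avatar as the highest grossing film in 2019 [D2]."
)
# -> two report segments, each of which would then bear its own citations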
Finer-grained segments may enable more nuanced distinctions. Given that precision scores operate over report segments, and given that automated sentence segmentation is imperfect, we believe that it is important that the report writer control the segmentation. Thus, each report must be segmented into sentences by the report writer prior to evaluation. The evaluation should include guidelines on sentence segmentation. The report must also include appropriate citations, pointers from source elements (sentences) to target elements (documents). Each report sentence will bear zero or more citations, as described below. 4.3 Citations Each substantive sentence of a submitted report must cite the document target element(s) from which it was derived. Which sentences are substantive may vary according to the goals of the evaluation. A citation then is a pointer from one report segment to one target element. A given report segment may bear more than one citation, and a given target element may be cited more than once. By traversing such citations the evaluation system can map sentences in the report to documents and then to nuggets. Note that the report writer must know nothing about the nuggets that will be used to evaluate the report; they are known exclusively to the assessor. The assessor may choose to require just one citation per sentence, or, if completeness is to be measured, all valid and salient citations. The validity of a citation has three components. First, the report segment must be supported by the target element. That is, reading the target element should verify the sentence\u2019s accuracy. In a manual evaluation, the assessor decides whether a given sentence is supported by the target element. In an automated evaluation, support of a report segment for a target element could be measured in several ways. The simplest is a semantic match, testing whether the semantics of the two texts match. A number of such automated metrics are available, such as Sentence-BERT [71]. A more accurate but harder measurement would be whether the target element entails the report sentence. Entailment has been a component of evaluation sets such as GLUE [87] and SUPERGLUE [86], and good solutions to the problem have been identified [67]. Second, at the same time, the sentence bearing the citation should be responsive to the report request. This means that the cited target element is linked to a nugget, and that the report segment provides an answer to one of that nugget\u2019s questions (see below for nugget questions). Thus the acceptability of a nugget answer depends on which document the report cites. Again, the assessor will determine whether the report segment answers a nugget question. One way to automate assessment of responsiveness might be to use an automated QA system to find answers to a nugget question, then use a semantic matching system to determine whether the report segment matches one of those answers. Third, some evaluations will also assess whether a talented author in the field of the report would include that citation if they had written the report. An evaluation that simply wants all substantive sentences to bear a citation will omit this component; a more nuanced evaluation of reports in their final form could include it. In either case, judgments will need to be made on which sentences require a citation. Cases where no citation is required include introductory sentences, background sentences that reflect the problem statement, and sentences that summarize other cited sentences. 
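To make the first validity component concrete, the support check could be approximated automatically with a semantic match, as suggested above. A minimal sketch assuming the sentence-transformers package; the model name and threshold are illustrative choices rather than part of the framework.

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

def is_supported(report_sentence: str, target_element_text: str,
                 threshold: float = 0.6) -> bool:
    # Score the semantic match between the report sentence and the cited target element.
    emb = model.encode([report_sentence, target_element_text], convert_to_tensor=True)
    similarity = util.cos_sim(emb[0], emb[1]).item()
    # A stricter variant would instead ask a textual-entailment model whether the
    # target element entails the report sentence, as discussed above.
    return similarity >= threshold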
If we are interested only in nugget recall, we can safely ignore whether sentences ought to have citations. But if we are interested in precision, we would not like to penalize a report for containing such non-citing sentences (except perhaps when measuring the quality of the report as a whole). To handle non-citing sentences, it must be determined whether the sentence should have a citation. If a citation is not needed, the report can be scored as if the identified sentences were not present in the report. 4.4 Nuggets The proposed evaluation is centered on nuggets. A nugget is a piece of information that should appear in the report and that could be expressed in a variety of ways in the document collection. 4.4.1 Nugget Definition. A nugget in this framework is a combination of a question and one or more answers to that question that address some aspect of the report request and that are expressed in at least one target element in the collection. Nuggets must be expressed at an appropriate level of granularity for the desired \fOn the Evaluation of Machine-Generated Reports SIGIR \u201924, July 14\u201318, 2024, Washington, DC, USA report. If the report answers such a question using appropriate citations into the document collection, we deem it to have succeeded in identifying that nugget; evaluation metrics (described in Section 4.5 below) can then use statistics over the correctly answered, incorrectly answered, and unanswered nugget questions to produce a score for a given report. Answers to nugget questions should express the information that a reasonable person would expect in a report written in response to the report request. The concept of nuggets arose from summarization evaluation [61]. New in this framework is the expression of nuggets as questions with allowable answers. We are interested in evaluation data that can be used to automatically evaluate systems, much like relevance assessments can be used to evaluate an IR system even decades after their creation. We believe this formulation will be helpful in automating report generation evaluation. Nuggets need not capture everything any report responding to the report request might legitimately include. Given that reports by necessity will be shorter than the source documents, the assessor will determine the required information and express that as nuggets, reinforcing the idea that nuggets are opinions instead of facts. The set of answers to a nugget question are drawn from all the answers supported by the document collection. Questions and answers will be in the request language even if, for example, the source information comes from an image or is in a different language. 4.4.2 Nugget Identification. Nuggets are identified by the assessor. Nuggets must be both relevant to the report request and attested in the document collection. In practice, the assessor could either look through retrieved documents to identify important aspects of the topic from the target elements, or identify nuggets a report on the topic ought to include, then search the document collection to see which are attested. A combination of both methods could be used. To ensure reproducibility and enable evaluating recall, it is desirable to identify most (or all) nuggets that should be included. In addition to identifying the set of nuggets for a report request, the assessor must also identify each target element in the document collection that supports an answer to each nugget. 
To do so, the assessor must have both a way to identify target elements that contain nugget-supporting information, and a way to bind target elements to nugget answers. The former problem is similar to that faced by many IR collection developers of ensuring that all or most relevant items have been discovered. Full collection annotation is not practical for large collections. Three main techniques for identifying relevant documents are interactive search, pooling [36, 81, 94], and active learning [14, 42, 68]. Interactive search is simply having a person use any desired tool to identify relevant documents. In pooling, the assessor judges only documents found in an aggregate of several systems\u2019 top results. Either assessors must have access to systems that together are likely to find most of the relevant documents, or this step must wait until task participants have submitted their runs. It is usually desirable to augment the pools manually using interactive search. In active learning, a classifier identifies relevant documents. Each time the assessor judges a document, the classifier is retrained to take the new judgment into account. Any or all of these techniques might be used to restrict the number of documents that must be examined during nugget identification. The second task, assigning target elements to nuggets, is more challenging. We highlight three challenges here. First is withinnugget variation. For example, one nugget answer might be a superset of another, such as \u201cJune\u201d versus \u201c26 June.\u201d If the more general answer is acceptable, the more specific answer must be included in the answer set to distinguish it from an incorrect answer such as \u201c12 June.\u201d The summarization community introduced hoppers [79] to capture commonality across descriptions that differ in some details. For example, two descriptions of a particular natural disaster might indicate different numbers of casualties; perhaps the descriptions were written at different times or based on two different information sources. Whether hopper-style conflation is used for a given evaluation depends on the desired report type. An overall report on the natural disaster might use hoppers; a report on how different news services covered the disaster might need to distinguish differing descriptions. As with decisions on nugget creation, if hoppers are used, the choice of hoppers is left to the assessor. A second challenge is a single report segment or target element expressing information about more than one nugget. This is handled through multiple citations borne by a single report sentence, and/or multiple mappings between target elements and nuggets. This complicates the bookkeeping needed to give appropriate credit to each nugget, but poses no theoretical problems. A third challenge is a single nugget requiring multiple report sentences or target elements to be fully captured. This challenge arises because nugget question/answer pairs lend themselves well to simple facts expressed in the report, but are less well suited to identifying complex information. Nonetheless we believe that the general framework will be extensible to complex nuggets whose expression is distributed across several report sentences or target elements by allowing complex questions answered by Boolean combinations of target elements, and by exploiting recent research in question answering [17, 18]. 4.4.3 Practical considerations. 
The following considerations are not requirements of the framework, but instead practical tips we have gleaned working to instantiate this and similar evaluation frameworks. First, we believe that an assessor must be familiar both with IR concepts and any special requirements of collection and evaluation topic area (such as the aforementioned bilingual or medical settings). Second, it may be advantageous for an assessor to produce a gold standard report to help assemble the information that should be in a satisfactory report. Nugget questions can then be composed from that report. Creating a gold standard report also enables a ROUGE evaluation for comparison. Third, IR evaluations usually limit the number of relevant documents to simplify and reduce the cost of evaluation. Report evaluation would also like to control the number of nuggets and document mappings to ensure the evaluation can distinguish good and bad systems; however, this can eliminate from consideration practical use cases that would otherwise be in scope for the task. This tradeoff has traditionally been considered worthwhile, but it should be remembered that it is a tradeoff. Fourth, LLMs can call on memorized knowledge not found in the document collection. Often the LLM training collection is unknown. If the LLM has not seen the evaluation corpus, it will need to rely on hallucination, which will negatively affect evaluation data quality. Finally, while finding all potential nuggets is unnecessary \fSIGIR \u201924, July 14\u201318, 2024, Washington, DC, USA J. Mayfield et al. Figure 2: Report sentence scoring. Answers to eight yes/no questions dictate an outcome for each input sentence. + indicates that the sentence is rewarded, that it is penalized, and 0 that it does not affect the overall report score. Report Request: I am a Hollywood reporter writing an article about the highest grossing films Avengers: Endgame and Avatar. My article needs to include when each of these films was considered the highest grossing films and any manipulations undertaken to bring moviegoers back to the box office with the specific goal of increasing the money made on the film. Gold Standard Report: Avatar originally became the highest grossing film in 2010 [D1]. Avengers: Endgame replaced Avatar as the highest grossing film in 2019 [D1, D2, D3, D8, D10, D12, D13]. It overtook Avatar by adding an additional six minutes of footage to the film to draw viewers back to the movie theater [D4]. Two years later Avatar was re-released in mainland China [D1, D2, D5, D6, D7, D8, D9, D10, D11]. It earned a sufficient amount of money to retake the title of highest-grossing film in 2021 [D5, D11, D6, D7, D2, D8, D9, D1]. Nuggets as Questions and Answers: (1) When did Avatar first become the highest grossing film? \u2022 2010 [D1] (2) When did Avengers: Endgame become the highest grossing film? \u2022 2019 [D1,D2, D3, D8, D10, D12, D13] \u2022 July 2019 [D3, D12, D13] \u2022 July 20, 2019 [D3] \u2022 July 21, 2019 [D13]\u2020 (3) What did studio executives do to the Avengers: Endgame film to become the highest grossing film? \u2022 Added six minutes of additional footage [D4] \u2022 Added footage [D4] \u2022 Added 6 minutes [D4] \u2022 Additional footage at the end of the film [D14] (4) When did Avatar retake the title of highest grossing film? 
\u2022 2021 [D1, D2, D6,D7,D9,D11] \u2022 March 2021 [D1, D6 ,D7, D9, D11] \u2022 March 13, 2021 [D1, D6, D9] \u2022 Two years after the Avengers: Endgame became the highest grossing film [D2] (5) What event led to Avatar becoming the highest grossing film? \u2022 Re-release in Mainland China [D1, D2, D5, D6, D7, D8, D9, D10] \u2022 Re-release in China [D1, D2, D5, D6, D7, D8, D9, D10] \u2022 Release in Mainland China for a second time [D1, D2, D5, D6, D7, D8, D9, D10] \u2022 Returned to theaters in China [D11] \u2020In Taiwan Time Figure 3: Example evaluation material for a report request. since nugget worthiness is an assessor\u2019s opinion, finding all answers to nugget questions is important for collection reusability, especially as nuggets can only be answered using documents known to have the answer. If nuggets are generated prior to submissions, it might be worth pooling submissions to identify more nugget answers. 4.5 Metrics Many metrics can be used to assess automatically generated reports. Two common IR measures are recall and precision; we focus on these here because they are well-known, easy to calculate, and highlight most of the important scoring issues we face in generated report evaluation. Recall and precision each require a numerator and a denominator. The recall denominator is the number of distinct assessor-identified nuggets; its numerator is the number of correctly reported nuggets (those supported by one or more of the necessary supporting citations in the report). So recall tells us how many of the concepts central to the report were actually reported on. Precision must account for phenomena below the nugget level, so we calculate it over report segments (which again we assume to be sentences). The denominator is the number of report sentences, minus any sentence that does not require a citation or that properly cites information not part of any nugget. The numerator is the number of sentences deemed to bear accurate citations, plus any sentences specified by the evaluation that correctly bear no citation. Figure 2 describes a typical approach to sentence evaluation. The rules embedded in the flowchart are not hard-and-fast, but are likely adequate for many evaluations under this framework. In the flowchart, \u201c+\u201d means the sentence is correct and should be rewarded; \u201c\u2013\u201d means that it is incorrect and should be penalized; and \u201c0\u201d means that the sentence is not included in the score. The flowchart shows how each sentence of the report can be scored. We propose these principles to guide sentence scoring: \u2022 Sentences with citations whose target element does not support them should be penalized (Outcome#1 in Figure 2). \u2022 Properly cited and attested sentences that are not relevant to the report should be ignored (Outcome #2). \fOn the Evaluation of Machine-Generated Reports SIGIR \u201924, July 14\u201318, 2024, Washington, DC, USA (1) Avengers: Endgame and Avatar are two of the highest-grossing films in history. Outcome: #4 (2) Avengers: Endgame surpassed Avatar as the highest-grossing film globally, with a box office revenue of $2.787 billion. [D12] Outcome: #2 (3) This achievement was officially announced in July 2019. [D13] Outcome: #3, Nugget: 2 (4) To bring moviegoers back to the box office and increase the revenue of Avengers: Endgame, Marvel Studios implemented several strategies. Outcome: #4 (5) One of these strategies was re-releasing the film with additional content. 
Outcome: #6 (6) Marvel Studios President Kevin Feige confirmed plans for a re-release, including previously unseen footage, to generate renewed interest and boost ticket sales. [D101] Outcome: #2 (7) The new version of the film did not include any re-edited scenes within the main feature but instead featured additional scenes after the end credits, including a tribute. [D14] Outcome: #3, Nugget: 3 (8) This approach aimed to entice both existing and new audiences to revisit the theaters and contribute to the film\u2019s box office success. Outcome: #6 (9) Similarly, Avatar also made a comeback to reclaim its position as the highestgrossing film. Outcome: #4 (10) The film was re-released in China, where it achieved a single-day box office revenue of $8 million. [D2] Outcome: #3, Nugget: 5 (11) The re-release in China was particularly successful, as it generated significant revenue and propelled Avatar back to the top of the global box office charts. [D1] Outcome: #3, Nugget: 5 (12) The film\u2019s re-release in China was attributed to the impact of the COVID-19 pandemic, which led to the delay of many new releases and created an opportunity for Avatar to regain its popularity. [D2] Outcome: #3, Nugget: 5 (13) In conclusion, Avengers: Endgame and Avatar both held the title of the highestgrossing film globally at different points in time. Outcome: #4 (14) Marvel Studios strategically re-released Avengers: Endgame with additional content to attract audiences and boost ticket sales. Outcome: #6 (15) Avatar capitalized on the re-release trend in China, taking advantage of the pandemic-induced delay of new releases. Outcome: #6 (16) These manipulations aimed to increase the films\u2019 box office revenue and solidify their positions as record-breaking blockbusters. Outcome: #4 Figure 4: Example report evaluation result. \u2022 A sentence that cites a target element supporting a nugget that the sentence fulfills should be rewarded (Outcome #3). \u2022 Sentences that neither have nor require citations should not affect the score (Outcome #4). \u2022 Sentences that should contain a citation but do not should be penalized the first time their claim occurs (Outcomes #5, #6). \u2022 Sentences that claim the absence of a fact should be rewarded or penalized depending on whether the absence is explicitly stated as a nugget (Outcomes #7, #8). For this, a nugget can be created for information that the report request explicitly asks for but is not attested in the collection. Most sentences will bear either zero or one citation. A sentence can bear multiple citations, either because the same information is multiply attested in the collection, or because it is complex. Sentences that cite multiple target elements supporting the same nugget are treated as a single citation. Alternatively, the evaluation may macroaverage citation scores if all sentences are to be given equal weight, or microaverage them if the focus is on citation accuracy. Support by multiple report sentences counts only once per nugget. To automatically score a report, each decision diamond in Figure 2 must be automatable. Some are trivial, such as \u201cHas citation;\u201d others are less so. We believe current technology could do a reasonable job with most of the tasks. For instance, entailment models can likely determine if a document supports a report sentence. Note that originality is not a component of this evaluation; preventing plagiarism, while important, is a specialized area with its own metrics and evaluations [5, 22, 34, 66]. 
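The recall and precision defined above can be computed mechanically once each sentence has been assigned an outcome and each rewarded sentence has been mapped to the nuggets it reports. A minimal sketch; the outcome labels and the optional weighting are illustrative, not prescribed by the framework.

from typing import Dict, Iterable, List

def nugget_recall(reported_nuggets: Iterable[str],
                  all_nuggets: Dict[str, float]) -> float:
    """all_nuggets maps nugget id -> weight; use 1.0 everywhere for unweighted recall."""
    found = set(reported_nuggets) & set(all_nuggets)
    total = sum(all_nuggets.values())
    return sum(all_nuggets[n] for n in found) / total if total else 0.0

def segment_precision(sentence_outcomes: List[str]) -> float:
    """One of {"reward", "penalize", "ignore"} per report sentence, following Figure 2."""
    scored = [o for o in sentence_outcomes if o != "ignore"]
    return sum(o == "reward" for o in scored) / len(scored) if scored else 0.0

# Worked example from Section 5: 5 rewarded sentences, 11 ignored, none penalized;
# 3 of 5 nuggets reported.
precision = segment_precision(["reward"] * 5 + ["ignore"] * 11)                   # 1.0
recall = nugget_recall({"n2", "n3", "n5"}, {f"n{i}": 1.0 for i in range(1, 6)})   # 0.6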
5 EXAMPLE ASSESSMENT Figure 3 shows an example of the two items required to do manual or automatic assessment. The report request identifies the desired report content. The nugget questions and answers show how each answer is linked to the documents that attest to that answer. The Gold Standard Report that is shown is optional, but a useful intermediate step for the assessor between source document search and nugget question creation. Figure 4 is a report generated in response to the example in Figure 3, broken into report segments to illustrate manual evaluation. Each Outcome: # indicates how the sentence would be categorized using the flowchart in Figure 2. For Outcome: #3, the nugget answer in the sentence is also recorded. In this example, no sentence received a negative assessment because there were no outcomes of #1 or #7. Therefore, precision is 5/(16 \u221211) = 1.0. One nugget was repeated in Lines 10, 11, and 12, so recall is 3/5 = 0.6. Finally, for both Lines 2 and 6, the assessor would have needed to refer to the original source document to assess the statement, since the information in the sentence had not been captured in a required nugget. Assessing such sentences will likely be the most time-consuming part of manual assessment. 6", + "additional_graph_info": { + "graph": [ + [ + "James Mayfield", + "Eugene Yang" + ], + [ + "James Mayfield", + "Dawn Lawrie" + ], + [ + "James Mayfield", + "Orion Weller" + ], + [ + "James Mayfield", + "Sean Macavaney" + ], + [ + "Eugene Yang", + "Dawn Lawrie" + ], + [ + "Eugene Yang", + "David D. Lewis" + ], + [ + "Dawn Lawrie", + "Sean Macavaney" + ], + [ + "Orion Weller", + "Dawn Lawrie" + ], + [ + "Orion Weller", + "Kyle Lo" + ], + [ + "Orion Weller", + "Arman Cohan" + ], + [ + "Orion Weller", + "Luca Soldaini" + ], + [ + "Sean Macavaney", + "Arman Cohan" + ], + [ + "Sean Macavaney", + "Luca Soldaini" + ], + [ + "Sean Macavaney", + "Andrew Yates" + ] + ], + "node_feat": { + "James Mayfield": [ + { + "url": "http://arxiv.org/abs/2305.00331v1", + "title": "Synthetic Cross-language Information Retrieval Training Data", + "abstract": "A key stumbling block for neural cross-language information retrieval (CLIR)\nsystems has been the paucity of training data. The appearance of the MS MARCO\nmonolingual training set led to significant advances in the state of the art in\nneural monolingual retrieval. By translating the MS MARCO documents into other\nlanguages using machine translation, this resource has been made useful to the\nCLIR community. Yet such translation suffers from a number of problems. While\nMS MARCO is a large resource, it is of fixed size; its genre and domain of\ndiscourse are fixed; and the translated documents are not written in the\nlanguage of a native speaker of the language, but rather in translationese. To\naddress these problems, we introduce the JH-POLO CLIR training set creation\nmethodology. The approach begins by selecting a pair of non-English passages. A\ngenerative large language model is then used to produce an English query for\nwhich the first passage is relevant and the second passage is not relevant. By\nrepeating this process, collections of arbitrary size can be created in the\nstyle of MS MARCO but using naturally-occurring documents in any desired genre\nand domain of discourse. 
This paper describes the methodology in detail, shows\nits use in creating new CLIR training sets, and describes experiments using the\nnewly created training data.", + "authors": "James Mayfield, Eugene Yang, Dawn Lawrie, Samuel Barham, Orion Weller, Marc Mason, Suraj Nair, Scott Miller", + "published": "2023-04-29", + "updated": "2023-04-29", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "main_content": "INTRODUCTION As with many other human language technologies, neural models have recently achieved state-of-the-art performance in monolingual ad hoc information retrieval (IR). A key enabler of these advances has been the appearance of large IR training sets such as MS MARCO [3]. MS MARCO was developed by mining Bing query logs to identify, for each query, a relevant and a non-relevant document drawn from the Bing index. This makes MS MARCO well-suited to training IR systems for web-style queries where the documents are English webpages. It is less well-suited to other document languages, query styles and document genres as Dai et al. [10] demonstrate. Nonetheless, MS MARCO has been the basis for much of the improvement in IR achieved by neural systems. In cross-language information retrieval (CLIR) there has been no resource comparable to MS MARCO. A number of CLIR collections are available. HC4[28]1 and TREC NeuCLIR 1 [27]2 are high-quality ad hoc CLIR collections, but are too small to serve as training data for a neural system. Collections such as CLIRMatrix[48],3 XORQA[2],4 and MIRACL[60]5 cover numerous languages, but like MS MARCO are focused on question answering and are biased towards Wikipedia articles. Their relevant documents are also not paired with non-relevant counterparts. Given the lack of appropriate training sets for ad hoc CLIR, the research community has used machine translation to translate the MS MARCO documents into other languages. This has resulted in collections such as mMARCO[5], and NeuMARCO6 of the same size as MS MARCO with queries in English and documents in another language. Using these resources, neural systems have achieved state-of-the-art CLIR performance. Yet such translated training collections suffer from a number of problems. While MS MARCO is a large resource, it is of fixed size; thus, the amount of available training data is limited. More importantly, the genre and domain of discourse of the collection are fixed; documents are drawn from the Bing index and do not include, for example, informal communications such as email and Tweets. In addition, the translated documents are not written by a native speaker of the language, but rather suffer from a phenomenon known as translationese [51]: translation artifacts that have 1https://github.com/hltcoe/hc4 2https://neuclir.github.io/neuclir1.html 3https://github.com/ssun32/CLIRMatrix 4https://github.com/AkariAsai/XORQA 5https://github.com/project-miracl/miracl 6https://ir-datasets.com/neumarco.html arXiv:2305.00331v1 [cs.IR] 29 Apr 2023 \fMayfield et al. been shown to affect cross-language transfer performance [1]. Furthermore, MS MARCO is available only for research purposes,7 so commercial systems and other non-research applications cannot make use of it. To address these problems, we introduce the JH-POLO training set creation methodology. JH-POLO starts with a pair of nonEnglish passages. These passages can be written by native speakers of the language, and can be drawn from any genre or domain. 
Thus, a collection generated using the JH-POLO methodology can be tailored to any desired retrieval setting. Once a passage pair has been selected, an English query is automatically generated for which one passage of the pair is relevant and the other passage is not. We use English as the query language to match the available CLIR test collections. This creates an MS MARCO-style training example comprising a query, a relevant passage, and a non-relevant passage. A generative large language model (LLM) such as GPT-3 [8] is used to produce the English query. By repeating this process, a training collection of arbitrary size can be created. This paper describes the JH-POLO methodology in detail, shows its use in creating new CLIR training sets, and describes experiments that demonstrate the efficacy of the approach. We make the following contributions: \u2022 We show that it is possible to generate a viable large CLIR training set automatically using only a target document collection and a generative LLM. To our knowledge, this is the first automatically generated CLIR training collection that uses natively-written passages. \u2022 We show that negative training examples can be selected before generating the retrieval query to which they are not relevant, thereby allowing some control over the difficulty of negative examples in the generated collection. \u2022 We show that training using the JH-POLO methodology is comparable to using machine-translated MS MARCO data when the documents to be searched are similar to the web documents used by MS MARCO documents, and more effective than training exclusively on MS MARCO when the domain or genre of the evaluation document collection deviates from that of the MS MARCO documents. 2 BACKGROUND 2.1 Cross-Language Information Retrieval When moving from monolingual IR to CLIR, there is the added complexity of crossing the language barrier between the query expression and the document expression. One popular approach is to use a Machine Translation (MT) system to translate either the queries or the documents, to achieve a monolingual space where a monolingual IR system can be used [12, 37, 61]. Another approach generates dense representations of queries and documents. Matching queries to documents happens in a shared multilingual vector space; this approach is popularly known as dense retrieval. Pre-BERT [11] dense retrieval models used non-contextualized cross-language word representations to perform CLIR matching [16, 32, 56]. The adoption of large multilingual pretrained language models (mPLMs) such 7https://microsoft.github.io/msmarco/ as mBERT [11] and XLM-R [9] led to dense neural CLIR systems that use contextualized representations for matching [33\u201335, 44]. Dense retrieval models for CLIR now rely heavily on mPLMs as the backbone of the models. However, Litschko et al. [34] demonstrate that performance using an off-the-shelf mPLM for CLIR is suboptimal. While sufficient training data is available to fine-tune an English system in the form of MS MARCO [3], analogous CLIR data is not natively available. Translated versions of MS MARCO, where the MS MARCO documents are replaced with machine translation output, have been used to fill this gap [5, 35]. This paper focuses on an alternative approach to fine-tuning CLIR systems. We explore the synthetic generation of queries from passages selected from a target document collection. 
Rather than effectiveness being dependent on the quality of an mPLM or the quality of machine translation, in this approach effectiveness is dependent on the ability of a generative LLM to produce effective training examples. Dense Passage Retrieval (DPR) [24] and ColBERT [26] are two of the most commonly studied and highest performing dense retrieval models. DPR computes the similarity of the query classification (CLS) token and the CLS token of each document. ColBERT computes similarities between each pair of query and document tokens and scores a document by the sum of the maximum similarity (MaxSim) of each query token [26]. Compared to other neural reranking models such as a cross-encoder [38], dense retrieval models limit ranking latency by separating query and document transformer networks to support offline indexing. DPR-X [54, 55, 59] and ColBERT-X [35] are the CLIR counterparts of DPR and ColBERT. Both use an mPLM as the underlying language model for crossing the language barrier. Exploiting both multilinguality and improved pre-training from XLM-R [9], DPR-X and ColBERT-X seek to generate similar contextual embeddings for terms with similar meanings, regardless of their language. These are the two retrieval models featured in our experiments. 2.2 LLMs and Retrieval Language models are now tightly integrated with information retrieval systems. These combined systems are used for a broad range of knowledge-intensive problems, including open-domain question answering [20, 21], conversational assistants [45, 46], fact-checking [49, 50], and even improving language modeling itself [6, 31]. At times these systems are simply combinations of separate processes [15, 20, 40], while other times they are trained end-toend from retrieval to the downstream task [21, 23, 30]. Due to the size of LLMs, they are typically used as separate components, with retrieval results passed to the LLM [15, 25, 47]. A nascent line of work has even proposed ignoring retrieval entirely and using LLMs to generate a relevant document in lieu of search [57]. In contrast to much of the research cited in this section, our work aims to use LLMs to improve IR models, rather than using retrieval to improve LLMs on NLP tasks. 2.3 Synthetic Query and Document Generation Using LLMs to improve IR models through synthetic data generation has also been a burgeoning area of interest [19, 41\u201343, 52]. A \fSynthetic Cross-language Information Retrieval Training Data Collection BM25 \u5f31\u52bfIS\u5b9e\u65bd\u7126\u571f\u884c\u52a8\uff0c\u70e7\u6cb9\u4e95... Positive Passage [The disadvantaged IS implemented scorched soil operations, burning the oil wells ...] \u6050\u6016\u7ec4\u7ec7IS\u5360\u9886\u6469\u82cf\u5c14... \u4f0a\u62c9\u514b\u519b\u961f\u5f00\u59cb\u519b\u4e8b\u884c\u52a8 Negative Passage [The terrorist organization IS has occupied Mosul ... Iraqi forces began a military campaign] Large Language Model 1 2 N The destruction of oil wells by IS forces The use of scorched earth tactics by IS forces in Mosul The implications of this destruction of oil wells on the Iraqi Government ... Generated Queries Passage Selection Prompt Figure 1: A depiction of the basic JH-POLO methodology. A target language passage (Chinese in this example, translated into English for convenience) is selected randomly from the target passage collection, and BM25 retrieval is used to identify a related passage. 
The two Passages are presented to a large language model, which is then prompted to generate queries for which one passage is relevant and the other is not. prominent early example is the doc2query [39] family of algorithms, which supports the generation of a query that is relevant to a given document and which is then appended to it as a form of document expansion. As language models have grown in size and ability [8], there has been a surge of interest in this topic. HyDE [13] uses LLMs to generate a synthetic document that is then used as a query, while the InPars algorithms [4, 7, 22] and PROMPTAGATOR [10] use LLMs to generate queries given document, in the reranking and end-to-end settings respectively. These works differ in how they prompt the LLMs: PROMPTAGATOR uses a different prompt template for each dataset and only shows relevant few-shot examples (i.e., what the LLM should generate) while InPars also uses non-relevant few-shot examples (i.e., what not to generate). Despite the plethora of recent research in creating synthetic training data for IR, to date, and with a few exceptions (e.g., HyDE [13]), most work has focused on the English language. This leaves it unclear how LLMs can be used to train translingual or multilingual IR systems. 3 JH-POLO METHOD Generation of a single training example starts with the selection of two passages.8 A generative LLM is given these passages and prompted to compose an English query for which one passage is relevant and the other is not. This process is repeated to generate as many training examples as desired. This method has two significant advantages: (1) It ensures that the passages are naturally-occurring text selected from the language, genre and domain of interest. Use of MS MARCO for CLIR has relied on machine translation of the MS MARCO document collection, which exhibits artifacts of machine translation. Furthermore, there is no way to alter the characteristics of the document collection underlying MS MARCO. 8Full-length documents could exceed length limits imposed by the LLM. (2) It exploits a generative LLM\u2019s strength, which is generating short English9 texts. LLMs can struggle when trying to generate a long document. Its capabilities in languages other than English are also inconsistent. By generating short English queries these problems with LLMs are ameliorated. Figure 1 is a pictorial representation of the JH-POLO process. Section 3.1 describes the left side of the figure, while Section 3.2 describes the right side of the figure. 3.1 Passage Selection Choosing passages at random would be a simple way to select two passages for use in query generation. However, doing so would almost always select two passages with no topic overlap. Any system trained using such pairs would have a difficult time distinguishing passages with a high topic overlap at test time. We would like our training data set to include related passages that exhibit significant overlap with a relevant passage but are not themselves relevant. We hypothesize that the closer the content of the two passages, the more useful the pair will be for training. There are a number of ways to choose two related passages; these include: \u2022 Use an existing document collection and passage pairs. MS MARCO is the obvious target here; it has passages, and topics with an example of a relevant passage and a non-relevant passage for each topic. \u2022 Use an existing ad hoc IR collection. 
For example the TREC NeuCLIR track10 provides English topics with documents in Chinese, Persian, or Russian. One way to select a pair is to use the relevance judgments (qrels) to select two passages, one from a randomly chosen judged relevant document and the other from a randomly chosen judged non-relevant document for a given topic. This does not guarantee that the same relevance judgments will apply to the selected passages, but it is likely that those passages will be related but not identical. Alternatively, one could perform retrieval on 9At this writing the major generative LLMs focus on English. 10https://neuclir.github.io/ \fMayfield et al. the original queries, and use the ranked results to select two top-scoring passages. \u2022 Use a collection with relatedness links. One could for example select two linked Wikipedia articles, or two versions of a single Wikipedia article from two different dates. \u2022 Select a passage at random, or one returned by a query, and use that entire passage as a query. Select the top retrieved passage whose BM25 retrieval score is at least a fixed threshold away from the score of the query passage as the negative passage. The last approach is the one explored in this paper. By requiring at least some separation in the BM25 scores of the two passages, we ensure that the two passages contain some different information. We also require that the passages do not come from the same underlying source document. Different genres may also necessitate additional requirements to ensure the selection of useful training pairs. For instance, we examine the longest common substring between two passages sourced from informal communications; the selected passage must contain both twenty characters and 40% of its total characters outside of that common substring. 3.2 Prompt Specification Unlike pre-trained language models that are routinely fine-tuned to adapt them to new genres, domains, or tasks, the common and economic way to use a generative LLM such as GPT-3 is to engineer a prompt to guide the desired generation. We experimented with a variety of prompts with the goal of creating suitable CLIR queries. Such a prompt must: \u2022 contain the text of each of the passages. \u2022 indicate what type of output is required. We would like to produce multiple output queries for each prompt to reduce the overall cost of building the collection. \u2022 ensure that the generated queries are written in English regardless of the language of the passages. \u2022 communicate what is meant by relevance. \u2022 require that one of the passages is relevant to the output query and the other is not. Figure 2 shows the basic prompt we used to create the training collections described in this paper. Here, {first} and {second} are replaced with the complete text of the first and second passages. The prompt requests five outputs for each passage, requires that the output is in English, and stipulates that one passage must be relevant and the other not relevant. Relevance is defined relative to an analyst writing a report; a passage is relevant if it helps the analyst write the report, and not relevant if it does not. The topic of the report is not specified in the prompt; the LLM is free to invent any report topic it likes. Thus the output query can be on any suitable topic. We experimented with few-shot prompts that included sample outputs. These prompts had two problems. 
First, they increased the length of the prompt, increasing the cost of the request, which for GPT-3 is dependent on the sum of the lengths of the input and output. Second, there was occasionally bleed-through of the topics of the sample outputs into the queries produced. As a result, we restricted our attention to zero-shot prompts that relied purely on description of the desired output. 3.3 Crossing the Language Barrier We use GPT-3 Davinci-311 as our large language model for two reasons. First, its input buffer is 4000 tokens, allowing us to include passages of up to about 550 words of Chinese, 260 words of Russian, or 370 words of Persian, while allowing an additional 100 or so tokens of English for the prompt. Second, Davinci is far more capable with languages other than English than are the lesser GPT-3 models. We present non-English passages to GPT-3 with no indication of what language they are written in; the prompt indicates only that they are \u2018documents.\u2019 Davinci seems to handle other languages with ease; the lesser models do not. The ability of GPT-3 Davinci-3 to handle languages other than English varies dramatically by language [8]. If GPT-3 is unable to handle a given foreign language well, an alternative is to use machine translation to produce English versions of the documents. Then an English-only process is applied to these translations. This approach relies on document relevance not changing much when a document is translated. This is plausible, although the claim remains to be proven. It should be noted that while the LLM would process the translated documents in this case, the CLIR fine-tuning would continue to use the original natively written documents. 3.4 Failure Modes We have identified four categories of error most commonly seen in JH-POLO output. The following describes them in detail. Underspecification. This occurs when the query could refer to something in the passage, but could just as easily refer to many other things completely unrelated to the passage. For example, \u201cThe emergence of images in the media related to the leak\u201d could refer to any of a number of instances of leaked documents. This failure mode can be thought of as inadequate inclusion of context in the query. Despite the apparent problem, we believe underspecified queries are less damaging as training data than other categories of errors because the negative passage is still not relevant to an underspecified query. Overspecification. This occurs when the selected passage is relevant to the query, but no other passage is likely to be relevant. For example, \u201cThe arrest of Moise\u2019s bodyguards and 3 security personnel\u201d describes one very specific facet of an event, one that will likely be found in very few of the passages about the event. This failure mode frequently occurs when numbers are part of the generated query, because it limits the query to a particular instance of a topic. While the resulting relevance judgments are still consistent, training with too many queries of this type may not be as useful for system performance since they are unlikely to capture the characteristics that the system needs to learn. Hallucination. Sometimes the LLM inserts a detail into the query that is completely unrelated to anything in the source passage. 
For example, the query \u201cI seek news about Hong Kong\u2019s women\u2019s basketball team\u201d produced from a passage that includes information about a police basketball team and a mother who is a representative for a youth basketball team, but no mention of women\u2019s basketball teams, is a hallucination. This is a more problematic failure mode since the passage labeled relevant is not actually relevant to the query. 11https://beta.openai.com/docs/models/gpt-3 \fSynthetic Cross-language Information Retrieval Training Data This is document A: <<{first}>> This is document B: <<{second}>> I am an analyst writing a report. Only one of the documents will help me write my report. For each document, describe in English, one per line, five things my report might be about for which that document will help me write my report and the other document will not help me write my report. Figure 2: GPT-3 prompt used to create the training examples reported in this paper Overly broad. Sometimes the LLM fails to detect that the nonrelevant passage contains information that means that it is also relevant to the query. As with hallucination, this failure mode leads to inaccurate training data. Unlike hallucination, this type of failure is also found in the MS MARCO training set, where negative examples were not necessarily judged by an assessor. 3.5 Domain and Genre Shift A key claim of this paper is that building a CLIR training set using the document collection of interest will lead to better retrieval results than just using a generic training collection such as the one underlying MS MARCO. CLIR evaluation collections over genres other than newswire are rare, making this claim challenging to validate empirically. To evaluate JH-POLO performance when the domain or genre does not match that of MS MARCO, we used the HC3 collection [29]. This collection comprises documents, queries, and relevance judgments in Chinese and Persian. The documents are Tweet reply threads of up to 100 Tweets in length. Thus, the documents are short informal conversations \u2013 very different from the web documents found in MS MARCO. When shifting domains or genres, it may be necessary to reengineer the prompt due to attributes of the data. For instance, we found that when using Tweets as our document collection, underspecificity was particularly egregious. We experimented with prompt variants to ameliorate this problem. We found that adding the sentence \u201cNo response should require the recipient to have seen the previous responses\u201d was effective at eliminating many generic noun phrases, which were at the core of most of the underspecificity problems. Figure 3 shows output examples both with and without the additional sentence. Its addition does not eliminate all underspecificity, but it greatly reduces it. The figure illustrates three such queries where \"the need for her,\" \"to the country,\" and \"without this\" all lead to underspecified queries. This phenomenon is not observed in the queries generated with the sentence. Prompt generation is still a black art; a variety of sentences conveying essentially the same requirement as this sentence did not make an appreciable dent in underspecificity. 4 VALIDATION We performed two types of validation over the generated data. Section 4.1 describes the manual evaluation undertaken, while Section 4.2 describes an automated validation that improves the quality of the training data. 
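To make the generation step concrete, the sketch below instantiates the Figure 2 template and splits an LLM completion into candidate queries. The generate callable, the line-based parsing, and the assumption that all parsed queries are attributed to document A are illustrative simplifications; a real pipeline would track which of the two answer blocks each query came from.

```python
PROMPT_TEMPLATE = (
    "This is document A: <<{first}>>\n"
    "This is document B: <<{second}>>\n"
    "I am an analyst writing a report. Only one of the documents will help me "
    "write my report. For each document, describe in English, one per line, "
    "five things my report might be about for which that document will help me "
    "write my report and the other document will not help me write my report."
)

def build_prompt(passage_a: str, passage_b: str) -> str:
    # Fill the {first} and {second} slots of the Figure 2 template.
    return PROMPT_TEMPLATE.format(first=passage_a, second=passage_b)

def parse_queries(completion: str) -> list[str]:
    # The prompt asks for one topic per line; keep non-empty lines and strip
    # any enumeration the model may prepend.
    queries = []
    for line in completion.splitlines():
        line = line.strip().lstrip("0123456789.)- ").strip()
        if line:
            queries.append(line)
    return queries

def generate_triples(passage_a: str, passage_b: str, generate) -> list[tuple]:
    """`generate` is a caller-supplied function that sends a prompt to the LLM
    and returns its text completion (a hypothetical stand-in for an API client)."""
    completion = generate(build_prompt(passage_a, passage_b))
    # Simplification: treat passage A as the positive passage for every query.
    return [(q, passage_a, passage_b) for q in parse_queries(completion)]
```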
4.1 Prompt Validation We manually annotated a small number of system outputs to assess the quality of each prompt. The assessor12 was provided with the two passages and one of the resulting queries. Each such example was assigned to one of the following five categories, based on whether the passage labeled relevant was truly relevant, and whether the passage labeled non-relevant was truly not relevant: \u2022 Both assertions were correct. \u2022 The assertion of relevance was incorrect. \u2022 The assertion of non-relevance was incorrect. \u2022 Both assertions were incorrect. \u2022 The generated query was underspecified. Treating each of these outcomes other than the first as erroneous, the prompt shown in Figure 2, applied to passage pairs selected as described in Section 3.1, had an accuracy of 67% over the 61 examples assessed. While underspecified queries are probably not particularly useful training examples, they are also unlikely to damage the training. If both the first and last outcomes are treated as correct, accuracy rises to 72%. 4.2 Triple Validation After generation, we validate the triples with a multilingual cross-encoder reranking model. Validation is an important step to filter out triples that are likely to be hallucinations or that are overly broad. One way to accomplish this validation is to use retrieval to ensure that the positive passage is ranked first, as was done in PROMPTAGATOR [10]. This was necessary in PROMPTAGATOR because negative passages were not included in its prompts. JH-POLO prompts contain both a positive and a negative passage. Therefore, our filtration process relies on the relative rankings of the two passages. In particular, we only include a triple when the positive passage is more likely to be relevant to the generated query than is the negative one; this helps to ensure the integrity of the contrastive loss used during training. Furthermore, we use a lower-bound threshold on the difference between the two likelihoods to ensure that the two passages are not too close in meaning with respect to the query. Specifically, let $F(q, p) \in \mathbb{R}$ be the score produced by the cross-encoder model for a given query $q$ and passage $p$ pair. For a given generated query, positive passage, and negative passage triple $(q, p_p, p_n)$, we consider the triple to be valid for training if
$$\frac{e^{F(q, p_p)}}{e^{F(q, p_p)} + e^{F(q, p_n)}} - \frac{e^{F(q, p_n)}}{e^{F(q, p_p)} + e^{F(q, p_n)}} > \tau \qquad (1)$$
12Assessors were paper authors using Google passage translations. When there was questionable machine translation, a native speaker reviewed the passage.
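The filter in Equation (1) can be implemented directly once a cross-encoder is available. The sketch below assumes a caller-supplied cross_encoder_score(query, passage) function returning the real-valued score $F(q, p)$; that function name is a placeholder (the paper uses a cross-encoder trained on mMARCO), and the default threshold follows the value of 0.15 reported in the text.

```python
import math

def keep_triple(query, positive, negative, cross_encoder_score, tau=0.15):
    """Apply the validation rule of Equation (1): keep a generated triple only
    if the softmax-normalized score of the positive passage exceeds that of the
    negative passage by more than tau (tau must be greater than 0)."""
    s_pos = cross_encoder_score(query, positive)   # F(q, p_p)
    s_neg = cross_encoder_score(query, negative)   # F(q, p_n)
    # Two-way softmax over the pair of scores; a production version would use
    # a numerically stable softmax to avoid overflow for large scores.
    denom = math.exp(s_pos) + math.exp(s_neg)
    margin = math.exp(s_pos) / denom - math.exp(s_neg) / denom
    return margin > tau
```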
Including addition Taiwanese President Tsai Ing-wen\u2019s planned visit to Paraguay The possibility of Tsai offering monetary incentives to Paraguay during her visit Figure 3: Comparison of output when including the prompt addition \u201cNo response should require the recipient to have seen the previous responses\u201d (below) or excluding it (above). where \ud835\udf0fmust be greater than 0; otherwise, the negative passage is more likely to be relevant to the query than the positive passage. We set \ud835\udf0fto 0.15 in our experiments to eliminate noise from the training data. 5 EFFECTIVENESS ANALYSIS In this section, we explore the effectiveness of JH-POLO-generated triples by training retrieval models on them and comparing the performance of those models over different evaluation datasets. Our purpose here is not to try to match state-of-the-art retrieval effectiveness; doing so is the purview of algorithms, and thus outside of the scope of this paper. We offer no new CLIR algorithms. Rather, we show that JH-POLO-generated training data are as good as machine-translated MS MARCO data for collections that match those data well, and superior to MS MARCO data when the two diverge. 5.1 Evaluation Collections We analyze the effectiveness of the JH-POLO methodology with two CLIR test suites \u2013 TREC NeuCLIR 202213 and HC3 [29]. Collection statistics appear in Table 1. These collections form the basis of our effectiveness analysis. The NeuCLIR 2022 dataset contains three sub-collections in Chinese, Persian, and Russian. Documents in NeuCLIR 1 are news articles extracted from Common Crawl News. The HC3 dataset consists of Chinese and Persian Tweet reply threads each containing a root Tweet and up to 100 replies. When generating synthetic training data, we draw passages from the target document collection; therefore, passages are in the domain, genre, and language of the test collection. Passage selection for the two collections differed based on the quality of the written language in the passages. For NeuCLIR 1, a positive passage was chosen randomly from all passages that exceeded a length requirement. Length requirements, which were language-specific, were set to the minimum document lengths imposed by the creators of the NeuCLIR 1 collection: 75 characters for Chinese, 100 characters for Persian, and 200 characters for Russian. To identify a negative passage, the positive passage was used as a query to search the collection, and the resulting passages were ranked using BM25. All BM25 scores were divided by the score of the positive passage. The first passage of sufficient 13The name of document collection is NeuCLIR 1. NeuCLIR 2022 refers to the evaluation suite that contains NeuCLIR 1 and the topics and relevance judgments developed for the TREC NeuCLIR Track 2022. length whose ratio of BM25 scores was less than 0.65 and where no other passages from that document scored higher than 0.65 was selected as the negative passage. For HC3, the length minimums were reduced to 15 characters for Chinese and 25 for Persian. However, a sample generation revealed that this process was insufficient for selecting Tweets with enough content to generate understandable queries. Consequently, we used 10,200 summaries from the WCEP multi-document summarization dataset [14] as queries to select positive passages (this dataset is time-aligned with HC3, and HC3 topics tended to be event-inspired). 
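A sketch of the negative-passage selection rule used for NeuCLIR 1 appears below; the search helper (returning ranked hits with passage text, source document id, and BM25 score), the hit attributes, and the assumption that the positive passage ranks itself highest are all illustrative, not part of any released tooling.

```python
def pick_negative(pos, search, ratio_threshold=0.65, min_chars=75, k=1000):
    """Select a negative passage for `pos` by using the positive passage itself
    as a BM25 query, normalizing scores by the positive passage's own score,
    and taking the first sufficiently long passage whose normalized score is
    below the threshold and whose source document has no passage above it."""
    hits = search(pos.text, k=k)                       # ranked, highest score first
    pos_score = hits[0].score                          # assume pos ranks itself first
    best_per_doc = {}
    for h in hits:                                     # highest normalized score per document
        ratio = h.score / pos_score
        best_per_doc[h.doc_id] = max(best_per_doc.get(h.doc_id, 0.0), ratio)
    for h in hits:
        if (h.doc_id != pos.doc_id                     # must come from a different document
                and len(h.text) >= min_chars           # language-specific length floor
                and best_per_doc[h.doc_id] < ratio_threshold):
            return h
    return None                                        # no qualifying negative found
```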
Since this summarization dataset is in English, we use Google Translate to translate the summaries into the HC3 languages to use as queries for BM25 retrieval. For Chinese, HC3 contains Tweets written in both Traditional and Simplified characters. To retrieve Tweets in either character set, translations were made into both Traditional and Simplified characters, and the two translations were concatenated to form the queries. Because these events were reported in the English media, not all of them aligned well with topics in nonEnglish Tweets. To provide as much diversity as possible, each relevant passage was uniquely paired with a single non-relevant passage; thus no passage was paired with two different passages. Another observed artifact was that re-Tweets greatly increased the presence of exact duplicate substrings. This made it challenging for an LLM to create queries for which only one of the passages was relevant. We handled this problem by imposing the longest common substring constraint. While the BM25 ratio was still used, we raised the threshold to 0.8 to create more passage pairs. However, because BM25 gives great weight to unusual tokens, URLs present in the Tweets introduced an unusual bias, causing Tweets that were related by advertisements rather than by content to be chosen as pairs. To handle this problem, we stripped URLs from all documents before passages were created. In addition, two passages were paired only if they were both retrieved by the initial retrieval and by the positive passage. This led to the creation of fewer than 10,200 pairs. Finally, the \u201csame document\u201d exclusion criterion used for the NeuCLIR 1 was dropped since Twitter conversations are less coherent than Common Crawl News documents. 5.2 Training Examples Generated Table 2 summarizes the number of triples generated by GPT-3 Davinci-3. We generated roughly the same number of triples for all three sub-collections in NeuCLIR 1, with Russian pairs having slightly more triples. Despite the prompt asking for five topics for each passage of the pair, GPT-3 would not necessarily respond \fSynthetic Cross-language Information Retrieval Training Data with the correct number, and not all generated topics would pass the filter, resulting in roughly eight queries per pair. Generation for HC3 is even more challenging, with a fanout of around seven queries per passage pair. While enforcing unique passage pairs may seem desirable, this repetition of passage pairs is similar to the repetition of query and positive passage that is found in MS MARCO training triples. In fact, our generation process actually has less repetition than MS MARCO. In the small training triple file published by MS MARCO, there are roughly 100 negative passages associated with each query, where the vast majority of the queries have only one positive passage. Repetition of query-positive passage pairs is more than ten times that found in JH-POLO. We argue that JH-POLO provides more diverse information in its triples and thus has the potential to lead to better retrieval models. 5.3 Retrieval Models for Effectiveness Analysis We used two neural dense retrieval architectures as representatives for analyzing our methodology: DPR-X [54, 59] and ColBERTX [35]. All models for each retrieval architecture are based on XLMRoBERTa-base [9] started from the same checkpoint and fine-tuned with a retrieval objective using English MS MARCO for 200,000 update steps. 
We vary the source of the training data in the second-stage fine-tuning, which consists of 1,245 steps. This training scheme is designed to expose differences introduced by a small amount of training data, rather than to train state-of-the-art systems. Note that this training scheme does not include any advanced tricks such as iterative hard-negative mining [17, 53], in-batch negative sampling [24, 58], knowledge distillation [18], etc. The training process here is a means for demonstrating the relative effectiveness of JH-POLO as a training resource compared to MS MARCO. JH-POLO training triples were generated with the passage selection processes for each evaluation collection outlined in Section 5.1. GPT-3 Davinci-3 is prompted for queries using an English description along with a pair of passages from the collection. The generated queries, along with the passages, are passed through a cross-encoder trained on mMARCO [5] 14 for validation, as described in Section 4.2. To analyze the effectiveness of JH-POLO, we fine-tune the model in the second stage with the following regimens for comparison: \u2022 English (Eng.). Continues fine-tuning the model with English MS MARCO v1. In this scenario, the model gains knowledge about the non-English language during the initial training of the mPLM, but not during fine-tuning. \u2022 Translate (Trans.). Fine-tuned with MS MARCO v1 documents that have been machine-translated into the language of the target document collection.15 Queries remain in English, so the model is exposed to the CLIR task during continued fine-tuning, but the documents may contain translationese introduced by the machine translation system. This approach is also known as translate-train [35]. 14https://huggingface.co/cross-encoder/mmarco-mMiniLMv2-L12-H384-v1 15We used the NeuMARCO translation provided by the TREC NeuCLIR Track 2022. https://ir-datasets.com/neumarco.html
Table 1: Dataset statistics of NeuCLIR 2022 and HC3.
Collection        Chinese              Persian              Russian
Set               # Qry   # Docs       # Qry   # Docs       # Qry   # Docs
NeuCLIR           47      3,179,209    45      2,232,016    44      4,627,543
HC3               50      5,584,146    50      7,335,221
Table 2: Statistics of the generation results.
                    Passage Pairs   Generated Triples   Valid Triples   Triples Per Pair
NeuCLIR   Chinese   19,401          187,908             154,046         7.94
          Persian   19,432          180,174             153,933         7.92
          Russian   19,348          185,941             159,412         8.24
HC3       Chinese    9,766           86,532              68,679         7.03
          Persian   10,077           88,957              66,535         6.60
Following prior work in CLIR dense retrieval [35, 36], we used the trained models to index the collections by separating the documents into overlapping passages of 180 tokens with a stride of 90. Since both NeuCLIR and HC3 consist of TREC-style topics, we concatenated the titles and descriptions as our search query; these are the same queries used in the official NeuCLIR baseline runs for the reranking subtask. We evaluate the final retrieval effectiveness using nDCG@20 (the primary evaluation metric in TREC NeuCLIR) and Recall at 100 (R@100). 5.4 Effectiveness on News Documents As presented in Table 3, both ColBERT-X and DPR-X benefit more from further fine-tuning with JH-POLO than with more of the original English MS MARCO for both nDCG@20 and Recall@100. Since JH-POLO is naturally cross-language, a model trained on it could learn the definition of relevance directly from the target language pair. However, English MS MARCO can only provide evidence on the relationship between queries and passages; it cannot inform the CLIR system being trained about the target language.
This forces the model to rely solely on the multilinguality of the pretrained language model, resulting in worse retrieval performance than if the training data encapsulated that information. By translating the MS MARCO passages to the target language, a model being trained can learn the cross-language relationships, although the resulting passages will suffer from translationese. As repeatedly observed by prior work [35, 36, 54], this translatetrain approach provides state-of-the-art CLIR effectiveness when training only on MS MARCO but is dependent on the translation quality [35]. When evaluating the models on NeuCLIR 2022, whose documents are similar to MS MARCO passages, models trained with JH-POLO are only slightly worse than their translate-train counterparts. These differences are not statistically significant, indicating that the two approaches are similar and neither consistently outperforms the other on all topics. When evaluating on HC3, which is a very different genre compared to MS MARCO, training with \fMayfield et al. Table 3: Retrieval Effectiveness. \u2217indicates significance with 95% confidence against fine-tuning with English triples using paired t-tests with Bonferrini correction on three tests (over languages). \u2020indicates significance between JH-POLO and finetuning with translated triples using the same statistical test. NeuCLIR 2022 HC3 nDCG@20 R@100 nDCG@20 R@100 Triples Chinese Persian Russian Avg. Chinese Persian Russian Avg. Chinese Persian Avg. Chinese Persian Avg. ColBERT-X Eng. 0.155 0.131 0.227 0.171 0.236 0.290 0.290 0.272 0.198 0.196 0.197 0.361 0.368 0.364 Trans. 0.216\u2217 0.220\u2217 0.267\u2217 0.234 0.320\u2217 0.389\u2217 0.325\u2217 0.345 0.208 0.254\u2217 0.231 0.385 0.400 0.393 JH-POLO 0.211 0.223\u2217 0.241 0.225 0.265 0.372 0.322 0.320 0.236 0.270\u2217 0.253 0.442\u2217 0.419 0.430 DPR-X Eng. 0.139 0.088 0.175 0.134 0.224 0.245 0.235 0.235 0.130 0.115 0.123 0.249 0.254 0.251 Trans. 0.191\u2217 0.155\u2217 0.192 0.179 0.280 0.317\u2217 0.278 0.292 0.177 0.177 0.177 0.322 0.349 0.335 JH-POLO 0.192\u2217 0.132\u2217 0.181 0.168 0.294\u2217 0.343\u2217 0.277 0.305 0.240\u2217 0.269\u2217\u2020 0.255 0.500\u2217\u2020 0.483\u2217\u2020 0.491 JH-POLO significantly outperforms translate; we will discuss this outcome in detail in the next section. Comparing the two retrieval models, DPR-X benefits from JHPOLO more than ColBERT-X does, especially in the bottom part of the ranking (measured by recall). Since DPR-X summarizes each query and passage into a single vector, it must rely on general semantics, not on token-level matching. Therefore, training with JH-POLO, which contains queries that are only relevant to part of the positive passage and do not necessarily have overlapping tokens, improves DPR-X\u2019s ability to understand subtle differences between the passages. In contrast, ColBERT-X focuses on more token-level cross-language alignments through translated passages, directly enhancing its token-level matching. However, triple quality is an artifact of the prompt used to generate it. We value the diversity and the rich queries that our prompt can provide by generating topics instead of keywords. This tendency implicitly benefits DPR-X more than ColBERT-X. If one is only considering training a specific type of retrieval model, the prompt can be adjusted to produce the kind of information the model most needs to optimize its effectiveness. 
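The difference between the two retrieval models discussed here comes down to how a document is scored from precomputed embeddings; the sketch below shows single-vector (DPR-X-style) scoring next to token-level MaxSim (ColBERT-X-style) scoring. NumPy, the function names, and the assumption that embeddings are already L2-normalized are illustrative choices, not the implementation of either system.

```python
import numpy as np

def dpr_score(query_cls: np.ndarray, doc_cls: np.ndarray) -> float:
    """DPR-X: each text is summarized by a single (CLS) vector; the score is
    the similarity of the two vectors."""
    return float(query_cls @ doc_cls)

def colbert_score(query_tok: np.ndarray, doc_tok: np.ndarray) -> float:
    """ColBERT-X: query_tok is (num_query_tokens, dim), doc_tok is
    (num_doc_tokens, dim). For every query token take the maximum similarity
    over all document tokens, then sum over query tokens (MaxSim)."""
    sim = query_tok @ doc_tok.T            # pairwise token similarities
    return float(sim.max(axis=1).sum())
```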
Again, we do not claim to reach the state-of-the-art CLIR effectiveness simply by training on JH-POLO; such performance would require numerous optimizations, such as using XLM-Roberta large rather than XLM-Roberta base, fine-tuning for many steps beyond the reported two-stage regimen, using in-batch negatives, generating perhaps orders of magnitude more training examples, and so on. But what we do see here is that on a collection that is similar to the MS MARCO collection, JH-POLO generates training data that is on par with machine-translated MS MARCO data. 5.5 Effectiveness on Tweets When building a CLIR engine to search text that differs from the web articles that make up MS MARCO, training on JH-POLO provides dramatic improvements over MS MARCO. When training with JHPOLO-generated triples on HC3, both nDCG@20 and Recall@100 outperform translate-training with MS MARCO. While translating the MS MARCO passages into the target language helps the retrieval model cross the language barrier, the gap between the training genre and the HC3 passages is still large. JH-POLO fills this gap by directly exposing the model to Tweets during retrieval fine-tuning. Such exposure directly translates to effectiveness improvements across all regions of the ranked list. Interestingly, DPR-X is on par with, and sometimes better than, ColBERT-X when trained with JH-POLO. This is unusual, as ColBERTX generally outperforms DPR-X [54]. We hypothesize that ColBERTX requires more training data to learn how to match in a new genre than does DPR-X; while ColBERT-X must adjust all term matches, DPR-X only needs to adjust how its CLS token is created. In this case, DPR-X is more efficient at absorbing the cross-language and cross-genre knowledge provided by JH-POLO. Therefore, we argue that the smaller improvement in ColBERT-X when training on JHPOLO is not necessarily the result of ineffective JH-POLO triples, but of the nature of the retrieval model when searching across genre. Nevertheless, JH-POLO numerically improves ColBERT-X\u2019s performance on HC3, although the difference is not statistically significant. Of particular note is the JH-POLO performance in Persian, where in three of the four collection-retrieval system pairs JH-POLO outperforms translate, one of which is a statistically significant difference. Given that Persian is a lower resources language, machine translation does not perform as well in general [35]. This indicates the using a high performing generative LLM may be able to provide better training data than machine translation. 5.6 Analysis of Examples Figure 4 presents two passage pairs and the queries that GPT-3 Davinci-3 generated from them. Each passage pair is connected by an identifiable thread (eruptions in the top of the figure; voting in the bottom). Because of the way they were selected, these passages tend to contain more information than a randomly selected Tweet. Many of the topics for the eruptions are similar, but are specific to the eruption mentioned in the positive passage. We do see that occasionally there is a query for which there is no further information in the passage, such as the location of Popocat\u00e9petl. Because the bottom passages are less formal, the queries are more general. 
In particular, the bottom Passage A produces only \fSynthetic Cross-language Information Retrieval Training Data Passage A: 1\u670825\u65e5,\u4f4d\u4e8e\u83f2\u5f8b\u5bbe\u963f\u5c14\u62dc\u7701\u7684\u9a6c\u8363\u706b\u5c71\u55b7\u51fa\u706b\u5c71\u7070\u3002\u9a6c \u8363\u706b\u5c71\u4f4d\u4e8e\u83f2\u5f8b\u5bbe\u5415\u5b8b\u5c9b\u4e1c\u5357\u90e8\u7684\u963f\u5c14\u62dc\u7701\uff0c\u8ddd\u83f2\u9996\u90fd\u9a6c\u5c3c\u62c9\u7ea6330\u516c \u91cc\uff0c\u6d77\u62d4\u7ea62400\u7c73,\u662f\u83f2\u5883\u5185\u6700\u6d3b\u8dc3\u7684\u706b\u5c71\u4e4b\u4e00\u3002\u622a\u81f324\u65e5\uff0c\u5df2\u6709\u8d85 \u8fc77\u4e07\u4eba\u88ab\u758f\u6563\u51fa\u9a6c\u8363\u706b\u5c71\u9644\u8fd1\u7684\u5371\u9669\u533a\u57df\u3002 Volcanic ash spews from Mayon Volcano in Albay, Philippines, on January 25. Mayon Volcano is located in Albay Province in the southeast of Luzon Island in the Philippines, locating about 330 kilometers away from Manila, the capital of the Philippines, and is about 2,400 meters above sea level. It is one of the most active volcanoes in the Philippines. As of the 24th, more than 70,000 people have been evacuated from the danger zone near Mayon Volcano. Passage B: \u5f53\u5730\u65f6\u95f413\u65e5\u65e9\u4e0a5\u70b943\u5206\uff0c\u4f4d\u4e8e\u58a8\u897f\u54e5\u5e02\u8fd1\u90ca\u7684#\u6ce2\u6ce2\u5361 \u7279\u4f69\u7279\u706b\u5c71\u7a81\u7136\u55b7\u53d1\uff0c\u55b7\u51fa\u5927\u91cf\u706b\u5c71\u7070\uff0c\u968f\u5373\u5192\u51fa\u767d\u70bd\u5ca9\u3002\u6ce2\u6ce2\u5361\u7279 \u4f69\u7279\u706b\u5c71\u9ad8\u8fbe5426\u7c73\uff0c\u662f\u58a8\u897f\u54e5\u7b2c\u4e8c\u9ad8\u5cf0,\u4e5f\u662f\u4e16\u754c\u4e0a\u6700\u6d3b\u8dc3\u7684\u706b\u5c71\u4e4b \u4e00\u3002 At 5:43 a.m. local time on the 13th, the #Popocat\u00e9petl volcano located in the outskirts of Mexico City suddenly erupted, spewing out a large amount of volcanic ash, followed by incandescent rocks. With a height of 5,426 meters, Popocatepetl is the second highest mountain in Mexico and one of the most active volcanoes in the world. Passage A being relevant: \u2713The location of Mt. Mayon in the Philippines \u2713The evacuation of more than 70,000 people from the area near Mt. Mayon \u2713The elevation of Mt. Mayon \u2713The activity of Mt. Mayon compared to other volcanoes in the Philippines \u2713The date of the eruption of Mt. Mayon Passage B being relevant: \u2713The location of Popocat\u00e9petl volcano near Mexico City \u2713The elevation of Popocat\u00e9petl \u2713The activity of Popocat\u00e9petl compared to other volcanoes in Mexico \u2713The date of the eruption of Popocat\u00e9petl \u2713The composition of the materials emitted by Popocat\u00e9petl during the eruption Passage A: \u6b22\u8fce\u548c\u6211\u4e00\u8d77\u5728\u7f8e\u56fd\u5927\u9009\u4e2d#\u6210\u4e3a\u9009\u6c11\u3002\u9a6c\u4e0a\u5bfb\u627e\u4f60\u7684\u6295 \u7968\u7ad9\uff0c\u770b\u770b\u4f60\u53ef\u4ee5\u4e3a\u8c01\u6295\u7968\u3002\u6211\u63a8\u4e3e\u6211\u4eec\u656c\u7231\u7684\u4e60\u8fd1\u5e73\u603b\u4e66\u8bb0\uff01\u5e0c\u671b \u4f60\u4e5f\u80fd\u6295\u4ed6\u4e00\u7968 Join me in #becoming a voter in the US election. Find your polling place now and see who you can vote for. I recommend our beloved General Secretary Xi Jinping! 
I hope you can vote for him too Passage B: \u7f8e\u56fd\u5927\u9009\u65e5\u5b9a\u5728\u793c\u62dc\u4e8c\uff0c\u662f\u56e0\u4e3a\u5f53\u65f6\u7f8e\u56fd\u4eba\u591a\u4e3a\u65b0\u6559\u5f92\u519c \u6c11\uff0c\u5468\u65e5\u53bb\u8fc7\u6559\u5802\uff0c\u5468\u4e00\u52a8\u8eab\u51fa\u53d1\uff0c\u5468\u4e8c\u5230\u8fbe\u6295\u7968\u7ad9\u6295\u7968\u3002\u7531\u6b64\uff0c\u5927 \u9009\u65e5\u5386\u6765\u662f\u4eb2\u670b\u597d\u53cb\u96be\u5f97\u805a\u4f1a\u7684\u597d\u65e5\u5b50\uff0c\u5927\u5bb6\u4f1a\u5e26\u4e0a\u9762\u7c89\u3001\u767d\u83dc\u548c\u8089 \u9985\uff0c\u5230\u6295\u7968\u7ad9\u4e00\u8d77\u5305\u997a\u5b50\uff0c\u5305\u597d\u4e86\u4e00\u8fb9\u5403\u4e00\u8fb9\u6295\u3002\u800c\u997a\u5b50\u4e0d\u54ac\u5f00\u5c31\u4e0d \u77e5\u9053\u4ec0\u4e48\u9985\uff0c\u4e5f\u5bd3\u610f... The U.S. election day is set on Tuesday because Americans were mostly Protestant farmers at that time, they went to church on Sunday, leave by Monday, and arrived at polling stations by Tuesday to vote. Therefore, the general election calendar has always been a rare good day for relatives and friends to gather. Everyone will bring flour, cabbage and minced meat to the polling station to make dumplings together, and vote while eating. And you don\u2019t know the kind of stuffing of the dumplings until you take a bite, which also means... Passage A being relevant: \u2713Endorsement of Xi Jinping \u2713Inviting others to join in voting for a particular candidate \u2717The importance of voting in the US election \u2717The importance of collective voting and participation \u2717The necessity of actively seeking out one\u2019s local voting station Passage B being relevant: \u2713A detailed history of the US voting system \u2713Chinese-American cultural customs \u2713How US citizens of different religions view election day \u2713Traditional Chinese foods associated with the US election \u2713The importance of family and friends gathering on election day Figure 4: Sample queries generated by JH-POLO. Text in italics is the translation of the corresponding Chinese Tweet. \u2713and \u2717indicate whether the generated query passed the cross-encoder filter. two queries; the other three were filtered out during the validation step and are marked with an \u2717. The remaining queries are supported by the passage. In the queries for Passage B, the first one clearly identifies a topic in the passage; however, the third query concerning religions is not well supported, as the passage does not explain the viewpoints of Protestant farmers. While one might question the connection between traditional Chinese food and US elections, the passage does include information on that, and the LLM captures it well. 6 COST GPT-3 is not free; the cost of producing a CLIR training collection using JH-POLO depends on the size of the collection and the cost of GPT-3 per request. At this writing, GPT-3 Davinci-3 (the most capable model) costs us US$0.02 per 1000 subword tokens (the sum of the number of tokens in the prompt and in the output). Subwords are produced by the GPT-2 tokenizer,16 which is similar 16https://beta.openai.com/tokenizer to SentencePiece.17 Thus, our training corpus built on the Chinese NeuCLIR collection cost us about US$400 to produce, while Persian and Russian cost about 20% more. GPT-3 throughput has been about two prompts per second with ten concurrent processes, allowing us to create the collections in about 16 hours. 
The cost per 1000 training examples for the NeuCLIR collections averaged US$3 for the prompt shown in Figure 2. The number of requests is the same as the number of passage pairs shown in Table 2. While the cost does add up, it is orders of magnitudes cheaper than producing a dataset annotated by humans, such as MS MARCO. 7" + } + ], + "Eugene Yang": [ + { + "url": "http://arxiv.org/abs/2204.11989v1", + "title": "C3: Continued Pretraining with Contrastive Weak Supervision for Cross Language Ad-Hoc Retrieval", + "abstract": "Pretrained language models have improved effectiveness on numerous tasks,\nincluding ad-hoc retrieval. Recent work has shown that continuing to pretrain a\nlanguage model with auxiliary objectives before fine-tuning on the retrieval\ntask can further improve retrieval effectiveness. Unlike monolingual retrieval,\ndesigning an appropriate auxiliary task for cross-language mappings is\nchallenging. To address this challenge, we use comparable Wikipedia articles in\ndifferent languages to further pretrain off-the-shelf multilingual pretrained\nmodels before fine-tuning on the retrieval task. We show that our approach\nyields improvements in retrieval effectiveness.", + "authors": "Eugene Yang, Suraj Nair, Ramraj Chandradevan, Rebecca Iglesias-Flores, Douglas W. Oard", + "published": "2022-04-25", + "updated": "2022-04-25", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.CL" + ], + "main_content": "INTRODUCTION Dense retrieval models, such as ColBERT [24], ANCE [41], and DPR [23], have been adapted to cross language ad-hoc retrieval (CLIR) where queries and documents are in different languages by replacing monolingual embedding with a multilingual embeddings (e.g., mBERT [11] and XLM-R [8]). These dense retrieval models learn to encode queries and documents separately into fixedlength dense representations by fine-tuning a pretrained model Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org. SIGIR \u201922, July 11\u201315, 2022, Madrid, Spain \u00a9 2022 Association for Computing Machinery. ACM ISBN 978-1-4503-8732-3/22/07...$15.00 https://doi.org/10.1145/3477495.3531886 Continued Language Model Pretraining Task-specific Finetuning for Retrieval Pretrained Language Model, e.g., XLM-R Figure 1: Pipeline for training a dense retrieval model. We propose an additional pretraining phase targeting CLIR. (e.g, BERT [11]) with a retrieval objective using a large number of query-document pairs such as the ones from MS MARCO [2] or Natural Questions [25]. Recent work showed that these models are effective for CLIR when trained with monolingual query-document pairs, enabling zero-shot transfer [28, 32, 37]. Alternatively, training the model with translated MS MARCO (translate-train) is more effective but also much more expensive [32, 39]. However, most pretrained language models do not explicitly ensure the representations of a pair of related texts are similar [15]. 
This calls for a task-specific fine-tuning process to retrofit the representation produced by the pretrained model to be closer between related or relevant text. Such processes can be complex and computationally expensive, such as RocketQA [35], and, thus, efficient multi-stage training that introduces a \u201ccontinued pretraining\u201d to the pipeline was proposed for monolingual retrieval [16, 20] before running a task-specific fine-tuning with retrieval objectives (illustrated in Figure 1). By construction, the representations for the text conveying similar information in different languages are not necessarily similar, since multilingual pretrained models such as mBERT and XLM-R do not introduce parallel text during pretraining. In other settings, incorporating alignment information into the retrieval model has been shown to be useful for CLIR [10, 19]. We hypothesize that explicitly promoting token-level similarity during the pretraining phase will enhance the effectiveness of CLIR models. To address the aforementioned issues, we propose C3, a continued pretraining approach leveraging weak supervision with document-aligned comparable corpora to encourage representations of the text with similar meaning in different languages to be more similar using contrastive learning. This continued pretraining phase modifies an off-the-shelf pretrained model before it is fine-tuned to the actual retrieval objective, as illustrated in Figure 1. Specifically, we model the similarity between a pair of texts using contrastive learning with token-level embeddings to encourage the arXiv:2204.11989v1 [cs.IR] 25 Apr 2022 \fSIGIR \u201922, July 11\u201315, 2022, Madrid, Spain Yang, et al. model to embed token-level similarity and alignment information. We use Wikipedia articles in the language pair of interest, linking them based on cross-language links present in Wikipedia. We test this using high-resource languages for which strong evaluation resources are available. but this weakly supervised approach could also be applied to lower resource languages in which alternative approaches that rely on parallel text might prove impractical. To summarize, our contributions are two-fold. First, we propose to continually pretrain the model for CLIR dense retrieval to promote similar representations between texts with similar meanings in different languages, using contrastive learning. To our best knowledge, this is the first work that applies contrastive learning to CLIR pretraining in this way. Secondly, we do this in a way that relies only on weakly supervised document-scale links in Wikipedia. 2 BACKGROUND AND RELATED WORK Since the introduction of pretrained transformer-based language models, neural retrieval models have been taking advantage of these models for more effective query-document matching. Early work in monolingual retrieval involved building cross-encoder models [9, 29, 33] that leveraged the full interaction between the queries and documents to produce the relevance scores. Subsequently, similar models [19, 22, 38, 42, 44] were adapted to the CLIR setting. While effective, such models can only rerank documents since they process both queries and documents during inference and thus yield a longer running time compared to traditional sparse retrieval techniques such as BM25 [36]. DPR-style dense retrieval models overcome this limitation by scoring the documents based on the similarity of their representations, which allows the language models to encode documents beforehand. 
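The practical consequence of separating the query and document encoders is that document representations can be computed once, offline; only the query is encoded at search time. The sketch below assumes an encode function mapping text to a fixed-length vector (standing in for the bi-encoder's CLS output); names and shapes are illustrative.

```python
import numpy as np

def build_index(documents, encode):
    # Offline: encode every document once and stack the vectors into a matrix.
    return np.vstack([encode(d) for d in documents])       # (num_docs, dim)

def dense_search(query, doc_matrix, encode, k=10):
    # Online: encode only the query, then score all documents by dot product
    # against the precomputed document matrix.
    q = encode(query)                                       # (dim,)
    scores = doc_matrix @ q                                 # (num_docs,)
    top = np.argsort(-scores)[:k]
    return [(int(i), float(scores[i])) for i in top]
```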
However, the representations produced by off-the-shelf language models are undertrained [16] and thus directly scoring documents with such representations yields suboptimal retrieval results [27]. Additional task-specific fine-tuning with relevance labels produces much better representations on either the sequence level, such as ANCE [41] and DPR [23], or the token level, such as ColBERT [24]. Especially for sequence level representations, often summarized in the CLS token, contrastive learning [6, 34] that trains a model with one positive and multiple negative examples for each query has been shown to be one of the most effective training techniques for dense retrievers [21, 35, 41]. In-batch negative sampling further reduces the memory requirement by treating positive examples for other queries as negative [16, 23]. A similar objective was utilized to pretrain the cross-encoder retrieval model for the CLIR task [43]. Continuing pretraining the off-the-shelf language model has been investigated in mono-lingual retrival [5, 13, 16]. Specifically, coCondenser [16] continued pretraining of the language model with a passage-containing classification task (i.e., determining if a pair of passages belong to the same document) through contrastive learning on the representation of the passages for monolingual IR before fine-tuning it as a DPR model. This weakly supervised pretraining teaches the model to bring the representation of passages extracted from the same documents closer, which benefits the downstream dense retrieval task by assuming passages from the same document convey a similar meaning. coCondenser also trains with a masked language modeling task on the Condenser head [15] that adds two additional layers at the end of the network with the embeddings of CLS from the last layer and the rest from the middle layer. This Condenser head is removed after pretraining, but it has been shown to adjust the encoder effectively. Recently, dense retrieval models have been adapted to CLIR by replacing the encoder with a multilingual pretrained model, such as mBERT, XLM or XLM-R [3, 27, 32]. To utilize existing monolingual collections with a large number of relevance labels such as MS MARCO [2], dense retrievers with multilingual embeddings can be trained on such corpora with zero-shot transfer to CLIR by leveraging the multilinguality of the encoder [28, 32]. Alternatively, with the help of translation models, one can translate the monolingual training collection into the language pair of interest and train the retriever on it (a \u201ctranslate-train\u201d approach) [32, 38]. This training scheme encourages the model to bring the representations of related queries and documents closer across languages. However, training effective translation models can be resource-intensive. Besides the challenges in obtaining the translation, teaching the models two complex tasks jointly can also be tricky. Learning from coCondenser, a two-stage process with a continued language model pretraining followed by task-specific fine-tuning can help the model acquire knowledge incrementally. The following section introduces a pretraining approach that encourages the model to bring the representations of passages in different languages with similar meanings closer before fine-tuning with retrieval objectives. 
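Before turning to C3 itself, the in-batch contrastive training described above can be made concrete with a short sketch. The code below is a minimal PyTorch-style illustration, not the implementation of any of the systems cited here; it assumes sequence-level embeddings such as CLS vectors, treats the paired text in the same batch position as the positive, and uses all other pairs in the batch as negatives.

import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(query_emb, doc_emb, temperature=0.05):
    # query_emb, doc_emb: (batch, dim) sequence-level embeddings (e.g., CLS vectors);
    # doc_emb[i] is the positive for query_emb[i], all other rows are in-batch negatives.
    query_emb = F.normalize(query_emb, dim=-1)
    doc_emb = F.normalize(doc_emb, dim=-1)
    # (batch, batch) similarity matrix; diagonal entries correspond to the positive pairs.
    scores = query_emb @ doc_emb.T / temperature
    targets = torch.arange(scores.size(0), device=scores.device)
    return F.cross_entropy(scores, targets)

The temperature here is an arbitrary illustrative value; actual systems tune it or omit it.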
3 C3: CONTINUED PRETRAINING WITH CONTRASTIVE LEARNING FOR CLIR In this section, we introduce C3, a continued pretraining approach with contrastive learning that encourages similar representations of a pair of texts across languages. The language model learns to establish a semantic space containing the two languages of interest with meaningful similarity by training with this objective. Specifically, consider a comparable corpus with linked document pairs (d^S_i, d^T_i) in languages S and T (i.e., pairs of documents in different languages containing similar information). Given a list of such document pairs [(d^S_1, d^T_1), (d^S_2, d^T_2), \dots, (d^S_n, d^T_n)], we construct a list of spans [s^S_1, s^T_1, s^S_2, s^T_2, \dots, s^S_n, s^T_n] by randomly sampling one span from each document. Let h^L_i be the sequence of token representations of span s^L_i, where L \in \{S, T\}. We construct its SimCLR [6] contrastive loss as

\mathcal{L}^{co}_{i,L} = -\log \frac{\exp\big(f(h^S_i, h^T_i)\big)}{\sum_{j=1}^{n} \sum_{k \in \{S,T\}} \mathbb{1}\big[(j,k) \neq (i,L)\big] \, \exp\big(f(h^L_i, h^k_j)\big)}

with \mathbb{1}[\cdot] being the indicator function and f(h_1, h_2) being the similarity function between representations h_1 and h_2. This contrastive loss is similar to the one proposed in coCondenser [16] but encourages the model to learn different knowledge. Instead of sampling pairs of spans from the same document, we construct the pair by sampling one span from each side of the linked documents. The loss promotes the representations h^S_i and h^T_i to be closer while discouraging representations of spans in the same language from being similar (since k can be the same as L). This construction pushes the encoder away from clustering text in the same language in the semantic space and pulls the text across languages with similar meanings closer, while retaining distributional robustness by randomly matching the spans in the documents. To promote token-level similarities, we apply the MaxSim operator proposed in ColBERT [24] as the similarity function f(h_1, h_2). Specifically, the function can be written as

f(h_1, h_2) = \sum_{i=1}^{|h_1|} \max_{j = 1, \dots, |h_2|} h_{1i} \cdot h_{2j}^{\top}

where |h| denotes the number of tokens in the corresponding span and h_{\bullet k} denotes the representation of the k-th token of that span. With this similarity function, the contrastive learning loss flows into the token representations to explicitly promote token alignment in the semantic space. Finally, we combine \mathcal{L}^{co}_{i,L} with the masked language modeling losses \mathcal{L}^{mlm}_{i,L} and \mathcal{L}^{cdmlm}_{i,L} on span s^L_i from the transformer network and the Condenser head [15], respectively, to train the bottom half of the network more directly. Therefore, the total loss \mathcal{L} can be expressed as

\mathcal{L} = \frac{1}{2n} \sum_{i=1}^{n} \sum_{L \in \{S,T\}} \Big[ \mathcal{L}^{co}_{i,L} + \mathcal{L}^{cdmlm}_{i,L} + \mathcal{L}^{mlm}_{i,L} \Big]

Table 1: Reranking effectiveness of ColBERT and DPR models with and without our C3 pretraining. The top shows XLM-RoBERTa-base models; the bottom shows XLM-align-base models. Symbols indicate statistically significant differences at p < 0.05 by a two-tailed paired t-test with Bonferroni correction for 6 tests, either between runs with and without C3 (*) or between C3 and the original BM25 results (\u2020). \u0394 shows the mean relative improvement from C3 across the 6 collections. Columns report, in order, HC4 Chinese, HC4 Persian, NTCIR Chinese, CLEF Persian, CLEF German, CLEF French, and \u0394, first for nDCG@100 and then for nDCG@10.
QT + BM25: nDCG@100 0.362 0.354 0.264 0.336 0.419 0.563; nDCG@10 0.258 0.251 0.229 0.407 0.379 0.505
XLM-RoBERTa (base), ColBERT, without C3: nDCG@100 0.352 0.385 0.249 0.283 0.510 0.590; nDCG@10 0.248 0.277 0.223 0.325 0.513 0.514
XLM-RoBERTa (base), ColBERT, with C3: nDCG@100 *0.444 0.391 0.278 \u2020*0.286 \u20200.521 0.574 (\u0394 +8%); nDCG@10 *0.345 0.274 0.255 0.337 \u20200.535 0.482 (\u0394 +11%)
XLM-RoBERTa (base), DPR, without C3: nDCG@100 0.330 0.319 0.218 0.259 0.467 0.531; nDCG@10 0.223 0.220 0.184 0.299 0.449 0.449
XLM-RoBERTa (base), DPR, with C3: nDCG@100 *0.395 0.341 0.255 \u20200.266 \u20200.503 0.562 (\u0394 +10%); nDCG@10 *0.287 0.226 0.231 \u20200.302 \u2020*0.523 0.491 (\u0394 +15%)
XLM-align (base), ColBERT, without C3: nDCG@100 0.425 0.399 0.303 0.252 0.523 0.579; nDCG@10 0.332 0.294 0.283 0.285 0.532 0.478
XLM-align (base), ColBERT, with C3: nDCG@100 \u2020*0.483 0.400 \u20200.330 0.275 \u20200.528 0.588 (\u0394 +4%); nDCG@10 \u2020*0.408 0.280 \u20200.316 0.321 \u20200.536 0.499 (\u0394 +6%)
XLM-align (base), DPR, without C3: nDCG@100 0.385 0.366 0.260 0.235 0.480 0.581; nDCG@10 0.300 0.256 0.239 0.265 0.482 0.503
XLM-align (base), DPR, with C3: nDCG@100 0.421 0.403 0.286 \u20200.244 \u20200.503 0.586 (\u0394 +6%); nDCG@10 0.324 0.312 0.264 \u20200.279 \u20200.520 0.506 (\u0394 +8%)

4 EXPERIMENTS AND ANALYSIS Our experiment follows the workflow in Figure 1. In this specific study, we use English as our pivot language for the queries and Chinese, Persian, French, and German as our document languages. However, we argue that C3 is generalizable to other language pairs. In the rest of the section, we discuss our experiments\u2019 data, models, setup, and results. 4.1 Datasets To continue pretraining the off-the-shelf pretrained models with C3, we assembled linked Wikipedia articles on the same topic in different languages. Specifically, we leveraged CLIRMatrix [40], a retrieval collection that uses the article titles as the queries to retrieve documents for 19,182 language pairs.
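Given linked span pairs drawn from comparable documents such as those in CLIRMatrix, the MaxSim-based contrastive objective of Section 3 can be sketched in code. The following is a minimal PyTorch sketch under simplifying assumptions (unbatched tensors, no masking, no Condenser head, and illustrative function names); it is not the authors' implementation.

import torch

def maxsim(h1, h2):
    # ColBERT-style MaxSim: for each token of h1, take the max dot product with
    # any token of h2, then sum over the tokens of h1.  h1: (len1, dim), h2: (len2, dim).
    return (h1 @ h2.T).max(dim=1).values.sum()

def c3_span_contrastive_loss(spans_src, spans_tgt):
    # spans_src, spans_tgt: lists of (len, dim) token-embedding tensors, where
    # spans_src[i] and spans_tgt[i] are sampled from a linked document pair.
    # Every other span in the batch, in either language, acts as a negative.
    n = len(spans_src)
    all_spans = [(i, h) for i, h in enumerate(spans_src)] + \
                [(i, h) for i, h in enumerate(spans_tgt)]
    loss = 0.0
    for idx, (i, anchor) in enumerate(all_spans):
        positive = spans_tgt[i] if idx < n else spans_src[i]
        numerator = torch.exp(maxsim(anchor, positive))
        denominator = sum(torch.exp(maxsim(anchor, other))
                          for jdx, (_, other) in enumerate(all_spans) if jdx != idx)
        loss = loss - torch.log(numerator / denominator)
    return loss / len(all_spans)

A production implementation would batch the MaxSim computations and add the masked language modeling terms, but the structure of the objective is as above.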
For each language pair, we extract all query and document pairs with relevance score 6, which are the Wikipedia pages on the same topic as asserted by inter-wiki links (one query only has one document with a score of 6 given a specific language pair). These documents are linked to construct the comparable corpus. We extracted 586k, 586k, 1,283k, and 1,162k document pairs for Chinese, Persian, French, and German, respectively. For task-specific fine-tuning, we use the \u201csmall\u201d training triples provided in MSMARCO-v1, which consists of 39 million triples of query, positive, and negative passages. We evaluate the final retrieval models on HC4 [26], a newly constructed evaluation collection for CLIR, for Chinese and Persian, NTCIR [31] for Chinese, CLEF 08-09 for Persian [1, 14], and CLEF 03 [4] for French and German. HC4 consists of 50 topics for each language. We have 100 topics for NTCIR and CLEF 08-09 Persian and 60 topics for CLEF 03 French and German. We use the title in English as our evaluation queries. Despite experimenting with relatively high resource language pairs, we argue that there is no language-specific component in C3. We believe C3 is applicable to language pairs that have a similar number of linked Wikipedia pages. 4.2 Experiment Setup We test our proposed approach with XLM-R-base [8]. Additionally, we also tested XLM-align-base [7], which is a variant of XLM-R-base pretrained with parallel text in 14 language pairs and multilingual text in 94 languages. All text in our experiments is tokenized by Sentence BPE [8], which XLM-R uses. We construct the spans from document pairs with a window of 180 tokens. We pretrain the model with C3 for 100,000 gradient update steps with an initial learning rate set to 5 \u00d7 10^-6 using 4 GPUs with 24GB of memory each. We leveraged Gradient Cache [18] to run with batches of 64 document pairs (16 per GPU). We tested two dense retrieval models: ColBERT [24] and DPR [23]. After pretraining, each model is fine-tuned with the retrieval objective (either ColBERT or DPR) for 200,000 steps, also using 4 GPUs with a learning rate set to 5 \u00d7 10^-6 for each query-document language pair. We use the original implementation of ColBERT for its fine-tuning and Tevatron [17] for DPR with a shared encoder for queries and documents. Both retrieval models are tested in a reranking setting, where all models rerank the top-1000 documents retrieved by BM25 with machine translated queries. The machine translation models for Chinese and Persian were trained using AWS Sockeye v2 [12] with 85M and 12M general domain parallel sentences for each language pair, respectively. We used Google Translate for German and French.
Table 2: Ablation study of different similarity functions used in contrastive learning, with and without the Condenser head (Cond.). Values are nDCG@100 on the HC4 Chinese test set; columns are the contrastive similarity used (None, CLS, MaxSim).
XLM-R, ColBERT, without Cond.: None 0.352, CLS 0.389, MaxSim 0.410
XLM-R, ColBERT, with Cond.: None \u2013, CLS 0.391, MaxSim 0.444
XLM-R, DPR, without Cond.: None 0.330, CLS 0.382, MaxSim 0.381
XLM-R, DPR, with Cond.: None \u2013, CLS 0.368, MaxSim 0.395
XLM-align, ColBERT, without Cond.: None 0.425, CLS 0.482, MaxSim 0.474
XLM-align, ColBERT, with Cond.: None \u2013, CLS 0.457, MaxSim 0.483
XLM-align, DPR, without Cond.: None 0.385, CLS 0.406, MaxSim 0.406
XLM-align, DPR, with Cond.: None \u2013, CLS 0.408, MaxSim 0.421
4.3 Results and Analysis Table 1 summarizes the main results of our experiments, which indicate that building dense retrieval models using C3 yields better effectiveness.
When starting from XLM-R, C3 provides an 8% relative improvement in nDCG@100 (and 11% in nDCG@10) over directly fine-tuning a ColBERT model. The model benefits from the warm start before training with relevance labels by pretraining with a similar objective (MaxSim) with weakly supervised text. On the other hand, we observe a slightly larger gain on DPR, suggesting even retrieval models that score documents with sequence representations (i.e., embeddings of CLS tokens) benefit from a task that promotes token-level similarity. The improvement in the retrieval effectiveness by C3 is less substantial when starting from XLM-align (at most 6% in nDCG@100 compared to 10%). Since XLM-align is trained with parallel text, its ability to create relationships between text across languages is better than XLM-R, resulting in a diminishing return from investing computation resources in pretraining. Nevertheless, C3 still provides more effective retrieval results across languages. Among the evaluated language pairs, English-French is particularly interesting. Applying C3 yields negative \u201cimprovements\u201d in some cases. As English and French have a close relationship linguistically, we suspect the original XLM-R model, which is not trained with parallel text, already establishes an effective cross-language Figure 2: ColBERT models with zero-shot transfer (ZS) and translate-train (TT) approaches using XLM-RoBERTa-base on HC4 Chinese test set. The dashed line demonstrate the nDCG@100 value for XLM-RoBERTa-large with both approaches. semantic space. Continued pretraining with C3 may simply not be necessary in such a case. Notably, XLM-align, which initialized its parameters by XLM-R, also yields worse retrieval results (0.590 to 0.579 in nDCG@100 and 0.514 to 0.478 in nDCG@10), which further supports our observation. Note that all our reranking models underperform BM25 on CLEF Persian collection. After evaluating separately on topics generated in CLEF 2008 and 2009, we discovered that the topic characteristics are different between the two (nDCG@100 of 0.421 on 08 and 0.250 on 09 for BM25). Models pretrained with C3 underperform BM25 in 2008 topics, but are at least on par with BM25 on 2009 topics. While this effect deserves further investigation, we note that queries for this collection were originally created in Persian and then translated into English, possibly initially by nonnative speakers [1, 14]. Perhaps the English queries in 2009 better match the English for which our models have been trained. Nevertheless, C3 still improves the pretrained language models in these cases. Comparing the average relative improvements (over all six test collections) that result from applying C3, we consistently see somewhat stronger relative improvements with nDCG@10 than with nDCG@100. From this we conclude that the effects of the improved modeling are particularly helpful nearer to the top of the ranked list, where interactive users might be expected to concentrate their attention. To investigate our hypothesis regarding the utility of token-level similarity, we evaluate models in which different similarity functions were used as a basis for contrastive learning in continued pretraining. Using the CLS token in this way is similar to the coCondenser model. Results in Table 2 suggest that with the Condenser head, as implemented in the coCondenser model, pretraining with MaxSim similarity as the contrastive learning objective produces better retrieval models. 
The improvement is minimal without the Condenser head, indicating that token-level similarity benefits from routing information directly to the bottom half of the network. Interestingly, the second-best approach among the four combinations is CLS-based contrastive learning without using the Condenser head, which contradicts the original proposal of coCondenser. However, any continued pretraining is rewarding. Despite the competition \fC3: Continued Pretraining with Contrastive Weak Supervision for Cross Language Ad-Hoc Retrieval SIGIR \u201922, July 11\u201315, 2022, Madrid, Spain among the variants, all language models with continued pretraining outperform their original off-the-shelf version. Finally, we ask the question: what if we can afford to translate MS MARCO so that we can use a translate-train model? To investigate, we utilize the Chinese translation of the MSMARCO-v1 training triples from ColBERT-X [32], which can also be accessed via ir_datasets [30] with the dataset key neumarco/zh1. Figure 2 shows that without C3, the ColBERT model improves from 0.352 to 0.421, which is still worse than zero-shot transfer models trained with C3 for CLIR, suggesting allocating effort to C3 rather than training a translation model when computational resources are limited. When both are affordable, the effectiveness (0.457) is on par with zero-shot transfer a ColBERT model with XLM-R-large (0.451), which is even more expensive to train. With translate-train, ColBERT with XLM-R-large achieves close to 0.5 in nDCG@100 but requires more computational resources to run. 5" + }, + { + "url": "http://arxiv.org/abs/2202.11827v2", + "title": "TARexp: A Python Framework for Technology-Assisted Review Experiments", + "abstract": "Technology-assisted review (TAR) is an important industrial application of\ninformation retrieval (IR) and machine learning (ML). While a small TAR\nresearch community exists, the complexity of TAR software and workflows is a\nmajor barrier to entry. Drawing on past open source TAR efforts, as well as\ndesign patterns from the IR and ML open source software, we present an open\nsource Python framework for conducting experiments on TAR algorithms. Key\ncharacteristics of this framework are declarative representations of workflows\nand experiment plans, the ability for components to play variable numbers of\nworkflow roles, and state maintenance and restart capabilities. Users can draw\non reference implementations of standard TAR algorithms while incorporating\nnovel components to explore their research interests. The framework is\navailable at https://github.com/eugene-yang/tarexp.", + "authors": "Eugene Yang, David D. Lewis", + "published": "2022-02-23", + "updated": "2022-04-24", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "main_content": "INTRODUCTION Technology-assisted review (TAR) is the use of information retrieval (IR) and machine learning (ML) technologies to reduce the cost and increase the effectiveness of manual review of large text collections. Application areas include legal discovery [3], systematic literature review in medicine [36], construction of evaluation collections [18], and responses to sunshine law requests [26]. Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. 
Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org. SIGIR \u201922, July 11\u201315, 2022, Madrid, Spain \u00a9 2022 Copyright held by the owner/author(s). Publication rights licensed to ACM. ACM ISBN 978-1-4503-8732-3/22/07...$15.00 https://doi.org/10.1145/3477495.3531663 Workshops such as DESI1, SIRE2, LegalAIIA3, and ALTARS4 have brought these applications to the awareness of the research community. Shared evaluation efforts such as the TREC Legal Track [4, 9, 12, 27, 34], the TREC Total Recall Track [11, 30], and the CLEF eHealth Technology-Assisted Review Tasks [14\u201316] have made data sets and formalized evaluation approaches available to researchers. However, the inherent complexity of TAR tasks, even when abstracted to research data sets, still imposes a substantial barrier to research. Many research questions in TAR focus on optimizing interactions between cost and effectiveness during an evolving review process. Testing a new TAR approach requires exploring variations of, and capturing rich data from, iterative active learning processes and multiple review stages. Further, the dynamics of these algorithms varies strongly not only across tasks (whether real or simulated) but even across choices of starting conditions and random seeds. Expensive large scale experiments are therefore necessary to derive meaningful generalizations. Finally, samplebased effectiveness estimation is itself an object of study in TAR (driven largely by the needs of legal applications) [20]. This raises the stakes for consistency and replicability of evaluation. TARexp is an open-source Python framework intended to reduce barriers to entry for TAR research. We draw on design patterns from operational TAR software and past open source TAR efforts, as well as those from the broader machine learning and information retrieval open source ecosystem, including libact [45] , pyTorch [28], pyTerrier [25], ir-datasets [24], and ir-measures [23]. TARexp allows configuration and dispatching of experimental runs with support for parallel processing, resumption and extension of runs, and reproducibility. It incorporates reference implementations of key components such as supervised learning, active learning, stopping rules, sample-based estimation of effectiveness, and TAR-specific cost visualizations [42]. Interfaces in the form of abstract classes for these components aid researchers in implementing and studying their own approaches. The framework is also compatible with Jupyter Notebooks for running exploratory experiments and visualizing results. The framework is available at https://github.com/eugene-yang/tarexp, and a live demo is available on Google Colab5. 2 BACKGROUND Shared software resources in IR research date back to the 1960s [32]. 
Numerous open source research-oriented retrieval libraries are in use, including Anserini [44], pyTerrier [25], Indri [33], Galago [5], and Patapsco [10]. (Footnotes: 1 users.umiacs.umd.edu/~oard/desi7; 2 http://users.umiacs.umd.edu/~oard/sire11/; 3 https://sites.google.com/view/legalaiia-2021/home; 4 http://altars2022.dei.unipd.it/; 5 https://colab.research.google.com/github/eugene-yang/tarexp/blob/main/examples/exp-demo.ipynb)
setting = component.combine(
    component.SklearnRanker(LogisticRegression, solver='liblinear'),
    component.PerfectLabeler(),
    component.RelevanceSampler(),
    component.FixedRoundStoppingRule(max_round=20))()
workflow = tarexp.OnePhaseTARWorkflow(dataset, setting, seed_doc=[1023],
                                      batch_size=200, random_seed=123)
recording_metrics = [ir_measures.RPrec,
                     tarexp.OptimisticCost(target_recall=0.8, cost_structure=(25, 5, 5, 1))]
for ledger in workflow:
    print(\"Round {}: found {} positives in total\".format(ledger.n_rounds, ledger.n_pos_annotated))
    print(\"metric:\", workflow.getMetrics(recording_metrics))
Figure 1: Sample Python Snippet for Running One-Phase TAR Workflow. Please refer to the online Google Colab Notebook demo page for a full working example.
An even larger ecosystem supports research on ML and natural language processing (NLP), including core IR tasks such as text categorization, summarization, question answering, and recommendation [2, 13, 46]. However, the needs of TAR research are most similar to those in active learning, an area that has seen less open source activity. We have drawn on ideas from libact [45]6, the most notable open source active learning framework. The structure of TARexp is inspired by its modularization of learning algorithms, sampling strategies, and labelers, but we take a more flexible approach. In particular, we allow components that perform multiple roles in a workflow to support, for example, stopping rules that draw on training sample-based estimates. Providing open source code with published research papers is increasingly common, including in several recent TAR studies [22, 39, 41]. These reference implementations are useful in replicating particular studies, but have been less useful at supporting TAR research more broadly. There have been a few open source efforts focused on broader support for TAR experimentation. The TAR Evaluation Toolkit7 enables simulation of a fixed set of active learning workflows on a labeled version of the Enron collection, and was used in several research studies by the tool\u2019s authors [7, 8, 47]. The Baseline Model Implementation [6] (BMI) is a successor to the TAR Evaluation Toolkit that is wrapped in a VirtualBox virtual machine that provides an interface for users to run TAR interactively. It was used in the TREC Total Recall Tracks as a baseline system [11, 30]. HiCAL8 embeds BMI in a Django-based framework along with the Indri search engine [33] and an interface for interactive assessment [1]. Components communicate through HTTP APIs, so HiCAL has more potential for modification than the previous efforts. It has been used in annotation of the HC4 collections [18].
FreeDiscovery9 wraps a REST API around selected scikit-learn IR and ML learning functionality, as well as providing new algorithms for eDiscovery tasks such as email threading and duplication 6https://github.com/ntucllab/libact 7https://cormack.uwaterloo.ca/tar-toolkit/ 8https://hical.github.io 9https://tryfreediscovery.com/ Workflow Component: Ranker Component: Labeler Component: Sampler Component: Stopping Rule ... Experiment Dataset Evaluation Measures Labels (Optional) Figure 2: Structure overview of TARexp. detection. It does not incorporate support for active learning experiments itself, but has been used as a component in active learning experiments [38]. Numerous open source or web-based tools are available for carrying out systematic reviews10, but most provide little support for algorithmic experimentation. One exception is ASReview11 , an open source tool implemented in Python and Javascript [35]. It includes a simulation mode that allows running experiments on labeled data sets, and supports user configuration of supervised learning, active learning, and feature extraction methods12. 3 STRUCTURE OF TARexp A major advance of TARexp over previous TAR research software is the ability to declaratively specify TAR workflows. Users can create components defined using a standard interface and combine them with TARexp components in workflows of their design. This includes incorporating different simulations of human-in-theloop reviewing, or even embedding in systems using actual human review (though we have not done the latter). Execution of declaratively specified review workflows is supported by a workflow object (Sec 3.1). The changes in the labeling state of the document collection are recorded in the the ledger (Sec 3.2). 10http://systematicreviewtools.com/ 11https://github.com/asreview/asreview 12https://asreview.nl/blog/simulation-mode-class-101 \fTARexp: A Python Framework for Technology-Assisted Review Experiments SIGIR \u201922, July 11\u201315, 2022, Madrid, Spain During the iterative process, the workflow reaches out to a set of workflow components (Sec 3.3), each of which can play one or more roles in the workflow. Finally, an experiment (Sec 3.5) object defines a set of experiments and dispatches them sequentially or in parallel. Figure 1 is a code snippet that demonstrates how each element combines to form a working TAR process, Figure 2 is a general overview diagram of TARexp. 3.1 Workflow An object of class workflow executes the user\u2019s declarative specification of a TAR workflow. In doing so, it reaches out to components for services specified in the declarative specification such as creating training batches, scoring and ranking the collection, and testing for stopping conditions. After an optional initial seed round where the user can specify a starting set of labeled training data, the workflow is executed as a sequence of training rounds. Each round consists of selecting a batch of training documents (using a sampler object), looking up labels for those documents (using the labeler object), training a model and scoring and ranking the collection documents (using the ranker object). TARexp supports specifications of both one and two-phase TAR workflows, as described in Yang et al. [42]. One-phase workflows (tarexp.OnePhaseTARWorkflow in code) can be run for a fixed number of training rounds, or until all documents have been reviewed. 
Two-phase reviews also use a stopping rule to determine when to end training, but then follow that by ranking the collection with the final trained model and reviewing to a statistically determined cutoff. The workflow object maintains only enough state in-memory to work through a training round including the seed for random number generators. Besides the optionally written document scores, the rest of the state of the workflow is recorded in the ledger, which is written to secondary storage at user-configurable intervals. This allows easy restarting of crashed runs with minimal redundant work. The workflow object is implemented as a Python iterator, allowing procedures defined outside the workflow to execute at each round. The iterator yields a frozen ledger (see next Section). The user can define a custom per-round evaluation process or record information for later analysis. 3.2 Ledger Any aspect of the history of a batch-based workflow can, if necessary, be reproduced from a record of which documents were labeled on which training rounds (including any initial seed round). The ledger object records this state in memory, and writes it to disk at user-specified intervals to enable restarts. The persisted ledger for a complete run can be used to execute TARexp in frozen mode where no batch selection, training, or scoring is done. Frozen mode supports efficient testing of new components that do not change training or scoring, e.g. non-interventional stopping rules [20], effectiveness estimation methods, etc. Evaluating stopping rules for two-phase reviews also requires persisting scores of all documents at the end of each training round, an option the user can specify. 3.3 Components TARexp implements algorithms via components. A component is an object that is declared to serve one or more roles in a workflow, e.g. the stopping rule, the training batch sampler, the ranker, or the labeler. Components communicate only through the workflow. The association of components with multiple roles is important when implementing algorithms where, for instance, the stopping rule interacts tightly with a particular batch selection method (e.g. AutoStop [22]). The current release of TARexp defines the interface of multi-role components, but release of the particular multi-role components we have implemented is waiting on a paper under review [40]. TARexp supports classification models implemented in Scikitlearn through component.SklearnRanker wrapper. However, any supervised learning model that can produce a score for each document in the collection can be integrated into TARexp. We have tested an initial implementation of Transformer-based models for TAR Yang et al. [43], but have not yet integrated this code into the released version of TARexp. TARexp provides reference implementations of a variety of TARspecific algorithms, to aid reproducibility and reduce experimenter work. For instance, uncertainty sampling [19], relevance feedback [29], and simple random sampling batch selection algorithms are provided. Stopping rules are a particular focus of TAR research. TARexp provides implementation of the Knee and Budget Rules [6, 7], a configurable bath precision rule, the 2399 Rule [31, 41], fixed numbers of training rounds [36], the Quant and QuantCI Rules [41], and others. A Labeler object simulates human review. For most TAR experiments, we assume we simply look up the gold label of each document using component.PerfectLabeler. Random errors can be introduced using component.SuccessProbLabeler. 
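To illustrate the labeler role, the following is a rough sketch of how a reviewer-error simulator in the spirit of component.SuccessProbLabeler might behave. The class and method names below are hypothetical stand-ins rather than the actual TARexp interface; a real component would subclass the framework's component base class and be registered for the labeler role.

import random

class NoisyLabelerSketch:
    # Sketch of a labeler-role component that reproduces the gold label with
    # probability success_prob and otherwise simulates a reviewer error.
    def __init__(self, gold_labels, success_prob=0.95, seed=42):
        self.gold_labels = gold_labels      # mapping: doc_id -> bool
        self.success_prob = success_prob
        self.rng = random.Random(seed)

    def label(self, doc_ids):
        labels = {}
        for doc_id in doc_ids:
            gold = self.gold_labels[doc_id]
            correct = self.rng.random() < self.success_prob
            labels[doc_id] = gold if correct else not gold
        return labels

Because components communicate only through the workflow, such a labeler can be swapped in without changing the ranker, sampler, or stopping rule used in a run.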
3.4 Evaluation Consistent implementation of effectiveness metrics, including tricky issues like tiebreaking, is critical to TAR experiments. This is true both for evaluation, and because stopping rules may incorporate effectiveness estimates based on small samples. We provide all metrics from the open source package ir-measures13 through the tarexp.Workflow.getMetrics method. (Footnote: 13 https://ir-measur.es/en/latest/) Metrics are computed on both the full collection and unreviewed documents to support both finite population and generalization perspectives [39]. In addition to standard IR metrics, TARexp implements OptimisticCost to support the idealized end-to-end cost analysis for TAR proposed in Yang et al. [42]. Such analysis requires specifying a target recall and a cost structure associated with the TAR process (Line 8 in Figure 1). TARexp also provides helper functions for plotting cost dynamics graphs (Section 4.1). 3.5 Experiments TAR inherits both the large topic-to-topic variability of IR tasks, and the strong dependence on initial conditions and random seeds of active learning processes. Multiple collections, topics, and runs are necessary to reliably demonstrate that one approach dominates another.
Figure 3: Cost dynamics graphs on topic GPRO in RCV1-v2 with different cost structures targeting 80% recall, produced by the helper function. The height of the color blocks indicates the cost of each part at the respective round. The faded section indicates the rounds past the optimal stopping point, and the grey vertical dashed line indicates the round where the one-phase TAR workflow would reach the recall target.
Inspired by pyTerrier [25], TARexp supports a declarative representation for experiments, as shown in this example:
exp = tarexp.TARExperiment('./experiment_output/',
                           random_seed=123, max_round_exec=50,
                           metrics=[ir_measures.RPrec, ir_measures.P@10],
                           tasks=tarexp.TaskFeeder(dataset, labels[['GPRO', 'GOBIT']]),
                           components=setting,
                           workflow=tarexp.OnePhaseTARWorkflow,
                           batch_size=[200, 50])
results = exp.run(n_processes=2, n_nodes=2, node_id=0, resume=True, dump_frequency=5)
where an example of defining the variable setting is seen in Figure 1. The object tarexp.TaskFeeder creates a stream of tasks, each of which corresponds to a different labeling of the same document collection. In the example, GPRO and GOBIT are categories in RCV1 [21]. The components parameter specifies the components for experimenting, which can be either a single setting, e.g., the one in Figure 1, or a list of such settings. Given a task, tarexp.TARExperiment then creates one workflow object for each combination of component, hyperparameter, and random seed choices. The example above will yield four workflows since we have two tasks and two specified batch sizes, and only one alternative was specified for each component. We support spawning multiple processes both across machines on a network, and in multiple threads on appropriate hardware. The method .run dispatches the TAR tasks with runtime settings. In the above example, experiments will run on the first of the two machines with two processes on each, resulting in all four tasks being run simultaneously. The .run method returns the per-round metric values of all the experiment tasks running on the node. 4 HELPER FUNCTIONS AND INTEGRATION Besides the core functionality of executing experiments, TARexp provides several tools to aid in analyzing results.
Most are included in our Google Colab notebook. 4.1 Experiment Analysis and Visualization The experiment results that tarexp.TARExperiment returns are in generic Python dictionaries. Through createDFfromResults, the results are transformed into Pandas [37] DataFrames for further analysis. The resulting DataFrame contains a multi-level index of the experimental parameters and multi-level columns containing the values of effectiveness and cost metrics. We also provide visualization tools to produce cost dynamics graphs, such as Figure 3, through both Python and command-line interfaces. The following is an example command for creating a graph with two runs and three cost structures.
python -m tarexp.helper.plotting \\
    --runs GPRO=location/to/GPRO \\
           GOBIT=location/to/GOBIT \\
    --cost_structures 1-1-1-1 10-10-10-1 25-5-5-1 \\
    --y_thousands --with_hatches
4.2 Jupyter Notebook and Google Colab Our framework is fully integrated with Jupyter Notebook [17], a browser-based tool for running Python interactively. Users can also run TARexp on Google Colab14, a cloud version of Jupyter Notebook powered by Google, by installing TARexp through PyPI15, the online Python package distribution repository. Figure 4 is a screenshot of running TAR experiments on Google Colab." }, { "url": "http://arxiv.org/abs/2108.12752v1", "title": "TAR on Social Media: A Framework for Online Content Moderation", "abstract": "Content moderation (removing or limiting the distribution of posts based on\ntheir contents) is one tool social networks use to fight problems such as\nharassment and disinformation. Manually screening all content is usually\nimpractical given the scale of social media data, and the need for nuanced\nhuman interpretations makes fully automated approaches infeasible. We consider\ncontent moderation from the perspective of technology-assisted review (TAR): a\nhuman-in-the-loop active learning approach developed for high recall retrieval\nproblems in civil litigation and other fields. We show how TAR workflows, and a\nTAR cost model, can be adapted to the content moderation problem. We then\ndemonstrate on two publicly available content moderation data sets that a TAR\nworkflow can reduce moderation costs by 20% to 55% across a variety of\nconditions.", "authors": "Eugene Yang, David D. Lewis, Ophir Frieder", "published": "2021-08-29", "updated": "2021-08-29", "primary_cat": "cs.IR", "cats": [ "cs.IR" ], "main_content": "Introduction Online social networks are powerful platforms for personal communication, community building, and free expression. Unfortunately, they can also be powerful platforms for harassment, disinformation, and perpetration of criminal and terrorist activities. Organizations hosting social networks, such as Facebook, Twitter, Reddit, and others, have deployed a range of techniques to counteract these threats and maintain a safe and respectful environment for their users. One such approach is content moderation: removal (hard moderation) or demoting (soft moderation) of policy-violating posts [1, 2]. Despite recent progress in machine learning, online content moderation still heavily relies on human reviews [3]. Facebook\u2019s CEO Mark Zuckerberg stated that language nuances could get lost when relying on automated detection approaches, emphasizing the necessity of human judgments.1
(Footnote: 1 https://www.businessinsider.com/zuckerberg-nuances-content-moderation-ai-misinformation-hearing-2021-3) Ongoing changes in what is considered inappropriate content complicate the use of machine learning [4]. Policy experts have argued that complete automation of content moderation is socially undesirable regardless of algorithmic accuracy [5]. It is thus widely believed that both human moderation and automated classification will be required for online content moderation for the foreseeable future [1, 5, 6]. This has meant not just capital investments in machine learning tools for moderation, but also massive ongoing personnel expenses for teams of human reviewers [7]. Surprisingly, the challenge of reducing costs when both machine learning and manual review are necessary has been an active area of interest for almost two decades, but in a completely different area: civil litigation. Electronic discovery (eDiscovery) projects involve teams of attorneys, sometimes billing the equivalent of hundreds of euros per person-hour, seeking to find documents responsive to a legal matter [8]. As the volume of electronically produced documents grew, machine learning began to be integrated in eDiscovery workflows in the early 2000s, a history we review elsewhere [9]. The result in the legal world has been technology-assisted review (TAR): human-in-the-loop active learning workflows that prioritize the most important documents for review [10, 11]. One-phase (continuous model refinement) and two-phase (with separate training and deployment phases) TAR workflows are both in use [9, 12]. Because of the need to find most or all relevant documents, eDiscovery has been referred to as a high recall review (HRR) problem [13, 14, 15]. HRR problems also arise in systematic reviews in medicine, sunshine law requests, and other tasks [16, 17, 18]. Online content moderation is an HRR problem as well, in that a very high proportion of inappropriate content should be identified and removed. Our contributions in this paper are two-fold. First, we describe how to adapt TAR and its cost-based evaluation framework to the content moderation problem. Second, we test this approach using two publicly available content moderation datasets. Our experiments show substantial cost reductions using the proposed TAR framework over both manual review of unprioritized documents and training of prioritized models on random samples. 2. Background Content moderation on online platforms is a necessity [19, 20] and has been argued by some to be the defining feature of an online platform [6]. Despite terms of service and community rules on each platform, users produce inappropriate content, particularly when anonymous [21]. Inappropriate content includes toxic content such as hate speech [22], offensive content [23], and mis-/disinformation [4, 23]. It also includes content that is inappropriate for legal or commercial reasons, such as potential copyright violations [5, 24].
The identification of toxic content can require subtle human insight [4, 22], both due to attempts at obfuscation by posters, and because the inappropriateness of the content is often tied to its cultural, regional, and temporal context [1, 3]. Misand disinformation often consists of subtle mixtures of truthful and misleading content that require human common sense inferences and other background knowledge [4, 23]. Social media organizations have deployed numerous techniques for implementing community policies, including graphand time-based analyses of communication patterns, user profile information, and others [25]. Our focus here, however, is on methods that use the content of a post. Content monitoring falls into three categories: manual moderation, text classification, and human-in-theloop methods. The latter two approaches leverage machine learning models and are sometimes collectively referred to as algorithmic content moderation in policy research [5]. Manual moderation is the oldest approach, dating back to email mailing lists. It is, however, extremely expensive at the scale of large social networks and suffers potential human biases. Additionally, mental health concerns are an issue for moderators exposed to large volumes of toxic content [25, 26, 27]. The simplest text classification approaches are keyword filters, but these are susceptible to embarrassing mistakes2 and countermeasures by content creators. More effective text classification approaches to content moderation are based on supervised machine learning [28, 29]. Content types that have been addressed include cyberbullying [29, 30, 31, 32], hate speech 2https://www.techdirt.com/articles/20200912/11133045288/p aypal-blocks-purchases-tardigrade-merchandise-potentially-viol ating-us-sanctions-laws.shtml [22, 31, 33, 34, 35, 36] or offensive language in general [23, 37, 38, 39, 40, 41, 42]. However, some moderation judgments are inevitably too subtle for purely automated methods3, particularly when content is generated with the intent of fooling automated systems [1, 25, 43]. Content that is recontextualized from the original problematic context, for example, through reposting, screenshotting, and embedding in new contexts complicates moderation [2]. Additionally, bias in automated systems can also arise both by learning from biased labels and from numerous other choices in data preparation and algorithmic settings [27, 44, 45]. Biased models risk further marginalizing and disproportionately censoring groups that already face discrimination [1]. Differences in cultural and regulatory contexts further complicate the definition of appropriateness, creating another dimension of complexity when deploying automated content moderation [4]. Human-in-the-loop approaches, where AI systems actively manage which materials are brought to the attention of human moderators, attempt to address the weaknesses of both approaches while gathering training data to support supervised learning components [25, 46]. Filtering mechanisms that proactively present only approved content (pre-moderation) and/or removal mechanisms that passively take down inappropriate ones are used by platforms depending on the intensity [4]. Reviewing protocols could shift from one to the other based on the frequency of violations or during a specific event, such as elections4. Regardless of the workflows, the core and arguably the most critical components is reviews. 
However, the primary research focus of human-in-theloop content moderation has been on classification algorithm design and bias mitigation, rarely on the investigation of the overall workflow. Like content moderation, eDiscovery is a high recall retrieval task applied to large bodies of primarily textual content (typically enterprise documents, email, and chat) [11, 12]. Both fixed data set and streaming task structures have been explored, though the streaming context tends to bursty (e.g., all data from a single person arriving at once) rather than continuous. Since cost minimization is a primary rationale for TAR [47], research on TAR has focused on training regimens and workflows for minimizing the number, or more generally the cost, of documents reviewed [9, 12]. A new TAR approach is typically evaluated for its ability to meet an effectiveness target while minimizing cost or a cost target while maximizing effectiveness [18, 48, 49]. This makes approaches developed for TAR natural to consider for content moderation. 3https://venturebeat.com/2020/05/23/ai-proves-its-a-poor-su bstitute-for-human-content-checkers-during-lockdown/ 4https://www.washingtonpost.com/technology/2020/11/07/f acebook-groups-election/ \f3. Applying TAR to Content Moderation In most TAR applications, at least a few documents of the (usually rare) category of interest are available at the start of the workflow. These are used to initialize an iterative pool-based active learning workflow [50]. Reviewed documents are used to train a predictive model, which in turn is used to select further documents based on predicted relevance [51], uncertainty [52], or composite factors. Workflows may be batch-oriented (mimicking pre-machine learning manual workflows common in the law) or a stream of documents may be presented through an interactive interface with training done in the background. These active learning workflows have almost completely displaced training from random examples when supervised learning is used in eDiscovery. Two workflow styles can be distinguished [9]. In a one-phase workflow, iterative review and training simply continues until a stopping rule is triggered [49, 53, 54]. Stopping may be conditioned on estimated effectiveness (usually recall), cost limits, and other factors [53, 55, 56]. Two-phase workflows stop training before review is finished, and deploy the final trained classifier to rank the remaining documents for review. The reviewed documents are typically drawn from the top of the ranking, with the depth in the ranking chosen so that an estimated effectiveness target is reached [18, 48]. Two-phase workflows are favored when labeling of training data needs to be done by more expensive personnel than are necessary for routine review. The cost of both oneand two-phase TAR workflows can be captured by in a common cost model [9]. The model defines the total cost of a one-phase review terminated at a particular point as the cost incurred in reviewing documents to that point, plus a penalty if the desired effectiveness target (e.g., a minimum recall value) has not been met. The penalty is simply the cost of continuing on to an optimal second-phase review from that point, i.e. the minimum number of prioritized documents is reviewed to hit the effectiveness target. For a two-phase workflow, we similarly define total cost to be the cost of the training phase plus the cost of an optimal second phase using the final trained model. 
These costs in both cases are idealizations in that there may be additional cost (e.g., a labeled random sample) to choose a phase two cutoff. However, the model allows a wide range of workflows to be compared on a common basis, as well as allowing differential costs for review of positive vs. negative documents, or phase one vs. phase two documents. While developed for eDiscovery, the above cost model is also a good fit for content moderation. As discussed in the previous section, the human-in-the-loop moderation approaches used in social media are complex, but in the end reduce to some combination of machine-assisted manual decisions (phase one) and automated decisions based on deploying a trained model (phase two). Operational decisions such as flagging and screening all posts from an account or massive reviewing of posts related to certain events [4, 6] are all results of applying previously trained models, which is also a form of deployment. Also, broadly applying the model to filter the content vastly reduces moderation burden when similar content is rapidly being published on the platform, with the risk of false removal [4]. We do not claim this specific simplified model is optimal for evaluating content moderation; it is an initial effort at modeling the human-in-the-loop moderation process. When applying the model to content moderation, however, we assume uniform review costs for all documents. This seems the best assumption given the short length of texts reviewed and what is known publicly about the cost structure of moderation [6]. In the next section, we describe our experimental setting for adapting and evaluating TAR for content moderation.
Figure 1: Example content in the collections. (a) Wikipedia collection: \"shut up mind your own business and go f*** some one else over\". (b) ASKfm collection: \": being in love with a girl you dont even know yours is sadder : f*** off you f***ing c***!\"
4 Experiment Design Here we review the data sets, evaluation metric, and implementation details for our experiment. 4.1 Data Sets We used two fully labeled and publicly available content moderation data sets with a focus on inappropriate user-generated content. The Wikipedia personal attack data set [32] consists of 115,737 Wikipedia discussion comments with labels obtained via crowdsourcing. An example comment is presented in Figure 1(a). Eight annotators assigned one of five mutually exclusive labels to each document: Recipient Target, Third Party Target, Quotation Attack, Other Attack, and No Attack (our names). We defined three binary classification tasks corresponding to distinguishing Recipient Target, Third Party Target, or Other Attack from all other classes. (Quotation Attack had too low a prevalence.) A fourth binary classification task distinguished the union of all attacks from No Attack. A document was a positive example if 5 or more annotators put it in the positive class. Proportion of the positive class ranged from 13.44% to 0.18%. The ASKfm cyberbullying dataset [29] contains 61,232 English utterance/response pairs, each of which we treated as a single document. An example conversation is presented in Figure 1(b). Linguists annotated both the poster and responder with zero or one of four mutually exclusive cyberbullying roles, as well as annotating the pair as a whole for any combination of 15 types of textual expressions related to cyberbullying. We treated these annotations as defining 23 binary classifications for a pair, with prevalence of the positive examples ranging from 4.63% to 0.04%. For both data sets we refer to the binary classification tasks as topics and the units being classified as documents. Documents were tokenized by separating at punctuation and whitespace. Each distinct term became a feature. We used log tf weighting as the features for the underlying classification model. The value of a feature was 0 if the term was not present, and 1 + log(tf) otherwise, where tf is the number of occurrences of that term in the document. 4.2 Algorithms and Workflow Our experiments simulated a typical TAR workflow. The first training round is a seed set consisting of one random positive example (simulating manual input) and one random negative example. At the end of each round, a logistic regression model was trained and applied to the unlabeled documents. The training batch for the next round was then selected by one of three methods: a random sampling baseline, uncertainty sampling [52], or relevance feedback (top scoring documents) [51]. Variants of the latter two are widely used in eDiscovery [57]. Labels for the training batch were looked up, the batch was added to the training set, and a new model trained to repeat the cycle. Batches of size 100 and 200 were used and training continued for 80 and 40 iterations respectively, resulting in 8002 coded training documents at the end. We implemented the TAR workflow in libact5 [58], an open-source framework for active learning experiments. We fit logistic regression models using Vowpal Wabbit6 with default parameter settings. Our experiment framework is available on GitHub7. (Footnotes: 5 https://github.com/ntucllab/libact; 6 https://vowpalwabbit.org/) 4.3 Evaluation Our metric was total cost to reach 80% recall as described in Section 3. This was computed at the end of each training round as the sum of the number of training documents, plus the ideal second phase review cost as a penalty, which is the number of additional top-ranked documents (if any) needed to bring recall up to 80%. Ranking was based on sorting the non-training documents by probability of relevance using the most recent trained model. Note that we experimented with 80% recall as an example. However, the TAR workflow is capable of running with an arbitrary recall target, such as 95% for systematic review [18, 56]. In actual TAR workflows, recall would be estimated from a labeled random sample. Since the cost of this sample would be constant across our experimental conditions, we used an oracle for recall instead. 5. Results and Analysis Our core finding was that, as in eDiscovery, active selection of which documents to review reduces costs over random selection. Figure 2 shows mean cost to reach 80% recall over 20 replications (different seed sets and random samples) for six representative categories. On all six categories, all TAR workflows within a few iterations beat the baseline of reviewing a random 80% of the data set (horizontal line labeled Manual Review). The Wikipedia Attack category is typical of low to moderate prevalence categories (p = 0.1344). Uncertainty sampling strongly dominates both random sampling (too few positives chosen) and relevance feedback (too many redundant positives chosen for good training). Costs decrease uniformly with additional training.
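The per-round total cost plotted in Figure 2 can be computed as in the following minimal sketch (illustrative variable names; it assumes oracle relevance labels and model scores are available, as in this simulation).

import math

def total_cost_to_target(training_doc_ids, scores, gold, target_recall=0.8):
    # training_doc_ids: documents already reviewed for training.
    # scores: dict doc_id -> model score for every document in the collection.
    # gold: dict doc_id -> True/False relevance (oracle labels in the simulation).
    reviewed = set(training_doc_ids)
    total_pos = sum(1 for rel in gold.values() if rel)
    found = sum(1 for d in reviewed if gold[d])
    needed = math.ceil(target_recall * total_pos)
    penalty = 0
    if found < needed:
        # Ideal second phase: review unreviewed documents in score order until
        # enough additional positives are found to reach the recall target.
        ranked = sorted((d for d in scores if d not in reviewed),
                        key=lambda d: scores[d], reverse=True)
        for d in ranked:
            penalty += 1
            if gold[d]:
                found += 1
                if found >= needed:
                    break
    return len(reviewed) + penalty

With the uniform review costs assumed here the total reduces to a document count; differential costs would instead weight the training and second-phase terms.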
We plot 99% confidence intervals under the assumption that costs are normally distributed across replicates. Costs are not only higher for relevance feedback, but less predictable. The ASKfm Curse Exclusion ($p = 0.0169$) and Wikipedia Other Attack ($p = 0.0019$) categories are typical low prevalence categories. Uncertainty sampling and relevance feedback act similarly in such circumstances: even top scoring documents are at best uncertainly positive. Average cost across replicates levels off and starts to increase after 44 iterations for uncertainty sampling and 45 iterations for relevance feedback. This is the point at which additional training no longer pays for itself by improving the ranking of documents. For this category (and typically) this occurs shortly before 80% recall is reached on the training data alone (iteration 48 for uncertainty sampling and iteration 52 for relevance feedback). [Figure 2: Total cost for TAR alternatives to identify 80% of positive documents for Wikipedia Attack, Other Attack, and Recipient Attack, and ASKfm Curse Exclusion, General Insult, and Sexism classifications. Values are averaged over 20 replicates, and a 99% confidence interval on costs is shown as shading around each curve. The horizontal line is the cost to review a random 80% of the data set.] Tasks such as the ASKfm Sexism category ($p = 0.0030$), which deal with nuances in human language, require more training data to produce a stable classifier. While obtaining training data by random sampling stops reducing the cost after the first iteration, uncertainty sampling and relevance feedback continue to take advantage of additional training data to minimize the cost and become more predictable. Note that the general relationship between the prevalence of the task and the cost of reaching a certain recall target using TAR workflows is discussed by Yang et al. [9]. Table 1 looks more broadly at the two datasets, averaging costs both over all topics and over 20 replicate runs for each topic, for batch sizes of both 100 and 200. By 20 iterations with a batch size of 100 (2,002 training documents), TAR workflows with both relevance feedback and uncertainty sampling significantly reduce costs versus TAR with random sampling. (Significance is based on paired t-tests assuming non-identical variances and making a Bonferroni correction for 72 tests.) All three TAR methods in turn dominate reviewing a random 80% of the dataset, which costs 92,590 for Wikipedia and 90,958 for ASKfm. The cost improvement plateaued after the training sets reached 5,000 documents for ASKfm but continued for Wikipedia. Categories in Wikipedia ($p = 0.1344$ to $0.0018$) are generally more frequent than those in ASKfm ($p = 0.0463$ to $0.001$), giving training more opportunity to identify positive documents. A larger batch size slightly reduces the improvement, as the underlying classifiers are retrained less frequently. In practice, batch sizes depend on the cost structure of reviewing and the specific workflows of each organization. However, as the classifiers are updated with more coded documents, the total cost is reduced over the iterations. Besides the overall cost reduction, Figure 3 shows a heatmap of mean precision across 20 replicates for batches 1 to 81 with a batch size of 100, to give insight into the moderator experience of TAR workflows. Precision for relevance feedback starts high and declines very gradually.
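The per-batch precision shown in the heatmap is simply the fraction of each review batch that turned out to be positive; a tiny sketch (our own) of that computation:

def batch_precision(batches):
    # batches: list of lists of 0/1 relevance labels, one inner list per review batch
    return [sum(batch) / len(batch) for batch in batches]

print(batch_precision([[1, 1, 0, 0], [1, 0, 0, 0]]))  # [0.5, 0.25]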
Uncertainty sampling maintains relatively constant precision. For the very low prevalence category Curse Exclusion we cut off the heatmap at 52 iterations for relevance feedback and 48 iterations for uncertainty sampling, since on average 80% recall is obtained on training data alone by those iterations. For both categories, even uncertainty sampling, which is intended to improve the quality of the classifier, improves batch precision over random sampling by a significant amount.

Table 1: Total review cost to reach 80% recall. Values are averaged over all topics for a data set and 20 replicates. Percentages show relative cost reduction over the random sample training baseline. A * indicates that the difference over the random sample training baseline is statistically significant with 99% confidence by paired t-test with Bonferroni correction.

Batch  #Train | ASKfm: Random   Relevance           Uncertainty        | Wikipedia: Random   Relevance            Uncertainty
100    202    | 47685.53   *49833.73 (-4.50)   *50273.21 (-5.43)  | 52948.45   *60751.69 (-14.74)   52210.00 (1.39)
100    1002   | 46327.93   *43329.31 (6.47)    *42723.12 (7.78)   | 49010.71   52931.28 (-8.00)     *39879.78 (18.63)
100    2002   | 45139.15   *38179.79 (15.42)   *37938.19 (15.95)  | 47805.25   46673.34 (2.37)      *29387.06 (38.53)
100    3002   | 44148.28   *34909.72 (20.93)   *34719.50 (21.36)  | 47065.66   *38964.91 (17.21)    *25676.82 (45.44)
100    4002   | 43731.25   *33439.69 (23.53)   *32795.05 (25.01)  | 47234.75   *34408.14 (27.16)    *24202.29 (48.76)
100    5002   | 43469.91   *32261.33 (25.78)   *31957.57 (26.48)  | 47125.79   *31267.88 (33.65)    *22746.94 (51.73)
100    6002   | 42973.85   *31767.73 (26.08)   *31384.51 (26.97)  | 47300.02   *28945.59 (38.80)    *21922.42 (53.65)
100    7002   | 42563.09   *30567.00 (28.18)   *30502.95 (28.33)  | 47086.42   *27356.89 (41.90)    *21301.92 (54.76)
100    8002   | 42385.43   *30708.85 (27.55)   *30441.77 (28.18)  | 47106.34   *25949.51 (44.91)    *21144.28 (55.11)
200    202    | 47685.53   *49302.36 (-3.39)   *49339.93 (-3.47)  | 52948.45   *58866.41 (-11.18)   55747.35 (-5.29)
200    1002   | 46327.93   45014.51 (2.84)     44733.10 (3.44)    | 49010.71   *55302.14 (-12.84)   *42896.71 (12.47)
200    2002   | 45139.15   *40473.12 (10.34)   *39894.98 (11.62)  | 47805.25   49968.88 (-4.53)     *33981.56 (28.92)
200    3002   | 44148.28   *37050.02 (16.08)   *36902.63 (16.41)  | 47065.66   42521.55 (9.65)      *28332.55 (39.80)
200    4002   | 43731.25   *35310.13 (19.26)   *34888.22 (20.22)  | 47234.75   *37492.98 (20.62)    *25667.95 (45.66)
200    5002   | 43469.91   *33690.33 (22.50)   *33519.15 (22.89)  | 47125.79   *34933.90 (25.87)    *24070.44 (48.92)
200    6002   | 42973.85   *32425.25 (24.55)   *32612.13 (24.11)  | 47300.02   *33004.90 (30.22)    *22839.39 (51.71)
200    7002   | 42563.09   *31488.77 (26.02)   *31813.08 (25.26)  | 47086.42   *31664.04 (32.75)    *22084.88 (53.10)
200    8002   | 42385.43   *31198.75 (26.39)   *31171.80 (26.46)  | 47106.34   *29346.76 (37.70)    *21837.84 (53.64)

[Figure 3: Precision in each batch for TAR workflows on Wikipedia Attack ($p = 0.1344$) and ASKfm Curse Exclusion ($p = 0.0169$) classifications. The x-axis shows the iteration number. A lighter color in an iteration block indicates higher precision.]

6. Summary and Future Work Our results suggest that TAR workflows developed for legal review tasks may substantially reduce costs for content moderation tasks. Other legal workflow techniques, such as routing near duplicates and conversational threads in batches to the same reviewer, may be worth testing as well. This preliminary experiment omitted complexities that should be explored in more detailed studies. Both content moderation and legal cases involve (at different time scales) streaming collection of data, and concomitant constraints on the time available to make a review decision.
Batching and prioritization must reflect these constraints. Moderation in addition must deal with temporal variation in both textual content and the definitions of sensitive content, as well as scaling across many languages and cultures. As litigation and investigations become more international, these challenges may be faced in the law as well, providing opportunity for the legal and moderation fields to learn from each other." + }, + { + "url": "http://arxiv.org/abs/2106.09871v1", + "title": "Heuristic Stopping Rules For Technology-Assisted Review", + "abstract": "Technology-assisted review (TAR) refers to human-in-the-loop active learning\nworkflows for finding relevant documents in large collections. These workflows\noften must meet a target for the proportion of relevant documents found (i.e.\nrecall) while also holding down costs. A variety of heuristic stopping rules\nhave been suggested for striking this tradeoff in particular settings, but none\nhave been tested against a range of recall targets and tasks. We propose two\nnew heuristic stopping rules, Quant and QuantCI based on model-based estimation\ntechniques from survey research. We compare them against a range of proposed\nheuristics and find they are accurate at hitting a range of recall targets\nwhile substantially reducing review costs.", + "authors": "Eugene Yang, David D. Lewis, Ophir Frieder", + "published": "2021-06-18", + "updated": "2021-06-18", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.LG" + ], + "main_content": "INTRODUCTION Technology-assisted review (TAR) refers to human-in-the-loop iterative active learning workflows for large-scale document review. A major application area is eDiscovery: review of documents in the law (civil litigation, regulatory review, and investigations) [2, 12, 26]. Other applications include open government document requests [3] and systematic review in medicine [14–16, 32]. On each iteration a TAR workflow uses a predictive model to select a batch of documents to review (typically a few hundred) using an active learning method such as relevance feedback [5, 19, 25]. Those documents are reviewed by a human expert and added to the collection of reviewed documents. The reviewed documents are used to train another predictive model, and the cycle repeats. In a one-phase TAR workflow this process is iterated until some stopping condition is met. Relevance feedback (training on top-ranked documents) is often used as the active learning method. In a two-phase TAR workflow, the process is stopped before completion and the final trained classifier is used to identify a large set of documents to go to another review team (often at a lower per-document cost) to finish the review [36].
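To make the workflow shape concrete, here is a minimal sketch of a one-phase TAR loop. It is our own illustration, not any paper's implementation; train, score, label, and should_stop are assumed stand-ins for the model trainer, the document scorer, the human reviewer, and a stopping rule.

def one_phase_tar(collection, seed, train, score, label, should_stop, batch_size=200):
    labeled = dict(seed)                          # doc_id -> 0/1 relevance label
    while not should_stop(labeled):
        model = train(labeled)
        unreviewed = [d for d in collection if d not in labeled]
        # Relevance feedback: review the top-scoring unreviewed documents.
        batch = sorted(unreviewed, key=lambda d: score(model, d), reverse=True)[:batch_size]
        for doc in batch:
            labeled[doc] = label(doc)             # human review decision
    return labeled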
TAR workflows typically must meet a target for the proportion of relevant documents found (i.e. recall) while also holding down costs. The choice of stopping point is therefore critical. Two-phase workflows often condition stopping on a sample-based estimate of recall, with the cost of random sampling offset by the cost savings from using a cheaper second phase review team. One-phase reviews, on the other hand, are often used in situations, e.g. a single investigator searching for key information, where the cost of labeling a sample purely to support a stopping rule is untenable. A variety of heuristic stopping rules have therefore been proposed for stopping one-phase reviews [4, 8, 9, 12, 22, 26]. These have mostly been developed in the context of a single TAR application or recall goal, and their general applicability is unknown. In this paper we propose that the theory of model-based estimation in survey research [29] provides a foundation for more flexible and rigorous stopping rules in technology-assisted review. Our contributions are (1) two new heuristic stopping rules, Quant and QuantCI, that adjust for task characteristics and let stopping be conditioned on an arbitrary user-chosen recall goal, and (2) a rigorous evaluation of our proposed rules against heuristic stopping rules from both the academic literature and current industry practice. We examine how the recall goal, prevalence of relevant documents, and the difficulty of the classification problem combine to affect stopping rule behavior. We find our proposed rules are accurate at hitting a range of recall targets while substantially reducing review costs. 2 BACKGROUND We can classify stopping rules for TAR workflows into three groups: sample-based, interventional, and heuristic. Our focus in this paper is on one-phase reviews, so we emphasize stopping rules for those. Sample-based stopping rules are based on drawing a random sample from a collection of interest, having it reviewed, and using that labeled sample to estimate the recall of the review process at each potential stopping point. Sample-based stopping rules have been proposed for both one-phase [8, 20, 28] and two-phase [1, 34] reviews. When estimated recall is high enough, the review is stopped or, in the case of a two-phase review, a cutoff is set on the second phase. Sample-based stopping rules are the method of choice when a statistical guarantee on effectiveness is needed, as in some legal contexts. Unfortunately, the size of a sample necessary to estimate recall with a given confidence is inversely proportional to the prevalence of relevant documents [18]. For example, estimating recall with a margin of error of 0.05 and a confidence level of 95% requires a simple random sample containing roughly 385 relevant documents. If the prevalence of relevant documents was 1%, one could need on average a sample size of 38,500 documents. This makes sample-based rules prohibitively expensive in many settings. Interventional stopping rules instead take the approach of modifying the TAR process itself in support of estimating recall. One approach is to modify the active learning algorithm to incorporate probabilistic sampling [9, 22]. This lets the selected data do triple duty: training the selection model, estimating recall, and accomplishing review.
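As a quick check of the sample-size arithmetic above (our own back-of-the-envelope calculation, using the usual normal-approximation sample-size formula with the worst-case proportion of 0.5):

z, moe = 1.96, 0.05
n_relevant = z**2 * 0.5 * 0.5 / moe**2        # ~384 relevant documents needed
prevalence = 0.01
print(round(n_relevant), round(n_relevant / prevalence))  # 384 38416, i.e. roughly 385 and 38,500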
Another approach is to terminate conventional active learning at some point and switch to purely random review [4], allowing recall to be estimated. While powerful, these methods are not a free lunch: they reduce the effectiveness of active learning in exchange for supporting estimation. They also cannot be used, for instance, by review managers who are limited to the active learning methods provided by a particular piece of commercial review software. Heuristic stopping rules are the most generally applicable, and are the focus of this paper. They decide when to stop a TAR workflow using only the data naturally labeled during the workflow. We can distinguish between target-aware heuristic rules, which modify their behavior based on a desired recall target, and target-agnostic heuristic rules, which attempt to optimize some general condition on the quality of the review. We survey a wide range of particular heuristic rules in the next section. 3 HEURISTIC STOPPING RULES In this section we review a range of heuristic stopping rules from both the research literature and operational practice. While our treatment is not exhaustive, we cover the major styles of rules that have been proposed. 3.1 Fixed Iterations Method Both one-phase and two-phase TAR processes are sometimes stopped simply after a predetermined number of training iterations, a "simple, operational stopping criteria" as described by Wallace et al. [32]. Other works have also referred to this rule as a pragmatic stopping rule [4]. This has the advantage of making review cost known in advance (assuming uniform document review costs), but provides no ability to adapt to the category prevalence, difficulty, or recall target. 3.2 The "2399" Method Some early work in document review for eDiscovery used stopping rules based on a sample-based estimate of elusion (the proportion of relevant documents in the unreviewed population) [27]. A sample of 2399 documents is sufficient to estimate elusion with 95% confidence and a 2% margin of error. This led to an odd fixation on the number 2399 in eDiscovery. One result was the "2399" heuristic method [7, 9]. It stops a one-phase TAR review after $2399 + xR$ documents have been reviewed, where $R$ is the number of positive documents the review has found, and $x$ is a tuned hyperparameter. Our experiments use $x = 1.2$ as proposed by the rule's inventors [7, 9]. 3.3 Batch Positives Method A more adaptive approach to one-phase TAR reviews is to stop iterating when one or more recent training batches contain only a few positive documents, i.e., low precision. This can be given an economic rationale. If the fraction of relevant documents in a batch is $x$, then the number of documents that must be examined to find a single relevant document is roughly $1/x$. This is a measure of the marginal cost to find the next relevant document. Such marginal utility arguments can be linked to the legal concept of proportionality in the discovery process [13]. Further, under the assumption that batch precision declines in a roughly monotonic fashion after some point in training, once the marginal cost exceeds a threshold it will not fall back below it, so stopping is appropriate. 3.4 Probability Cutoff Method Closely related to the batch precision rule are rules that stop review when all unreviewed documents have a score below some cutoff.
In particular, assume that the scores are in fact well-calibrated predicted probabilities. Then the reciprocal $1/p$ of a probability score $p$ is the number of documents that must be examined at that point to find a single relevant document. As with batch precision, this is a measure of the marginal cost of finding the next relevant document. However, also as with batch precision, there is no particular connection with recall goals. 3.5 Knee Method A refinement of the Batch Positives method is the Knee Method [8]. This is based on the gain curve [26] for a one-phase TAR process. A gain curve plots how the number of positive documents found increases as more documents are reviewed, typically on a per-batch basis. At each potential stopping point $s$, the knee method computes the ratio of slopes in a two-segment approximation to the gain curve:

$$\rho(s) = \frac{Slope\big((0, 0), (i, Rel(i))\big)}{Slope\big((i, Rel(i)), (s, Rel(s) + 1)\big)} = \frac{Rel(i)/i}{\big(Rel(s) - Rel(i) + 1\big)/(s - i)} \qquad (1)$$

Here $Rel(k)$ is the number of relevant documents found at rank $k$. The value of $i$ is chosen such that $(i, Rel(i))$ is a "knee", i.e., has maximum perpendicular distance from the line segment between $(0, 0)$ and $(s, Rel(s))$. The Knee Method stops at the first $s$ such that • $\rho(s) \geq 156 - \min(Rel(s), 150)$, and • $s \geq 1000$. The Knee Method is targeted at a recall goal of 0.7, and Cormack and Grossman [8] do not discuss how it might be adapted to other recall targets. 3.5.1 Adapting the Knee Method to Fixed Size Batches. Cormack and Grossman [6] specify that values of $s$ for the Knee Method should be based on the batch schedule used by their BMI (Baseline Model Implementation) system (https://plg.uwaterloo.ca/~gvcormac/trecvm/). That schedule uses a single relevant seed document (batch size $B = 1$) on iteration 1. (For supervised learning, 100 random documents artificially labeled as negative examples are used on round 1 only, but these are not considered to be part of the batch size.) Then the batch size $B_k$ for round $k$ is $B_{k-1} + \lceil B_{k-1}/10 \rceil$, i.e., batches grow exponentially in size. We can easily adapt the Knee Method, however, to fixed size batches.

Table 1: Transformed rounds for knee tests from exponential scheduling in BMI to fixed batch size of 200.
BMI Schedule (Round, Batch Size, Training Set Size, Knee Test?) | Batch Size 200 Schedule (Round, Training Set Size, Knee Test?)
33, 104, 991, No   | 5, 801, No
34, 115, 1106, Yes | 6, 1001, No
35, 127, 1233, Yes | 7, 1201, Yes
36, 140, 1373, Yes | -
37, 154, 1527, Yes | 8, 1401, Yes
38, 170, 1697, Yes | 9, 1601, Yes
39, 187, 1884, Yes | 10, 1801, Yes
40, 206, 2090, Yes | 11, 2001, Yes
41, 227, 2317, Yes | 12, 2201, Yes
42, 250, 2567, Yes | 13, 2401, Yes
-                  | 14, 2601, Yes
43, 275, 2842, Yes | 15, 2801, No
-                  | 16, 3001, Yes
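The following is a minimal sketch of the knee computation and stopping test; it reflects our reading of Equation 1 and is not the reference implementation.

def knee_rho(rel, s):
    # rel[k] = number of relevant documents found in the first k reviewed documents.
    # The knee i maximizes perpendicular distance from the chord (0,0)-(s, rel[s]);
    # the denominator of that distance is constant in i, so maximizing the
    # cross-product term below is sufficient.
    i = max(range(1, s), key=lambda k: abs(rel[k] * s - rel[s] * k))
    slope_before = rel[i] / i
    slope_after = (rel[s] - rel[i] + 1) / (s - i)
    return slope_before / slope_after

def knee_stop(rel, s):
    return s >= 1000 and knee_rho(rel, s) >= 156 - min(rel[s], 150)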
Since the method is intended to be conservative, our goal is to never stop sooner than the Knee Method would, but also to bound the extra cost we incur by using fixed size batches. We assume, as in the BMI schedule, that the first round is a seed set consisting of a single document, but then follow that with fixed size batches. We assume batches of size 200 as an example. Table 1 shows the BMI batch schedule in the vicinity of the earliest potential stopping point: $s = 1000$. For batch size 200, the first fixed training set size over 1000 is 1001. Since that is smaller than the first Knee Method batch, we conservatively wait until training set size 1201 for the first check on the modified schedule. We then check at training set sizes 1401, 1601, ..., 2601, each of which is subsequent to at least one check that the BMI-based knee method would do. We do not check at 2801, because it would not be conservative to give an additional chance to stop then. We do check at training set size 3001 (greater than 2842), and subsequently at the first training set size larger than each training set size for the BMI-based schedule. 3.6 Budget Method Cormack and Grossman [8] also proposed the Target Method, which draws a random sample of documents until 10 positive documents are found, hiding those documents from the TAR process. It then stops a one-phase TAR review after the 10 positive documents are rediscovered [8, 20]. As a sample-based method, it is out of scope for this paper, but Cormack and Grossman [8] have proposed a heuristic alternative to it: the Budget Method. The Budget Method is based on three observations: • For any amount of effort, a TAR process should find more positive documents than would random sampling. • The Target Method on average will draw a target set of roughly $10C/R$ documents, where $C$ is the collection size and $R$ is the number of relevant documents in the collection. • At any moment in time, the number of relevant documents found, $Rel(s)$, is a lower bound on $R$, and thus $10C/Rel(s)$ is an upper bound on the expected size of the sample the Target Method would draw. The Budget Method stops review at the first end-of-batch training set size $s$ if • $s \geq 0.75C$, or • both $\rho(s) \geq 6$ and $s \geq 10C/Rel(s)$ are true. The first test is based on the assumption that reviewing a 75% random sample of the collection would likely achieve a recall of at least 0.7, and reviewing 75% found by a TAR process should do even better. The second way of stopping is based on two criteria: a simplified version of the knee test that is assumed to be safe when $R$ is large, and a test that keeps the knee test from taking effect until $Rel(s)$ is somewhat large. Again, it is not clear how to adapt this method to recall goals differing from 0.7.
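A corresponding sketch of the Budget Method test, again our own reading rather than the reference code, reusing knee_rho from the sketch above; the rel[s] > 0 guard is ours, added to avoid division by zero.

def budget_stop(rel, s, C):
    # C is the collection size; rel[s] is the number of relevant documents found so far.
    if s >= 0.75 * C:
        return True
    return rel[s] > 0 and knee_rho(rel, s) >= 6 and s >= 10 * C / rel[s]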
3.7 CMH Heuristic Method Recently, Callaghan and Müller-Hansen [4] proposed an interventional method that stops the TAR process based on a heuristic, and then incrementally samples random documents to both fulfill the recall target and statistically verify the fulfillment. While the sampling phase is interventional (and thus out of scope for this paper), the heuristic approach (the CMH heuristic method) is of interest. At each round of active learning with end-of-batch training set size $s$, the CMH heuristic method splits the rounds so far into two parts at a pivot round $j$, hypothetically assumes the second part was sampled randomly, and calculates a probability $p_j$ of the current recall being larger than or equal to the target recall. Specifically, let $s_j$ be the end-of-batch training set size of any such round $j$ and $\tau_{tar}$ be the recall target; then

$$p_j = P\big(X \leq Rel(s) - Rel(s_j)\big) \quad \text{where} \quad X \sim \mathrm{Hypergeometric}\big(C - Rel(s_j),\; K_{tar},\; s - s_j\big), \qquad K_{tar} = \left\lfloor \frac{Rel(s)}{\tau_{tar}} - Rel(s_j) + 1 \right\rfloor$$

The TAR process is stopped if $\min_j(p_j) < 1 - \alpha$ for some confidence level $\alpha$. In our experiment, we follow the original proposal of the method and set $\alpha$ to 95%. 4 STOPPING BASED ON A RECALL ESTIMATE The heuristic stopping rules above, except the CMH heuristic method, either ignore the review's recall target or are designed for a particular target (e.g. 0.7 for the knee rule). A practical heuristic stopping rule should allow an arbitrary recall target to be chosen and respond accordingly. Stopping based on an estimate of recall is a natural approach. Labeling a random sample purely to estimate recall is often viewed as too expensive (Section 2). However, if recall could be estimated, even roughly, from the training data already chosen by relevance feedback, this would provide a broadly applicable rule. We present such an approach below based on the statistical technique of model-based estimation. 4.1 Model-Based Estimation Using randomness properties of a sample to justify extrapolation to a population (design-based estimation) [29] is not the only approach to estimation. An alternative, widely used in survey research, is model-based estimation [29]. In this approach, (1) the sampled items are used to fit a predictive model in a supervised fashion, (2) that model is applied to unsampled items to predict a value for each, and (3) the predicted values for unsampled items are used in producing the population estimate. In information retrieval, similar approaches have been explored for text quantification tasks [11, 17, 30]. In our context, we already have an appropriate predictive model: the logistic regression model (Section 5.2) trained on relevance feedback samples to prioritize documents for review.
It outputs, for any document $i$, a predicted probability of relevance $p_i$, i.e. an estimate of a 0/1 relevance label. If $\mathcal{R}$ and $\mathcal{U}$ are the sets of reviewed and unreviewed documents respectively, then

$$\hat{R}_r = \sum_{i \in \mathcal{R}} p_i \qquad (2)$$
$$\hat{U}_r = \sum_{i \in \mathcal{U}} p_i \qquad (3)$$

are, respectively, the model-based estimates of the number of relevant documents among the reviewed documents $\mathcal{R}$ and the unreviewed documents $\mathcal{U}$. Two plausible point estimates of recall are then:

$$\frac{R_r}{R_r + \hat{U}_r} = \frac{R_r}{R_r + \sum_{i \in \mathcal{U}} p_i} \qquad (4)$$
$$\frac{\hat{R}_r}{\hat{R}_r + \hat{U}_r} = \frac{\sum_{i \in \mathcal{R}} p_i}{\sum_{i \in \mathcal{C}} p_i} \qquad (5)$$

where $\mathcal{C} = \mathcal{R} \cup \mathcal{U}$ is the entire collection. Equation 5 may seem strange, since we know $R_r$ (the number of reviewed relevant documents). However, recall is a ratio estimator, and there is an advantage to having any systematic modeling error affecting $\hat{U}_r$ be present in both numerator and denominator. In practice we found Equation 5 to provide substantially better results, so we discuss rules based only on it. This estimate is based on strong assumptions that rarely hold exactly. In this case the assumption is that the probabilities from the relevance model are well-calibrated, i.e., that if one took a large sample of documents with predicted probability $p_i$, it would contain a fraction $p_i$ of relevant documents [10]. 4.2 Approximating Variance Stopping when a point estimate, even an unbiased one based on a random sample, equalled the recall goal would miss that goal a substantial fraction of the time (50% of the time if sampling errors fell equally on either side of the point estimate). Stopping rules based on sample-based estimates address this by instead stopping when the lower bound of a confidence interval exceeds the recall goal [1, 8, 20]. As larger samples are used, the confidence interval narrows and stopping occurs sooner. Producing a confidence interval requires an estimate of the variance of the point estimate. A general method for approximating the variance of a complicated random variable (in our case our recall estimator) is to use a truncated Taylor series. This is variously referred to as the linearization method, propagation of error method, delta method, and Taylor series method [31, 33]. The estimate in Equation 5 is based on modeling the relevance of a document $i$ as the outcome of a Bernoulli random variable $D_i \sim \mathrm{Bernoulli}(p_i)$.
Indeed, we can rewrite our point estimate in Equation 5 as an approximation of the expected value of a random variable:

$$E\left[\frac{D_\mathcal{R}}{D_\mathcal{C}}\right] = E\left[\frac{\sum_{i \in \mathcal{R}} D_i}{\sum_{i \in \mathcal{C}} D_i}\right] \approx \frac{E\left[\sum_{i \in \mathcal{R}} D_i\right]}{E\left[\sum_{i \in \mathcal{C}} D_i\right]} = \frac{\sum_{i \in \mathcal{R}} p_i}{\sum_{i \in \mathcal{C}} p_i} \qquad (6)$$

where $D_\mathcal{R} = \sum_{i \in \mathcal{R}} D_i$, $D_\mathcal{U} = \sum_{i \in \mathcal{U}} D_i$, and $D_\mathcal{C} = D_\mathcal{R} + D_\mathcal{U}$. (This expression also makes clear that our point estimate uses the ratio of estimated values of random variables to approximate the computationally awkward expected value of a ratio of random variables.) Approximating $f(D_\mathcal{R}, D_\mathcal{U}) = D_\mathcal{R}/(D_\mathcal{R} + D_\mathcal{U})$ by a Taylor series truncated to first order gives [23]:

$$Var\left(\frac{D_\mathcal{R}}{D_\mathcal{R} + D_\mathcal{U}}\right) = E\left[\frac{\partial f(D_\mathcal{R}, D_\mathcal{U})}{\partial D_\mathcal{R}}\right]^2 Var(D_\mathcal{R}) + E\left[\frac{\partial f(D_\mathcal{R}, D_\mathcal{U})}{\partial D_\mathcal{U}}\right]^2 Var(D_\mathcal{U}) + 2\, E\left[\frac{\partial f(D_\mathcal{R}, D_\mathcal{U})}{\partial D_\mathcal{R}}\right] E\left[\frac{\partial f(D_\mathcal{R}, D_\mathcal{U})}{\partial D_\mathcal{U}}\right] Cov(D_\mathcal{R}, D_\mathcal{U}) \qquad (7)$$

where

$$\frac{\partial f(D_\mathcal{R}, D_\mathcal{U})}{\partial D_\mathcal{R}} = \frac{1}{D_\mathcal{R} + D_\mathcal{U}} - \frac{D_\mathcal{R}}{(D_\mathcal{R} + D_\mathcal{U})^2} \qquad (8)$$

$$\frac{\partial f(D_\mathcal{R}, D_\mathcal{U})}{\partial D_\mathcal{U}} = \frac{-D_\mathcal{R}}{(D_\mathcal{R} + D_\mathcal{U})^2} \qquad (9)$$

Since the partial derivative with respect to $D_\mathcal{R}$ (Equation 8) is always positive and the partial derivative with respect to $D_\mathcal{U}$ (Equation 9) is always negative, the coefficient of the covariance $Cov(D_\mathcal{R}, D_\mathcal{U})$ is negative. By omitting the negative terms, an upper bound on the variance is:

$$Var\left(\frac{D_\mathcal{R}}{D_\mathcal{R} + D_\mathcal{U}}\right) \leq E\left[\frac{1}{(D_\mathcal{R} + D_\mathcal{U})^2}\right] Var(D_\mathcal{R}) + E\left[\frac{D_\mathcal{R}^2}{(D_\mathcal{R} + D_\mathcal{U})^4}\right] \big(Var(D_\mathcal{R}) + Var(D_\mathcal{U})\big)$$

The logistic regression model is based on the assumption that the Bernoulli random variables are independent.
Making that assumption here as well, and continuing to approximate the expected value of a ratio by a ratio of expected values, we can approximate the right hand side by

$$\frac{Var(D_\mathcal{R})}{(\hat{R}_r + \hat{U}_r)^2} + \frac{\hat{R}_r^2 \big(Var(D_\mathcal{R}) + Var(D_\mathcal{U})\big)}{(\hat{R}_r + \hat{U}_r)^4}$$

where

$$Var(D_\mathcal{S}) = Var\left(\sum_{i \in \mathcal{S}} D_i\right) = \sum_{i \in \mathcal{S}} Var(D_i) = \sum_{i \in \mathcal{S}} p_i (1 - p_i) \qquad (10)$$

for $\mathcal{S} = \mathcal{R}$ and $\mathcal{U}$. Finally, by assuming the recall estimator $D_\mathcal{R}/(D_\mathcal{R} + D_\mathcal{U})$ is approximately normally distributed, the 95% confidence interval of the recall is

$$\frac{\hat{R}_r}{\hat{R}_r + \hat{U}_r} \pm 2 \sqrt{\frac{Var(D_\mathcal{R})}{(\hat{R}_r + \hat{U}_r)^2} + \frac{\hat{R}_r^2 \big(Var(D_\mathcal{R}) + Var(D_\mathcal{U})\big)}{(\hat{R}_r + \hat{U}_r)^4}} \qquad (11, 12)$$

4.3 Stopping Rules Based on Recall Estimates Given the above, we define two stopping rules based on this approach of quantifying the number of relevant documents using model-based estimation. The Quant method stops the TAR process when the point estimate of recall (Equation 5) first reaches or exceeds the recall goal. The QuantCI method stops the TAR process when the lower end of a 95% (2 standard deviations) confidence interval first reaches or exceeds the recall goal. As the number of reviewed relevant documents increases, QuantCI approaches the behavior of Quant. 5 EXPERIMENTAL SETUP 5.1 Dataset We simulated recall-controlled retrieval tasks using RCV1-v2 [21], a widely-used text categorization collection consisting of 804,414 news articles and 658 categories (including topics, industries, and geographical regions). To study tasks with a wide variety of characteristics, we selected the same 5 categories from each of 9 bins, based on three ranges of class prevalence and three ranges of difficulty, as in previous studies [36]. Each selected category exhibits different characteristics, enabling the simulation of a wide variety of tasks. The lowest and highest prevalence bins were not used due to the wide range of values. Difficulty bins were based on the effectiveness of a logistic regression model trained on a random 25% of documents and evaluated by R-precision on the remaining 75%. For efficiency purposes, we downsampled the collection to 20%. 5.2 Implementation We implemented active learning based on libact [37], an open-source active-learning experimental framework. At round 0, one random positive document is used as the seed document to instantiate the active learning, and 200 documents selected by relevance feedback [5, 25] are sampled and reviewed at each subsequent round. Supervised learning used the scikit-learn implementation of Logistic Regression with an L2 regularization weight of 1.0.
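To make the Quant and QuantCI decisions in Section 4.3 concrete before turning to the rest of the setup, here is a minimal sketch of both tests based on our reading of Equations 5 and 10-12; it is not the authors' released code, and the input lists of predicted probabilities are illustrative.

import math

def quant_stop(p_reviewed, p_unreviewed, target, use_ci=True):
    # Point estimate of recall (Equation 5).
    r_hat = sum(p_reviewed)
    u_hat = sum(p_unreviewed)
    recall_hat = r_hat / (r_hat + u_hat)
    if not use_ci:                                   # Quant: point estimate only
        return recall_hat >= target
    # Variance terms (Equation 10) and the approximate variance of the estimator.
    var_r = sum(p * (1 - p) for p in p_reviewed)
    var_u = sum(p * (1 - p) for p in p_unreviewed)
    total = r_hat + u_hat
    var_recall = var_r / total**2 + r_hat**2 * (var_r + var_u) / total**4
    lower = recall_hat - 2 * math.sqrt(var_recall)   # lower end of the ~95% interval
    return lower >= target                           # QuantCI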
All words are used as features and are represented by BM25-saturated term frequencies [24, 35]. 5.3 Baselines We compare Quant and QuantCI with multiple existing heuristic stopping methods, all of which are target-agnostic except the CMH heuristic method (CMH-heuristic). These target-agnostic rules include the Knee and Budget Methods [8], the 2399-rule, BatchPos, MaxProb, and CorrCoef. The Batch Positives Method (BatchPos) stops when the number of positive documents in the batch is less than or equal to 20 (i.e., precision is less than or equal to 0.1), following industry convention. We tested stopping immediately (patient=1) and waiting for three additional rounds (patient=4) once the threshold is met. The Probability Cutoff Method (MaxProb) stops when the maximum probability among the unreviewed documents is less than or equal to 0.1. This threshold yields a cost criterion similar to BatchPos, in which, in expectation, retrieving one positive document costs the review of nine negative documents. Apart from the rules discussed in Section 3, a score correlation-style approach that stops based on the correlation coefficients between two consecutive rounds (CorrCoef) is also tested. This approach stops when the average correlation coefficient of the last three rounds is greater than or equal to 0.99. 5.4 Evaluation For each category, we run 10 active learning runs with different seed documents, for a total of 450 runs. The trajectories of the runs are fixed, since the stopping methods tested do not intervene in the iterative process. We evaluate the rules on the recall at stopping and the total cost at stopping for the given recall target. To quantitatively evaluate the reliability of each rule under different targets, we report the mean square error (MSE) of the recall at stopping instead of the reliability measure (the proportion of runs that fulfilled the recall target at stopping) proposed by Cormack and Grossman [8]. The reliability measure favors stopping late, since longer runs yield higher recall and thus fulfill the target more easily, resulting in massive recall overshooting for low recall targets. In an extreme case, a null stopping rule that never stops the process would have perfect but uninformative reliability for all recall targets. By reporting MSE, overshooting and undershooting the target are both penalized. We assign an idealized cost penalty for undershooting the target recall [20]. If the process stops before fulfilling the target, we assign a cost penalty corresponding to the number of documents one would need to review, following the ordering suggested by the predictive model at stopping, before reaching the target. This penalty is also called the optimal second phase review [36]. If the rule overshoots the target, the cost of reviewing the excess documents is also charged. For example, for a collection of 10,000 documents with 1,000 relevant documents and a recall target of 0.5, suppose one has only retrieved 300 relevant documents at iteration 10 (having reviewed a total of 2,001 documents). If the ranked list provided by the final predictive model requires going down to rank 2,300 among the 7,999 unreviewed documents to retrieve the additional 200 required relevant documents, the idealized penalty is 2,300. The total cost is therefore 4,301. In the next section, we normalize the cost of each run by its optimal cost.
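A quick arithmetic check of this worked example (our own):

reviewed = 1 + 10 * 200      # one seed document plus ten batches of 200
penalty = 2300               # rank needed among unreviewed documents for 200 more positives
print(reviewed + penalty)    # 4301, matching the total cost above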
Since the cost varies by task characteristics and target, we report the ratio of the cost using a specific stopping rule over the optimal cost, to avoid naturally high-cost categories diluting the values. In Table 3, the average of such ratios over 450 runs is reported for each stopping rule.

Table 2: Mean square error (MSE) of recall at stopping against different recall targets. The last column reports the mean over all targets for each rule.
Rule            0.1    0.3    0.5    0.7    0.9    Avg.
Knee            0.522  0.279  0.115  0.032  0.029  0.195
Budget          0.546  0.294  0.123  0.031  0.019  0.203
2399-Rule       0.493  0.272  0.131  0.070  0.089  0.211
CorrCoef        0.445  0.241  0.117  0.072  0.108  0.197
MaxProb-0.1     0.447  0.253  0.140  0.106  0.153  0.220
BatchPos-20-1   0.294  0.199  0.185  0.250  0.396  0.265
BatchPos-20-4   0.446  0.258  0.150  0.122  0.173  0.230
CMH-heuristics  0.498  0.362  0.196  0.063  0.008  0.225
Quant           0.236  0.164  0.102  0.072  0.050  0.124
QuantCI         0.236  0.163  0.092  0.046  0.021  0.112

6 RESULTS AND ANALYSIS Both Quant and QuantCI demonstrate strong overall reliability on recall. In Table 2, QuantCI has the lowest MSE at the low recall targets (0.1 to 0.5), while the unadjusted version (Quant) remains strongly comparable. QuantCI is also very competitive at the high recall targets (e.g., 0.7 and 0.9) that the Knee and Budget Methods are designed for. CMH-heuristic has an extremely small loss at the 0.9 recall target but one of the worst at lower targets, indicating vast overshooting. In the next section, we focus on the distribution of recall at stopping for each target-aware rule. The Budget Method provides the most stable recall over the 450 runs among the tested existing target-agnostic heuristic stopping rules at high recall targets. As shown in Table 2, it presents the smallest loss at the 0.7 and 0.9 recall targets. The Knee Method has a slightly larger loss than the Budget Method. Both rules met their designed purpose for high-recall tasks but overshot for lower recall targets, such as 0.3, as they are unaware of the target. Other methods exhibit large variations across tasks and runs, making these rules impractical for any recall target. BatchPos-20-1 stops extremely early, favoring low recall targets but still exhibiting a large loss. While BatchPos-20-4 delays the stopping and reduces the recall loss at higher recall targets, the loss is still large compared to other approaches. Even though these rules are target-agnostic, stopping rules that fail to provide stable recall cannot serve as the stopping rule even for one specific recall target. In the rest of the section, we compare the target-aware methods with the Knee and Budget Methods. 6.1 Recall at Stopping All four target-aware stopping rules adjust the stopping decision based on the recall target and exhibit clear decreasing trends in Figure 1. Despite the high classification effectiveness on the easy tasks (the right column of Figure 1), it is difficult to estimate the number of relevant documents in general. All four approaches overshoot all levels of recall targets in the three easy category bins. On the other hand, CMH-heuristic, on average, overshoots the target in all nine bins, similar to what we observed in Table 2. Despite its awareness of the target, its assumption that part of the actively selected documents were sampled randomly makes its behavior dependent on the task's characteristics.
For hard categories, where active learning often has difficulty finding relevant documents in some iterations, those review batches are similar to random samples, resulting in better-tracked stopping recall. In the other bins the assumption rarely holds in practice, producing larger overshooting in the results. Quant tracks the target recall more closely than CMH-heuristic. Since Quant estimates the number of unreviewed relevant documents by the sum of the predicted probabilities, the estimate is usually too large for easy categories (many documents seem positive) but too small for hard categories (most documents seem negative), resulting in overshooting and undershooting the recall, respectively. The medium categories (the middle column in Figure 1) strike a balance between the two and therefore allow Quant to track the recall more accurately. Finally, QuantCI further improves the confidence of stopping and alleviates undershooting of the target. For rare categories (the first row in Figure 1), recall at stopping increases for the high recall targets while remaining similar for the lower recall targets. This indicates that the standard deviation adjustment does not uniformly delay stopping, as the patient parameter for BatchPos does, but delays it according to the quality of the estimation. Despite the similarity between Quant and QuantCI in recall at stopping, the costs induced by each approach differ significantly. In the next section, we discuss the total cost of applying each stopping rule. 6.2 Total Cost at Stopping Among the target-agnostic rules we tested, the Knee Method provides the lowest cost, as shown in Table 3. The Budget Method, despite stopping with stabler recall, costs significantly more than the Knee Method under all tested recall targets (95% confidence with Bonferroni correction for 5 tests on a paired t-test). Even with 0.7 as the recall target, which these rules are designed for, the Budget Method (in contrast to the implication of its name) costs 4.02 times the optimal cost, compared to 2.42 times for the Knee Method. For lower recall targets, such as 0.1, target-agnostic rules all incur large cost overheads. Since these heuristic rules all rely on some notion of stability testing, the models are usually still unstable when the low recall targets are reached, resulting in vast overshooting and excessive reviewing cost. Simple rules such as MaxProb, BatchPos, and the 2399-Rule, while incurring higher cost at high recall targets, yield lower cost than the Knee Method. Since the stability tests of these simple approaches are weaker, they tend to stop the process earlier than the more sophisticated ones, such as the Knee and Budget Methods. [Figure 1: Recall at stopping of recall target-aware stopping rules. The numbers associated with the labels on the x-axis are the target recall. 50 runs (10 seeds and 5 categories) of each category bin are displayed using boxplot conventions: the box ranges from the 25% (Q1) to 75% (Q3) quartiles of recall over the 50 runs; the green lines and triangles are the median and mean recall, respectively, and whiskers extend to the lesser of the most extreme value observed or a distance of 1.5(Q3 - Q1) from the edges of the box. Outliers are presented as dots above and below the whiskers.] The awareness of the target does not guarantee lower cost.
CMH-heuristic incurs higher cost than BatchPos-20-4 (95% confidence with Bonferroni correction for 5 tests on a paired t-test). Despite the decreasing trend, CMH-heuristic still vastly overshoots the target. The average recall at stopping for the CMH-heuristic rule is 0.76 with a 0.1 recall target, which is even higher than the 0.67 obtained by BatchPos-20-4 for all targets. On the other hand, both of our proposed methods, Quant and QuantCI, tightly track the target and provide a total cost that is closer to optimal. For low (e.g., 0.1) to medium (e.g., 0.5) recall targets, QuantCI yields significantly lower cost than any other tested rule except Quant at the 0.1 recall target (95% confidence with Bonferroni correction for 27 tests on a paired t-test). In Figure 1, the trend of recall at stopping using Quant is slightly concave or convex in some category bins, such as Rare-Hard and Medium-Medium. Since the Quant method estimates the total number of relevant documents based on the probabilities produced by the classifiers, the quality of such estimates depends on the task's difficulty. Therefore, the standard deviation adjustment provides additional protection to the stopping rule against the instability that often appears during early iterations of active learning. The resulting recall at stopping is closer to linear in Figure 1. The cost also reduces significantly. However, for the 0.1 recall target, the optimal stopping point (the iteration that just achieved the target) often occurs when the predicted probabilities have not yet stabilized. QuantCI conservatively waits until the predictions are sufficiently confident (when the lower bound of the confidence interval is larger than the target), which usually implies overshooting the target. Therefore, although the average cost ratio of QuantCI is lower than that of Quant under the 0.1 recall target, the variance of QuantCI is larger and the difference is not significant.

Table 3: Average ratio of the cost at stopping for each category to the optimal cost, for each recall target. Values in parentheses are standard deviations. † and * indicate a statistically significant difference, with 95% confidence, of QuantCI against the Knee Method and Quant, respectively. Significance is tested by paired t-test with Bonferroni correction.
Rule            Target = 0.1     Target = 0.3     Target = 0.5     Target = 0.7     Target = 0.9
Knee            21.52 (62.78)    5.96 (10.07)     3.43 (5.40)      2.42 (3.13)      †2.36 (1.70)
Budget          56.45 (180.25)   13.62 (43.61)    5.43 (7.93)      4.02 (6.70)      3.35 (4.16)
2399-Rule       22.52 (41.64)    7.16 (9.56)      4.35 (3.99)      3.76 (4.30)      3.42 (4.36)
CorrCoef        16.64 (30.13)    5.80 (7.06)      3.94 (4.87)      3.48 (6.06)      3.64 (5.51)
MaxProb-0.1     11.29 (12.93)    5.42 (4.12)      4.93 (6.27)      4.75 (7.61)      3.56 (5.65)
BatchPos-20-1   12.44 (11.90)    12.41 (13.94)    13.90 (16.51)    13.72 (18.48)    10.34 (20.34)
BatchPos-20-4   12.89 (17.27)    5.84 (4.39)      5.36 (6.04)      5.19 (7.40)      3.57 (4.62)
CMH-heuristics  27.37 (59.29)    18.18 (30.47)    16.18 (14.67)    12.28 (13.21)    17.33 (25.30)
Quant           6.12 (9.45)      3.47 (3.60)      3.07 (3.71)      2.94 (3.53)      *2.81 (1.81)
QuantCI         †6.01 (9.79)     †*3.01 (3.15)    †*2.45 (2.41)    *2.38 (1.95)     3.35 (3.57)

For higher recall targets (e.g., 0.7 and 0.9), our proposed approaches perform similarly to the Knee Method, the best existing heuristic rule for such targets. Note that the Knee Method was designed for a 0.7 recall target.
Our proposed Quant and QuantCI produce very similar cost even in this crowded field. When targeting 0.9 recall, the Knee Method offers a slightly lower cost ratio than Quant and QuantCI. Again, since the Quant method relies on the underlying classifier to estimate the total number of relevant documents, identifying the last several relevant documents is challenging both for ranking (prioritizing the review) and for probability estimation (estimating their existence). When the last several needles are still in the haystack, the number of relevant documents in the unreviewed set is under-estimated, resulting in an inflated estimated recall and, therefore, undershooting of the real recall. Furthermore, the variance of the recall estimate during those unstable stages is undoubtedly high, driving QuantCI to stop the process later; often it is too late and overshoots the recall. This results in cost overheads (3.35). Both Quant and QuantCI outperform the widely compared target-agnostic Knee Method, without any intervention or additional random samples, except when an extremely high recall target is set. A challenging high recall target often requires a high reviewing cost even with optimal stopping, providing a large incentive for using a sample-based stopping approach: the cost of the random sample can be amortized over the longer active learning run. Next, we investigate a specific active learning run closely. 6.3 A case study on stopping and cost In Figure 2, we demonstrate an active learning run on category I24000 (a Rare-Medium category with prevalence 0.002133) with seed #7. Each target-agnostic rule stops at the same iteration across the targets but appears at different locations because of each graph's different x-axis range. The optimal stopping point is where the early stopping penalty first becomes zero. In addition to the heuristic rules, we tested a sample-based rule that stops when the recall estimated from the sample reaches the target, similar to the Target Rule proposed by Cormack and Grossman [8]. We evaluated samples with both 20 and 50 relevant documents. By sampling uniformly at random until enough relevant documents are selected, the expected sample sizes (including the non-relevant documents) are 9,378 and 23,445, respectively. We conducted 100 trials for each sampling size and report the median stopping iterations. Although the reviewing cost grows at the same rate (blue blocks in Figure 2), stopping late for low recall targets, which have low optimal cost, yields a higher relative cost overhead. This observation also implies that it is cheaper to stop late for a high recall target than for a low one. The same observation can be made on the left side of the optimal stopping point, but with a convex rather than linear relationship due to the nonlinearity of the learning curves. Stopping vastly prematurely, as BatchPos-20-1 does, incurs an excessive cost overhead because of the under-trained model. For the 0.9 recall target, the improvement of the model plateaus (the decreasing orange blocks), so stopping anywhere between iterations 25 and 70 yields a very similar total cost. Beyond the difference in total cost, the overhead of stopping early or late also differs across recall targets. The shape of the penalty also differs based on the characteristics of the task. Due to space constraints, we cannot show other categories, but the distinctions between the targets already demonstrate the need to track the target.
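A rough check (our own) of the expected sample sizes quoted above: drawing uniformly at random until k relevant documents are found takes roughly k / prevalence draws in expectation.

prevalence = 0.002133                                   # category I24000
print(round(20 / prevalence), round(50 / prevalence))   # 9376 23441, close to the reported 9,378 and 23,445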
Estimating progress based on a random sample is much more accurate than the heuristic approaches. For each tested recall target except 0.9, the sampling approach either stops at the optimum or is off by one iteration. However, this accuracy comes at the price of reviewing the sample, which usually makes the total cost exceed that of the heuristic approaches. For the challenging 0.9 recall target, the sampling approaches are still much more accurate than the heuristics but are off by more iterations. In this case, the cost of using such sampling methods is similar to or lower than that of the heuristic ones, providing an incentive to review the large random sample a priori. [Figure 2: Cost and stopping round of each rule on category I24000 with seed #7. The heights of the blue and orange blocks indicate the cost and penalty if stopping at the corresponding round. Vertical dashed lines indicate the stopping round of the rules labeled at the right of each line. CMH-heuristic stops beyond the range presented for all recall targets. Triangles and squares represent the cost at stopping of a random sample with 20 and 50 positive documents, respectively.] However, routinely testing the same goal against the same random sample yields sequential biases [33]. The simple approach we tested here is no exception. While we do not propose this simple rule for a high recall target, the observation motivates the use of sample-based stopping rules. A valid approach that avoids sequential biases is still an active research topic. 7 SUMMARY In this work, we proposed a target-aware heuristic stopping rule, Quant, that estimates the total number of relevant documents and the recall progress using a model-based approach for one-phase technology-assisted review. We provided the standard deviation of our recall estimator and further improved the stopping rule based on the stability of the estimation. The Quant method with standard deviation adjustment, QuantCI, has the lowest MSE on recall at stopping among the tested rules. It also yields the lowest total cost when targeting low recall and is competitive with the popular Knee Method for high recall targets." + }, + { + "url": "http://arxiv.org/abs/2106.09866v1", + "title": "On Minimizing Cost in Legal Document Review Workflows", + "abstract": "Technology-assisted review (TAR) refers to human-in-the-loop machine learning\nworkflows for document review in legal discovery and other high recall review\ntasks. Attorneys and legal technologists have debated whether review should be\na single iterative process (one-phase TAR workflows) or whether model training\nand review should be separate (two-phase TAR workflows), with implications for\nthe choice of active learning algorithm. The relative cost of manual labeling\nfor different purposes (training vs. review) and of different documents\n(positive vs. negative examples) is a key and neglected factor in this debate.\nUsing a novel cost dynamics analysis, we show analytically and empirically that\nthese relative costs strongly impact whether a one-phase or two-phase workflow\nminimizes cost. We also show how category prevalence, classification task\ndifficulty, and collection size impact the optimal choice not only of workflow\ntype, but of active learning method and stopping point.", + "authors": "Eugene Yang, David D.
Lewis, Ophir Frieder", "published": "2021-06-18", "updated": "2021-06-18", "primary_cat": "cs.IR", "cats": [ "cs.IR", "cs.HC" ], "main_content": "INTRODUCTION Manual review of large document collections is a key task in the law and other applications. In the law, electronic discovery (eDiscovery) or electronic disclosure (eDisclosure) refers to a range of enterprise document review tasks including review for responsiveness in civil litigation [3, 20], regulatory reviews [36], and internal investigations [22]. Structurally similar tasks include systematic review in medicine (finding published clinical trials of a treatment) [53] and content monitoring for hate speech and harassment in social media [16, 19]. Technology-assisted review (TAR) refers to iterative human-in-the-loop workflows where experts label documents for relevance to a task, and supervised learning of predictive models from those labeled documents is used to find the next batch of documents for review [3]. In machine learning, this is referred to as an active learning workflow [49]. Exploding volumes of documents have made TAR widely used in large legal matters [42]. Low cost is not the only objective. TAR reviews often must meet an effectiveness target, typically on recall (the fraction of relevant documents that are found) [30]. A party in a litigation may, for example, agree to turn over at least 80% of documents responsive to a request for production. TAR workflows vary in whether they distinguish (e.g., by using different personnel) between coding documents for training predictive models and coding them to accomplish the review task itself. In a one-phase TAR workflow all reviewed documents are also used for training, while in a two-phase TAR workflow there are separate training and review phases. There is substantial debate, particularly in the law, over which workflow to use (Section 2.2). A key but neglected factor in choosing among TAR workflows is that costs differ from reviewer to reviewer, and even from document to document. Our contributions in this paper are (1) a mathematical model that predicts how these varying costs affect total workflow cost (Section 3); (2) a new visualization tool, the cost dynamics graph, which displays the landscape of total costs encountered by an evolving workflow (Section 4); and (3) an empirical study that tests the predictions of our mathematical model (Section 6). We confirm our model's prediction that one-phase workflows are optimal under a mathematically narrow (but practically common) set of conditions. The results also provide new insights into the choice of workflows, active learning methods, and stopping rules.
2 TAR WORKFLOWS IN THE LAW TAR workflows are widely applicable, but have seen their most extensive use in the law. The first analytics technology applied to finding documents relevant in civil litigation was Boolean text search, originally applied to manually keypunched abstracts of documents [7]. A Boolean query, typically a disjunction of words, phrases, and proximity operators referred to as a keyword list, is used to select a subset of collected documents for review [4, 39]. It thus serves as a binary (yes/no) classifier [30]. While some documents might be examined while creating the keyword list, most documents reviewed by the workflow are those that the query retrieves, i.e., those on which the classifier makes a positive prediction. We refer to a workflow where a classifier creation phase (phase one) is followed by a distinct document review phase (phase two) as a two-phase TAR workflow. arXiv:2106.09866v1 [cs.IR] 18 Jun 2021 \fDocEng \u201921, August 24\u201327, 2021, Limerick, Ireland Yang, Lewis, and Frieder As documents became increasingly digital, additional technologies were applied. Grouping technologies such as duplicate detection, near-duplicate detection, email threading, and document clustering allowed related documents to be reviewed together and/or decisions propagated to group members [23, 50]. Text retrieval methods such as query reformulation, statistical ranked retrieval, and latent indexing became used in the law under the heading of \u201cconcept search\u201d [28, 37, 39]. As these technologies became available, they were often used interactively. A user might examine a graphical display of document clusters, each having a summary description, and label each cluster as relevant or not relevant. 1 They might run multiple ranked retrieval searches, and label one or many documents at the top of a ranking as relevant or not relevant. Even the result of a Boolean query might be bulk-labeled as relevant. The net result of such interactions is that a review decision has been made (explicitly or by a default assumption of non-relevance) for every document in the collection. This user-created classification may be viewed as definitive (the interactive reviewers are the final reviewers) or only tentative. In any case, while classifiers (e.g. Boolean queries) may have been created along the way, the final classification is the cumulative result of many user actions during the review, and does not correspond to any single final query or classifier. We refer to workflows of this sort as one-phase TAR workflows. 2.1 Supervised Learning in TAR Workflows Supervised learning of predictive models from labeled documents (often referred to as predictive coding in the law) began to be used in civil litigation in the mid-2000s [4], as well as attracting research attention [51]. The first United States federal case explicitly approving its use in review for responsiveness (producing documents requested by adversaries) appeared in 2012.2 Cases followed in Ireland3, Australia4, England5, and other jurisdictions. An early focus was replacing Boolean queries in two-phase workflows with binary text classifiers produced by supervised learning [5, 41, 43]. Replacing manual query writing with supervised learning made the phases of a two-phase review more similar. Both now involved labeling documents, with the difference being only in whether the primary purpose of labeling was training or the final review dispensation. 
Unlike Boolean queries, classifiers produced by supervised learning algorithms typically assign a numeric score to every document. That made them easy to use not just in a two-phase culling workflow, but also for prioritizing documents in an interactive one-phase workflow. Losey first proposed one-phase workflows where supervised learning was one of several text analytics tools used6, in the 1https://chrisdaleoxford.com/2008/09/21/attenex-round-every-corner 2Da Silva Moore v. Publicis Groupe (Da Silva Moore 17), No. 11 Civ. 1279(ALC)(AJP), 2012 WL 607412. (S.D.N.Y. Feb. 24, 2012 3Irish Bank Resolution Corp. v. Quinn [2015] IEHC 175 (H. Ct.) (Ir). 4McConnell Dowell Constructors (Aust) Pty Ltd v Santam Ltd & Ors (No.1) [2016] VSC 734 5Pyrrho Inv. Ltd. v. MWB Bus. Exch., Ltd., [2016] EWHC 256 (Ch) [1] (Eng.) 6https://e-discoveryteam.com/2012/07/01/day-one-of-a-predictive-codingnarrative-searching-for-relevance-in-the-ashes-of-enron/ spirit of Bates\u2019 berrypicking formulation7 of information access [6]. Tredennick8, Cormack & Grossman [9, 14], and others proposed one-phase workflows using supervised learning only or predominantly. 2.2 Controversy in the Law There is currently intense debate in the legal world over workflow design. At one extreme, some commentators assert two-phase culling workflows are \u201cTAR 1.0\u201d [52] or \"simple\" active learning [9], and promote one-phase workflows under headings such as \u201cTAR 2.0\u201d [52], \u201cTAR 3.0\u201d9, \u201cHybrid Multimodal Predictive Coding 4.0\u201d10, and so on. Cormack & Grossman have asserted the superiority of a onephase using relevance feedback [45] (training on top-ranked documents) workflow under the trademarked terms Continuous Active Learning (TM) 11 and CAL (TM) 12 in patent filings [13] and scholarly [12] work. Members of the judiciary have weighed in.13 On the other hand, two-phase workflows based on Boolean text querying, metadata querying, supervised learning, and other binary classifier formation methods continue to be widely used.14 The US Department of Justice Antitrust Division\u2019s model agreement for use of supervised learning assumes a two-phase culling workflow with no human review of classifier responsiveness decisions in phase two. 15 Scholars have felt the need to push back and argue for attorney oversight of phase two machine learning decisions. [26]. 2.3 Review Cost in TAR Our view is that one-phase and two-phase workflows both have their place, and that the per-document cost structure of review is a neglected factor in choosing between them. To take an extreme example, under the DOJ order mentioned above, no human review for responsiveness occurs in phase two. Per-document costs in phase two are not zero (there is usually at least bulk screening for attorney-client privilege), but are lower than for labeling training data in phase one. A two-phase review is not only required by the order, but is economically advantageous. More generally, different reviewers may be used in the two phases. 
The label assigned to a training document affects the fate of many other documents, so review for training is sometimes viewed as requiring senior attorneys (e.g., associates billing at up to US 7https://e-discoveryteam.com/2013/04/21/reinventing-the-wheel-my-discovery-ofscientific-support-for-hybrid-multimodal-search/ 8https://web.archive.org/web/20140420124327/http://www.catalystsecure.com/blog/ 2013/11/tar-2-0-continuous-ranking-is-one-bite-at-the-apple-really-enough/ 9https://www.law.com/legaltechnews/2020/11/19/tar-3-0-expectations-for-modernreview-technology/ 10https://e-discoveryteam.com/2018/10/08/do-tar-the-right-way-with-hybridmultimodal-predictive-coding-4-0/ 11United States Trademark 86/634255 12United States Trademark 86/634265 13US Federal Judge Andrew Peck stated in 2015: \u201cIf the TAR methodology uses \u2018continuous active learning\u2019 (CAL) (as opposed to simple passive learning (SPL) or simple active learning (SAL)), the contents of the seed set is much less significant.\u201d Rio Tinto PLC v. Vale S.A., 306 F.R.D. 125, 128-29 (S.D.N.Y. 2015). 14Court orders involving two-phase workflows in recent US cases include In Re Broiler Antitrust (N.D. Ill., Jan 3, 2018, City of Rockford v. Mallinckrodt ARD Inc (N.D. Ill. Aug. 7, 2018), and Livingston v. City of Chicago, No. 16 CV 10156 (N.D. Ill. Sep. 3, 2020) 15https://www.justice.gov/file/1096096/download \fOn Minimizing Cost in Legal Document Review Workflows DocEng \u201921, August 24\u201327, 2021, Limerick, Ireland $1000 per hour16). Conversely, review decisions not used for training are often done by contract attorneys (perhaps billed out at US $40 per hour17). A second factor is the relative cost of different types of documents. If most nonrelevant documents are obviously off-topic, this might require little reading (e.g., just an email subject line) to determine that. Conversely, relevant documents may require full reading to make a decision. Little published data on this are available, but Mcdonald et al. [35] found that coming to a classification decision for sensitive documents took longer than for nonsensitive documents. Further, responsive documents may require additional decisions, such as privilege review [38]. TAR research has largely ignored this cost asymmetry. Systems have been evaluating using generic information retrieval metrics, or cost-based measures that treat all costs as equal [2, 11, 17, 33, 37, 47, 51, 54]. A rare exception is MINECORE, a proposed two-phase workflow for combined responsiveness and privilege review [38]. MINECORE assumes, however, that multiple classifiers are used as well. The CLEF 2017 eHealth Technology-Assisted Review proposed a \u201ccost-effective\u201d systematic review sub-task, where one evaluation metric took into account whether a reviewer examined just the abstract and title of a document, or read the full document [24]. The CLEF 2018 eHealth Technology-Assisted Review web page proposed two such measures18. Unfortunately, it is not clear these measures were actually used: neither track overview discusses results on them [24, 25]. 3 A FRAMEWORK FOR TAR COSTS The goal of most TAR workflows is to hit an effectiveness target (e.g., 0.80 recall) while minimizing cost. Choices such as workflow style, sampling strategy, and stopping point must be made with an eye toward this goal. We propose that a finer-grained examination of costs can inform these decisions. 
3.1 An Idealized Cost Model For the purposes of this study, we define a TAR cost structure to be a four-tuple $s = (\alpha_p, \alpha_n, \beta_p, \beta_n)$. Here $\alpha$ and $\beta$ represent the cost of reviewing one document during the first and second phase respectively; the subscripts $p$ and $n$ indicate the cost for reviewing positive (e.g., relevant or responsive) and negative (e.g., nonrelevant or nonresponsive) documents, respectively. Assume a one-phase TAR workflow with uniform training batches of size $b$, stopping after $t$ iterations with $Q_t$ positive documents found. The cost incurred is $\alpha_p Q_t + \alpha_n (bt - Q_t)$. Stopping at iteration $t$ might not meet the recall goal, however, so we also need a failure cost to account for remediation. For analysis purposes, we propose defining the failure cost for a one-phase review to be the cost of continuing on to an optimal two-phase review. That is, we assume that the model trained on documents labeled during the $t$ iterations before stopping is used to rank the unreviewed documents, and those documents are reviewed in order until the recall target is hit. If a collection contains $R$ positive documents, and the recall goal is $g$ (e.g., 0.8), then $Q = \lceil gR \rceil$ is the minimum number of documents that must be reviewed to reach the recall target. A one-phase review has a deficit of $Q - Q_t$ positive documents when $Q > Q_t$. Let $\rho_t$ be the minimum number of documents that must be examined from the top of the ranking of unreviewed documents to find an additional $Q - Q_t$ positive documents. Note that $\rho_t$ is reduced both by having found more positive documents in phase one, and by having trained a more effective model at the end of phase one to use in phase two. Given the above, we can define the phase two cost (failure penalty) to be $\beta_p (Q - Q_t) + \beta_n (\rho_t - Q + Q_t)$. The total cost, $Cost_s(t)$, of a one-phase review with this failure penalty is, for $Q_t < Q$: $$\alpha_p Q_t + \alpha_n (bt - Q_t) + I[Q_t < Q]\,\big(\beta_p (Q - Q_t) + \beta_n (\rho_t - Q + Q_t)\big) \quad (1)$$ where $I[Q_t < Q]$ is 1 if $Q$ documents were not found in the first $t$ iterations, and 0 otherwise. Footnotes: 16https://www.law.com/americanlawyer/2020/05/22/associate-billing-rates-surpass1k-as-firms-snap-up-bankruptcy-work/ 17https://www.theposselist.com/2018/06/18/tales-from-the-trenches-the-explosionof-e-discovery-document-review-projects-in-d-c-and-nyc/ 18https://sites.google.com/view/clef-ehealth-2018/task-2-technologically-assistedreviews-in-empirical-medicine
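As a concreteness check on Equation 1, the following minimal sketch (ours, not from the paper) computes the idealized total cost of a candidate stopping iteration given a cost structure and the quantities $Q_t$ and $\rho_t$; the numbers in the example call are purely illustrative.
```python
# Idealized total cost of stopping a one-phase review after t iterations (Eq. 1).
# s = (alpha_p, alpha_n, beta_p, beta_n); b = batch size; Q_t = positives found
# by iteration t; Q = ceil(g * R) positives needed for the recall target;
# rho_t = documents an optimal phase two must review to find the remaining Q - Q_t.
def total_cost(s, b, t, Q_t, Q, rho_t):
    alpha_p, alpha_n, beta_p, beta_n = s
    phase_one = alpha_p * Q_t + alpha_n * (b * t - Q_t)
    if Q_t >= Q:                      # recall target already met: no failure penalty
        return phase_one
    phase_two = beta_p * (Q - Q_t) + beta_n * (rho_t - (Q - Q_t))
    return phase_one + phase_two

# Example with the uniform cost structure (1, 1, 1, 1): equals b*t + rho_t = 4500.
print(total_cost((1, 1, 1, 1), b=200, t=10, Q_t=900, Q=1600, rho_t=2500))
```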
We then define the cost of a two-phase review stopping at $t$ similarly, i.e., we assume the second phase is conducted optimally, reviewing the same $\rho_t$ documents that a failed one-phase review would. Indeed, in this framework, a one-phase review is simply a two-phase review that stops at a point where the second phase is not necessary. Our idealized definition of failure cost has several advantages. It is a deterministic, easily computed value defined for all stopping points and independent of the stopping rule used. More importantly, it puts one-phase and two-phase reviews on the same footing. When statistical guarantees of effectiveness are needed, two-phase reviews often use a labeled random sample to choose a second-phase cutoff. Proponents of one-phase reviews frequently point to the cost of labeling this random sample, while ignoring the fact that holding a one-phase review to the same statistical standard would incur a similar cost. 3.2 Cost Dynamics Graphs Our framework allows costs to be compared equitably not just between workflow types, but across iterations within a single workflow. As a visualization of how workflow cost evolves over iterations, we propose using a cost dynamics graph that plots our total cost measure at each possible stopping point, while breaking out the components of the cost separately. Figure 1 provides an example. (Figure 1: Cost dynamics graphs for category GENV on the 20% RCV1-v2 collection. We assume relevance feedback with a 0.80 recall target under three cost structures. See Section 5 for experimental details. The x-axis shows the number of iterations of active learning. Colored/shaded areas represent the four costs in Equation 1, such that the height of the filled area equals total review cost when stopping at that iteration. Iterations after the minimum-cost iteration are more lightly shaded. The dashed line at iteration 20 indicates when a one-phase review achieves 0.80 recall.) Total review cost after each iteration is separated into first-phase positives (blue w/ dots), first-phase negatives (orange w/ strokes), second-phase positives (uniform green), and second-phase negatives (red w/ circles). Costs here are for the same TAR workflow execution, but plotted under three different cost structures (Section 4). The workflow is carried out for 30 iterations of relevance feedback with a training batch size of 200 (details in Section 5). With a uniform cost structure (a), minimum cost (lowest height of the total shaded area) is reached at iteration 20, where (almost) no second-phase review is needed: a one-phase review is basically optimal. (Footnote 19: While difficult to see in (a), there actually are a small number of documents, a fraction of one batch, reviewed in phase two.) For cost structure (b), where phase one is ten times as expensive as phase two, stopping much sooner (at iteration 9) is optimal, with a 41% cost reduction over the essentially one-phase review ending at iteration 20. For cost structure (c), where both phase one and positive documents are five times more costly than their counterparts, stopping at iteration 6 is optimal, with a 28% cost reduction over stopping at iteration 20.
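A cost dynamics graph of this kind can be produced directly from the four components of Equation 1. The sketch below is our own illustration, assuming the per-iteration counts Q_by_iter and rho_by_iter have already been recorded from a workflow run; it simply stacks the four components with matplotlib.
```python
# Minimal cost dynamics graph: stack the four components of Eq. 1 per iteration.
import matplotlib.pyplot as plt

def cost_components(s, b, t, Q_t, Q, rho_t):
    alpha_p, alpha_n, beta_p, beta_n = s
    deficit = max(Q - Q_t, 0)
    return (alpha_p * Q_t,                      # phase-one positives
            alpha_n * (b * t - Q_t),            # phase-one negatives
            beta_p * deficit,                   # phase-two positives
            beta_n * max(rho_t - deficit, 0))   # phase-two negatives

def plot_cost_dynamics(s, b, Q, Q_by_iter, rho_by_iter):
    # Q_by_iter[i] and rho_by_iter[i] are Q_t and rho_t after iteration i+1.
    iters = range(1, len(Q_by_iter) + 1)
    comps = [cost_components(s, b, t, q, Q, r)
             for t, (q, r) in enumerate(zip(Q_by_iter, rho_by_iter), start=1)]
    labels = ["P1 positives", "P1 negatives", "P2 positives", "P2 negatives"]
    plt.stackplot(iters, *zip(*comps), labels=labels)
    plt.xlabel("Active learning iteration")
    plt.ylabel("Total review cost")
    plt.legend()
    plt.show()
```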
3.3 Fixed Versus Variable Costs Equation 1 is awkward to use directly for reasoning about workflow costs. If we rewrite it to collect terms in $Q$, $Q_t$, and $\rho_t$, we get, for $Q_t \leq Q$, that $Cost_s(t)$ is $$(\alpha_p - \alpha_n - \beta_p + \beta_n)\,Q_t + \alpha_n b t + \beta_n \rho_t + (\beta_p - \beta_n)\,Q, \quad (2)$$ while for $Q_t \geq Q$ it is simply $$(\alpha_p - \alpha_n)\,Q_t + \alpha_n b t. \quad (3)$$ For a given target $Q$ and cost structure $s$, the fourth term, $(\beta_p - \beta_n)\,Q$, is a fixed cost. The remaining terms are variable costs that reflect characteristics of the TAR approach. The first term depends on the number of positives found by iteration $t$, while the second term, $\alpha_n b t$, is simply linear in the number of iterations $t$. The third term, $\beta_n \rho_t$, depends on both the undone work at iteration $t$ and the quality of the predictive model formed by then. In the next section, we use this decomposition of costs to predict TAR behavior under typical cost structures.
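As a quick sanity check on this regrouping (our own illustration; the parameter values are arbitrary), Equation 2 can be verified numerically against Equation 1 for any cost structure when $Q_t < Q$:
```python
# Numerically check that the regrouped form (Eq. 2) equals Eq. 1 when Q_t < Q.
def eq1(s, b, t, Q_t, Q, rho_t):
    ap, an, bp, bn = s
    return ap*Q_t + an*(b*t - Q_t) + bp*(Q - Q_t) + bn*(rho_t - Q + Q_t)

def eq2(s, b, t, Q_t, Q, rho_t):
    ap, an, bp, bn = s
    return (ap - an - bp + bn)*Q_t + an*b*t + bn*rho_t + (bp - bn)*Q

for s in [(1, 1, 1, 1), (10, 10, 1, 1), (20, 10, 2, 1), (25, 5, 5, 1)]:
    assert eq1(s, b=200, t=9, Q_t=700, Q=1600, rho_t=3000) == \
           eq2(s, b=200, t=9, Q_t=700, Q=1600, rho_t=3000)
print("Equation 2 matches Equation 1 for the tested cost structures.")
```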
4 SOME TAR COST STRUCTURES We now examine typical cost structures and their implications for workflow style and active learning method. 4.1 Uniform Cost For many review projects, it is reasonable to assume all review costs are roughly equal. If so, we can without loss of generality assume $s = (1, 1, 1, 1)$, and thus total cost is $$Cost_{uniform}(t) = b t + \rho_t \quad (4)$$ where $\rho_t = 0$ if $Q_t \geq Q$. Suppose that we conduct a relevance feedback review under this cost structure, and make the following plausible assumptions: (1) more training data lead to better models, and (2) training batches with more positives lead to better models than training batches with fewer positives. Neither is uniformly true, and the second is often false early in training [29]. Later in a TAR review, however, both tend to be reasonable assumptions. We also assume that the batch size $b$ is small enough to neglect any benefit that a two-phase review would get from reviewing less than a full batch. We formalize Assumption 1 by positing that, for any set of documents and any cutoff $k$, the top $k$ documents from that set have precision equal or higher when ranked by model $M_{t+1}$ than by model $M_t$, where $M_t$ is the model trained on $t$ batches. Consider the decision after iteration $t$ of whether to continue a one-phase review or switch to phase two. Let $U_t$ be the set of unreviewed documents after $t$ iterations. Whether we continue a one-phase review or switch to a two-phase review, the next $b$ documents reviewed will be $B_{t+1}$, the top $b$ documents from a ranking of $U_t$ induced by $M_t$. With a uniform cost structure, the cost is the same whether $B_{t+1}$ is reviewed in phase one or phase two. In a one-phase review, $B_{t+1}$ will be added to the training set. Model $M_{t+1}$ will be produced and used to rank $U_t \setminus B_{t+1}$. The next $b$ documents, $B_{t+2}$, will be drawn from $M_{t+1}$'s ranking of $U_t \setminus B_{t+1}$. In a two-phase review, $B_{t+1}$ will also be reviewed, but not used for training. The next $b$ documents ($B'_{t+2}$) will be drawn instead from $M_t$'s ranking of $U_t \setminus B_{t+1}$. The cost of reviewing $B_{t+2}$ or $B'_{t+2}$ is the same. Under our assumptions, however, a one-phase review has both immediately found more positive documents ($B_{t+2}$ will have more on average than $B'_{t+2}$) and will have a better model ($M_{t+1}$ instead of $M_t$) with which to find future documents. We therefore should not transition to phase two. Since the same analysis applies at every iteration, we predict that a one-phase review is optimal in this setting. This analysis, while not a proof, suggests why one-phase relevance feedback workflows have shown good results with uniform costs [9, 11]. Figure 1(a) shows an example of cost dynamics with a uniform cost structure. 4.2 Expensive Training When more senior attorneys are required for training, per-document review costs in phase one may be a factor of 10 or more higher than in phase two (Section 2.3). A similar asymmetry can occur in systematic review in medicine [24, 25]. Such cost structures have the form $(\alpha, \alpha, \beta, \beta)$, with total cost $$Cost_{phased}(t) = \alpha b t + \beta \rho_t \quad (5)$$ where $\rho_t = 0$ if $Q_t \geq Q$. We usually have $\alpha > \beta$, so a one-phase review is optimal only if every training batch of size $b$ improves the classifier enough to eliminate $b\alpha/\beta$ documents from the competing phase-two review. Since learning curves show diminishing returns with training data [27], we predict a two-phase review will usually be optimal in this setting. Whether relevance feedback or some other active learning method will dominate is less clear. Scenario (b) in Figure 1 shows an example. 4.3 Expensive Positives In this section, we consider several scenarios where positive examples are more expensive to review than negative examples. 4.3.1 Additive Cost for Positives. In the law, positive documents may require review for factors (e.g., attorney-client privilege) not applicable to negative documents.
One model for this is a fixed additional cost $v$ for each positive document, regardless of the phase in which it is found, i.e., the structure $(\alpha + v, \alpha, \beta + v, \beta)$. The cost function when $Q_t \leq Q$ is $$Cost(t) = ((\alpha + v) - \alpha - (\beta + v) + \beta)\,Q_t + \alpha b t + \beta \rho_t + ((\beta + v) - \beta)\,Q = \alpha b t + \beta \rho_t + v Q \quad (6)$$ and when $Q_t > Q$, it becomes $\alpha b t + v Q + v (Q_t - Q)$. This is the same as Equation 5 (or, if $\alpha = \beta$, Equation 4), plus a fixed cost $vQ$, plus an additional penalty for overshooting $Q$. Since overshooting $Q$ is never optimal (ignoring issues of finite batch size), the optimal stopping point is actually the same as for cost structure $(\alpha, \alpha, \beta, \beta)$, so this scenario is not unique. 4.3.2 Multiplicative Cost for Positives. Another possibility is that positives take more time, and that the extra time incurs cost at the usual rate for each review phase. This implies a cost structure $(u\alpha, \alpha, u\beta, \beta)$ where $u > 1$ is the multiplicative surcharge for positive documents. The cost function for $Q_t \leq Q$ is $$Cost(t) = (u\alpha - \alpha - u\beta + \beta)\,Q_t + \alpha b t + \beta \rho_t + (u\beta - \beta)\,Q = (u - 1)(\alpha - \beta)\,Q_t + \alpha b t + \beta \rho_t + (u - 1)\beta Q \quad (7)$$ and for $Q_t > Q$ it is $(u - 1)\alpha Q_t + \alpha b t$. When $\alpha = \beta$, the structure $(u\alpha, \alpha, u\beta, \beta)$ is equivalent to the additive form $(\alpha + v, \alpha, \alpha + v, \alpha)$ with $v = (u - 1)\alpha > 0$ (Section 4.3.1). Multiplicative cost for positives is therefore only a unique scenario when $\alpha \neq \beta$. Typically this will be $\alpha > \beta$ (phase one is more expensive), so $(u - 1)(\alpha - \beta)$ is nonnegative: there is a penalty for positive documents being found in phase one. That favors active learning strategies that find only those positives most useful for training. We predict a two-phase review using a classifier-focused active learning strategy such as uncertainty sampling [31] will outperform both one-phase and two-phase reviews using relevance feedback. Figure 1(c) displays such a cost structure, with minimum cost occurring at iteration 6. 4.3.3 Elite Phase One Review.
Determining privilege can be more subtle legally than determining responsiveness. If elite reviewers are used during phase one, they may be able to incorporate privilege determination into their review at no additional cost. In contrast, responsive documents discovered during phase two may require calling on an elite reviewer to make the privilege decision. We model this with the cost structure (\ud835\udefc, \ud835\udefc, \ud835\udefd\ud835\udc5d, \ud835\udefd\ud835\udc5b) where \ud835\udefc\u2265\ud835\udefd\ud835\udc5d> \ud835\udefd\ud835\udc5b. Total cost when \ud835\udc44\ud835\udc61\u2264\ud835\udc44is, \ud835\udc36\ud835\udc5c\ud835\udc60\ud835\udc61(\ud835\udc61) = \u2212(\ud835\udefd\ud835\udc5d\u2212\ud835\udefd\ud835\udc5b)\ud835\udc44\ud835\udc61+ \ud835\udefc\ud835\udc4f\ud835\udc61+ \ud835\udefd\ud835\udc5b\ud835\udf0c\ud835\udc61+ (\ud835\udefd\ud835\udc5d\u2212\ud835\udefd\ud835\udc5b)\ud835\udc44 (8) and when \ud835\udc44\ud835\udc61> \ud835\udc44is simply \ud835\udefc\ud835\udc4f\ud835\udc61. Given \ud835\udefd\ud835\udc5d> \ud835\udefd\ud835\udc5b, the coefficient for \ud835\udc44\ud835\udc61is negative, rewarding finding positive documents in phase one. This favors relevance feedback as the active learning strategy. However, phase two review is still cheaper, particularly for negatives, so as batch precision declines we expect a transition to phase two at some point will be optimal. We predict a two-phase review using relevance feedback to be optimal. 5 METHODS Our experiments test the predictions of our cost model, with an emphasis on how workflow choice, cost structure, and task properties interact. 5.1 Dataset Experimental evaluations of active learning require large, completely labeled, document collections. We simulate TAR tasks on two collections\u2014RCV1-v2 [32] and the Jeb Bush email collection [46]\u2014 widely used in TAR research [11, 18, 38, 46, 55, 56]. RCV1-v2 contains 804,414 news articles coded for each of 658 categories. We run some tests on all documents, and some on a fixed 20% random subset (160,882 documents), to study the effect of collection size. The Jeb Bush collection, after deduplication, consists of 274,124 emails to and from the governor of the US state Florida, coded for 45 categories developed in two TREC (Text REtrieval Conference) evaluations of TAR technology [18, 46]. To study the impact of task characteristics, we chose 5 random RCV1-v2 categories from each of 9 bins, based on three ranges of prevalence (class frequency) and three ranges of task difficulty (Table 1). Extremely low and high prevalence bins were omitted due the inability to keep the ratio of the bin boundaries small. Difficulty bins were based on the effectiveness of a logistic regression model trained on a random 25% of the full RCV1-v2 and evaluated by R-precision on the remaining 75%. The Jeb Bush collection has too few categories for binning, so we simply used the 41 categories with 80 or more documents. 5.2 Implementation We implemented TAR workflows using libact [57], a Python framework for active learning. One-phase and two-phase workflows were run using each of two active learning algorithms: uncertainty sampling [31] and relevance feedback [9, 45]. We used these algorithms because of their prominence in TAR research \fDocEng \u201921, August 24\u201327, 2021, Limerick, Ireland Yang, Lewis, and Frieder Table 1: Number of categories in each of 15 bins on the full RCV1-v2 collection. 
Difficulty values are R-Precision when training on 25% of collection and testing on 75%. Difficulty Hard Medium Easy Prevalence (by # Pos Docs) (< 0.65) (0.85 0.65) (1.0 0.85) Too rare (< 500) 204 56 10 Rare (500 2,000) 74 56 26 Medium (2,000 8,000) 47 44 47 Common (8,000 32,000) 9 28 29 Too common (> 32,000) 3 11 14 [9, 10, 34, 35, 58], and because they make opposite choices with respect to the exploration / exploitation tradeoff in active learning [31, 40]. Supervised learning used the scikit-learn implementation of logistic regression, with an L2 regularization weight of 1.0 and the CountVectorizer as the tokenizer.20 All words were used as features, with feature value equal to the BM25-style term frequency weight [44, 56]. 5.3 Evaluation Our evaluation metric was our idealized total cost (Section 3) to reach the recall target, which we set at 0.8. In applied settings, TAR workflows use heuristics and/or sample-based estimates to decide when to stop a one-phase review [1, 8, 11, 33, 48]. Similar methods are used in two-phase workflows to decide when to switch to phase two, and where to set the phase two cutoff. One goal of our experiments was to characterize the cost landscape within which such rules operate. We therefore continued all runs until a recall of at least 0.8 was reached, guaranteeing we include the optimal stopping point for both one-phase and two-phase workflows, and computed our total cost (and its components) at each iteration. The first training batch in an active learning workflow typically includes one or more positive \"seed\" documents known to the user or found by a keyword search. To simulate this, Batch 0 of our active learning runs was a single seed document chosen at random from all positives for that category in the collection. For each category, we ran iterative active learning with 10 different seeds (to simulate varying user knowledge). The same seeds were used with both uncertainty sampling and relevance feedback. Subsequent batches were 200 documents (a typical batch size in legal review environments) chosen by active learning. This gave a total of 900 runs on full RCV1-v2, 900 runs on 20% RCV1-v2, and 820 runs on Jeb Bush. A total of 541,733 predictive models were trained. Wall clock run time given the computational resources available at our institution was 3 weeks. To study the overall benefit of a workflow, we computed the mean over a set of tasks (category/seed pairs) of the relative cost reduction achieved by using workflow A instead of workflow B. Specifically, we compute 1 \u2212\ud835\udc36\ud835\udc5c\ud835\udc60\ud835\udc61\ud835\udc34(\ud835\udc61\ud835\udc34)/\ud835\udc36\ud835\udc5c\ud835\udc60\ud835\udc61\ud835\udc35(\ud835\udc61\ud835\udc35), where \ud835\udc61\ud835\udc34and \ud835\udc61\ud835\udc35are the optimal stopping iterations for A and B. Values are thus in the range (\u2212\u221e, 1.0). 20https://scikit-learn.org/ We tested statistical significance of the relative cost reduction using the two-sample Kolmogorov\u2013Smirnov (K-S) test [21] to avoid distributional assumptions. A Bonferroni correction [15] was applied for 84 tests in Table 2 and 126 tests in Table 3. (These counts include one cost structure originally studied, but then dropped from our presentation after concluding it was economically unrealistic.) 
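To illustrate how the comparison metric and significance test described above could be computed, the sketch below is our own illustration: the cost arrays are hypothetical placeholders, and exactly which per-task quantities are passed to the test should be taken from the paper rather than from this sketch. It derives per-task relative cost reductions and applies a two-sample Kolmogorov–Smirnov test with a Bonferroni-corrected threshold using scipy; the 84-test correction mirrors the count reported for Table 2.
```python
# Relative cost reduction of workflow A over B, plus a two-sample K-S test
# on per-task cost distributions with a Bonferroni-corrected alpha.
import numpy as np
from scipy.stats import ks_2samp

def relative_cost_reduction(cost_a, cost_b):
    # 1 - Cost_A(t_A) / Cost_B(t_B), computed per task; values lie in (-inf, 1.0]
    return 1.0 - np.asarray(cost_a) / np.asarray(cost_b)

# Hypothetical optimal costs for the same five tasks under workflows A and B.
cost_a = np.array([1200, 900, 4300, 2500, 700], dtype=float)
cost_b = np.array([1500, 1100, 3900, 3000, 950], dtype=float)

reduction = relative_cost_reduction(cost_a, cost_b)
print("mean relative cost reduction:", reduction.mean())

res = ks_2samp(cost_a, cost_b)   # difference in distribution between A and B
n_tests = 84                     # Bonferroni correction for the Table 2 comparisons
print("significant at 95% after correction:", res.pvalue < 0.05 / n_tests)
```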
Cost structures were chosen to exemplify the non-redundant scenarios from Section 4: Uniform (1, 1, 1, 1); Expensive Training (2, 2, 1, 1) and (10, 10, 1, 1); Expensive Training with Multiplicative Positives (20, 10, 2, 1) and (25, 5, 5, 1); and Elite Phase One Review (20, 20, 11, 1). As discussed in Section 4, other scenarios are redundant from the standpoint of optimal stopping iteration, and thus the comparison between one-phase and two-phase workflows. 6 RESULTS AND ANALYSIS Our experiments tested predictions for how cost structure impacts the optimal choice of workflow style and active learning method, and examined the impact of task characteristics on this relationship. 6.1 Costs, Workflows, and Active Learning As predicted by our cost model, Table 2 shows that a two-phase workflow has lower cost than a one-phase workflow for several asymmetric cost structures. Claims that one-phase relevance feedback workflows are always superior (Section 2.2) are simply incorrect. In contrast, and also as predicted, the one-phase workflow is preferred for the uniform cost structure (1, 1, 1, 1), with a 10 to 20% reduction in cost vs. the two-phase uncertainty sampling workflow. The situation is mixed for the slightly asymmetric structure (2, 2, 1, 1): a two-phase uncertainty workflow is better on 100% RCV1-v2, but one-phase relevance feedback is better on the two smaller collections. Under any cost structure, the two-phase uncertainty workflow has a larger advantage or smaller disadvantage on full RCV1-v2 versus 20% RCV1-v2. This emphasizes a neglected point in discussions of TAR workflows: the larger a data set, the more that costs incurred to improve classifier effectiveness are amortized over many documents. Also neglected in eDiscovery discussions is the fact that relevance feedback can be used in two-phase workflows. Indeed, we find that relevance feedback dominates uncertainty sampling for two-phase workflows except for the Expensive Training with Multiplicative Positives scenarios (20, 10, 2, 1) and (25, 5, 5, 1) on full RCV1-v2. Providing more positive documents in training (which reviewers tend to prefer) can sometimes be a win/win situation. Section 6.2 shows that the underlying story is more complex, when we focus on categories with particular properties. In Table 2 a two-phase relevance feedback workflow always dominates a one-phase relevance feedback workflow. However, for (1, 1, 1, 1) that is an artifact of finite batch size: we can usually slightly reduce cost by replacing the last batch with an optimal cutoff in a second phase. The effect is small on the largest collection and would be negligible with smaller batch sizes. Finally, a one-phase workflow using uncertainty sampling always has poor effectiveness, since there is no second phase to pay back the expense of training a better classifier. \fOn Minimizing Cost in Legal Document Review Workflows DocEng \u201921, August 24\u201327, 2021, Limerick, Ireland Table 2: Mean relative cost reduction resulting from using workflow A instead of workflow B under six cost structures, with negative values indicating workflow A is worse. 1P and 2P indicate one-phase and two-phase workflows; Unc. and Rel. indicate uncertainty sampling and relevance feedback. Values are means over 450 category/seed pairs for 100% and 20% RCV1-v2, and 330 for Jeb Bush. Italicized values are nonzero only due to a two-phase review stopping with the equivalent of a partial training batch. 
Values with a * indicate a statistically significant difference in distribution between A and B at 95% confidence level using a two-sample Kolmogorov\u2013Smirnov test with Bonferroni correction. Collection Workflow A vs. B (1, 1, 1, 1) (2, 2, 1, 1) (10, 10, 1, 1) (20, 10, 2, 1) (25, 5, 5, 1) (20, 20, 11, 1) 100% RCV1-v2 (\ud835\udc41= 804, 414) 2P Unc. vs. 1P Rel. -0.0982 0.0977* 0.3969* 0.4670* 0.4160* 0.2922* 2P Unc. vs. 2P Rel. -0.1340 -0.0446 -0.0591 0.0524* 0.1339* -0.0632 2P Rel. vs. 1P Rel. 0.0237 0.1091 0.3639* 0.3862* 0.3003* 0.3263* 1P Rel. vs. 1P Unc. 0.6098* 0.6098* 0.6098* 0.5694* 0.4878* 0.6098* Jeb Bush (\ud835\udc41= 274, 124) 2P Unc. vs. 1P Rel. -0.1476* -0.0022 0.2221* 0.3001* 0.3115* 0.1391* 2P Unc. vs. 2P Rel. -0.2602* -0.2049* -0.2374* -0.1675 -0.0634 -0.2668* 2P Rel. vs. 1P Rel. 0.0803 0.1634* 0.3767* 0.4071* 0.3559* 0.3211* 1P Rel. vs. 1P Unc. 0.4743* 0.4743* 0.4743* 0.4304* 0.3456* 0.4743* 20% RCV1-v2 (\ud835\udc41= 160, 882) 2P Unc. vs. 1P Rel. -0.2104 -0.0650 0.2580* 0.3472* 0.3093* 0.2460* 2P Unc. vs. 2P Rel. -0.3149 -0.3171* -0.4697* -0.3255* -0.1328 -0.3741 2P Rel. vs. 1P Rel. 0.0585 0.1628 0.4665* 0.4804* 0.3676* 0.4660* 1P Rel. vs. 1P Unc. 0.4200* 0.4200* 0.4200* 0.3863* 0.3190* 0.4200* 6.2 Impact of Task Characteristics When there are few positive examples, relevance feedback and uncertainty sampling act similarly [29]. Conversely, when faced with many easily detected positive examples, relevance feedback can drown in non-informative positive examples. Table 3 examines this effect, comparing uncertainty sampling and relevance feedback for two-phase workflows on the full and 20% RCV1-v2 collections, while also partitioning results by category prevalence and difficulty. 6.2.1 Number of Target Documents. Both higher category prevalence and larger collection size increase the number of positive examples a workflow must find to hit a recall target. Both therefore provide more opportunity to amortize the cost of any negative examples seen during training. Table 3 shows this effect is powerful: in all 36 scenarios examined (3 difficulty levels, 6 cost structures, 2 collection sizes), uncertainty sampling has a bigger advantage (or smaller disadvantage) vs. relevance feedback for Common categories than for Rare ones. Further, in 50 out of 54 comparisons (3 difficulties, 3 prevalences, 6 cost structures), uncertainty sampling improves versus relevance feedback when moving from 20% RCV1-v2 to the full RCV1-v2. The effect is particularly strong in expensive training scenarios. For example, on Medium/Common categories with cost structure (1, 1, 1, 1) uncertainty sampling shows a small improvement (from -0.0546 to 0.0200) when going from 20% RCV1-v2 to full RCV1-v2. With cost structure (10, 10, 1, 1) the improvement is much larger: from 0.0836 to 0.4626. 6.2.2 Task Difficulty. The story is less straightforward for task difficulty. If the boundary between positive and negative documents is complex but learnable, techniques like uncertainty sampling can choose informative negative examples. On the other hand, good classifier effectiveness is simply impossible for some tasks, due to noisy labeling or model limitations. In those cases, relevance feedback might dominate by prioritizing the least bad predictions. Table 3 shows that in 33 of 36 scenarios (3 prevalences, 6 cost structures, 2 collection sizes) uncertainty sampling does better relative to relevance feedback on Hard categories than Easy ones. 
This supports the suggestion that focusing training on classifier effectiveness is desirable on difficult tasks. The worst tasks for uncertainty sampling are Easy/Rare categories. One can quickly get a good classifier, so reviewing negative examples is almost a pure loss. One should heavily weight exploitation over exploration [40]. We display this for one run (using a seed chosen for typical behavior) of Easy/Rare category BURMA with cost structure (20,10,2,1) in Figures 2(a) and (d). Optimal effectiveness with relevance feedback comes from stopping after a single training batch of size 200, having reviewed almost no negative examples, and immediately exploiting the classifier in phase two. Uncertainty sampling must absorb many negative phase one examples (orange w/ strokes) before finding enough positive examples for switching to phase two to be optimal. Conversely, Table 3 shows that uncertainty sampling is strongly dominant on Medium/Common tasks, particularly when training is expensive. Figures 2(b) and (e) show Medium/Common task I21000 (Metal Ore Extraction). By selecting a balanced set of positives (blue/dots) and negatives (orange/strokes) examples during training, uncertainty sampling optimally stops at iteration 11, deploying an effective but imperfect classifier in the low cost second phase. Relevance feedback gorges on high cost, low value positive examples during training, with its best case corresponding to deploying a bad classifier at iteration 31. For difficult tasks, all approaches are expensive, but the averaged results in Table 3 show uncertainty sampling with a modest \fDocEng \u201921, August 24\u201327, 2021, Limerick, Ireland Yang, Lewis, and Frieder Table 3: Mean relative cost reduction when using uncertainty sampling rather than relevance feedback in a two-phase workflow (2P Unc. vs. 2P Rel.) on full and 20% RCV1-v2 collection. Table details are as in Table 2. Means are over 50 runs: 5 categories per bin (Section 5.1) and 10 random seeds per category. Size Difficulty Prevalence (1, 1, 1, 1) (2, 2, 1, 1) (10, 10, 1, 1) (20, 10, 2, 1) (25, 5, 5, 1) (20, 20, 11, 1) 100% Common -0.0524 0.1922* 0.2089* 0.2473* 0.2343* 0.0655 Easy Medium -0.1029 0.1052* -0.0188 0.0898 0.1780* 0.0053 Rare -0.8323* -0.9031* -1.5450* -0.9403* -0.2893 -1.0047* Common 0.0200 0.2136* 0.4626* 0.5027* 0.4677* 0.2427* Medium Medium 0.0003 0.1127 0.2005 0.2703 0.2918 0.0829 Rare -0.1300 -0.0701 -0.0405 0.0060 0.0541 -0.0815 Common 0.0326 0.0829 0.2248* 0.2606* 0.2425* 0.1404 Hard Medium -0.0283 -0.0020 0.0769 0.1285* 0.1439* 0.0276 Rare -0.1131 -0.1324 -0.1014 -0.0932 -0.1182 -0.0468 20% Common -0.2137* -0.1707 -0.5981* -0.3383 -0.0524 -0.2425* Easy Medium -0.4012* -0.4491* -1.0240* -0.6919* -0.2362 -0.5315* Rare -1.4201* -1.5464* -1.9346* -1.5411* -0.9013* -1.6534* Common -0.0546 0.0266 0.0836 0.1715 0.2194* -0.0083 Medium Medium -0.2177 -0.2168 -0.2814 -0.1987 -0.0743 -0.3272 Rare -0.3091 -0.3028 -0.3599 -0.2605 -0.1610 -0.3476 Common -0.0044 0.0103 0.1121 0.1058 0.1141 0.0383 Hard Medium -0.1150 -0.1087 -0.1304 -0.0823 -0.0043 -0.1948 Rare -0.0985 -0.0962 -0.0950 -0.0937 -0.0990 -0.0996 Figure 2: Cost dynamics graphs exemplifying the interaction of active learning method and category properties for cost structure (20, 10, 2, 1). BURMA is in category bin Easy/Rare, I21000 is in Medium/Common, and I81502 is in Hard/Common. Each graph corresponds to one run (a particular seed), not an average over runs. advantage. 
Figures 2(c) and (f) show Hard/Common task I81502 (Banking and Financial Services). Obtaining even a mediocre classifier requires a lot of expensive training, and uncertainty sampling does a better job of grinding this out for this high prevalence category. Conversely, Table 3 shows relevance feedback is favored for low prevalence difficult categories, where there are fewer positives over which to amortize training effort. \fOn Minimizing Cost in Legal Document Review Workflows DocEng \u201921, August 24\u201327, 2021, Limerick, Ireland Figure 3: Mean optimal stopping iteration (top row) and acceptable stopping range with 10% cost tolerance (bottom row) as asymmetry (value of x on x-axis) is varied within three cost structure families on 20% RCV1-v2. Means are over 150 runs (15 categories, 10 seeds each). 6.3 Optimal Stopping Iteration Under our idealized cost model, the optimal stopping point is the one that, for a one-phase review, minimizes the sum of review cost and failure cost. Equivalently, this is the optimal point to transition to the second phase of a two-phase review. By providing a precise definition of optimal stopping, our framework allows studying how task properties impact the difficulty faced by stopping rules. We examine two characteristics of the stopping problem: the optimal stopping iteration (Figure 3a, b, and c) and the number of iterations during which cost is near-optimal (within 10% of optimal) (Figure 3d, e, and f). The smaller the second value, the more difficult the stopping rule\u2019s task is. We average these quantities separately across tasks within the Easy, Medium, and Hard bins. The 20% RCV1-v2 subset is used to reduce computation time, since many runs must be extended well past the 0.80 recall target to capture the 10% cost range. We examine how stopping characteristics change as we vary the degree of cost asymmetry within three cost structure families: (1 + \ud835\udc65, 1 + \ud835\udc65, 1, 1), (1 + \ud835\udc65, 1, 1 + \ud835\udc65, 1), and (1 + \ud835\udc65, 1, 1, 1) with adjustable \ud835\udc65from 0.0 to 20.0. For the Expensive Training cost family (1 + \ud835\udc65, 1 + \ud835\udc65, 1, 1) we see, unsurprisingly, that the optimal stopping iteration decreases quickly as \ud835\udc65increases. Less obviously, the number of iterations during which near-optimal cost can be achieved also narrows as \ud835\udc65becomes large: the stakes become higher for stopping outside a narrow range of iterations. Unfortunately, the position of that range varies substantially with task difficulty, posing a challenge for stopping rules. For the Additive Positive cost family (1 +\ud835\udc65, 1, 1 +\ud835\udc65, 1), recall that our analysis showed that the optimal stopping point is independent of \ud835\udc65(Section 4.3.1). Figures 3a, b, and c confirm this empirically. On the other hand, larger \ud835\udc65increases the minimum cost and thus increases the range of iterations where total cost is within 10% of that minimum. This eases the stopping rule\u2019s task, particularly for uncertainty sampling (which can amortize its negative examples over a larger total cost). Finally, to exhibit the complexities possible, we consider the unusual cost family (1 + \ud835\udc65, 1, 1, 1): positives incur extra cost only during training. Since it does not matter when negatives are reviewed, the optimal stopping iteration decreases more slowly with \ud835\udc65than for (1 + \ud835\udc65, 1 + \ud835\udc65, 1, 1). 
The range of acceptable stopping iterations is relatively stable, but oscillates with \ud835\udc65for Hard categories. The sensitivity of (1 + \ud835\udc65, 1, 1, 1) to the number of positive training documents is a likely contributor to oscillation, but we are not sure if this is a systematic phenomenon or a peculiarity of the small set of Hard categories used. 7 SUMMARY AND FUTURE WORK Our proposed TAR cost model that accounts for cost asymmetries observed in real-world applications, both across document types and across phases of review. We show analytically and empirically that these asymmetries impact which choice of workflow and active learning method minimizes total review cost. One-phase workflows dominate when costs are uniform, while two-phase workflows are favored (providing up to 60% cost reductions) when costs are asymmetric. We also show that task characteristics interact with these choices in predictable ways, with the ability to amortize training costs across a larger number of sought documents a major factor. We also show how the cost structure impacts the optimization problem faced by stopping rules, which may give insight into their design. We hope that our results will also provide practical guidance \fDocEng \u201921, August 24\u201327, 2021, Limerick, Ireland Yang, Lewis, and Frieder to legal technology practitioners, and discourage claims of uniform superiority for one workflow or another." + } + ], + "Dawn Lawrie": [ + { + "url": "http://arxiv.org/abs/2304.12367v2", + "title": "Overview of the TREC 2022 NeuCLIR Track", + "abstract": "This is the first year of the TREC Neural CLIR (NeuCLIR) track, which aims to\nstudy the impact of neural approaches to cross-language information retrieval.\nThe main task in this year's track was ad hoc ranked retrieval of Chinese,\nPersian, or Russian newswire documents using queries expressed in English.\nTopics were developed using standard TREC processes, except that topics\ndeveloped by an annotator for one language were assessed by a different\nannotator when evaluating that topic on a different language. There were 172\ntotal runs submitted by twelve teams.", + "authors": "Dawn Lawrie, Sean MacAvaney, James Mayfield, Paul McNamee, Douglas W. Oard, Luca Soldaini, Eugene Yang", + "published": "2023-04-24", + "updated": "2023-09-24", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "main_content": "INTRODUCTION Cross-language Information Retrieval (CLIR) has been studied for more than three decades, first appearing at the Text Retrieval Conference (TREC) in TREC-4 [9]. Prior to the application of deep learning, strong statistical approaches were developed that work well across many languages. Though as with most other language technologies, neural methods have led to substantial improvements in information retrieval. Several factors combined to make us feel that the time was right to press for rapid progress in CLIR: \u2022 Research Community. There have been recent programs focused on CLIR such as IARPA MATERIAL1 and the Johns Hopkins Human Language Technology Center of Excellence (HLTCOE) Summer Camp for Applied Language Engineering (SCALE) 20212. Recent interest among natural language processing researchers in the related problems of cross-language question answering and development of multilingual embeddings have produced a new crop of researchers familiar with and interested in CLIR and related tasks. \u2022 Algorithms. 
Neural advances in the state of the art in monolingual retrieval have been appearing for several years. Improvements in cross-language IR have come just in the last year or two. \u2022 Data. The appearance of MS MARCO led to rapid advances in monolingual IR. Translations of MS MARCO into other languages have allowed training of CLIR systems. Additional resources that could also be useful for training neural CLIR models have also appeared recently, including 1https://www.iarpa.gov/research-programs/material 2https://hltcoe.org/research/scale CLIRMatrix [24],3 HC4 [17],4 WikiCLIR [23],5 and MIRACL [28].6 \u2022 Infrastructure. Earlier systems for experimental IR have recently been supplemented by systems such as PyTerrier [19]7 and Castorini8 that support neural methods, and by systems such as Patapsco [8]9 that are designed specifically for CLIR. These systems provide a base on which to build, somewhat lowering barriers to entry, and providing a source for baselines to which progress can be compared. The NeuCLIR track was designed to take advantage of this confluence of interest and resources to push the state of the art in neural CLIR forward. We expect the track to help to answer at least the following questions: \u2022 What are the best neural CLIR approaches? \u2022 How do the best approaches compare to the straightforward combination of machine translation and monolingual IR? \u2022 How do the best neural approaches compare to the strongest statistical approaches to CLIR? \u2022 Can reranking further improve retrieval effectiveness using techniques that would be impractical for full-collection retrieval? \u2022 How do the resource requirements for the various approaches compare? \u2022 What resources are most useful for training CLIR systems? \u2022 What are the best neural multilingual information retrieval (MLIR) approaches for producing a single ranked lists containing documents in several languages? NeuCLIR 2022 has helped start to answer these questions. The track will continue in 2023. The NeuCLIR track maintains an official website at: https:// neuclir.github.io. 2 TASK DEFINITION We explore three tasks in the TREC 2022 NeuCLIR track: ad hoc CLIR, reranking CLIR, and monolingual. All three tasks use the same document collections, topics, and relevance assessments. Monolingual runs use topics manually translated into the language of the documents; ad hoc and reranking runs use the original English topics. Ad hoc runs rank documents from the entire collection, while 3https://github.com/ssun32/CLIRMatrix 4https://github.com/hltcoe/hc4 5https://www.cl.uni-heidelberg.de/statnlpgroup/wikiclir/ 6https://github.com/project-miracl/miracl 7https://github.com/terrier-org/pyterrier 8https://github.com/castorini/ 9https://github.com/hltcoe/patapsco 1 arXiv:2304.12367v2 [cs.IR] 24 Sep 2023 \fLawrie et al. reranking runs rank only the 1,000 documents that appear in the output of a NIST-provided initial run. 2.1 Ad Hoc CLIR Task The main task in the NeuCLIR track is ad hoc CLIR Systems receive a document collection in Chinese, Persian, or Russian, and a set of topics written in English. For each topic, the system must return a ranked list of 1,000 documents drawn from the entire document collection of the target language, ordered by likelihood and degree of relevance to the topic. 
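For concreteness, a minimal sketch of serializing such a ranked list follows; it assumes the conventional TREC run format (topic, Q0, document UUID, rank, score, run tag), and the topic and document identifiers shown are invented for illustration, so the exact submission conventions should be checked against the track guidelines.
```python
# Write one retrieved document per line in the conventional TREC run format.
# Columns assumed: topic_id Q0 doc_id rank score run_tag (doc_id = document UUID).
ranked = [("101", "f6836c4d-4b55-4d9f-9cbf-1d8f2f9d3a10", 12.37),
          ("101", "0a1f2c3d-5e6f-4a7b-8c9d-0e1f2a3b4c5d", 11.92)]
with open("myrun.zho.txt", "w") as out:
    for rank, (topic, doc_id, score) in enumerate(ranked, start=1):
        out.write(f"{topic} Q0 {doc_id} {rank} {score:.4f} my_run_tag\n")
```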
Runs that use a human in the loop for ad hoc retrieval (or whose design decisions were influenced by human review of the topics) are indicated as "manual" runs; all others are considered "automatic." 2.2 Reranking CLIR Task The reranking task provides an initial ranked list of 1,000 retrieved documents from the document collection. Each ranked list is the output of a BM25 retrieval system, which used document translation to cross the language barrier. The run ids are coe22-bm25-td-dt-*, where * is zho for Chinese, fas for Persian, and rus for Russian. The runs appear in bold in Tables 8, 9, and 10. These runs use the English titles and descriptions as queries. Systems are then asked to rerank the documents to produce a new ordering that improves an evaluation metric. This task is suitable for teams that want to focus on second-stage scoring models rather than on models that search an entire collection. 2.3 Monolingual Retrieval Task While monolingual retrieval is not a focus of the NeuCLIR track, monolingual runs can improve assessment pools and serve as good points of reference for cross-language runs. The monolingual retrieval task is identical to the ad hoc task, but it uses topic files that are human translations of the English topics into a target language, phrased as they would be expressed by native speakers of that language. This task is suitable for teams looking to explore monolingual ranking in languages other than English. It is also a lower-barrier-to-entry task for teams that are interested in the track. 3 DOCUMENTS There are three components of the 2022 NeuCLIR test collection: documents, topics, and relevance judgments. In this section we describe the documents. The document collection, NeuCLIR-1, consists of documents in three languages, Chinese, Persian, and Russian, drawn from the Common Crawl news collection (https://commoncrawl.org/2016/10/news-dataset-available/). The documents were obtained by the Common Crawl service between August 1, 2016 and July 31, 2021; most of the documents were published within this five-year window. Text was extracted from each source web page using the Python utility Newspaper (https://github.com/codelucas/newspaper). While NIST made the documents (and topics) available for participants and will distribute them for the foreseeable future, the document collection can also be obtained directly from Common Crawl, the original source. A GitHub repository (https://github.com/NeuCLIR/download-collection) facilitates this by providing code to access the documents via their Universally Unique Identifiers (UUIDs). The process extracts the text from the documents and then matches the descriptor to ensure that both track participants and non-participants index the same documents. The collection was distributed to participants in JSONL, a list of JSON objects, one per line. Each line represents a document. Each document JSON structure consists of the following fields:
id: UUID assigned by Common Crawl
cc_file: raw Common Crawl document
time: time of publication, or null
title: article headline or title
text: article body
url: address of the source web page
To ascertain the language of each document, its title and text were independently run through two automatic language identification tools, cld3 (https://pypi.org/project/pycld3/) and an in-house tool, VALID [20], a compression-based model trained on Wikipedia text.
Documents for which the tools agreed on the language, or where one of the tools agreed with the language recorded in the web page metadata, were included in the collection under the language of agreement; all others were removed. This is an imperfect process, and some documents composed of text in other languages remain in the collections. The extent of the language pollution is unknown; however, annotators did sometimes encounter out-of-language documents in the pools of assessed documents. These documents were always considered not relevant. While we expected this process to make errors, we had assumed that no systems would retrieve out-of-language documents. This assumption proved to be false, as some systems ranked documents in other languages highly. All documents with more than 24,000 characters (approximately 10 pages of text) were also removed, as very long documents create challenges in assessment. Additionally, very short documents were removed, specifically: Chinese documents containing 75 or fewer characters, Persian documents containing 100 or fewer characters, and Russian documents containing 200 or fewer characters. We observed that such documents are often not genuine news articles, frequently consisting of isolated headlines or commercial advertisements. Each collection was limited in size to at most 5 million documents. After removing duplicates, the Russian collection was significantly above this threshold. Therefore, we used Scikit-learn's implementation of random sampling without replacement (https://scikit-learn.org/stable/modules/generated/sklearn.utils.random.sample_without_replacement.html) to downsample the collection. Final collection statistics appear in Table 1.

Table 1: Document Collection Statistics for NeuCLIR-1 (token counts from spaCy).
Language | Document Count | Avg. Chars per Document | Median Chars per Document | Avg. Tokens per Document | Median Tokens per Document
Chinese | 3,179,209 | 743 | 613 | 427 | 356
Persian | 2,232,016 | 2032 | 1427 | 429 | 300
Russian | 4,627,543 | 1757 | 1198 | 301 | 204

4 TOPICS NeuCLIR 2022 topics were developed to be traditional TREC-style information needs that are broader than CLIR question answering, which can be answered with a phrase or sentence. Topic development was done in two phases. First, assessors created a topic by writing an English description and searching for relevant documents in their language. Subsequently, pools were created from track submissions for assessors to judge. This section focuses on topic development; Section 5 addresses relevance assessment. During topic development, assessors wrote the title, description, and narrative components of the topic and then examined thirty documents that they discovered through interactive monolingual search using a web interface to the Patapsco framework [8]. They recorded the number of relevant documents they discovered. Any topic where the assessor recorded more than twenty relevant documents was deemed too productive for inclusion in the collection; such topics are deleterious to collection reusability because they can lead to too many relevant documents being unjudged. With the idea of adding a multilingual task to NeuCLIR 2023, we wanted topics that would exhibit relevant documents in multiple languages. Assessors therefore also judged documents for topics developed by assessors in other languages. This process developed 137 topics.
A total of 89 of these were evaluated in Chinese, 69 in Persian, and 104 in Russian. All topics were examined by one of the organizers; topics that were overly ambiguous, too similar to another topic, or judged by the organizers to be otherwise inappropriate were eliminated. A total of 114 topics remained; these were distributed to participants. Human translations of each topic into the three collection languages were produced, and each was subsequently vetted by a language expert to ensure translation quality. These translations were used for the monolingual task. Given that no language assessed all 137 topics, there was insufficient information available after the first phase of topic development by NIST to select multilingual topics. To further evaluate topics, and in particular to identify topics that would have some but not too many relevant documents, additional assessors were used to judge the topics. These additional assessors, who will be referred to as non-NIST assessors, had proficiency in the language for which they provided annotation but generally were not native speakers of the language. The non-NIST assessors judged 63 topics in Chinese, 67 in Persian, and 68 in Russian. They used the Patapsco framework to perform monolingual retrieval using BM25; however, the interface for interacting with the system was different from that used by the NIST assessors. Two main differences were that non-NIST assessors provided document judgments through the interface, rather than reporting a count, and that they had access to HiCAL [2], an active learning system, to recommend documents for assessment. While non-NIST assessors were encouraged to judge thirty documents, the actual number varied between five and sixty-two, with a median of fifteen documents judged per topic. Assessors identified between zero and eighteen relevant documents, with an average of 4.6 documents per topic. Because the number of judged documents per topic was inconsistent, rather than using a cutoff of 20 relevant documents out of 30 as a sign of too many relevant documents, any topic with more than 65% of its judged documents relevant was deemed too productive to contribute to a reusable collection. The viability of each topic in each language was determined using both NIST assessments and non-NIST assessments. Figure 1 shows the percentage of relevant documents found for annotated topics during initial development by the two teams of assessors. A topic was viable if at least one relevant document was found by a NIST or non-NIST assessor and the topic did not appear to have too many relevant documents to support collection reusability. If there was a disagreement about the prevalence (i.e., percentage) of relevant documents for a topic, the NIST assessors' value was used. There were therefore cases in which the NIST assessor found at least one relevant document even though the additional assessment did not find any. In addition, there were topics where NIST assessors identified fewer than twenty relevant documents, but the non-NIST assessors found that more than 65% were relevant. Disagreements could arise because different documents were examined and because relevance is a matter of opinion. A priority list of topics was assembled to favor topics that appeared to be viable in all three languages over topics that appeared to be viable in only two languages. Each topic was assigned to a category. Figure 2 shows the distribution for all 114 topics and the distribution of the fifty topics selected for further relevance assessment.
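The screening rules described above can be condensed into a short decision function. The following is a minimal sketch under the stated cutoffs (20 of roughly 30 judged for NIST, 65% prevalence for non-NIST, with NIST taking precedence); the function name, arguments, and exact tie-breaking are illustrative and not the organizers' actual tooling.

```python
def topic_is_viable(nist_relevant, nist_judged, other_relevant, other_judged):
    """Illustrative screening of a candidate topic (not the organizers' code).

    A topic is kept only if at least one relevant document was found and it
    does not look so productive that many relevant documents would go
    unjudged. NIST development used a 20-of-~30 cutoff; the additional
    (non-NIST) assessments used a 65% prevalence cutoff. When the two
    groups disagree on prevalence, the NIST numbers take precedence.
    """
    if nist_relevant == 0 and other_relevant == 0:
        return False                          # no evidence of relevant documents
    if nist_judged > 0:                       # NIST prevalence wins on disagreement
        return nist_relevant <= 20
    if other_judged > 0:
        return other_relevant / other_judged <= 0.65
    return False

# A topic with 4 of 30 relevant at NIST and 6 of 15 relevant elsewhere is kept.
print(topic_is_viable(4, 30, 6, 15))  # True
```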
The intent was to evaluate all fifty topics in all three languages.

[Figure 1: Percentage of documents judged relevant, as reported by the two groups of assessors during preliminary topic development. Panel (a) shows NIST assessments and panel (b) shows non-NIST assessments; topics are binned by the percentage of judged documents that were relevant, separately for Chinese, Persian, and Russian.]

[Figure 2: Prioritization categories for all topics and for topics selected for pooling. Categories include: topics annotated in all three languages by NIST, or trilingual after non-NIST annotation; topics where NIST annotators found relevant documents but non-NIST annotators did not; Farsi/Chinese, Farsi/Russian, and Russian/Chinese bilingual topics; problematic topics; and monolingual topics.]

5 RELEVANCE JUDGMENTS Once systems were run, relevance assessment began on the chosen subset of the 114 topics that had been distributed to participants. In the following we describe how the judgment pools were assembled and how relevance was determined. 5.1 Creating Judgment Pools Pools were created from the top-ranked documents of the submitted runs. The number of documents a run contributed to a pool was based on whether the submitting team marked the run as a baseline run. Baseline runs contributed their top twenty-five documents, while non-baseline runs contributed their top fifty documents. Thus, for nDCG@20, all submissions have complete judgments. 5.2 Relevance Judgment Process Assessors used a four-point scale to judge the relevance of each document. Assessors were instructed to provide their assessment as if they were gathering information to write a report on the topic. Relevance was assessed on the most valuable information found in the document; the grades were:
Very Valuable: information that would appear in the lead paragraph of a report on the topic
Somewhat Valuable: information that would appear somewhere in a report on the topic
Not that Valuable: information that does not add new information beyond the topic description, or information that would appear only in a report footnote
Not Relevant: a document without any information about the topic
The qrels use a three-point graded relevance scale:
3 points: Very Valuable
1 point: Somewhat Valuable
0 points: Not that Valuable or Not Relevant
(An early release of the qrels had 3-2-1-0 graded relevance judgments corresponding to the 4-point scale used by assessors.)
5.3 Analysis of Topics During the assessment period, forty-seven Chinese topics, forty-five Persian topics, and forty-four Russian topics were judged. Within each language, some topics had fewer than three relevant documents, while other topics had a concerningly large percentage of relevant documents in their pools. Having topics with fewer than three relevant documents can have undesirable effects on the ability to statistically distinguish systems. There are three Chinese topics, four Persian topics, and three Russian topics with fewer than three relevant documents. Thus each language has at least forty topics, which is generally thought to be the minimum number of topics necessary to fairly compare systems.
Identifying topics with a large number of unidentified relevant documents is more important for future use of the collection than for track participants, since every research system had its top fifty documents judged. Scores for the submitted systems are comparable, and thus those systems can be ranked using them. However, given the desire to create reusable collections, determining which topics likely have many unjudged relevant documents is important. One approach simply calculates the percentage of relevant documents in the pool and sets a cutoff (such as 10% prevalence) above which we cannot be confident that the relevant set has been sufficiently identified. Using this cutoff would flag ten topics in Chinese, seven topics in Persian, and eleven topics in Russian. Figure 3 presents a closer investigation of the gain curve: the percentage of additional relevant documents found by the automatic (non-baseline) runs at various pool depths, beyond those already found at depth 25 by some baseline run. We exclude topics that do not discover more relevant documents as the depth increases (i.e., topics for which the baseline pool contains the same set of relevant documents as any automatic pool up to depth 50). We use a knee detection heuristic [7] on the gain curves to identify topics that are less likely to yield a sizable number of additional relevant documents with a deeper pool. We calculate the maximum log slope ratio over automatic pool depths $d$ as the indicator of curvature. Specifically, let $slope_{i,j}$ be the slope of the curve from depth $i$ to depth $j$, i.e., $(p_j - p_i)/(j - i)$, where $p_k$ is the percentage of the relevant documents found with pool depth $k$. The maximum log slope ratio is defined as
$$\max_{d \in [1,50]} \log\left(\frac{slope_{1,d}}{slope_{d,50}}\right).$$
To capture the general trend of the curve instead of sharp fluctuations, we smooth the curve by averaging over a moving window of size three. If the value is 0, the curve is a straight line without a plateau; a log ratio close to infinity indicates a flat line toward the end.

[Figure 3: Percentage of relevant documents found at different pooling depths for the automatic runs, with the depth for the baseline runs set to 25. Topics are grouped by prevalence (the percentage of relevant documents among all judged documents), with panels for Chinese, Persian, and Russian at prevalence bins [0.0, 0.003), [0.003, 0.01), [0.01, 0.04), and [0.04, 1.0); curves are shaded by the maximum log slope ratio.]

As demonstrated in Figure 3, topics with higher prevalence tend to continue finding more relevant documents as the pools become larger, i.e., they are less likely to be completely judged, which aligns with the prior heuristic of assessing completeness by prevalence. Topics with moderate prevalence tend to be more complete, but this is not guaranteed. However, the range of appropriate prevalence depends on the language, and potentially on the collection and the participating systems. Nevertheless, these results suggest some considerations that we will bear in mind during topic curation next year.
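For concreteness, the knee heuristic above can be sketched in a few lines. This is only an illustration of the formula as described (smoothing with a window of three, then maximizing the log slope ratio); the function name, the restriction to interior depths, and the epsilon guard against flat tails are simplifying assumptions rather than the official analysis script.

```python
import math

def max_log_slope_ratio(p, eps=1e-9):
    """Sketch of the knee heuristic described above (not the official script).

    p[k-1] is the fraction of known relevant documents found at automatic pool
    depth k, for k = 1..50. The curve is smoothed with a moving average of
    window three, then we take the maximum of log(slope(1, d) / slope(d, 50))
    over interior depths d.
    """
    def window(i):
        return p[max(0, i - 1): i + 2]
    smooth = [sum(window(i)) / len(window(i)) for i in range(len(p))]

    def slope(i, j):  # depths are 1-indexed, matching the paper's notation
        return (smooth[j - 1] - smooth[i - 1]) / (j - i)

    last = len(smooth)
    ratios = []
    for d in range(2, last):                   # interior depths keep both slopes defined
        head = max(slope(1, d), eps)
        tail = max(slope(d, last), eps)        # guard against a perfectly flat tail
        ratios.append(math.log(head / tail))
    return max(ratios)

# A gain curve that plateaus early yields a large value (flat tail suggests the
# topic is close to completely judged):
print(max_log_slope_ratio([min(1.0, 0.1 * k) for k in range(1, 51)]))
```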
5.4 Assessor Agreement As is the convention with TREC tracks, the official relevance assessments for a given topic represent a single assessor's interpretation of the topic and assessment guidelines. Nevertheless, we sought to explore whether different assessors would have determined different documents to be relevant for the topic and whether such user differences would affect the overall system rankings. We explore the former question in this section and the latter question in Section 8.5. An additional NIST assessor labeled the relevance of the pooled documents for 28 topic-language pairs (12 topics in Chinese, 8 topics in Persian, and 8 topics in Russian), using the same assessment criteria that the topic's official assessor used. Table 2 presents confusion matrices for each language.

Table 2: Inter-assessor confusion matrices, as raw label counts. Key: Very Valuable (VV), Somewhat Valuable (SV), Not that Valuable (NV), Not Relevant (NR). Rows are the official labels; columns are the second assessor's labels in the order NR, NV, SV, VV.
Chinese — NR: 7694, 260, 22, 0 | NV: 165, 291, 71, 3 | SV: 49, 155, 94, 1 | VV: 34, 99, 185, 20
Persian — NR: 4996, 18, 7, 3 | NV: 804, 25, 19, 7 | SV: 54, 8, 4, 3 | VV: 12, 5, 0, 4
Russian — NR: 4854, 59, 17, 17 | NV: 620, 53, 27, 71 | SV: 96, 10, 10, 44 | VV: 44, 13, 13, 126

We observe that a high proportion of the differences in labels are between the Not that Valuable (NV) and the Not Relevant (NR) labels: 87% for Persian, 66% for Russian, and 41% for Chinese. These results motivate our decision to merge these labels during evaluation. Given the unbalanced nature of the relevance labels, we compute Cohen's κ coefficient [5] to assess the overall agreement of the labels. We explore agreement in four settings: the original 4 relevance labels; the 3 labels used for evaluation (merging Not that Valuable and Not Relevant); binary labels used for evaluation measures like MAP (further merging Very Valuable and Somewhat Valuable); and a "Fuzzy" setting, in which adjacent labels are considered matches. Table 3 presents the κ values for each of these settings alongside the established interpretations of their quality from Viera et al. [25].

Table 3: Cohen's κ assessor agreement on a sample of relevance assessments, by language. κ values are annotated with interpretations [25] of (F)air, (M)oderate, and (S)ubstantial agreement (others are slight).
Labels | Chinese | Persian | Russian | Overall
4 labels | (M) 0.515 | 0.081 | (F) 0.300 | (F) 0.346
3 labels | (F) 0.376 | 0.131 | (M) 0.460 | (F) 0.392
Binary | (M) 0.557 | 0.151 | (M) 0.541 | (M) 0.524
Fuzzy | (S) 0.777 | (F) 0.326 | (M) 0.591 | (S) 0.674

We find that agreement improves for Persian and Chinese when the bottom two relevance labels are merged, and further improves for all languages in the binary setting. When we consider adjacent relevance scores as matches (the Fuzzy setting), we observe substantial agreement in Chinese, moderate agreement in Russian, and fair agreement in Persian. These results suggest that the Persian relevance labels may be biased towards the topic's specific assessor, while the Chinese and Russian labels potentially generalize better across users. Further, while there can be disagreement among assessors about the exact degree of relevance, such cases are generally among adjacent labels.
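The agreement numbers in Table 3 can be reproduced in spirit with standard tooling once the labels are merged. The sketch below uses scikit-learn's cohen_kappa_score on toy label sequences; the merge maps mirror the settings described above, but the example data and function names are illustrative rather than the assessors' actual pipeline.

```python
from sklearn.metrics import cohen_kappa_score

# Merge maps mirroring the "3 labels" and "Binary" settings of Table 3.
MERGE = {
    "3 labels": {"NR": 0, "NV": 0, "SV": 1, "VV": 2},   # NR and NV collapsed
    "Binary":   {"NR": 0, "NV": 0, "SV": 1, "VV": 1},   # relevant vs. not relevant
}

def agreement(official, second, scheme):
    mapping = MERGE[scheme]
    return cohen_kappa_score([mapping[label] for label in official],
                             [mapping[label] for label in second])

# Toy label sequences for one topic (not real assessments):
official = ["VV", "SV", "NV", "NR", "NR", "SV"]
second   = ["SV", "SV", "NR", "NR", "NV", "VV"]
print(agreement(official, second, "Binary"))
```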
6 ADDITIONAL RESOURCES The track provided three types of additional resources: translated documents, translated queries, and translations of MS MARCO into the collection languages (NeuMARCO, https://huggingface.co/datasets/neuclir/neumarco). Links to some other pre-existing resources that might be useful to participants were also provided. One additional resource the track provided was machine translations of the documents into English and of the queries into Chinese, Persian, and Russian. These resources facilitated meaningful comparison across systems that used machine translation to cross the language barrier. Documents were translated using a vanilla Transformer model that was trained in-house with the Sockeye version 2 toolkit [2] using bitext obtained from publicly available corpora (https://opus.nlpl.eu). The number of sentences used in training is given in Table 4, along with BLEU scores on the FLORES-101 benchmark [12] for each language.

Table 4: MT Training Data
Language | # Sentences | BLEU
Chinese | 84,764,463 | 31.5
Persian | 11,426,143 | 35.1
Russian | 119,856,685 | 34.9

Query translations were obtained from Google Translate, since its performance on titles was superior. While no team produced their own translations of the documents, one team produced their own translations of the queries. The track website also collected a number of multilingual and bilingual resources in the languages of the track, including translations of MS MARCO passages into the document languages [3]; HC4, a CLIR collection built over three years of Common Crawl data in the same three languages [17]; and two multilingual CLIR datasets based on Wikipedia, known as CLIRMatrix [24] and WikiCLIR [23]. 7 PARTICIPANTS Including baselines, we scored 52 Chinese runs, 60 Persian runs, and 60 Russian runs. Table 5 outlines the number of runs submitted to each of the tasks: monolingual IR, ad hoc CLIR, and reranking CLIR.

Table 5: Number of runs submitted to each task
Language | Monolingual | Ad Hoc | Reranking | Total
Chinese | 17 | 30 | 5 | 52
Persian | 20 | 34 | 6 | 60
Russian | 20 | 35 | 5 | 60

A total of 12 teams submitted runs for at least one language. This track had worldwide participation, with three teams from Asia, one from Europe, one from South America, and the remainder from North America. More information about participant systems is available in the teams' notebook papers.
8 RESULTS AND ANALYSIS In this section, we summarize the results and provide some analysis of topic difficulty, collection reusability, and the effect of using different annotators on system preference order. 8.1 Overall Results The full results are presented in Tables 8, 9, and 10. The top-ranked systems all use a combination of title and description queries. Table 6 summarizes the effectiveness of systems categorized by model type. Since the huaweimtl team indicated that its huaweimtl-{zh,fa,ru}-m-hybrid1 runs were ensemble systems that include a monolingual component (i.e., one using human-translated queries), these three runs are marked as monolingual runs by the organizers.

Table 6: Average effectiveness by the type of the CLIR runs.
Run type | Chinese nDCG / MAP / R@1k | Persian nDCG / MAP / R@1k | Russian nDCG / MAP / R@1k
Rerank | 0.299 / 0.218 / 0.781 | 0.391 / 0.267 / 0.817 | 0.376 / 0.263 / 0.774
Hybrid | 0.419 / 0.282 / 0.695 | 0.439 / 0.313 / 0.788 | 0.516 / 0.396 / 0.800
Dense | 0.199 / 0.131 / 0.463 | 0.198 / 0.123 / 0.497 | 0.224 / 0.143 / 0.496
Learned-sparse | - / - / - | 0.449 / 0.300 / 0.834 | 0.437 / 0.321 / 0.791
Sparse | 0.283 / 0.207 / 0.657 | 0.290 / 0.195 / 0.712 | 0.294 / 0.212 / 0.679

On average, hybrid approaches that combine dense and sparse retrieval tend to provide the best nDCG@20. Both hybrid and learned-sparse (such as SPLADE [11]) models provide recall at 1000 close to 80%. Note that the reranking runs tend to have a higher recall at 1000, but this is inherited from the BM25 document-translation retrieval results that they rerank and should not be attributed to the reranking models. The variation among dense retrieval models is large, as we can observe in Figure 4.

[Figure 4: Bar charts of the submitted runs' nDCG@20 by language. Monolingual and CLIR runs are separated into subplots for clarity; markers distinguish model type (dense, rerank, hybrid, sparse, learned-sparse, other) and query fields used (T, TD, TDN, D).]

Several dense retrieval models are among the top-ranked systems, while others are scattered throughout their peers. Sparse retrieval systems provide moderate performance, which comes mostly from the baseline runs contributed by the organizers and several participants [18]. The left column of Figure 4 presents the monolingual runs. Despite not being the focus of the NeuCLIR track, they enrich the pool and provide a target for the CLIR models to compare against. The highest-performing CLIR systems for Chinese and Persian outperformed the corresponding monolingual systems by about 0.015 nDCG@20; the best Russian CLIR system achieved about the same nDCG@20 as the best Russian monolingual system. We defer discussion of the pooling enrichment benefit of the monolingual runs to Section 8.4.
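The measures reported above (nDCG@20, MAP, and recall at 1000) are standard trec_eval measures and can be computed from a run and the track qrels with common tooling. The sketch below uses the pytrec_eval package on toy data; it is only an illustration, not the official evaluation script, and the qrels and run shown are made up (the graded gains follow the 3-1-0 mapping described in Section 5.2).

```python
import pytrec_eval

# Toy qrels (graded, 3-1-0) and one toy run for a single topic.
qrels = {"topic1": {"docA": 3, "docB": 1, "docC": 0}}
run   = {"topic1": {"docA": 12.3, "docC": 9.1, "docB": 4.2}}

evaluator = pytrec_eval.RelevanceEvaluator(qrels, {"map", "ndcg_cut", "recall"})
scores = evaluator.evaluate(run)
print(scores["topic1"]["ndcg_cut_20"],
      scores["topic1"]["map"],
      scores["topic1"]["recall_1000"])
```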
8.2 Topic Difficulty One of the objectives of topic development is to create a set of topics whose retrieval results are able to distinguish systems; topics that are too easy or that have no relevant documents are not ideal. Figures 8, 9, and 10 are nDCG@20 boxplots of all the judged topics for each language. Topic 118 for Persian is an example of an easy topic, where 75% of the runs achieve nDCG@20 over 0.80; in contrast, all runs score zero for Topics 4 and 24 against the Chinese documents, indicating that these two topics are unlikely to have any relevant document in the collection. In a practical sense, when no run retrieves any relevant document, no relevant documents are judged during pooling, and thus the topic is not usable for evaluating future systems. Topics with a wide range of scores, such as Topic 24 in Russian, are ideal for distinguishing systems. Topics such as 52 in Chinese and 4 in Persian can instead give an advantage to future systems that understand the complex semantics of the queries, and therefore reflect improvements in those systems. Although most systems have low nDCG scores for these topics, there are relevant documents judged thanks to pooling, so future systems that are able to retrieve these rare but relevant documents will obtain a higher overall score. Systems retrieved more relevant documents for topics that are not related to any country or region, such as Topic 32 (Peanut allergy treatment) and Topic 16 (Dramatic decrease in number of bees). Topics that are more specific to a country where the language is widely spoken tend to result in retrieval of larger numbers of relevant documents. For example, Topic 4 (Corruption during construction of Vostochny Cosmodrome) is among the easiest topics for Russian; however, there are no relevant documents in Chinese, and it is extremely challenging for Persian. Such topics with disjoint interests across languages are not particularly problematic for evaluating CLIR, but this could be an important issue in future MLIR evaluations in which the document collection contains multiple languages. 8.3 Retrieval Diversity Forming a diverse set of judgments could lead to a more reusable collection for evaluating future systems, and such judgments require a diverse set of retrieval results. For each run, we form a vector of the nDCG@20 values of the judged topics, so the size of such vectors is the number of judged topics in the language. Figure 5 plots tSNE graphs that project the nDCG@20 vectors to two dimensions; the shade of the markers indicates the overall nDCG@20 of the run.

[Figure 5: tSNE graphs of nDCG@20 for each submitted run, colored (a) by run type (end-to-end CLIR, monolingual, query translation, document translation, other) and (b) by model type (hybrid, sparse, rerank, dense, learned-sparse, other). The shade of the marker indicates the overall nDCG@20 of the run.]

Among the different run types (Figure 5(a)), there is no clear cluster that gathers the monolingual systems, which indicates that the monolingual subtask might not provide much value for diversifying the pool. End-to-end CLIR systems (i.e., systems with no translation involved during inference; we inferred this from the submission metadata, treating systems marked as using English queries and native documents as end-to-end CLIR runs) demonstrate two clear clusters in each language, with a clear separation from runs that involve query translation. There are more separations among the model types. Hybrid runs cluster together in all languages, with a clear distance from the sparse runs.
Several dense runs are among the runs with high overall scores (darker colors), while others are among the lowest-scoring runs. This indicates that the trend appears not only in the overall scores (Figure 4) but also in behavior on individual topics. Figures 11, 12, and 13 plot the retrieval similarity among all submissions, where lighter colors indicate higher similarity. For the top 100 retrieved documents (Figures 11(a), 12(a), and 13(a)), runs submitted by the same team tend to be more similar to each other than to others, which might indicate that teams often submit ablated variants of a single system rather than building different system stacks. Top-ranked systems also tend to be more similar to each other, as they all put relevant documents at the top, with a clearer trend in Persian and Russian than in Chinese. Sparse runs also retrieve highly similar sets of documents in the top 100, especially in Persian and Russian, indicating that these ranking models might all be leveraging similar features. For the retrieved relevant documents, the trend is similar, with top-ranked runs demonstrating higher similarity. However, the light triangle in the middle is less clear, indicating that despite providing lower recall, these runs still contribute some unique relevant documents that make the pool more diverse, which leads to a more reusable collection in the future. Note that the numerator and the denominator of the similarity are the sizes of the intersection and the union of the two runs' retrieved sets; this similarity measure gives low similarity if the recall of the two runs is too different.
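The intersection-over-union measure described in the last sentence above is straightforward to compute per topic. A minimal sketch follows; the function name and the toy rankings are illustrative, and this is not the script used to produce Figures 11-13.

```python
def run_similarity(run_a, run_b, k=100):
    """Overlap between two runs' top-k documents for one topic, computed as
    the size of the intersection over the size of the union (illustrative)."""
    top_a = set(run_a[:k])
    top_b = set(run_b[:k])
    return len(top_a & top_b) / len(top_a | top_b)

# Two toy rankings that share 50 of their top-100 documents:
a = [f"doc{i}" for i in range(100)]
b = [f"doc{i}" for i in range(50, 150)]
print(run_similarity(a, b))  # 50 shared / 150 distinct = 0.333...
```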
8.4 Reusability To evaluate the reusability of the collection, we conduct a Leave-Out-Unique experiment [4, 29] to test the robustness of the collection. In this experiment, we leave one run out of the construction of the pool and evaluate that run with the modified qrels. This process simulates each run being a future run that did not participate in pooling. Since the primary purpose of the evaluation is to rank systems, differences in the actual values of the evaluation metric after modifying the qrels are negligible as long as the ordering of the runs remains the same. Therefore, we calculate Kendall's τ on the ranking of the systems to quantify the correlation. Figure 6 presents the experiment results. Each dot indicates a run; the x-axis is the evaluation metric computed on the full qrels and the y-axis is the metric computed on the modified qrels.

[Figure 6: Leave-Out-Unique pool experiments for nDCG@20 and R@1000 in Chinese, Persian, and Russian. Rank correlations measured by Kendall's τ are marked in the corner of each graph; τ ranges from 0.9838 to 0.9968.]

The results illustrate that most of the runs remain on the diagonal, indicating that the collection is indeed robust and can fairly evaluate future systems that did not contribute to the pools, with Kendall's τ close to 0.99. Since the variation between the runs submitted by a team is small, we additionally conduct a Leave-Out-Team-Unique experiment, in which we remove all submissions from the same team when evaluating a run. Such experiments are more aggressive than the Leave-Out-Unique experiments but provide a more adequate assessment of reusability. Figure 7 presents the results.

[Figure 7: Leave-Out-Team-Unique pool experiments for nDCG@20 and R@1000 in Chinese, Persian, and Russian. Rank correlations measured by Kendall's τ are marked in the corner of each graph; τ ranges from 0.9410 to 0.9849.]

The correlation between the full pool and the modified pool is lower, at about 0.98 for nDCG@20 and 0.95 for recall at 1000. However, we argue that the correlation is still high enough to fairly evaluate future systems. 8.5 Assessor Effect on System Preference Order Given the variable agreement levels among assessors observed in Section 5.4, we explore whether the ranking of submitted systems differs when using the alternative assessments. To this end, we compare the nDCG@20, RBP, MAP, R@100, and R@1000 of the systems using the official judgments and the second judgments over the 28 re-assessed topic-language pairs. We measure the rank correlation of the systems using Spearman's ρ and Kendall's τ statistics, and present the results in Table 7. We observe a very strong correlation for nDCG@20, RBP, and MAP (ρ > 0.83 and τ > 0.62) and a strong correlation for R@100 and R@1000 (ρ > 0.68 and τ > 0.50). Noting that using only 28 topics in this analysis induces a greater degree of random variation than would be the case for the full topic set, these results suggest that although assessors sometimes disagree on relevance labels, the system preference order may not change much if a different assessor provided the labels.

Table 7: Correlation between systems when measured using the official assessments and the second assessor's labels. All correlations are significant at p < 0.001.
Language | Measure | Spearman's ρ | Kendall's τ
Chinese | nDCG@20 | 0.951 | 0.829
Chinese | RBP | 0.960 | 0.843
Chinese | MAP | 0.975 | 0.883
Chinese | R@100 | 0.971 | 0.881
Chinese | R@1000 | 0.960 | 0.846
Persian | nDCG@20 | 0.829 | 0.650
Persian | RBP | 0.818 | 0.628
Persian | MAP | 0.831 | 0.652
Persian | R@100 | 0.687 | 0.509
Persian | R@1000 | 0.756 | 0.575
Russian | nDCG@20 | 0.928 | 0.777
Russian | RBP | 0.878 | 0.708
Russian | MAP | 0.894 | 0.728
Russian | R@100 | 0.732 | 0.561
Russian | R@1000 | 0.708 | 0.544

9 FUTURE TRACK DIRECTIONS The TREC NeuCLIR track will continue in 2023. The CLIR task will continue, with new topics for the same collections in the same three languages.
In this way, we hope to support both improved designs and improved training, since training data from this first year of the track will be available with characteristics closely approximating those of the 2023 CLIR task. We are considering three additional tasks: • Multilingual Information Retrieval (MLIR). In this task, systems would be asked to create a single ranked list over all three test collection languages (Chinese, Persian, and Russian), and would be evaluated using the relevance judgments for all three languages. A similar MLIR task was evaluated in the Cross-Language Evaluation Forum (CLEF) and found to be challenging [10]. • Non-English Queries (NEQ). In this task, systems would perform CLIR or MLIR using queries that are not in English. The present NeuCLIR test collection can support NEQ tasks with Chinese, Persian, or Russian queries, and generation of topics in a small number of additional languages may also be possible. • CLIR for Technical Documents (CLIR-TD). CLIR could be of value in many domain-specific applications, including, for example, law, medicine, or engineering. As a first step toward evaluating domain-specific applications of CLIR, we are considering the addition of a pilot task in which the goal is to perform CLIR for technical documents in a single non-English language (either Chinese, Persian, or Russian) using English queries. A promising direction is to use repositories of non-English academic text, such as CNKI (https://www.cnki.net) or Wanfang (http://www.wanfangdata.com) for Chinese, or the Russian Science Citation Index for Russian, or to select dual-language biomedical content that is indexed in PubMed. Starting three new tasks in the same year would risk dividing participating research teams in ways that would work against our goal of being able to compare results and foster progress on specific tasks, however. For this reason, we want to use our planning session at TREC 2022 to discuss at least these four questions with participants: • What changes to the CLIR task would further enhance the ability of participating teams to achieve their research goals? • Which of the additional tasks we have listed are of greatest interest to participants (and thus most likely to attract a critical mass of participation)? • Are there other new tasks that we should be considering that might be an even higher priority for NeuCLIR in 2023? • Are there additional outreach opportunities that might attract a substantial number of new participants, and if so, which new tasks might be most likely to help attract those new participants? We also plan to re-evaluate tools and methodology for reporting the carbon footprint of participating systems. We note that, in 2022, only three teams reported their energy usage; many cited the lack of tools compatible with their system, while others said that keeping track of all stages of a track submission is too onerous and error-prone (e.g., one might forget to log an experiment). Further, some ambiguity existed in how external resources should be accounted for; for example, should the energy required to run systems that are shared with other projects in a lab be counted? Possible improvements in energy tracking include: • Ask participants to measure impact only while preparing their submission: this would exclude any energy used during the training of neural models or the indexing of document collections.
Organizers could still use the information collected during a run submission to estimate the total energy usage. \u2022 Explicitly formalize alternative metrics to energy reporting: this could include compute-hours, hardware, or other information that could be used to estimate energy impact. \u2022 Defer energy reporting from run submission time to notebook submission time: this would give teams more time to analyze their impact without having to sacrifice time in the weeks immediately preceding runs submission. 10" + } + ], + "Orion Weller": [ + { + "url": "http://arxiv.org/abs/2403.15246v3", + "title": "FollowIR: Evaluating and Teaching Information Retrieval Models to Follow Instructions", + "abstract": "Modern Language Models (LMs) are capable of following long and complex\ninstructions that enable a large and diverse set of user requests. While\nInformation Retrieval (IR) models use these LMs as the backbone of their\narchitectures, virtually none of them allow users to provide detailed\ninstructions alongside queries, thus limiting their ability to satisfy complex\ninformation needs. In this work, we study the use of instructions in IR\nsystems. First, we introduce our dataset FollowIR, which contains a rigorous\ninstruction evaluation benchmark as well as a training set for helping IR\nmodels learn to better follow real-world instructions. FollowIR repurposes\ndetailed instructions -- also known as narratives -- developed for professional\nassessors to evaluate retrieval systems. In particular, we build our benchmark\nfrom three collections curated for shared tasks at the Text REtrieval\nConference (TREC). These collections contains hundreds to thousands of labeled\ndocuments per query, making them suitable for our exploration. Through this\nprocess, we can measure how well IR models follow instructions, through a new\npairwise evaluation framework. Our results indicate that existing retrieval\nmodels fail to correctly use instructions, using them for basic keywords and\nstruggling to understand long-form information. However, we show that it is\npossible for IR models to learn to follow complex instructions: our new\nFollowIR-7B model has significant improvements after fine-tuning on our\ntraining set.", + "authors": "Orion Weller, Benjamin Chang, Sean MacAvaney, Kyle Lo, Arman Cohan, Benjamin Van Durme, Dawn Lawrie, Luca Soldaini", + "published": "2024-03-22", + "updated": "2024-05-07", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.CL", + "cs.LG" + ], + "main_content": "Introduction Modern language models (LMs) are extensively tuned to be able to follow user instructions faithfully (Chung et al., 2022; Ouyang et al., 2022a; Rafailov et al., 2023; Wang et al., 2023b; Ivison et al., 2023) and safely (Bai et al., 2022; Bianchi et al., 2024). Through these capabilities, LMs are able to successfully tackle a broad range of tasks (Chiang et al., 2024; Liang et al., 2023; Yang et al., 2023; Jimenez et al., 2024; Zeng et al., 2023), even when not explicitly fine-tuned for them. In contrast to the broader LM community, information retrieval (IR) practitioners and researchers have yet to fully exploit instruction-tuned models. Thanks to their ability to effectively estimate semantic similarity between query and documents, LMs have been adopted as the main backbone of neural retrieval architectures (Karpukhin et al., 2020; Khattab & Zaharia, 2020; Reimers & Gurevych, 2019). 
However, the vast majority of these systems are fine-tuned to operate exclusively as text-span similarity estimators (Khattab & Zaharia, 2020; Izacard et al., 2021; Nogueira & Cho, 2019; Pradeep et al., 2023; Ma et al., 2023). Moving past these ad-hoc search systems to retrieval with instructions would enable support for complex information needs. For example, imagine a researcher seeking to identify papers that must exhibit numerous qualities to be relevant (from a given venue, using a particular class of methods, etc.) while also avoiding conditions that would make a document not-relevant (using negative sentiment, using datasets from a given domain, etc.).

[Figure 1: How do standard retrieval queries differ from instructions (or narratives)? Instructions contain more specific details about what is relevant, include less directly-relevant background information, and often have directives about what documents are not relevant, using negation. Percentages indicate how often a certain type of content appears in the original TREC instructions used in FOLLOWIR.]

Recent work has started to move towards search with instructions, but this topic is still understudied, with only a handful of papers (Su et al., 2022; Asai et al., 2022; Muennighoff et al., 2024). In particular, we find their use of instructions to be narrow: instructions are typically short (fewer than 10 words) and repetitive (only one instruction per dataset, e.g., Su et al. (2022); Asai et al. (2022); Li & Li (2023); Xiao et al. (2023)). Further, these works lack evaluation datasets that explicitly measure instruction following, instead focusing on standard ad-hoc retrieval benchmarks. To address these gaps we introduce FOLLOWIR, which consists of (1) a benchmark that explicitly measures the instruction-following ability of retrieval models, and (2) training data that includes diverse and realistic instructions. Our key intuition is to leverage instructions developed for professional annotators of IR systems in order to study the capabilities of instruction-following IR models. These instructions are used by annotators to judge document relevance for a given query. Fortunately, the IR field is rich with such data, as these instructions, also known as narratives, are created for all queries in any well-constructed IR dataset. In particular, we use narratives developed for shared tasks at the Text REtrieval Conference (TREC; trec.nist.gov). These instructions are thorough and complex, including minute details about what makes a document relevant vs. not-relevant. Thus, if annotators can use these TREC instructions to annotate document relevance, so too should instruction-following retrieval models be able to use them (example query and instruction pairs are shown in Figures 1 and 2). We use three deeply-judged TREC collections (i.e., collections in which a large number of documents have been judged relevant or non-relevant; see Section 2.1 for more) as the basis of our evaluation set: TREC Robust 2004 (Voorhees, 2005), TREC Common Core 2017 (Allan et al., 2017), and TREC News 2021 (Soboroff et al., 2020). These collections have been thoroughly annotated in order to evaluate recall in retrieval, with hundreds to thousands of documents judged as relevant or not-relevant. We take the instructions given to the professional annotators and alter them slightly, manually re-annotating the relevant documents. We then have paired instructions, which can be used to test how models respond to changing instructions; that is, we measure whether models update the set of relevant documents to match the new altered instructions.
As there are no existing methods to compare pairwise queries in IR, we develop a new evaluation framework to do so, measuring rank-wise score changes (which we call p-MRR) of documents given a pair of different instructions with the same query. Results on FOLLOWIR indicate that current models generally fail to follow instructions in retrieval unless they have 3B+ parameters or have not been trained for retrieval. Our analysis shows that these failures are due to two phenomena: (1) models are not used to long instructions, and (2) models use the instruction for keyword search rather than as a definition of relevance.

[Figure 2: A visual depiction of the pairwise evaluation framework: models are evaluated on the query with the original instruction, and then on the query with the altered instruction. If the model correctly understands the instructions, it will change which documents are relevant w.r.t. the alteration. Note that the real-world instructions given to TREC annotators include fine-grained details about what relevance is, as well as instructions containing negation. The example shows a TREC query about positive accomplishments of the Hubble telescope, its annotator instruction, and candidate documents whose ranks should change under the altered instruction.]

To further progress in building retrieval models that can understand instructions, we build a training set of real-world, human-used instructions and fine-tune a model on them (FOLLOWIR-7B). Our results show marked improvement on FOLLOWIR for both standard IR metrics and for p-MRR, indicating a starting point for future progress on instruction following. In summary, we contribute the following: (1) a benchmark for evaluating instruction following in retrieval (FOLLOWIR), consisting of human annotations on top of three already highly-judged corpora; (2) an analysis of why current models fail to understand instructions; and (3) training data for teaching retrieval models to follow instructions, along with a new open-sourced IR model, FOLLOWIR-7B, that can handle long instructions in IR (links to the code, data, and models are available at https://github.com/orionw/FollowIR). 2 Related Work 2.1 TREC Conferences The United States National Institute of Standards and Technology (NIST) created TREC in 1993. Each year TREC sponsors many tracks, or shared tasks, on a given dataset.
These tracks cover a wide variety of topics, ranging from standard ad hoc retrieval on news (Soboroff et al., 2018; Soboroff, 2021) to more complex domains such as legal retrieval (Baron et al., 2006; Oard et al., 2008) or retrieval-augmented generation and report generation (Lawrie et al., 2024). As part of this process, NIST sponsors annotations for these collections. Typically, this is done by pooling a set of results (runs) from a variety of retrieval models and then annotating them in rank order until funding runs out. To help facilitate annotation, track organizers provide a narrative (or instruction) for each query that is given to the annotators; however, IR models are only ever given the query. As evaluating total recall would require annotating every document in the collection for every query (which is not feasible for collections with millions of documents), recall error is tested using post-hoc sampling and annotation. Although not every query-document pair can be evaluated, recall for queries is very high. We build off the rigorous evaluation done at TREC by using several of their collections to build FOLLOWIR.

Table 1: FOLLOWIR evaluation set statistics before (left) and after (right) annotation. We use a subset of the queries in three popular TREC tracks for variety in queries and documents. |Q| is the word length of the queries and |I| is the word length of the instructions. Rel. D/Q indicates the number of relevant annotated documents in the collection, excluding irrelevant annotations. As designed, there are fewer relevantly-judged documents in the FOLLOWIR portion, because the annotations deliberately change the relevance of documents for evaluation.
Dataset | Before: # Q | |Q| | |I| | Rel. D/Q | After: # Q | |I| | Rel. D/Q
TREC News '21 (Soboroff et al., 2020) | 50 | 15.3 | 40.1 | 50.1 | 32 | 46.9 | 19.2
TREC Core '17 (Allan et al., 2017) | 50 | 16.6 | 44.0 | 180.0 | 20 | 53.5 | 32.7
TREC Robust '04 (Voorhees, 2005) | 249 | 11.9 | 68.2 | 69.9 | 52 | 75.2 | 19.8

2.2 Instructions for LMs Instruction-following LMs have been popularized by models such as InstructGPT (Ouyang et al., 2022a), FLAN (Wei et al., 2022), and T0 (Sanh et al., 2022). They have become a large area of interest for the natural language processing community (Touvron et al., 2023a; Jiang et al., 2023; Groeneveld et al., 2024; Black et al., 2022). There has been much work in evaluating whether they can generalize to new instructions (Weller et al., 2020; Wang et al., 2022c; Ouyang et al., 2022b), whether we can train them to follow instructions without human-annotated data (Wang et al., 2022b; Qin et al., 2023), and how to apply them to various domains (Zhao et al., 2021; Singhal et al., 2023; Shaghaghian et al., 2020). As the IR community uses LMs in their pipelines, we seek to broaden the scope of IR to include instructions, aligning it with the broader NLP community. 2.3 Instructions for Retrieval Using instructions in retrieval is a nascent area of exploration. Su et al. (2022) and Asai et al. (2022) were two of the earliest works that trained a retrieval model to use instructions along with the query.
However, these instructions are typically very short, such as "Retrieve a Wikipedia paragraph that answers this question." Recent work incorporates instructions in smaller models (Xiao et al., 2023; Chen et al., 2023, 2024), as well as in others that use Llama (Touvron et al., 2023a; Weller et al., 2023) or Mistral (Jiang et al., 2023) as the backbone of a larger retrieval model that can use instructions: GritLM (Muennighoff et al., 2024) trains Mistral to do both generation and embedding, while Wang et al. (2023a) uses Mistral for embeddings only. Despite this flurry of activity, these efforts do not have an explicit instruction-related retrieval benchmark to evaluate on. Instead, they evaluate on standard retrieval benchmark suites such as MTEB (Muennighoff et al., 2022) and BEIR (Thakur et al., 2021), which do not contain instructions. Thus, these newer instruction retrieval models hand-write a few instructions, where typically each instruction is applied to an entire dataset, irrespective of the query. This makes these instructions generic: focused only on the task format, the format of the "document" (paragraph, sentence, etc.), and the broad domain. Note that because of this, no current instructions contain any extra background information or negation (Weller et al., 2024), which are commonly found in real-world instructions (see Figure 1 for an example of these differences). In work concurrent to ours, Oh et al. (2024) also propose a dataset to evaluate instructions in retrieval models. Their dataset uses the MS MARCO collection (Nguyen et al., 2016) and differs in several crucial aspects: it only has one relevant document per query (i.e., it is sparsely judged), is GPT-4 generated and validated, focuses on the background of the user ("I am a school teacher looking for ..."), and evaluates using the lowest score over N instructions for the same query (measuring robustness). In contrast, we use highly-judged corpora to ensure we can measure recall, use professionally generated instructions, have human-validated relevance judgements, propose a new paired evaluation protocol, and provide a training dataset and model for teaching instruction-following. 3 Building FOLLOWIR We derive FOLLOWIR from three TREC collections: TREC News 2021 (derived from the Washington Post v4 corpus; Soboroff et al., 2020), TREC Robust 2004 (from news articles in the Disks 4 and 5 collections; Voorhees, 2005), and TREC Common Core 2017 (from the New York Times Annotated corpus; Allan et al., 2017). Each of these was professionally assessed to include hundreds of annotations per query (see Table 1), with 50-180 relevant documents per query on average (and many more not-relevant annotations). Each of these TREC tracks includes instructions for the professional annotators that we now also give to the models. Although using these alone can provide some indication of how well models follow instructions, it doesn't explicitly test their instruction-following ability. To more carefully isolate this in our benchmark, we test whether models can respond to small changes in the instruction. To accomplish this, we ask two expert annotators to modify the TREC instructions. However, doing this in a naive way would require re-annotating all the document judgements, a non-trivial task requiring immense annotation efforts.4 Instead, we task the annotators with making instructions more specific by including additional constraints that narrow the relevance definition.
These transformations cause some previously relevant documents to become non-relevant without introducing any new relevant documents from the pool. Therefore, only those documents that were deemed relevant by the original TREC assessors need to be re-annotated. This makes the annotation tractable, with only dozens or hundreds of documents to re-annotate per query instead of a collection of thousands. We annotate a subset of the original TREC queries due to cost and overlap: we sample 50 queries from TREC Robust 2004 that do not overlap with TREC Common Core (as Common Core used 50 queries from Robust04 on a new collection), and 30 queries from TREC News 2021. Table 1 shows dataset statistics of judged documents and the final benchmark size. Annotators were asked to change the instructions so that the number of relevant documents was cut roughly in half, thus including a sizeable number of changed relevance judgements. We note that although the number of queries seems small by NLP standards, 30-50 queries is both effective (Webber et al., 2008) and standard in the IR community due to the expense of careful annotation over many documents per query. Due to differences in retriever quality, if we evaluate by searching over the full collection, each model will retrieve a different number of relevant documents. However, because we evaluate instruction following based on changing the document relevance, models that do poorly in the initial retrieval will have fewer documents which change relevance in the instruction-following evaluation. To rectify this, we instead turn to a reranking task where we include all relevant documents, and use a pool of five models5 to select the top non-relevant documents. To be able to freely distribute the data due to fair-use laws, we chunk the documents into 400-word passages with 200-word overlap and select the highest scoring passages using MaxP (Dai & Callan, 2019). This enables us to distribute our data, which we do by extending the MTEB evaluation framework (Muennighoff et al., 2022). 3.1 Evaluation Metrics for FOLLOWIR Our benchmark provides two ways of measuring instruction following: (1) standard retrieval metrics when using the instructions with the queries and (2) pairwise evaluation of instruction following. For (1), we use typical IR evaluation metrics but use the instruction along with the query: these metrics are mean average precision (MAP) for Core17/Robust04 and normalized discounted cumulative gain at 5 (nDCG@5) for News21. For (2) we use our novel pairwise evaluation metric that measures the delta in scores when following the modified instructions instead of the original.6 Our new pairwise evaluation metric, p-MRR, measures rank-wise changes between queries. In developing this metric we had the following desiderata: it should compare the results of the original instruction to those of the new instruction, it should have a standardized range from worst possible change in instruction-following score (i.e., \u22121) to best possible instruction-following score (i.e., 1) with an option for no change when using different instructions (i.e., 0), and finally should take into account the document rank so that changes from rank 1 to rank 2 are more prominent than changes from rank 99 to 100. 
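As a concrete illustration of a score that satisfies these desiderata (the formal definition, Eq. (1), follows immediately below), a minimal sketch of the per-document and aggregated computation might look like the following. The helper names are ours and this is not the released evaluation code; note that the equation ranges over [-1, 1], while the paper's tables describe p-MRR on a -100 to 100 scale.

```python
def per_doc_p_mrr(rank_original: int, rank_new: int) -> float:
    """Per-document score for one changed-relevance document (cf. Eq. (1) below).

    rank_original: rank under the original TREC instruction (1 = top of the list).
    rank_new:      rank under the modified, narrower instruction.
    Returns a value in [-1, 1]; 0 means the document did not move.
    """
    mrr_og, mrr_new = 1.0 / rank_original, 1.0 / rank_new
    if rank_original > rank_new:       # the now non-relevant document moved *up*: bad
        return mrr_og / mrr_new - 1.0  # negative score
    return 1.0 - mrr_new / mrr_og      # document moved down or stayed: non-negative score


def p_mrr(per_query_rank_pairs):
    """Macro-average: mean over changed documents within a query, then mean over queries.

    per_query_rank_pairs: one list per query of (rank_original, rank_new) tuples for the
    documents whose relevance changed under the modified instruction.
    """
    per_query = [sum(per_doc_p_mrr(og, new) for og, new in pairs) / len(pairs)
                 for pairs in per_query_rank_pairs if pairs]
    return sum(per_query) / len(per_query)
```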
Given the above qualifications, we use the following equation applied to each changed relevance document per query (where MRR is mean reciprocal rank, Rog is the rank of the document when using the original instruction and Rnew is the new rank): p-MRR = \uf8f1 \uf8f2 \uf8f3 MRRog MRRnew \u22121 if Rog > Rnew 1 \u2212MRRnew MRRog otherwise (1) For the final score, we average first within a given query and then over all queries in the corpora\u2014i.e., macro-averaging across queries, to handle the different number of relevant documents per query. 4NIST\u2019s budget is $1\u20132 million USD/year: trec.nist.gov/pubs/2010.economic.impact.pdf 5We use BM25, BGE-base, E5-base-v2, TART-Contriever, and INSTRUCTOR-xl. 6Note that we do not show standard retrieval results on the modified instruction\u2019s relevant document set, as standard retrieval scores cannot be directly compared across different query relevance annotations (qrels). 5 \fRobust04 News21 Core17 Average Model MAP p-MRR nDCG p-MRR MAP p-MRR Score p-MRR No-Instruction IR E5-base-v2 13.4 -6.7 20.9 -2.0 14.0 -2.9 16.1 -3.9 Contriever 19.7 -6.1 22.9 -2.8 15.3 -2.5 19.3 -3.8 MonoBERT 21.0 -9.4 25.1 -0.8 18.4 -0.2 21.5 -3.5 BM25 12.1 -3.1 19.3 -2.1 8.1 -1.1 13.2 -2.1 MonoT5-base 15.7 -6.2 11.0 +5.0 12.2 -4.1 13.0 -1.8 E5-large-v2 17.4 -4.2 24.3 +0.9 17.0 +0.1 19.6 -1.1 MonoT5-3B 27.3 +4.0 16.5 +1.8 18.2 +1.8 20.7 +2.5 Instruction-IR TART-Contriever 14.3 -9.0 21.8 -3.0 13.3 -3.0 16.5 -5.0 INSTRUCTOR-base 17.2 -10.4 22.1 -1.8 15.5 -1.1 18.3 -4.4 E5-mistral 23.1 -9.6 27.8 -0.9 18.3 +0.1 23.1 -3.5 BGE-base 16.8 -6.5 20.0 -0.1 14.6 -2.7 17.1 -3.1 INSTRUCTOR-xl 19.7 -8.1 26.1 -0.9 16.8 +0.7 20.9 -2.8 BGE-large 17.5 -7.8 22.3 +0.6 15.0 +0.1 18.3 -2.4 GritLM-7B 28.6 -1.7 24.4 -1.0 20.8 +2.6 24.6 -0.0 TART-FLAN-T5-xl 24.6 -0.7 12.8 +2.0 17.0 +2.8 18.1 +1.4 APIs OpenAI v3 Large 27.2 -5.8 27.2 -2.0 21.6 -0.2 25.3 -2.7 Cohere v3 English 22.3 -3.6 28.3 +0.2 20.6 +2.8 23.7 -0.2 Google Gecko 23.3 -2.4 29.5 +3.9 23.2 +5.4 25.3 +2.3 Instruct LMs FLAN-T5-base 6.4 +5.3 6.1 -0.1 6.5 -3.3 6.3 +0.6 Llama-2-7B-chat 6.3 +2.0 1.7 +0.2 5.4 +2.8 4.5 +1.7 FLAN-T5-large 14.7 +3.9 8.0 +8.9 11.4 +1.3 11.4 +4.7 GritLM-Reranker 9.7 +6.1 10.2 +3.4 9.8 +8.6 9.9 +6.0 Mistral-7B-instruct 23.2 +12.6 27.2 +4.8 19.7 +13.0 23.4 +10.1 FollowIR-7B 24.8 +13.7 29.6 +6.3 20.0 +16.5 24.8 +12.2 Table 2: Evaluating instruction-following on FOLLOWIR. Introduced in this work, p-MRR is a pairwise evaluation metric measuring instruction following when instructions change, ranging from \u2212100 to 100 (higher is better). Generally only models with over 3B parameters or instruction-tuned LMs that haven\u2019t been trained on retrieval tasks show success at following retrieval instruction. 4 Evaluating Instruction Following In this section we describe the models we evaluate, their results on FOLLOWIR, and ablations performed to better understand the behavior of current models. 4.1 Evaluation Settings We evaluate a wide variety of IR models (trained with and without instructions), including neural models ranging from 100 million to 7 billion parameters. We evaluate on the original TREC instructions in the FOLLOWIR benchmark and then on the new instructions, showing both standard IR metrics and the new pairwise metric p-MRR. We group models into four categories: No Instructions in Training These retrieval models did not see instructions in training and typically aren\u2019t given them: this includes Contriever (Izacard et al., 2021), E5 (Wang et al., 2022a), MonoBERT Nogueira et al. 
(2019), MonoT5 (Nogueira et al., 2020), and BM25 (Robertson et al., 1995). Instructions in IR Training Most retrieval models using instructions received roughly one instruction per retrieval dataset, which generally defined the domain (e.g., \u201cFinancial\"), document size (sentence, passage, etc.), and task format. This includes INSTRUCTOR models (Su et al., 2022), the bi-encoder TART model trained from Contriever (Asai et al., 2022), the reranker TART trained from FLAN-T5 (Chung et al., 2022), E5 Mistral-Instruct (Wang et al., 2023a), and GritLM (Muennighoff et al., 2024). We also include BGE models (Xiao et al., 2023) in this category, although they are trained with only one instruction total for each broad task (retrieval, clustering, etc.). API Models We use three of the best performing API embedding models: Cohere\u2019s v3 English, Google\u2019s Gecko (Lee et al., 2024) and OpenAI\u2019s Text-Embedding-v3-Large. It is mostly unknown 6 \fMonoBERT E5-base-v2 MonoT5-base Contriever E5-large-v2 BM25 MonoT5-3B 10 5 0 No Instruction Training E5-mistral INSTRUCTOR-b BGE-base TART-dual BGE-large INSTRUCTOR-xl GritLM-7B TART-FLAN 2.5 0.0 2.5 5.0 p-MRR compared to using the query only Uses Instructions in Training OpenAI v3 Cohere v3 Google Gecko FLAN-T5-large GritLM-Reranker Mistral-7B-instruct FollowIR-7B 5 0 5 Instruct-T uned LLMs and APIs Instruction Setting Keywords Short Instruction Full Instruction Figure 3: Score difference between using no instructions to using instructions formatted as keywords, short text, or the full text. While models that can correctly use instructions see gains with the additional information, most other models see decreasing performance as instruction length increases. what these models\u2019 training procedures were\u2014including if they were trained on instructions or not\u2014thus we place them in a distinct category. However, we note that Google\u2019s model did explicitly train with instructions, as mentioned in their technical report. Instruction-Tuned LMs We also evaluate several instruction-tuned LMs to be used as rerankers, including FLAN-T5 (Chung et al., 2022), Llama v2 (Touvron et al., 2023b), and Mistral-Instruct-v0.2 (Jiang et al., 2023). We evaluate these models in the same fashion as MonoT5 rerankers, comparing the true and false tokens. Note that these models were not trained on any retrieval-specific data. 4.2 Results Table 2 shows the main results on FOLLOWIR, with the standard IR score shown (either MAP or nDCG@5) as well as the pairwise evaluation metric, p-MRR. No-Instruction IR Models We see that the no-instruction models range widely in standard IR metrics (in terms of nDCG@5 and MAP) but generally have negative scores for p-MRR (up to \u22123.9). The only non-instruction model to score positively on average is MonoT5-3B (+2.5 p-MRR). Instruction IR Models We again see that these models have generally negative scores, with the exception being GritLM (with scores averaging roughly zero) and TART-FLAN-T5-xl which has slightly positive scores for two of the three datasets (with an average of +1.4 p-MRR). API Models We see that the API models perform strongly in terms of standard IR metrics, with OpenAI\u2019s and Google\u2019s models performing the highest overall. However, Cohere\u2019s and OpenAI\u2019s models perform poorly at instruction-following with negative scores (\u22120.2 and \u22122.7 on average, respectively) whereas Google Gecko has positive scores (+2.3) likely a result of training on more instruction-focused data. 
Instruct-Tuned LMs In contrast to the previous results, all instruction-tuned LMs show positive results for instruction following, although they have the widest range of performance using standard 7 \fSciFact NFCorpus FiQA Model OG \u2206w/Key. OG \u2206w/Key. OG \u2206w/Key. NoInstruction BM25 67.9 -1.7 32.2 -5.1 23.6 -1.6 E5-base-v2 71.9 -2.7 35.4 -2.5 39.9 -0.4 Contriever 64.9 +0.4 31.7 +0.0 24.5 -3.2 MonoT5-base 73.1 -0.6 35.6 -0.9 41.2 -0.3 Uses Instruction TART-Contriever 67.6 -0.3 33.4 -5.3 31.8 -0.4 INSTRUCTOR-base 57.8 +1.0 31.6 -0.4 39.2 -0.1 BGE-base 73.2 -0.5 35.5 +0.0 40.8 -2.3 TART-FLAN-xl 74.2 +1.6 33.9 +0.4 39.6 -0.3 INSTRUCTOR-xl 62.4 +0.2 36.0 -0.6 46.9 +0.8 E5-Mistral 77.1 -5.1 38.8 +0.3 56.7 -6.5 Table 3: Ablation on BEIR benchmarks for models that do poorly with longer instructions, comparing their original short instructions vs domain keywords extracted from those instructions (see Appendix D for a list). If models had learned to use the instructions correctly we would see a divergence between the behavior of instruct and non-instruct models, however, for both we see that using keywords instead of the instruction results in comparable performance (\u00b1 one point). IR metrics (ranging from very poor scores to some of the higher scores). We see that the best performing model in this category is FOLLOWIR\u20137B, which we describe in more detail in Section 5. Overall We see that the only models that show positive results at following instructions are either IR models with over 3B parameters or those that have been explicitly trained to follow instructions (e.g. FLAN-T5), without any retrieval-specific supervision. This aligns with work in the natural language processing community which has shown that the instruction-following ability improves with scale (Brown et al., 2020) and supervised instruction-tuning (Longpre et al., 2023). 4.3 Analysis Why do so many models fail to correctly follow instructions when they do well on typical IR metrics such as nDCG and MAP? We answer this question by ablating several components that may impact results: (1) whether IR models are not used to text that cannot be used for simple keyword search (i.e. instructions) and (2) whether they are unused to the length of the longer instructions (as current instruction IR models have been trained on much shorter instructions). To test these, we compare the original query-only result to those where we additionally give the model either the full instruction, a shorter instruction, or keywords from the instruction. We gather these short instructions and keywords by prompting GPT-4-Turbo-1106 to generate them from the original full instruction (for TREC data) or otherwise use the original short instructions given by the authors of the model (for BEIR data). For the full prompt text, please see Appendix E. We show results for these ablations in Table 3, where positive scores indicate that adding information improves the model while negative scores indicate a drop in performance. We see a consistent trend where models that did poorly on longer instructions perform better on keywords and shorter instructions than with the full instruction. However, models that are able to follow instructions see better results with the additional information, on average. 
These results show that models are (1) using the instruction text as keywords (as performance is higher when using only keywords) and (2) are not able to utilize the extra information in the instructions (as they generally decrease in performance with this additional information). We also confirm that these results hold on datasets outside of TREC collections and show results on three BEIR datasets: SciFact, NFCorpus, and FiQA. We show in Table 3 the original score (using the short instructions from their papers) and the change in score when using just keywords from the instruction (again extracted from GPT-4). We show results only for models which performed poorly for instruction-following. We see that the scores for keywords vs the short instruction are generally similar, with most models seeing a change of around \u00b1 1 point, except for the strongest of the non-instruction-following models, E5-Mistral, seeing a larger drop on some datasets. 8 \fOverall We find overall (on both TREC and BEIR datasets) that models use instructions for keyword matching and are unused to longer instructions that may contain slightly less relevant words. 5 Teaching Instruction Following Is it possible to improve model performance in following instructions? We show that fine-tuning on a training set of longer instructions can provide a method for doing so. We start by gathering a training set to teach models. We collect all TREC narratives (i.e., instructions) from tasks not in FOLLOWIR, consisting of 1836 pairs of queries and narratives. However, we note that this does not provide any positive or negative documents for fine-tuning. In order to obtain documents for training, we prompt GPT-3.5-Turbo-1106 to generate relevant and not-relevant documents, generating roughly two relevant and non-relevant instances per query. However, these synthetic documents are noisy and contains errors w.r.t. the labels\u2014to remedy this, we perform a round of filtering and use the best performing open-source model from Table 2 (Mistral-7B-Instruct-v0.2) to score each of the generated documents according to the instruction. We then filter the documents according to whether Mistral correctly predicts the generated label, and finally balance the relevant and non-relevant samples, choosing only one relevant and non-relevant document per query. Our total is \u223c1800 training instances on \u223c1200 unique query/instructions pairs. We then train our instruction-following model, FOLLOWIR-7B, by fine-tuning Mistral-7B-Instructv0.2 on our data using the Llama-Factory framework (Hiyouga, 2023) with LoRA (Hu et al., 2021). Full training hyperparameter details are found in Appendix A. Model Robustness@10 BM25 26.9 TART-Contriever 47.5 RepLLaMa 52.6 E5-Mistral 55.4 Mistral-7B-instruct 35.3 FollowIR-7B 71.5 Figure 4: Performance on the InstructIR benchmark using their \u201cRobustness@10\" scores, e.g. the min nDCG@10 score across 10 instructions. Upper portion is bi-encoders while lower is rerankers. When we evaluate this model on FOLLOWIR (Table 2), we find that the scores consistently improve. Compared to the original Mistral-7BInstruct-v0.2, our model improves on both standard IR metrics (+6.0% relative improvement) and on instruction following (+20.8% relative improvement). We also show that this improvement holds on the concurrent InstructIR dataset (Table 4), where FollowIR-7B scores double the base Mistral-7B scores (71.5 Robustness@10 vs 35.3) and is the top scoring model overall. 
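Stepping back to the data-construction procedure described above, the filtering stage reduces to a consistency check between each generated label and an LM judge, followed by per-query balancing. The sketch below is a minimal illustration under assumed field names, with a placeholder judge function standing in for the Mistral scoring step; it is not the released pipeline.

```python
import random
from typing import Callable, Dict, List

def filter_and_balance(
    examples: List[Dict],                   # each: {"query", "instruction", "document", "label"}
    judge: Callable[[str, str, str], str],  # placeholder LM judge: predicts "relevant"/"not_relevant"
    seed: int = 0,
) -> List[Dict]:
    """Keep only synthetic documents whose generated label the judge reproduces,
    then balance to one relevant and one non-relevant document per query."""
    consistent = [ex for ex in examples
                  if judge(ex["query"], ex["instruction"], ex["document"]) == ex["label"]]

    rng = random.Random(seed)
    by_query: Dict[str, Dict[str, List[Dict]]] = {}
    for ex in consistent:
        by_query.setdefault(ex["query"], {}).setdefault(ex["label"], []).append(ex)

    balanced = []
    for labels in by_query.values():
        if "relevant" in labels and "not_relevant" in labels:
            balanced.append(rng.choice(labels["relevant"]))
            balanced.append(rng.choice(labels["not_relevant"]))
    return balanced
```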
Thus, we can see that it is possible to train IR models to be better instruction followers. 6" + }, + { + "url": "http://arxiv.org/abs/2309.08541v2", + "title": "When do Generative Query and Document Expansions Fail? A Comprehensive Study Across Methods, Retrievers, and Datasets", + "abstract": "Using large language models (LMs) for query or document expansion can improve\ngeneralization in information retrieval. However, it is unknown whether these\ntechniques are universally beneficial or only effective in specific settings,\nsuch as for particular retrieval models, dataset domains, or query types. To\nanswer this, we conduct the first comprehensive analysis of LM-based expansion.\nWe find that there exists a strong negative correlation between retriever\nperformance and gains from expansion: expansion improves scores for weaker\nmodels, but generally harms stronger models. We show this trend holds across a\nset of eleven expansion techniques, twelve datasets with diverse distribution\nshifts, and twenty-four retrieval models. Through qualitative error analysis,\nwe hypothesize that although expansions provide extra information (potentially\nimproving recall), they add additional noise that makes it difficult to discern\nbetween the top relevant documents (thus introducing false positives). Our\nresults suggest the following recipe: use expansions for weaker models or when\nthe target dataset significantly differs from training corpus in format;\notherwise, avoid expansions to keep the relevance signal clear.", + "authors": "Orion Weller, Kyle Lo, David Wadden, Dawn Lawrie, Benjamin Van Durme, Arman Cohan, Luca Soldaini", + "published": "2023-09-15", + "updated": "2024-02-26", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.AI", + "cs.CL" + ], + "main_content": "Introduction Neural information retrieval (IR) systems routinely achieve state-of-the-art performance on tasks where labeled data is abundant (Karpukhin et al., 2020; Yates et al., 2021). When limited or no data is available, neural models fine-tuned on data-rich domains are used in zero-shot manner (Thakur et al., 2021; Rosa et al., 2022b). However, shifts in distribution of queries and documents can negatively impact their performance (Lupart et al., 2023). To mitigate this effect, language models (LMs) can be used to expand queries or documents from 1Code and data are available at https://github.com/ orionw/LM-expansions \u2217Work performed during internship at AI2. 20 30 40 50 60 70 Baseline Score (nDCG@10) -5 +0 +5 +10 +15 +20 When Using Expansion Type of Shift (Dataset) No Shift (TREC DL 2019) Domain Shift (FiQA) Query Shift (ArguAna) Figure 1: LM-based query and document expansion methods typically improve performance when used with weaker models, but not for stronger models. More accurate models generally lose relevance signal when expansions are provided. Each point is a value in Table 1. unseen domains (Dai et al., 2022; Gao et al., 2022; Jagerman et al., 2023; Jeronymo et al., 2023; Wang et al., 2023a). These techniques input queries and/or documents into an LM to generate additional content, which is combined with original text to facilitate relevance matching. For example, Doc2Query (Nogueira et al., 2019c) uses an LM to generate likely queries for documents in the collection. Meanwhile, HyDE (Gao et al., 2022) uses an LM to generate a fictitious relevant document for a user query. 
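Both directions reduce to a single LM call with a different prompt; the sketch below shows the general shape of the two expansions. The prompt wording is illustrative rather than the exact prompts used by Doc2Query or HyDE, and `complete` stands in for any chat/completion API.

```python
from typing import Callable

def doc2query_expansion(complete: Callable[[str], str], document: str, n_queries: int = 3) -> str:
    """Doc2Query-style: generate likely queries for a document and append them to it."""
    prompt = (f"Write {n_queries} search queries that this passage would answer.\n\n"
              f"Passage: {document}\n\nQueries:")
    generated_queries = complete(prompt)
    return document + " " + generated_queries

def hyde_expansion(complete: Callable[[str], str], query: str) -> str:
    """HyDE-style: generate a hypothetical relevant passage and combine it with the query."""
    prompt = (f"Write a short passage that answers the following question.\n\n"
              f"Question: {query}\n\nPassage:")
    hypothetical_doc = complete(prompt)
    return query + " " + hypothetical_doc
```

In both cases the generated text is combined with the original query or document so that relevance matching has more material to work with.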
As LMs are often trained on more domains than typical rankers, LM-based expansion leverages this encoded knowledge to bridge out-ofdistribution gaps. IR researchers have long proposed methods to expand queries and documents (Rocchio Jr, 1971; Lavrenko and Croft, 2001; Abdul-Jaleel et al., 2004). However, we note that LM-based expansions are qualitatively different from traditional arXiv:2309.08541v2 [cs.IR] 26 Feb 2024 \fexpansion techniques. While the latter are largely non-parametric, using thesauri or relevance signals from the collection,2 LM-based expansions can leverage knowledge encoded in their model weights. Finally, while many comparative analyses of statistical expansion techniques exist (Hust et al., 2003; Bhogal et al., 2007; Carpineto and Romano, 2012), no equivalent work has been conducted for LM-based approaches. Many works have proposed specific LM-based expansions, but these approaches are generally tested only a small subset of retrieval methods (small bi-encoder models or BM25) or only work on specific domains (Gao et al., 2022; Wang et al., 2023a; Zhu et al., 2023). We thus seek to answer the following: RQ1: How do different models impact query and document expansion (\u00a73)? Across all types of IR models and architectures, performance is negatively correlated with gains from expansion: after a certain score threshold these expansions generally hurt performance (as they blur the relevance signal from the original documents). RQ2: How do different distribution shifts impact these results (\u00a74)? Our main results hold for all types of shift \u2013 better models are harmed by expansion \u2013 except for long query shift, where expansions generally help most-to-all models. RQ3: Why do expansions hurt stronger IR models (\u00a75)? We find that query and document expansions introduce new terms, potentially weakening the relevance signal of the original text. Overall, this work aims at answering the following question: when should one use LM-based expansions? Through our investigation, we provide evidence to help practitioners answer this question. Our results run counter to the common intuition that query and document expansion are helpful techniques in all cases; instead, they show that LM expansions generally benefit weaker rankers, but hurt more accurate rankers. Further, analysis over twelve datasets shows that whether a given model benefits from expansion varies depending on task; datasets with pronounced distribution shifts (e.g., very long queries) are more likely to benefit. 2For example, pseudo relevance feedback (PRF) uses topk retrieved documents to expand queries. Thus, PRF relies on the quality of the initial retrieved set; generally, the better the retrieval, the better the expansion. We note that this is not necessarily the case for LM-based expansions/PRF: parametric knowledge encoded in model weights affect terms selected for expansion (in contrast to classic PRF that typically selects new terms from the top relevant documents from the collection). 2 Experimental Settings We provide an overview of document and query expansion methods used in the reminder of the manuscript, and describe our experimental setup. We choose expansion techniques according to two criteria: (i) their overall performance, as claimed in papers introducing them, and (ii) whether they can used with any retrieval model. 
While there exists more specific techniques for particular architectures, such as ColBERT-PRF (Wang et al., 2023c,b), we use text-based expansions from LMs to ensure generalizability of our findings. We generate expansions using gpt-3.5-turbo3 as it is inexpensive and shows strong performance in previous work (Wang et al., 2023a; Jagerman et al., 2023). Since using LMs to generate expansions for large collections would be prohibitive, we restrict our expansions to the reranking setting, e.g. the top 100 documents per query found from BM25 following Asai et al. (2022).4 Following established practices, we use these expansions for zero-shot out-of-domain retrieval. Although it is possible that training with expansions may further increase their effectiveness, this limits their generalizability since it requires re-training retrieval models for each expansion technique and LM. 2.1 Query Expansion We use three types of query expansion, selecting the best methods from previous work. HyDE from Gao et al. (2022) provides taskspecific instructions for the LM to generate a document that would answer that question. We use prompts from their work when available. Chain of Thought from Jagerman et al. (2023) was inspired by Wei et al. (2022); it prompts the model to reason before giving the answer. The step-by-step reasoning is then used to expand the 3We use version gpt-3.5-turbo-0613. To show that our results generalize beyond this specific language model, we include results using other open/API LMs (gpt-4-0613, Claude V2, Llama2 70b Chat) in Appendix A that show the same conclusion. Prompts and example input/output can be found in Appendix D and E. We also explore the placement of these augmentations (should we prepend/append/replace the original query and documents?) in Appendix B and show that this also makes little difference. 4As of September 2023, even just a single document expansion method using gpt-3.5-turbo on the DL Track 2019 collection would cost thousands of dollars. Thus we rerank the top 100 docs for each dataset. We show in Appendix C and Table 10 that our observations hold up to 10,000 documents. \f0 20 40 60 ArguAna 0 20 40 60 FiQA 20 40 60 80 100 GooAQ T ech 0 20 40 60 NFCorpus 20 40 60 80 100 NDCG @ 10 Quora 0 20 40 60 80 100 SciFact Refute 0 20 40 TREC-CT 20 40 60 80 TREC-DL19 20 40 60 80 TREC-DL20 0 20 40 60 T oT 0 20 40 T ouche 40 60 80 WikiQA ColBERTv2 Contriever FT DPR MonoT5-3B MonoT5-Small Figure 2: Effect of expansion over twelve datasets. For each dataset, markers show base performance for models, while the boxplot indicates the range of changes in scores for document and/or query expansion. Across all datasets and models, we note a consistent trend: models with lower base performance benefit from expansion; higher performing rankers generally suffer when expansion techniques are used. 
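All of the expansion variants above are evaluated in the reranking setting described earlier in this section: BM25 retrieves the top 100 documents per query, and those candidates are rescored with or without expanded queries and documents. A minimal sketch of that loop, with placeholder scoring and expansion functions, might look like this.

```python
from typing import Callable, List, Sequence, Tuple

def rerank_with_optional_expansion(
    query: str,
    bm25_top100: Sequence[Tuple[str, str]],            # (doc_id, doc_text) from first-stage BM25
    score: Callable[[str, str], float],                 # any reranker: (query, doc_text) -> score
    expand_query: Callable[[str], str] = lambda q: q,   # e.g. HyDE/CoT/PRF expansion, or identity
    expand_doc: Callable[[str], str] = lambda d: d,     # e.g. Doc2Query expansion, or identity
) -> List[str]:
    """Rescore a fixed BM25 top-100 pool, optionally expanding the query and/or documents."""
    q = expand_query(query)
    scored = [(doc_id, score(q, expand_doc(text))) for doc_id, text in bm25_top100]
    return [doc_id for doc_id, _ in sorted(scored, key=lambda x: x[1], reverse=True)]
```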
DL Track 2019 FiQA Arguana Type Model No Exp QE DE Both No Exp QE DE Both No Exp QE DE Both First Stage DPR 38.4 +6.6 +3.1 +10.8 14.4 +4.7 +1.7 +5.7 34.9 -7.1 +1.6 -4.4 Contriever 49.0 +3.5 +4.0 +8.1 21.3 +3.6 +1.6 +5.1 45.8 -0.1 +2.9 -3.2 BM25 51.2 -4.0 23.6 +4.5 30.0 -5.4 Contriever FT 62.3 +1.6 -0.2 +0.6 29.6 +3.2 +0.6 +3.8 48.8 -3.6 +2.0 -2.5 E5 Base v2 67.3 -3.4 -0.9 -3.7 37.8 -0.6 -3.8 -2.5 51.1 -8.4 +2.6 -5.7 MPNet Base v2 68.3 -6.0 -2.9 -6.8 44.5 -4.1 -3.5 -5.7 47.6 -5.1 +5.3 -0.7 E5 Small v2 69.1 -4.8 -1.9 -6.8 36.4 +0.4 -2.9 -0.6 46.1 -8.7 +2.7 -9.8 GTE Large 70.0 -4.5 -1.3 -4.5 41.2 -2.0 -4.1 -3.2 56.8 -8.8 -0.9 -9.0 E5 Large v2 70.1 -5.7 -1.7 -7.6 38.6 -0.9 -2.7 -3.2 48.9 -5.9 +3.2 -3.4 Rerankers MonoT5-Small 66.6 -2.0 -2.8 -2.8 34.3 +0.1 -0.6 -0.3 21.1 +22.7 -3.0 +22.2 MiniLM-2-v2 68.0 -3.2 -4.1 -5.1 27.5 -2.0 +0.6 -15.8 15.2 +11.4 +10.8 +11.2 SPLADEv2 70.1 -4.3 -3.7 -5.6 33.4 +1.3 -0.2 +1.2 45.0 -4.5 -1.3 -4.0 MonoBERT 70.4 -4.6 -2.0 -4.8 36.2 +0.2 -0.7 +0.0 50.1 -5.7 +2.5 -9.3 MiniLM-4-v2 70.6 -3.0 -2.5 -4.9 33.8 +1.5 -0.3 +1.2 43.4 +0.4 +1.0 -0.8 MonoT5-Base 71.5 -3.2 -1.4 -5.2 39.2 -1.2 -1.2 -0.9 27.0 +20.0 +0.7 +18.7 MonoT5-3B 71.7 -2.8 -2.0 -5.0 45.9 -3.8 -3.2 -5.6 42.4 +6.8 -1.9 +5.2 ColBERTv2 71.8 -4.2 -2.8 -6.4 33.8 -0.4 -0.3 -0.7 47.4 -5.2 -0.6 -4.8 MiniLM-12-v2 72.0 -4.3 -4.5 -5.6 35.5 -0.4 -0.5 +0.0 33.2 +12.0 +1.1 +9.8 MonoT5-Large 72.2 -4.0 -1.8 -5.6 42.8 -2.3 -2.3 -3.1 31.2 +14.8 -2.0 +14.8 LLAMA 72.6 -2.9 -4.9 -7.7 40.0 -3.7 -4.9 -5.8 52.6 -3.9 -6.9 -9.4 LLAMAv2 72.8 -4.2 -4.9 -9.3 41.1 -3.6 -7.4 -7.9 52.3 -1.5 -8.2 -7.0 LLAMAv2-13B 73.6 -4.5 -5.4 -7.3 41.2 -4.5 -4.9 -7.0 49.4 -2.1 -6.0 -4.9 Table 1: Best expansion strategies across different models. QE stands for query expansion (Q-LM PRF), DE for document expansion (Doc2Query), and Both for the combination (Q-LM PRF + Doc2Query). Colors indicate a positive or negative delta over scores for no expansion. Models with higher base scores are generally harmed by expansions while weaker models benefit from them. Llama models follow MonoT5 in fine-tuning on MSMarco. original query. Many works have shown the effectiveness of this approach (Jagerman et al., 2023; He et al., 2022; Trivedi et al., 2022). LM-based Pseudo Relevance Feedback (Q-LM PRF). PRF is a classical IR method to expand a query using terms from top retrieved documents. We use an LM to generate a list of terms from the top 3 documents ranked by a bi-encoder model (Contriever). Through a second invocation, the LM updates the query to include the new terms. LM-aided PRF has been shown to be broadly effective (Mackie et al., 2023; Jagerman et al., 2023). 2.2 Document Expansion Doc2Query. There are fewer widespread LM document expansion techniques, with the main one being Doc2Query (Nogueira et al., 2019c). Work has found that improving the question generation model results in higher scores, hence we use ChatGPT instead of T5 for our experiments (Nogueira et al., 2019a). See Appendix A for results using alternative LMs for document expansion. LM-based Document PRF (D-LM PRF). Similar to the Q-LM PRF technique above, we propose \fAxis Dataset # Queries # Docs Avg. 
Judged/Q Q Len D Len In-Domain TREC DL Track 2019 (Craswell et al., 2020) 43 8,841,823 212.5 5.4 56.6 TREC DL Track 2020 (Craswell et al., 2021) 54 8,841,823 207.9 6.0 56.6 Domain Shift FiQA-2018 (Maia et al., 2018) 648 57,600 2.6 10.9 137.4 Gooaq Technical (Khashabi et al., 2021) 1,000 4,086 1.0 8.3 44.5 NFCorpus (Boteva et al., 2016) 323 3,633 38.2 3.3 233.5 Relevance Shift Touch\u00e9-2020 (Bondarenko et al., 2020) 49 382,545 19.0 6.6 293.7 SciFact Refute (Wadden et al., 2020) 64 5,183 1.2 12.1 214.8 Long Query Shift Tip of My Tongue (Lin et al., 2023) 2,272 1,877 1.0 144.3 100.5 TREC Clinical Trials \u201921 (Roberts et al., 2021) 75 375,580 348.8 133.3 919.5 ArguAna (Wachsmuth et al., 2018) 1,406 8,674 1.0 197.1 170.3 Short Doc Shift WikiQA (Yang et al., 2015) 369 26,196 1.2 6.3 25.1 Quora (Iyer et al., 2017) 10,000 522,931 1.6 9.5 12.5 Table 2: Statistics of datasets in this work. Avg. Judged/Q is the number of relevant documents per query. Length is measured in words. The TREC DL Track uses the MS MARCO dataset (Nguyen et al., 2016). DL 2019 Track DL 2020 Track Type Model DPR Contriever FT MonoT5-3B DPR Contriever FT MonoT5-3B \u2212 No Expansion 38.4 62.3 71.7 39.2 57.5 68.3 Query HyDE +18.8 +9.3 -4.0 +13.2 +7.4 -5.8 CoT +12.6 +2.7 -6.7 +5.5 +4.2 -9.3 Q-LM PRF +6.6 +1.6 -2.2 +6.3 +2.7 -3.0 Doc D2Q +3.1 -0.2 -1.2 +3.1 +1.3 -1.9 D-LM PRF -1.1 -15.5 -23.6 -2.6 -9.1 -19.3 Both HyDE + D2Q +21.9 +9.0 -4.5 +15.0 +6.2 -5.4 CoT + D2Q +15.1 +0.8 -7.3 +7.2 +4.2 -8.1 Q-LM PRF + D2Q +10.8 +0.6 -4.2 +8.1 +3.7 -3.3 HyDE + D-LM PRF +16.7 -3.1 -22.8 +11.4 +1.2 -17.9 CoT + D-LM PRF +10.9 -10.9 -25.0 +4.1 -4.4 -21.8 Q+D LM PRF +6.8 -5.6 -14.4 +4.5 -2.4 -11.8 Table 3: In-Domain performance on the TREC Deep Learning Tracks, according to various types of expansions, showing that expansion typically helps weaker models (like DPR) but hurts stronger models (especially large reranker models like MonoT5-3B). Colors indicate a positive or negative delta over scores for no expansion. a document expansion that draws pseudo-relevance from related queries instead of related documents. In this setting, where there exists a set of unjudged user queries, we show the LM the top 5 mostsimilar queries and ask it to expand the original document to better answer the relevant queries. 3 RQ1: How Do Different Models Impact Query and Document Expansion? Experimental Setting To understand the efficacy of LM-based expansions, we employ a wide variety of neural retrieval models: DPR (Karpukhin et al., 2020); ColBERT v2 (Santhanam et al., 2022); SPLADE v2 (Formal et al., 2021a); MonoBERT (Nogueira et al., 2019b); several MonoT5 (Nogueira et al., 2020), E5 (Wang et al., 2022b), and MiniLM models (Wang et al., 2020); GTE (Li et al., 2023); all-mpnet-v2-base (Reimers and Gurevych, 2019); Llama 1 & 2 models (Touvron et al., 2023a,b), which we fine-tune on MS MARCO. Due to the exponential combination of models and datasets, we evaluate all models on three representative datasets in Table 1 (we provide a comprehensive description of all datasets in \u00a75); then, we use five representative models (DPR, Contriever, ColBERTv2, MonoT5-small, and MonoT5-3B) on a larger suite of datasets (see Figure 2). We present results for expansion technique as absolute increase/decrease in nDCG@105 points over a baseline with no expansion, which we highlight in grey in all tables. Values above zero (e.g. greater than the base version) are highlighted blue while values below the base are highlighted red. 
Color intensity is scaled linearly according to the 5Traditional expansion techniques increase recall of retrieval systems. However, LM-based expansions have been shown to also improve precision (Jagerman et al., 2023). Thus, we use the official, precision-oriented metric for BEIR, nDCG. \fFiQA-2018 GooAQ Technical NFCorpus Type Model DPR Contriever FT MonoT5-3B DPR Contriever FT MonoT5-3B DPR Contriever FT MonoT5-3B No Expansion 14.4 29.6 45.9 42.5 71.0 80.2 24.1 34.6 39.2 Query HyDE +3.6 -0.3 -14.7 +3.1 +3.8 -10.0 +0.3 +0.0 -5.9 CoT +3.6 +0.4 -13.2 +2.0 +2.1 -9.7 -0.7 -0.6 -4.5 Q-LM PRF +4.7 +3.2 -3.8 +6.4 +1.9 -3.4 +0.2 -0.4 -2.7 Doc D2Q +1.7 +0.6 -3.2 +6.4 +3.0 -1.1 +1.3 +0.6 -0.5 D-LM PRF +3.3 +1.6 -12.5 +3.8 +0.6 -11.4 +0.3 -0.3 -0.7 Both HyDE + D2Q +4.5 +0.4 -14.8 +8.2 +5.2 -7.4 +1.6 +0.1 -7.2 CoT + D2Q +4.4 +0.2 -13.4 +7.2 +3.8 -6.9 +0.8 +0.0 -5.6 Q-LM PRF + D2Q +5.7 +3.8 -5.6 +10.9 +4.2 -4.1 +1.4 -0.1 -3.0 HyDE + D-LM PRF +5.8 +1.2 -14.8 +5.3 +2.7 -14.2 +0.8 +0.1 -6.3 CoT + D-LM PRF +6.2 +1.7 -14.9 +3.6 +1.9 -13.6 -0.1 -0.2 -4.2 Q+D LM PRF +7.3 +4.6 -8.4 +7.9 +3.5 -6.4 +0.2 +0.0 -2.8 Table 4: How different expansions affect results on datasets that measure Domain Shift. Colors indicate a positive or negative delta over scores for no expansion. Notice that models with higher base scores are generally harmed by expansions while weaker models benefit from them. Touche-2020 Scifact-Refute Type Model DPR Contriever FT MonoT5-3B DPR Contriever FT MonoT5-3B No Expansion 23.0 24.8 32.6 33.9 76.4 82.1 Query HyDE -0.3 +4.8 -5.9 -9.1 -0.9 -12.3 CoT +0.3 +5.1 -7.4 -7.6 +0.3 -8.8 Q-LM PRF +0.6 +3.9 -1.3 +6.5 +1.1 -1.7 Doc D2Q -0.2 +0.0 -0.9 +2.0 -1.8 +0.9 D-LM PRF -0.2 -1.2 -8.3 +2.5 -4.6 -16.5 Both HyDE + D2Q -0.1 +5.0 -3.0 -6.1 -1.0 -16.6 CoT + D2Q +0.3 +2.6 -5.4 -6.5 -1.1 -16.9 Q-LM PRF + D2Q -0.1 +1.0 -2.0 +9.1 +1.3 -1.1 HyDE + D-LM PRF +0.5 +1.4 -10.1 -5.2 -2.9 -17.6 CoT + D-LM PRF -0.2 +0.8 -8.4 -7.2 -1.5 -19.3 Q+D LM PRF +0.3 +2.5 -2.7 +7.6 -2.5 -4.0 Table 5: How different expansions affect results on datasets that measure Relevance Shift. difference between the base value and the min/max (i.e., more saturation for the highest/lowest values). We use default hyperparameters for all models, except for the length of the queries, which we set at 512 for BERT-based models and 1024 for T5 and Llama models. Effect of Different Models Our results with all models (Figure 1) show a consistent pattern: as base performance on a task increases, the gains from expansion decrease. We also see this trend from Table 1 (note that ArguAna and FIQA results are sorted by nDCG score on MS MARCO; negative trend is clearly observable in Figure 1). Interestingly, these results do not depend on the model architecture: this is true for bi-encoders, late-interaction models, neural sparse models, and cross-encoders (of all sizes). However, do these results hold for other datasets? In Figure 2, we show that this pattern is consistent over a wide range of datasets. Models whose base score is higher (such as MonoT5-3B) are negatively impacted by expansions. 4 RQ2: How Do Different Distribution Shifts Impact Results? Experimental Setting We evaluate how query and document expansion are impacted by different distribution shifts: in-domain/no shift (MS MARCO), domain shift (e.g. medical, code, legal), relevance shift (finding the opposite or a counterargument), and format shift (extremely long queries or very short documents). Datasets and their descriptive statistics are in Table 2. 
We use three representative models for these experiments. In-Domain We use two datasets that test performance on the MS MARCO collection: TREC Deep Learning6 2019 and 2020 tracks (Craswell et al., 6Despite the different names, TREC DL 2019 and 2020 use the same document collection as MS MARCO, albeit with new queries and relevance judgements. \fTip of My Tongue TREC CT 2021 Arguana Type Model DPR Contriever FT MonoT5-3B DPR Contriever FT MonoT5-3B DPR Contriever FT MonoT5-3B No Expansion 13.4 38.3 39.5 16.4 26.7 25.8 34.9 48.8 42.4 Query HyDE +3.0 -9.4 -26.8 +0.3 +2.1 +4.2 -4.5 -5.4 +15.8 CoT +2.1 -9.5 -23.3 +2.3 +3.0 +3.0 -5.8 -5.3 +11.3 Q-LM PRF -2.9 -1.9 +6.4 +2.2 +0.6 -0.1 -7.1 -3.6 +8.3 Doc D2Q +1.6 -3.2 -8.5 +0.3 -1.3 -1.8 +1.6 +2.0 -2.1 D-LM PRF +5.5 +2.9 +0.9 -0.7 -0.9 +0.6 +2.3 +3.5 -2.5 Both HyDE + D2Q +3.6 -10.7 -29.7 +0.4 +2.1 +2.7 -2.8 -2.5 +12.9 CoT + D2Q +2.2 -10.6 -25.3 +2.3 +1.5 -0.1 -4.3 -3.0 +10.6 Q-LM PRF + D2Q -1.8 -4.7 +2.1 +0.7 -0.9 -0.2 -4.4 -2.5 +6.9 HyDE + D-LM PRF +6.0 -7.2 -32.6 +0.0 +1.0 +3.2 -3.0 +1.0 +10.3 CoT + D-LM PRF +5.3 -7.4 -25.8 +1.9 +2.7 +1.0 -4.0 +0.9 +8.8 Q+D LM PRF +0.7 +1.6 +6.4 +0.6 -1.0 +0.4 -4.0 -0.2 +3.3 Table 6: How different expansions affect results on datasets that measure Long Query Format Shift. Colors indicate a positive or negative delta over scores for no expansion. Unlike previous results, all models benefit from expansions on all three datasets. We conclude that, in the case of significant query shift, expansion is useful. WikiQA Quora Type Model DPR Contriever FT MonoT5-3B DPR Contriever FT MonoT5-3B No Expansion 47.2 68.6 75.9 68.4 86.7 83.9 Query HyDE +16.4 +3.6 -1.6 -15.4 -13.8 -8.2 CoT +9.8 -0.9 -6.1 -32.3 -31.5 -35.4 Q-LM PRF +11.9 -2.2 -4.2 -13.8 -11.4 -7.0 Doc D2Q +5.4 -1.8 -1.7 -6.2 -3.7 +0.0 D-LM PRF -2.8 -10.8 -21.4 -10.0 -15.6 -17.0 Both HyDE + D2Q +17.7 +2.1 -2.7 -11.4 -10.1 -7.1 CoT + D2Q +11.3 -1.5 -6.9 -25.7 -26.3 -32.5 Q-LM PRF + D2Q +13.0 -1.1 -6.2 -9.4 -8.7 -6.9 HyDE + D-LM PRF +12.6 -6.2 -18.0 -21.1 -22.1 -20.2 CoT + D-LM PRF +7.0 -10.3 -19.0 -35.6 -36.8 -41.4 Q+D LM PRF +9.5 -6.1 -10.8 -19.4 -19.6 -17.8 Table 7: How different expansions affect results on datasets that measure Short Document Format Shift. Models with higher base scores are generally harmed by expansions while weaker models benefit from them. 2020, 2021). All retrieval models considered train on MS MARCO, hence these are in-domain. Domain Shift In this setting models must generalize from training domain (web documents from MS MARCO) to new domains, such as legal or medical text. This type of shift is made difficult by specialized vocabulary in these domains. We use NFCorpus (medical) (Boteva et al., 2016), GooAQ Technical (code) (Khashabi et al., 2021), and FiQA2018 (finance) (Maia et al., 2018). Relevance Shift This setting is characterized by a difference in how relevance is defined. Rather than topical relevance over web pages, queries in these datasets ask for counterarguments or documents refuting its claim. We use two datasets that search for refutations or counterarguments: Touch\u00e92020 (Bondarenko et al., 2020) and a subset of SciFact (Wadden et al., 2020) whose gold documents refute the queries claims. Format Shift Another type of shift is the length of inputs: generally, queries are short and documents span over one to multiple paragraphs. However, there are situations where queries could be document-sized or the documents could be short. This shift tests whether models can generalize to new length formats. 
We consider two sets of datasets: for shift to long query we use the \u201cTip of My Tongue\u201d dataset introduced by Lin et al. (2023), TREC Clinical Trials Track 2021 (Roberts et al., 2021), and ArguAna (Wachsmuth et al., 2018). For shift to short document, we use Quora (Iyer et al., 2017) and WikiQA (Yang et al., 2015). 4.1 Results by Type of Shift Table 3 shows results for in-domain data on the 2019 and 2020 Deep Learning TREC Tracks. We see that weaker models improve with different expansion types, with DPR improving for almost every expansion and the stronger Contriever showing \f...The most likely tool to use in this case would be a Home Equity Line of Credit (HELOC). This is a line of credit for which the full amount is backed by home equity ... Most likely your financial institution will apply a factor ... Original Expanded Query: Is it possible to take a mortgage using Bitcoin as collateral? ... suggest that they borrow the money to invest with you. They can use their bitcoins as collateral for the loan. That way, they get the same benefit and your company doesn't go out of business if the price of bitcoin drops ... Doc B Doc A 1. Doc A 2. Doc B Query:\u00a0What are the risks and maximum amount involved in obtaining a\u00a0Home Equity Line of Credit (HELOC)\u00a0using Bitcoin as collateral? Ranked List 1. Doc B 2. Doc A 3.\u00a0 \u00a0 ... 3.\u00a0 \u00a0 ... Ranked List Figure 3: An example of expansions obscuring the relevance signal. The non-relevant document in red (\u00d7) was ranked higher than the relevant blue (\u2713) document due to the phrase \u201cHome Equity Line of Credit\u201d being added to the query. The left side shows the original query and documents while the right side shows the ranking. minor improvements for some combinations. However, when we move to the stronger models (e.g., MonoT5-3B), we find that all of these gains disappear and expansions hurt the model. We find that this trend holds in most other categories of shift: Table 4 for domain shift, Table 5 for relevance shift, and Table 7 for short document shift. Note that Figure 2 also shows this visually. The exceptions to this pattern occur in format shift: on Quora (Table 5), all models are harmed by expansion; for long query shift (Table 6), expansions generally help most models. When we examine why expansions help for the latter, we find that the transformations typically shorten queries to more closely resemble models\u2019 training data (e.g., for ArguAna the query changes from a long document to a shorter sentence that summarizes it). As IR models are not typically trained on long queries, it is an open-question of whether additional training would make this category of shift easier for models and thus make expansions less helpful. 5 RQ3: Why Do Expansions Hurt? Sections 3 and 4 show that strong IR models do not benefit from expansions. But what causes this effect? Here, we explore whether model size (\u00a75.1) is linked to our findings, and perform a qualitative error analysis (\u00a75.2). 5.1 Drop in Performance Independent of Size One possible argument is that larger models are able to estimate relevance better when using unaltered queries and documents, as they have learned a more refined relevance model during their training. To verify this hypothesis, we test two different families of models: MonoT5 and E5. If model size is the cause, we would expect to see larger models gain less from expansions for both families. 
However, Figure 5 shows that model scale is inversely correlated with gains from expansion for the MonoT5-family, but not the E5-family. The crucial difference between them7 can be attributed to the E5 models having similar performance scores across sizes whereas T5 has a much wider range: T5 differs by 21 nDCG@10 points on ArguAna from 3B to small while E5 differs by only 3 points from large to small. Thus, we see that model size impacts gains from expansions only in tandem with the correlation between model size and base score. 5.2 Error Analysis If model size is not the reason for our finding, what could be causing it? To gain an intuition on the failures of LM expansion, we annotate 30 examples from three datasets where performance declines when expanding queries and documents. We find that out of the 30 examples, two are false negatives, i.e., relevant documents that are unjudged and not labeled as relevant (both from FiQA). Of the remaining 28, all errors are due to the expansions adding irrelevant terms that dilute relevance signal, or including erroneous keywords that make irrelevant documents appear relevant. Figure 3 shows an example of how query expansion added the term \u201cHome Equity Line of Credit\u201d and distracted from the main focus of the question (using bitcoins as collateral). Thus, it is likely that, without the noise LM-based expansions introduce, well tuned rankers can accurately estimate relevance of subtly different documents. We can visualize this in Figure 4, where we note a general downward shift of the rankings of relevant documents in the top-10 positions for TREC DL 2019. We find that most expansions shifts the ranking by a few positions, while some expansions shift the relevant document ranks to be out of the top 10 (i.e. the cluster at -10 in Figure 4). 7Another obvious difference is that E5 is a bi-encoder while MonoT5 is not. However, previous work (Muennighoff, 2022) has shown that bi-encoders also improve with scale. \f0 TREC DL 2019 10 5 0 5 10 Relevant Doc Position Change Figure 4: The change in rank for relevant documents in the top 10 when using expansions. Negative values indicate lower ranks (e.g. -5 indicates that the rank of the relevant document went down 5 when using expansions). We see that expansions cause relevant documents to be ranked lower. Figure 6 in the Appendix shows other datasets with similar results. Small Base Large 3B Model Size -6 -4 -2 +0 +2 +4 +6 Using Expansion, nDCG@10 Model Family E5 MonoT5 . Figure 5: Model scale does not explain negative effect of LM-based expansions. While larger MonoT5 models perform worse, all E5 model sizes are equally impacted 6 Discussion Our results indicate three phenomena regarding expansion using LMs: (i) expansion generally benefits weaker models, such as DPR, while better performing rankers, such as T5, are penalized; (ii) exceptions are observed in case of severe distribution shift, e.g. very long queries; (iii) when model scores decrease, the cause is generally expansion weakening the original relevance signal. This implies that despite their broad capabilities, LMs should not be used to augment strong performing IR models without careful testing. The strong performance of rerankers for generalization confirms previous work by Rosa et al. (2022a). Further, Table 3 indicates this characterization of LM expansion also holds on in-domain data (no shift). 
Interestingly, our experiments find that the only distribution shift that consistently needs expansion is long query format shift; we found no equivalent result for domain, document, or relevance shift. Future work may examine whether improved training techniques on longer queries can overcome this or whether longer queries are innately more difficult. 7 Related Work Large Scale Analyses in Neural IR Comprehensive analyses in retrieval have provided great insight into practical uses of retrieval. These include many aspects of information retrieval, including interpretability (MacAvaney et al., 2022), domain changes (Lupart et al., 2023), syntax phenomena (Chari et al., 2023; Weller et al., 2023), and relationship between neural and classical IR approaches (Formal et al., 2021b; Chen et al., 2022). Generalization in Neural IR As retrieval models have become more effective, attention has turned to improving and evaluating the way that IR models generalize to out-of-distribution datasets (e.g. not MS MARCO-like corpora). One prominent example of this is the BEIR dataset suite (Thakur et al., 2021), which is commonly used for retrieval evaluation. Much other work has proposed new datasets for types of shift (e.g. MTEB (Muennighoff et al., 2023) among others (Han et al., 2023; Ravfogel et al., 2023; Weller et al., 2023)), as well as many new modeling strategies for better zeroshot retrieval (Dai et al., 2022; Wang et al., 2022a). We follow these works by showing different types of shift and whether these types of shift change the results for LM-based expansion techniques. Effect of Scale on Neural IR Models IR models typically improve with scale (Nogueira et al., 2020) but are also heavily constrained, due to the requirement of processing documents for live search. Thus, most first-stage IR models typically use a BERT backbone (Santhanam et al., 2022; Izacard et al., 2021) while reranker models have scaled to billions of parameters (Nogueira et al., 2020). However, work on scaling bi-encoder architectures has also shown performance gains from scale (Muennighoff, 2022). Due to the effectiveness of larger models, recent work has shown that a better firststage model does not lead to improvements over a BM25 + reranker pipeline (Rosa et al., 2022a). Thus, for our experiments we use BM25 as first stage retrieval and show results reranking those. Query and Document Expansion in IR Query and document expansion have a long history in IR, with early techniques such as expanding query terms using dictionaries or other hand-built knowledge sources (Smeaton et al., 1995; Liu et al., 2004) as well as techniques that use corpus-specific information such as pseudo-relevance feedback (Rocchio Jr, 1971). These expansions are limited as they \fare either hand-crafted (and thus limited in scope) or involved automatic techniques that may introduce spurious connections between words. LMbased query and document expansions on the other hand can rely on their extensive linguistic knowledge which goes well beyond hand-crafted rules. Despite this however, they still suffer from spurious and superfluous additions, as shown in Figure 3. However, LM-based expansions have been shown to be successful in a variety of applications (Zheng et al., 2020; Weller et al., 2022; Wang et al., 2023a; Jagerman et al., 2023), which provided inspiration for this work. 
8" + }, + { + "url": "http://arxiv.org/abs/2305.13252v2", + "title": "\"According to ...\": Prompting Language Models Improves Quoting from Pre-Training Data", + "abstract": "Large Language Models (LLMs) may hallucinate and generate fake information,\ndespite pre-training on factual data. Inspired by the journalistic device of\n\"according to sources\", we propose according-to prompting: directing LLMs to\nground responses against previously observed text. To quantify this grounding,\nwe propose a novel evaluation metric (QUIP-Score) that measures the extent to\nwhich model-produced answers are directly found in underlying text corpora. We\nillustrate with experiments on three corpora (Wikipedia, PubMed, and the U.S.\nlegal tax code) that these prompts improve grounding under our metrics, with\nthe additional benefit of often improving end-task performance. Furthermore,\nprompts that ask the model to decrease grounding (or to ground to other\ncorpora) indeed decrease QUIP-Score, indicating the ability of LLMs to increase\nor decrease grounded generations on request.", + "authors": "Orion Weller, Marc Marone, Nathaniel Weir, Dawn Lawrie, Daniel Khashabi, Benjamin Van Durme", + "published": "2023-05-22", + "updated": "2024-02-26", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "main_content": "Introduction As the deployment of Large Language Models (LLMs) in real-world applications continues to grow, their tendency to generate false content (Ji et al., 2022) poses significant risks to downstream users. Recent work has attempted to address this issue by augmenting them with retrieval (Shuster et al., 2021; Sun et al., 2023; Borgeaud et al., 2022); however, these models still struggle with hallucination problems in practice (Liu et al., 2023). This work explores the intriguing possibility of steering LLMs by prompting them to quote more of the curated sources of information they have memorized during pre-training, thereby reducing their tendency to generate false information. As illustrated in Figure 1, we explore whether adding phrases such as \u201cAccording to Wikipedia\u201d can guide LLMs to quote from Wikipedia, which is 1We publicly release all code at https://github.com/ orionw/according-to * Authors contributed equally \u00a0 \u00a0Memorized Documents from Pre-training\u00a0 The enzymes found in saliva are essential in beginning the process of digestion of dietary starches and fats. These enzymes also play a role in breaking down food particles djdjks s\u00a0break down starch to form other molecules\u00a0fo, thus protecting teeth from bacterial decay An amylase is an enzyme that catalyses the hydrolysis of starch (Latin amylum) into sugars. Amylase is present in the saliva of humans and some other mammals, where it begins the chemical process of digestion.\u00a0 With grounding prompt Without grounding prompt The initial digestion of starch happens in the mouth through our saliva.\u00a0The enzymes found in saliva are essential in beginning the process of digestion of dietary starches.\u00a0These include amylase, which is\u00a0an enzyme that catalyses the hydrolysis of starch (Latin amylum) into sugars. Salivary amylase\u00a0begins the chemical process of digestion. The initial digestion of starch occurs in the mouth.\u00a0 Enzymes in saliva break down starch into smaller molecules. 
In what part of the digestive tube do\u00a0you expect the initial digestion of starch?\u00a0 According to Wikipedia, in what part of the digestive tube do\u00a0you expect the initial digestion of starch?\u00a0 \ud83e\udd16 Figure 1: Prompting LLMs to respond with quotes directly from pre-training data (shown in purple). Prompting increases the proportion of quoted information. presumably observed in the pre-training corpus. We find empirical evidence that this is attainable using current LLMs (both open and closed source). Our study is inspired by two recent research areas. First, larger LLMs can be more effectively guided using natural language prompts (Ouyang et al., 2022; Wan et al., 2023; Ganguli et al., 2023). Second, as LLMs grow in size, their ability to remember facts and statements from pre-training improves (Kandpal et al., 2022; Tirumala et al., 2022; Carlini et al., 2023, 2020). Thus, we seek to steer LLMs to use their memorization for a positive purpose: producing more grounded outputs. A key step in this study is quickly determining whether generated outputs overlap significantly with pre-training data; i.e., efficiently performing membership testing via a DATA PORTRAIT (Marone and Van Durme, 2023). We design a new metric called QUIP-Score, short for Quoted Information Precision, which builds on DATA PORarXiv:2305.13252v2 [cs.CL] 26 Feb 2024 \fTRAITs and takes advantage of its speed and efficiency. QUIP-Score then calculates n-gram overlap, quantifying how much of a passage is formed of spans that are exactly contained in the corpus. To illustrate according-to prompting, we perform experiments based on the task of opendomain question answering (ODQA), for which provenance-grounded answers are of particular importance. We collect human-authored prompts designed to steer generations toward information grounded in our target corpora (Wikipedia, PubMed, and the U.S. legal tax code). We observe that across all human-authored prompts, we can increase the amount of overlap with the chosen corpora by 5-105% while maintaining or even improving the downstream performance. We show results across numerous datasets and models, including both openand closed-sourced LLMs. Interestingly, we also observe the opposite phenomenon \u2013 it is possible to discourage LLMs from grounding via prompts that either discourage grounding or encourage grounding to other corpora. For example, we find this can decrease overlap with Wikipedia while lowering performance on downstream tasks that rely on Wikipedia content. We conduct scaling experiments on different model sizes, which indicate that as size increases, so does the effectiveness of our proposed approach. This suggests that hallucinations may diminish with further scaling of instruction following LLMs. In summary, we present according-to prompting, a simple and effective approach to improving an LLMs\u2019 ability to generate more factual information. Additionally, we introduce QUIP-Score, an efficient metric for measuring groundedness of LLM generations against their pre-training corpus. We experiment with various prompting strategies across models, datasets, and scaling trends, and we find that according-to methods consistently improve groundedness under our introduced metric. 2 Related Work Memorization in LLMs. Large language models have been observed to memorize their training data (Carlini et al., 2020; Chang et al., 2023, among others). 
This is problematic when web-scraped training data contains sensitive personal data or lowquality information sources (Dodge et al., 2021; Luccioni and Viviano, 2021). However, it can be beneficial for models to memorize content from carefully curated and trusted corpora, where careful de-duplication (Lee et al., 2022a) and curation strategies (Feng et al., 2022) can improve language model quality (Gao et al., 2020). Work on analyzing memorization has proposed measuring n-gram overlap against the first page of Google Search results as a proxy for memorization, using exact matches (Carlini et al., 2020) and BLEU (Levy et al., 2021). We measure quoting (and thus, memorization in closed-book generation settings) building off of Marone and Van Durme (2023) who propose using membership testing tools that they label DATA PORTRAITs. As one implementation, they use a Bloom Filter (Bloom, 1970) for storing n-grams. We use this method for checking membership in a corpus as it allows us to build a fast, lightweight, and scalable metric for measuring quotation against large amounts of data (see Section 3.1 for details). Hallucination and grounding. Numerous studies (De Cao et al., 2021; Li et al., 2022; Weller et al., 2023) have demonstrated that LLMs struggle with both hallucination and factuality, leading to frequent inaccuracies and outright falsehoods. Previous research has attempted to alleviate this problem in various ways, including retrieving grounded documents before generation (Sun et al., 2023; Borgeaud et al., 2022; Mallen et al., 2023; Weller et al., 2022), applying new decoding approaches (He et al., 2022), post hoc tuning of LLMs (Menick et al., 2022; Lee et al., 2022b), and analyzing the model\u2019s output training data (Han and Tsvetkov, 2022; Park et al., 2023). Crucially, these works have a common thread: showing that grounding LLM generations results in fewer hallucinations (Lazaridou et al., 2022; Andriopoulos and Pouwelse, 2023). Our work focuses on a subset of grounding, quoting, and is driven by the simple premise that anything quoted is grounded and not hallucinated. Our work therefore builds off the established research and is complementary to it, as we investigate a novel yet straightforward approach to steer LLMs towards more factual responses. Attribution. A related line of work is attribution of generated text to their sources (Rashkin et al., 2021; Bohnet et al., 2022). Our work is related to this literature in that, our approach allows provable attribution to macro-level sources of information, such as Wikipedia or medical articles. However, we do not focus on offering any fine-grained attribution to the originating source documents. Given these \fdistinctions our focus here is different from \u2013and complementary to\u2013 the attribution literature. LLM Steerability via prompting. The larger LMs become, the easier they are to steer with natural language prompts (Kandpal et al., 2022; Carlini et al., 2023; Mishra et al., 2022a; Srivastava et al., 2023). Several works (Mishra et al., 2022b; Chung et al., 2022; Wang et al., 2022b; Wan et al., 2023) have shown that larger instructiontuned models are more easily steered than smaller and non-instruction-tuned models. This is desirable in our setting, as we seek to use these capabilities of LLMs for a novel application of steerability: quoting more from a given corpus. Improving LLMs through prompting. 
Much recent work has focused on improving LLM performance on various benchmarks by improving the prompt given to the model. A sub-genre of these works includes those that ask the model to produce text before generating the answer, such as Chainof-Thought (Wei et al., 2022) or Recitation-based Generation (Sun et al., 2022). We differ from these works by generating the answer first, then the explanation, indicating that our performance gains are not due to the same phenomena. Furthermore, our paper\u2019s focus is on improving LLM\u2019s ability to quote, rather than improving end-task performance. 3 Methodology Defining Grounding There are many definitions of grounding in the community (Bohnet et al., 2022; Mallen et al., 2023). While acknowledging the broad scope of the term, we adopt a narrow definition: we call generated text grounded with respect to a corpus if it is an exact quotation from the corpus. This is more stringent than some definitions because it does not count semantic grounding, e.g. when lexical forms do not match; however, quotation is one form of grounding that is intuitive and simple to measure.2 Hence, we use quoting and grounded interchangeably. 2We leave it to future work to expand our metric to the semantic grounding case, as semantic grounding (e.g. finding paraphrases) while matching the generations over an entire corpus is non-trivial; using retrieval systems biases the model towards lexical match (even for dense retrieval, c.f. MacAvaney et al. (2022)) and existing work in attribution/grounding does not scale to allow grounding to numerous (2+) passages. 3.1 QUIP-Score: Measuring Grounding to Pre-Training Data In order to understand grounding and quoting from model pre-training data, we need a metric to measure quoting. An intuitive approach is to use an ngram measure, which can compare n-grams found in an LLM\u2019s generation to those in a corpus. Such a quotation metric must be efficient to scale to large reference corpora. Problems with existing N-gram metrics Existing n-gram metrics like BLEU or ROUGE store counts of n-grams from the references. However, storing counts requires the use of data structures like a conventional hashtable, which is computationally difficult for a large corpus like Wikipedia. We estimate naively scaling sacrebleu (Post, 2018) to use Wikipedia as a reference would consume \u223c1.5 TB of RAM (Appendix C). QUIP-Score To enable efficient measurement of quoting from pre-training data, we start with a Bloom filter-based DATA PORTRAIT (Marone and Van Durme, 2023), which allows for both faster and more memory efficient boolean membership queries than allowed by methods that use a hashtable to store counts. The Bloom filter approach enables one-time indexing of a large corpus with constant time lookups. We define our new metric, QUIP-Score, as the character n-gram precision of overlap between generated output and the pre-training corpus.3 More formally, for generation Y and text corpus C: QUIP(Y ; C) = P gramn\u2208Y 1C(gramn) |gramn \u2208Y | , where 1(.) is an indicator function implemented with the DATA PORTRAIT: 1 if gramn \u2208C else 0. Thus, a score of 0.5 would indicate that 50% of the generated text n-grams are found in the pre-training corpus. We macro-average this quantity over a set of generations to obtain a single performance number for a given test dataset. 
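The QUIP-Score computation above is simple enough to sketch directly. The snippet below is a minimal illustration, assuming a plain Python set as a stand-in for the Bloom-filter DATA PORTRAIT (the real index is far more memory-efficient) and treating the n-gram length as a parameter; macro-averaging over a test set is then just the mean of the per-generation scores.

```python
# Minimal QUIP-Score sketch: character n-gram precision of a generation against
# a reference corpus, via a boolean membership oracle. A Python set stands in
# for the Bloom-filter DATA PORTRAIT used in the paper.

def char_ngrams(text, n):
    """All overlapping character n-grams of `text` (empty if the text is shorter than n)."""
    return [text[i:i + n] for i in range(len(text) - n + 1)]

def build_portrait(corpus_docs, n):
    """Index every character n-gram in the corpus for exact membership tests."""
    portrait = set()
    for doc in corpus_docs:
        portrait.update(char_ngrams(doc, n))
    return portrait

def quip_score(generation, portrait, n):
    """Fraction of the generation's character n-grams found in the corpus."""
    grams = char_ngrams(generation, n)
    if not grams:
        return 0.0
    return sum(g in portrait for g in grams) / len(grams)

# Toy usage; the macro-averaged dataset score is just the mean over generations.
corpus = ["Amylase is present in the saliva of humans and some other mammals."]
portrait = build_portrait(corpus, n=25)
print(quip_score("Amylase is present in the saliva of humans.", portrait, n=25))
```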
QUIP-Score Implementation We build the DATA PORTRAIT on the version of Wikipedia included in the Pile,4 as it allows for us to exactly test the pre-training data included in many mod3QUIP scores are not comparable across datasets, as they are specific to a given corpus. This is acceptable for our experiments that compare generations against one corpus. 4wikipedia/20200301.en \fels like GPT-J5 (See \u00a76 for experiments applying QUIP-Score to other corpora). We use characterbased n-grams as opposed to token-based, as different models have different tokenization schemes. Furthermore, character-based n-gram metrics have widespread usage in machine translation with metrics like chrF/chrF++ (Popovi\u00b4 c, 2015, 2017). We chose 25 character grams for the sketch6 (approximately 5 words) as we found it empirically gave meaningful results (neither too small nor too large an n-gram). Note that because the DATA PORTRAIT checks for exact matches it is sensitive to orthographic variation (e.g. case, whitespace), We view QUIP-Score as a lower bound on actual quoting performance. 3.2 Validity of QUIP-Score As QUIP-Score is an n-gram metric, it inherits many of the same qualities of established metrics like BLEU and ROUGE. Further, many previous works have established the connection between higher amounts of grounding and fewer hallucinations (\u00a72). Building upon these previous studies, we establish that QUIP-Score (1) accurately measures quoting like other n-gram metrics and (2) is correlated with fewer hallucinations. We first conduct a straightforward experiment: what is the QUIP-Score when measuring entirely quoted documents (e.g. exact Wikipedia pages) vs documents that are not necessarily quotes (e.g. from the Pile)? We randomly sample 1000 documents from each. We find that the average QUIPScore for Wikipedia documents is 99.9%7 with a standard deviation of 0.1% while on the Pile it is 17.0% \u00b1 0.8%. Thus we can see that QUIP-Score correctly measures full quotations and that random text has approximately 17% QUIP-Score. Next, we consider partial, contextual quotations as found in LLM generations from NQ. We bin generations by QUIP-Score ranges, sampling 50 from each bin. We then conduct two manual analyses: (1) how much of the generations are a quotation (none, some, majority, or all/nearly all) and (2) whether the generation is a hallucination (using gold provenances and answers, plus Google Search when unsure). Table 1 shows that as QUIP-Score 5Note, for several models evaluated here (e.g. OpenAI models) the exact Wikipedia version trained on is unknown. 6Not having multiple n-gram sizes like BLEU typically does allows us to significantly reduce memory consumption and had similar results to averaging across sizes. 7QUIP-Score is 99.9 due to a single very short sampled document, where length < n-gram size QUIP-Score None Some Major. All Halluc. 0.0 \u2013 0.25 12% 76% 12% 0% 20% 0.25 \u2013 0.5 0% 16% 84% 0% 22% 0.5 \u2013 0.75 0% 0% 80% 20% 12% 0.75 \u2013 1.0 0% 0% 48% 52% 6% Table 1: Random sampled generations from NQ, binned by QUIP-Score. As QUIP-Score increases, quoting increases and hallucinations decrease. Major. stands for Majority, while Halluc. stands for Hallucination %. increases, the amount of quotations increases and the amount of hallucinations decreases. 
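As a rough companion to the validity check above, the same scoring function can be reused to compare fully quoted documents against text from another corpus; `wiki_sample` and `other_sample` are placeholder lists of sampled documents, and `quip_score`/`portrait` come from the earlier sketch.

```python
# Rough companion to the sanity check above: fully quoted documents should score
# near 1.0, while text drawn from another corpus should score much lower.
import statistics

def mean_quip(docs, portrait, n=25):
    return statistics.mean(quip_score(d, portrait, n) for d in docs)

# print(mean_quip(wiki_sample, portrait))   # expected to be close to 1.0
# print(mean_quip(other_sample, portrait))  # expected to be much lower
```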
We do not expect these results to be surprising, as they have been demonstrated by a large amount of literature on n-gram metrics (Belz and Reiter, 2006; Reiter and Belz, 2009; Popovi\u00b4 c, 2015), and by the grounding and hallucination literature (Lazaridou et al., 2022; Borgeaud et al., 2022; Andriopoulos and Pouwelse, 2023). However, this analysis empirically demonstrates that using quoting for grounding and QUIP-Score as the n-gram metric retains these desired properties. 4 Grounding via according-to Prompting The previous results show 1) that we can efficiently measure quotation rate and 2) that more quotations correlate with fewer hallucinations. Next, we seek to improve knowledge grounding by causing LLMs to quote directly from trusted resources seen during training.8 We hope to access helpful memorized content: strings copied from high-quality or trusted documents. We induce this behavior by taking a normal task prompt (e.g. an ODQA question) and appending an instructional phrase that encourages grounding such as \u201cRespond by using information from Wikipedia in your response\".9 We call this strategy according-to prompting. Our experiments measure the change in QUIP-Score of generations from a according-to prompt vs one without the extra instruction (i.e. a null prompt). To verify that prompts can both increase and decrease grounding, we also include prompts that are anti-grounding (e.g. \u201cRespond by using information from [another source] in your response\" or \u201cRespond without using any information from Wikipedia.\u201d) This allows us to test the hypothesis that models can ground (or not ground) to a 8Since we want to know what the LLM recalls on its own, we specifically do not use any retrieval models. 9We tried appending, prepending, and their combinations in early experiments and found that appending the grounding/anti-grounding prompts performed the best. \fgiven corpus when asked because of the semantic meaning of the prompt, rather than the length of the prompt. As prompting is notoriously brittle (e.g. changing the phrasing can affect the results) we provide a number of grounding and anti-grounding prompts to test whether these prompts provide consistent gains or are merely prompting artifacts (see Table 2 for the list of prompts used). 4.1 Datasets We use a variety of datasets to test if LLMs are consistent and to check whether grounding affects the end-task performance of a given dataset. To best measure the grounding of the output however, the model generations must be long enough to have many n-grams that can be measured. Thus, we test on long-form question answering (QA), and for datasets that do not lend themselves well to longform output (e.g. short-form QA) we ask the models to generate both the answer and a corresponding explanation whose n-grams can be measured. Note that our purpose is not to improve stateof-the-art performance on these tasks, as our main research question is to analyze the grounding of model outputs. However, we note that according-to prompting often achieves competitive or improved performance compared to other prompting baselines, as it naturally correlates with the ability to answer questions from the grounded material. We use the following datasets, each of which targets factual knowledge in Wikipedia: ELI5 (Fan et al., 2019) (the KILT Petroni et al. (2021b) version), Natural Questions (Kwiatkowski et al., 2019), TriviaQA (TQA) (Joshi et al., 2017), and HotpotQA (Yang et al., 2018). 
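To make the prompt construction concrete, here is a small sketch of how a null, grounding, or anti-grounding variant could be assembled by appending the instruction after the question; the grounding strings are taken from the examples above, and the model call is left as a hypothetical placeholder.

```python
# Sketch of according-to prompt construction: the instruction is appended after
# the task question; the null prompt is the question alone. `ask_llm` is a
# hypothetical placeholder for whichever model API is being queried.

GROUNDING = "Respond by using information from Wikipedia in your response."
ANTI_GROUNDING = "Respond without using any information from Wikipedia."

def build_prompt(question, instruction=None):
    return question if instruction is None else f"{question} {instruction}"

question = "In what part of the digestive tube do you expect the initial digestion of starch?"
for variant in (None, GROUNDING, ANTI_GROUNDING):
    prompt = build_prompt(question, variant)
    print(prompt)
    # generation = ask_llm(prompt)                   # hypothetical model call
    # print(quip_score(generation, portrait, n=25))  # grounding measured as before
```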
These datasets comprise a mixture of shortand long-form plus singleand multi-hop QA. \u00a7A provides further details. 4.2 Models and Prompting We test a wide array of models in our experiments including most OpenAI models (Wang et al., 2023), T5-based models (T5 adapted to language modeling, Raffel et al. 2020; Lester et al. 2021 and FLANT5 Chung et al. 2022), GPT-J instruction tuned10 (Wang and Komatsuzaki, 2021), and Koala (Geng et al., 2023) (a Llama variant, Touvron et al. 2023). By doing so, we provide (1) results on both open and closed-source models, (2) results for models using many variations of instruction-tuning data, and (3) models ranging from 220 million param10https://huggingface.co/nlpcloud/instruct-gpt-j-fp16 eters to 175B models. Note that our experiments consist solely of providing prompts to the models and do not include fine-tuning (as the goal is to see what these models can do zero-shot). For short-form QA datasets, we prompt models to produce an answer plus an explanation, then measure QUIP-Score of the latter. We found smaller models (e.g. < 15B parameters) were not able to follow instructions to provide both answer and explanation in a parseable format from just one prompt. Thus, we do two-step prompting with them, first for the answer, then for the explanation (and appending the grounding prompt, if used). \u00a7B.2 provides prompting details and full text of the prompts used. 5 Results We first analyze a wide range of according-to prompts on ChatGPT. We then test the null prompt and the best performing according-to prompt on a variety of other models for further analysis. Table 2 shows results for different prompts using ChatGPT. There is a clear trend under which all according-to prompts perform similarly or improve upon QUIPScore compared to the null. QUIP-Scores for the anti-grounding prompts are the same or worse than the null prompt (i.e. no additional text) and significantly worse than the according-to prompts. Surprisingly, we find that according-to prompts also perform similarly, and sometimes even better than, the null prompt on end task performance (e.g. up to a 6% improvement on NQ, 2.5% on HotpotQA). This is not the case for ROUGE-L on ELI5, as that metric measures lexical similarity to Reddit, rather than similarity to Wikipedia. We use these results on ChatGPT to inform our next experiments, using the null prompt and the best grounding prompt (\u201cRespond to this question using only information that can be attributed to Wikipedia\u201d) in our future experiments due to cost. 5.1 Results from Other Models We show the relative difference of the grounding prompt over the null prompt for more models in Table 3, which further confirms our findings (for the absolute instead of relative numbers, see Appendix B.2). For example, using the grounding prompt with Text-Davinci-003 improves over the null prompt by around 15% QUIP-Score and 520% for the specific task. For all models evaluated, the grounding prompt improves in both end-task performance and QUIP-Score by 5-105%. \fPrompt TQA NQ Hotpot ELI5 (appended after the question) QUIP EM QUIP EM QUIP F1 QUIP R-L \u2205(no additional prompt) 31.6 77.8 32.8 32.9 28.3 35.7 24.1 22.7 grounding prompts \"Based on evidence from Wikipedia:\" 31.1 77.3 32.8 34.0 28.1 35.9 26.3 22.3 \"As an expert editor for Wikipedia, I am confident in the following answer.\" 31.7 73.2 33.0 30.2 28.7 35.3 25.5 22.7 \"I found some results for that on Wikipedia. 
Here\u2019s a direct quote:\" 31.7 70.1 33.8 27.6 28.1 33.1 27.2 21.0 \"Reference Wikipedia when answering the following question.\" 32.8 75.9 34.6 34.4 28.9 35.9 25.7 22.0 \"Answer according to Wikipedia.\" 33.6 78.8 34.3 34.8 29.2 36.6 26.5 21.7 \"Go to https://www.wikipedia.org and find direct quotes to answer the question. Response: \"\" 34.5 72.7 32.9 31.7 30.4 35.5 25.8 20.4 \"Respond by using information from Wikipedia in your response.\" 34.9 76.3 35.3 32.9 29.9 36.1 26.3 21.9 \"Respond to this question using only information that can be attributed to Wikipedia.\" 35.7 76.6 37.0 33.9 30.4 36.2 28.0 21.5 antigrounding \"Respond by using information from Reddit in your response.\" 26.1 75.8 26.5 31.6 22.4 35.0 21.9 22.2 \"Respond by using information from Github in your response.\" 26.7 74.3 28.2 32.4 23.2 33.7 24.3 22.0 \"Respond without using any information from Wikipedia in your response.\" 30.4 76.9 32.0 32.0 26.8 32.9 24.7 22.1 Zero-Shot No-Retrieval SOTA 68.2 24.9 44.6 22.7 Retreival-Augmented SOTA 89.4 60.4 51.4 26.5 Table 2: Impact of various prompts on the grounding (QUIP-Score) and performance scores, using ChatGPT (\u00a75). The top row is the null prompt (no additional prompt other than the question), the middle section includes grounding prompts, and the last section includes anti-grounding prompts. We find that grounding prompts generally improve the QUIP-Score while anti-grounding prompts generally reduce QUIP-Score. Colored cells indicate changes (gains, losses, or the same) relative to the null row. ELI5 ROUGE-L (R-L) is based on similarity to Reddit rather than Wikipedia. See \u00a7B.1 for sources of SOTA results. TQA NQ Hotpot ELI5 Model QUIP EM QUIP EM QUIP F1 QUIP R-L Text-Davinci-003 +14.7% +5.3% +14.7% +20.6% +14.4% +7.2% +16.5% -3.8% GPT-4 +17.6% -2.3% GPT-J Instruct +12.1% +15.2% +13.9% +18.1% -2.5% Koala 7B +5.1% +6.3% +5.0% +35.5% +14.6% FLAN-T5 XXL +43.3% +41.5% +20.7% +105.2% +48.4% Table 3: Percent improvement of according-to over null prompt. The according-to prompt improves performance in nearly every dataset and metric by 5-15%. We omit EM/F1 scores of smaller models for which our prompting methods yield the same answer for grounding and null (\u00a74.2). Due to cost, we only evaluate GPT-4 on ELI5. Thus, our findings hold for a wide variety of models and model sizes \u2013 even when prompts are not tuned for the specific model being prompted, indicating the generality of our approach. 5.2 Impact of Model Size Does model size impact their ability to quote from their pre-training data? We answer this question using QUIP-Score in Figure 3, which shows that smaller models perform the same (for FLAN-T5 models) or worse (for OpenAI models) with a grounding prompt as opposed to the null prompt. However, larger models perform significantly better with the grounding prompt as opposed to the null prompt, for both OpenAI models and FLANT5 models. We can conclude that a model\u2019s ability to quote from its pre-training data improves with size. 5.3 Impact of Entity Popularity Another potential factor influencing generation of memorized content is the popularity of the entities mentioned in a question (Kandpal et al., 2022; Carlini et al., 2023). Previous work has shown that entity co-occurrence (as measured by the number of times in the pre-training set that the entities in the question and in the answer co-occur in the same passage) is strongly correlated with task performance (Kandpal et al., 2022). 
We use their code and data (from the Pile) to explore whether QUIPScore correlates with co-occurrence frequency. Due to the imbalance between co-occurrence counts, we sample 400 instances (or as many as available) from each dataset and co-occurrence frequency bin.11 We measure the QUIP-Score on these instances using the output generations from ChatGPT on both grounding and null prompts. Figure 2 shows that QA entity popularity is positively correlated with QUIP-Score for both grounding and null prompts, more so for grounding. We find that the model better recalls information from Wikipedia when QA entities frequently co-occur. 11See Kandpal et al. (2022) for frequency bin design details. \f0 (0, 10] (10, 100] (100, 1000] (1000, 10000] Frequency in Pre-training Data 0.0 0.2 0.4 QUIP-Score TriviaQA Null Grounded 0 (0, 10] (10, 100] (100, 1000] (1000, 10000] Frequency in Pre-training Data 0.0 0.2 0.4 QUIP-Score Natural Questions Figure 2: Impact of entity popularity on QUIP-Scores, showing that models are better able to quote pre-training text about popular entities. The x-axis shows how many times the given entity relationship was found co-occurring in pre-training data. Bars indicate 1 standard error. We use the ranges following (Kandpal et al., 2022). base large xl xxl Model Size 0.15 0.20 0.25 0.30 QUIP-Score Prompt Type Grounded Null ada babbage curie davinci Model Size 0.15 0.20 0.25 0.30 QUIP-Score Figure 3: Model size vs QUIP-Score performance using FLAN-T5 (top) and OpenAI (bottom) models. As model scale increases, so does performance. At smaller model sizes, the grounding prompt is not more effective than the null prompt, but gains efficacy with model size. Error bars indicate 1 standard error. 5.4 Impact of Instruction Tuning One potential reason for why these models can recall their pre-training data on request is a better capability to instruction-follow. We test this hypothesis in Figure 4 that compares T5-11B compared to FLAN-T5-11B. We find that instruction-tuning does help, as the QUIP-Scores for T5-v1.1-Adapt are similar between grounding and null prompts, while the FLAN-T5 model has a large difference between the null and grounding prompt (roughly 2x better). 5.5 Qualitative Examples Figure 5 shows example outputs from a variety of models. When queried with according-to prompts, the model generates much larger chunks of texts that occur in Wikipedia (shown in purple). T5 FLAN-T5 Model Size 0.00 0.05 0.10 0.15 0.20 0.25 0.30 QUIP-Score Prompt Type Null Grounded Figure 4: Comparing instructed-tuned FLAN-T5 XXL to non-instruction tuned T5-v1.1-Adapt XXL. Note that grounding has a larger impact on instruction-tuned models as compared to non-instruction tuned. It is important to note that although the text may be grounded, it does not mean the output generation is correct with respect to the question. For example, the TriviaQA example shows that both models predicted the incorrect quote for Smokey the Bear, although the according-to explanation was better grounded in Wikipedia. 6 Grounding to Other Corpora In the previous experiments we\u2019ve shown that LLMs can ground to Wikipedia upon request. However, are according-to prompts limited to the Wikipedia domain only? To answer this question we build two more DATA PORTRAITs, one on PubMed articles and one with the U.S. legal tax code applying to tax year 2022. 
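As a toy illustration of extending the approach beyond Wikipedia, the sketch below (reusing `build_portrait` and `quip_score` from the earlier snippet) builds one membership index per corpus and scores the same generation against each; the corpora here are single-sentence stand-ins, not the actual PubMed or tax-code collections.

```python
# Toy sketch of scoring one generation against several corpus "portraits".
# QUIP-Scores remain corpus-specific and are not comparable across corpora.

corpora = {
    "wikipedia": ["Amylase is present in the saliva of humans and some other mammals."],
    "pubmed": ["Salivary alpha-amylase initiates the hydrolysis of starch in the oral cavity."],
    "tax_code": ["Gross income means all income from whatever source derived."],
}
portraits = {name: build_portrait(docs, n=25) for name, docs in corpora.items()}

generation = "Amylase is present in the saliva of humans."
print({name: round(quip_score(generation, p, n=25), 3) for name, p in portraits.items()})
```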
We then evaluate on datasets where having such background knowledge would be helpful and evaluate both the downstream task performance and QUIP-Score, similar to the previous Wikipedia experiments. Datasets. We use datasets in the MultiMedQA benchmark suite (Singhal et al., 2022) to test \fPrompt Generated Text 1. Question (ELI5) Why is it that every time we put something down our throat toothbrush, finger etc. we start to gag but when we but food and water down our throat we don\u2019t gag at all?, Null Prompt (ChatGPT) This is because our gag reflex is triggered by objects that touch the back of our throat and activate sensory receptors called the pharyngeal receptors. When we eat or drink, the muscles in our throat automatically contract and move food and liquid down to the stomach, bypassing the pharyngeal receptors and avoiding the gag reflex. However, when we put non-food objects like a toothbrush or finger in our throat, they can activate these receptors and trigger the gag reflex. Grounded Prompt (ChatGPT) The gag reflex is a natural protective mechanism of the body that can be triggered by stimulation of the soft palate, the back of the tongue or the back of the throat. It is a reflex contraction of the muscles of the back of the throat that can cause vomiting or choking. The reflex can be more easily triggered by foreign objects, such as toothbrushes or fingers, that are not natural to the body, compared to food and water which are familiar substances that we regularly ingest without triggering the reflex. Additionally, the shape and texture of the foreign object may also contribute to triggering the gag reflex. 2. Question (NQ) In what part of the digestive tube do you expect the initial digestion of starch? Null Prompt (ChatGPT) The initial digestion of starch occurs in the mouth; enzymes in saliva break down starch into smaller molecules. Grounded Prompt (ChatGPT) The initial digestion of starch happens in the mouth; Salivary amylase, an enzyme found in the mouth, is responsible for breaking down starch into simpler sugars. 3. Question (ELI5) Why do towels that feel so rough on the skin dry you so much better, but towels that feel soft don\u2019t dry well at all? Null Prompt (Davinci) Towels that feel rough on the skin are usually made of a coarser fabric, such as terry cloth, which is made of loops of cotton yarn. This fabric is more absorbent than softer fabrics, such as velour or microfiber, which are made of finer fibers. The loops of terry cloth fabric create more surface area for the towel to absorb water, which is why it is more effective at drying than softer towels. Grounded Prompt (Davinci) Towels that feel rough on the skin are usually made of a material with a higher absorbency, such as cotton or linen. These materials are able to absorb more water than softer materials, such as microfiber or polyester. The absorbency of the material allows it to draw moisture away from the skin more effectively, resulting in better drying. Soft towels, on the other hand, are usually made of materials with lower absorbency, which means they are not able to absorb as much water and therefore do not dry as well. 4. Question (TriviaQA) According to Smokey The Bear, Remember... only you can what? Null Prompt (GPT-J-Instruct) \u201cOnly you can see me\"; Because fire requires a fuel source to burn, and smoke is the byproduct of a fire\u2019s burning process. Therefore, since smoke is created whenever a fire burns, the presence of smoke indicates that a fire is burning somewhere. 
Grounded Prompt (GPT-J-Instruct) \u201cOnly you can remember\"; According to Smokey The Bear, memory is an internal mental process that is unique to each person. It is the ability to remember past experiences, along with the ability to recognize and interpret the information that is stored in our memories. This explains why, despite the fact that memory techniques vary between individuals, memory is a universal quality that all humans share. Therefore, the answer to the question is \u201cyes\". 5. Question (HotpotQA) Which magazine is based out of more countries, Adventist World or Baseball Digest? Null Prompt (FLAN-T5 XXL) adventist world; Adventist World is a magazine that is based out of many countries. Grounded Prompt (FLAN-T5 XXL) adventist world; Adventist World is a monthly magazine published by the Seventh-day Adventist Church. It is based in the United States, Canada, and many other countries. Figure 5: Example generations from various considered models. Purple text was found in Wikipedia. Note that for non-ELI5 datasets, models were prompted to generate the answer, a semicolon, and then the explanation (see \u00a74.2). Note that better grounding to Wikipedia does not always imply correct answers (see Question 4). Null Grounded Dataset QUIP EM QUIP EM According to PubMed... PubMedQA 54.1 48.2 59.6\u2191(+5.5) 49.6\u2191(+1.4) MedQA 45.1 53.3 45.9\u2191(+0.8) 54.0\u2191(+0.7) MedicationQA 36.7 N/A 39.6\u2191(+2.9) N/A According to the U.S. Tax Code... SARA 4.4 52.0 13.3\u2191(+8.9) 55.0\u2191(+3.0) Table 4: Results with ChatGPT using according-to prompts for PubMed (top) and the U.S. legal tax code (bottom). according-to prompts consistently improve quoting on the non-Wikipedia domains while maintaining task performance. MedicationQA does not have an automated evaluation metric, so only QUIP is reported. grounding to PubMed: PubMedQA (Jin et al., 2019) a reading comprehension task over PubMed abstracts, MedQA (Jin et al., 2020) consisting of multiple-choice questions from the US Medical Licensing Exam, and MedicationQA (Abacha et al., 2019) which asks open-domain questions about patient medications. Although these last two are not directly sourced from PubMed, they contain information that is likely to be found in it. Note that we do not give the model the abstract as typically done in PubMedQA, but instead evaluate closed-book in order to measure quotes from model parameters. In the legal domain, we use the SARA dataset (Holzenberger et al., 2020) consisting of tax cases to be evaluated using natural language inference.12 Results. The results in Table 4 with ChatGPT show that according-to prompts improve end-task performance and QUIP-Scores. On SARA, QUIPScores more than triple, while also minorly increas12As these datasets have different formats, e.g. NLI and multiple choice, we change the prompt slightly to accommodate them (Appendix D). We use the test set for all datasets. \fing performance. In the medical domain, grounding to PubMed improves performance slightly as well, and improves QUIP scores on all datasets. 7 Discussion and Future Implications Our results strongly suggest that LLMs can be steered via prompting to increase the amount by which they quote human-authored sources in their training data. This finding has strong implications not just for our considered tasks, but also for a wide array of other task spaces in which provenance grounding is important. 
We note that our according-to prompting strategy is orthogonal to other directions in LLM grounding, including using retrieval augmentation, and as according-to prompting is simple and generally increases both grounding and task performance we would encourage future research to try our approach in tandem. 8" + }, + { + "url": "http://arxiv.org/abs/2305.07614v2", + "title": "NevIR: Negation in Neural Information Retrieval", + "abstract": "Negation is a common everyday phenomena and has been a consistent area of\nweakness for language models (LMs). Although the Information Retrieval (IR)\ncommunity has adopted LMs as the backbone of modern IR architectures, there has\nbeen little to no research in understanding how negation impacts neural IR. We\ntherefore construct a straightforward benchmark on this theme: asking IR models\nto rank two documents that differ only by negation. We show that the results\nvary widely according to the type of IR architecture: cross-encoders perform\nbest, followed by late-interaction models, and in last place are bi-encoder and\nsparse neural architectures. We find that most information retrieval models\n(including SOTA ones) do not consider negation, performing the same or worse\nthan a random ranking. We show that although the obvious approach of continued\nfine-tuning on a dataset of contrastive documents containing negations\nincreases performance (as does model size), there is still a large gap between\nmachine and human performance.", + "authors": "Orion Weller, Dawn Lawrie, Benjamin Van Durme", + "published": "2023-05-12", + "updated": "2024-02-26", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.CL" + ], + "main_content": "Introduction Recent work in natural language processing (NLP) has shown that language models (LMs) struggle to understand text containing negations (Ravichander et al., 2022; McKenzie et al., 2022) and have poor performance compared to humans. This unresolved problem has downstream implications for information retrieval (IR) models, which use LMs as the starting backbone of their architectures. However, work on negation in IR has mainly focused on pre-neural (e.g. no LM) retrieval (Kim and Kim, 1990; McQuire and Eastman, 1998; Averbuch et al., 2004; Kim et al., 2019), with no research into how negation affects modern neural IR. This failure to understand negation in IR can lead to devastating consequences in high stakes 1Code and data are available at https://github.com/ orionw/NevIR Had a seizure Now what? Hold the person down or try to stop their movements. Put something in the person's mouth (this can cause tooth or jaw injuries) Administer CPR or other mouthto-mouth breathing during the seizure. Give the person food or water until they are alert again.\u00a0 Figure 1: Negation is something not well understood by IR systems. This screenshot shows Google Search making a deadly recommendation because of its failure to catch the negation in the article (e.g. \u201cdo not ...\"). situations, like the prominent case where Google Search told users what to do during a seizure by listing off bullet points from a website that was specifically specifying what not to do (Figure 1). One can easily imagine other serious failure cases in high-stakes domains such as law, education, or politics. 
Even for casual everyday usage, a lack of understanding of negation by neural IR ignores an entire category of user queries, such as \u201cWhere should I not stay in [vacation town]?\", \u201cWho did not win an Oscar in 2023?\", or \u201cWhat information has OpenAI failed to release about GPT-4?\" We aim to fill this gap in the literature by providing a benchmark for Negation EValuation in Information Retrieval, dubbed NevIR (pronounced \u201cnever\"). NevIR builds off of existing work in negation (Ravichander et al., 2022) by using 2,556 instances of contrastive document pairs that differ only with respect to a crucial negation. We then crowdsource query annotations for the two documents in each pair, where each query is only relevant to one of the respective documents and is irrelevant to the other document (Figure 2). By doing so, we can test whether models correctly rank the documents when accounting for the negation. We find that nearly all IR systems ignore the negation, generally scoring one document of the arXiv:2305.07614v2 [cs.IR] 26 Feb 2024 \fBecause it is resistant to corrosion, nickel was occasionally used as a substitute for decorative silver. Nickel was also occasionally used in some countries after 1859 as a cheap coinage metal (see above), but in the later years of the 20th century, it was replaced by cheaper stainless steel (i.e. iron) alloys, except in the United States and Canada. Because it is resistant to corrosion, nickel was occasionally used as a substitute for decorative silver. Nickel was also occasionally used in some countries after 1859 as a cheap coinage metal (see above), but in the later years of the 20th century, it was replaced by cheaper stainless steel (i.e. iron) alloys, throughout the United States, Canada, and elsewhere in the Americas. Doc #2 Doc #1 What countries did not replace nickel with iron alloys in the 20th century? What countries replaced nickel with iron alloys in the 20th century? Doc #1 Doc #2 IR Model Ranked List 1 Doc #1 2 Doc #2 Doc #1 Doc #2 IR Model Ranked List 1 Doc #1 2 Doc #2 Figure 2: An example instance and the evaluation process. The initial documents from CondaQA (Ravichander et al., 2022) are used to create the queries via Mechanical Turk. The lower half shows the pairwise accuracy evaluation process, where the model must rank both queries correctly. In this example, the IR model scored zero paired accuracy, ranking Doc #1 above Doc #2 in both queries (and failing to take into account the negation). two higher for both queries. Furthermore, state-ofthe-art models perform nearly the same or much worse than randomly ranking the document pairs. We provide analysis of these results, showing that bi-encoder representations of the two documents are nearly identical despite negation words and that late-interaction models such as ColBERT ignore negation words in the MaxSim operator. We also show that continued fine-tuning of IR models on negation data provides some gains on NevIR, but still leaves significant room to improve (while also slightly hurting performance on traditional benchmarks such as MSMarco). We hope that our analysis will spur increased attention to the problem of negation in information retrieval and provide a dataset for IR training and evaluation. 2 Background 2.1 Motivation Information Retrieval (IR) is a broadly defined task of finding relevant pieces of information based on a query in natural language. The specifics of IR can vary broadly across languages, domains (e.g. legal), and purposes (e.g. 
counterarguments, lists, general factoids). Note that many of these specialized cases would be improved through a better understanding of negation, such as lists, counterarguments, and domain-specific language (e.g. legal or medical). Along with the improvement from neural IR, there has been a surge of interest in retrievalaugmented language models, such as RAG (Lewis et al., 2020), FiD (Izacard and Grave, 2021), and SeeKeR (Shuster et al., 2022). In just the last few months, generative retrieval has been productionized, with systems such as Google\u2019s Bard, Bing Chat, and You.com.2 These systems combine IR models with large language models, enabling them to find and generate responses to queries on the fly. Thus, as LMs and IR systems become more intertwined and used in production, understanding and improving their failure cases (such as negation) becomes crucial for both companies and users. 2.2 Neural IR Since 2020, neural models for information retrieval have generally outperformed traditional sparse methods (such as BM25) in most situations (Karpukhin et al., 2020; Khattab and Zaharia, 2020). Given a large collection of training data, these models are optimized using a contrastive loss in order to learn how documents are related to a 2https://bard.google.com/, https://www.bing.com/new, and https://you.com \fgiven query. These methods provide several advantages over sparse methods, including the ability to go beyond simple lexical matches to encode the semantic similarity of the natural language text. Recent work has focused on the ability of neural models to generalize to new domains, without any domain-specific training data (e.g. zero-shot). One prominent benchmark for this type of work is the BEIR dataset suite (Thakur et al., 2021) which evaluates models\u2019 generalization on a range of diverse IR datasets. Our work provides both zero-shot (no model fine-tuning) and standard train/test splits to accommodate both paradigms. 2.3 Negation in NLP Negation has also been an area where LMs typically perform below average (Li and Huang, 2009; He et al., 2017; Hartmann et al., 2021; Ettinger, 2020). Recent work on negation in NLP has shown that although LMs struggle with negation, it does improve with model scaling and improved prompting techniques (McKenzie et al., 2022; Wei et al., 2022). Despite scale improvements, these works (and other follow up works, c.f. Ravichander et al. (2022); Hossain et al. (2022)) have shown that LMs still struggle with negation and are in need of new datasets and methods to improve performance. As modern IR models use LMs as the backbone of their architectures, it is intuitive that negation will pose problems to IR systems as well. This problem is compounded as IR models are not able to scale to larger LMs as easily, due to efficiency and latency constraints on processing large amounts of documents in real-time. 2.4 Negation in IR Negation has been a weak point for information retrieval methods throughout the years. Early work in information retrieval (Kim and Kim, 1990; Strzalkowski et al., 1995) has demonstrated the difficultly of negation for non-neural methods like TFIDF (Sparck Jones, 1972) and BM25 (Robertson et al., 1995) when used out of the box. To the best of our knowledge, there is little to no published work on negation for neural models. The most similar area in IR is that of argument retrieval (Wachsmuth et al., 2018; Bondarenko et al., 2022), also included in the BEIR dataset, whose aim is to find a counterargument for the given query. 
However, these datasets implicitly ask the model to find the counterargument to the query through the task design and specifically don\u2019t include negation in the query. So although argument retrieval datasets contain a larger amount of negations compared to standard IR datasets like MSMarco (Nguyen et al., 2016), negation is not a conscious choice in the design of either the documents or the queries and is confounded by the implicit task definition. In contrast, we explicitly provide and measure the impact of negation on both documents and queries. Another recent work by Opitz and Frank (2022) incorporates features from Abstract Meaning Representation (AMR) parsing (including negation, as one of many) to improve SBERT training. However, they only evaluate negation for AMR parsing (and on AMR datasets) whereas we focus on negation in IR and create a benchmark for ranking. 2.5 Contrastive Evaluation Contrastive evaluation has emerged as a promising evaluation technique: constructing datasets that consist of minor differences but that test crucial distinctions (Gardner et al., 2020; Kaushik et al., 2019). For IR specifically, this has included testing sentence order (Rau and Kamps, 2022), lexical structures (Nikolaev and Pad\u00f3, 2023), general axiom creation (V\u00f6lske et al., 2021), paraphrases, mispellings, and ordering (Penha et al., 2022), LLM-based query and document expansion (Weller et al., 2023a), and factuality, formality, fluency, etc. (MacAvaney et al., 2022). We follow these works by evaluating not on a classical IR evaluation corpus, but rather with paired queries and documents. 3 Creating NevIR We test negation in neural IR using a contrastive evaluation framework, which has shown great utility in understanding neural models (Section 2.5). 3.1 Contrastive Documents We start by collecting pairs of documents that differ as minimally as possible but include negation, using the CondaQA (Ravichander et al., 2022) dataset as a starting point. CondaQA consists of \u201cin-thewild\" natural paragraphs that contain negation and human-edited versions of those paragraphs that either paraphrase, change the scope of the negation, or undo the negation. For our IR benchmark, we exclude the paraphrase edits, as they do not provide different semantic meanings for comparison. Thus, this allows us to compare the effect of the negation between document pairs with a minimal lexical \fStatistic Train Dev Test # Pairs 948 225 1383 Question 1 Length 10.9 11.1 11.0 Question 2 Length 11.2 11.4 11.4 Average Length Diff 0.95 1.05 1.01 Document 1 Length 112.5 113.0 113.7 Document 2 Length 115.6 116.8 116.8 Average Length Diff 4.39 4.71 4.16 Table 1: NevIR statistics, where length is measured in words. Note that the average length differences only take into account total length; for the distribution of unique word differences see Figure 3. difference (see Table 1 and Figure 3 for statistics). 3.2 Collecting Contrastive Queries To test whether IR models correctly rank the documents, we collect natural language queries for those document using workers on Amazon\u2019s Mechanical Turk. We ask workers to create one query for each of the two paragraphs, with four constraints: 1. The answer to the queries are the same for both paragraphs 2. The question is answered by a span (e.g. not a yes/no or boolean answer) 3. The question contains enough information to identify the relevant passage from a collection of documents (e.g. it contains relevant entity names, not just \u201cwhen was he born?\") 4. 
The question can only be answered by one of the two paragraphs (thus making the other paragraph irrelevant) Note that boolean questions would be relevant to both documents, and hence they were excluded. To help annotators understand the task, we allowed them to test their queries against a small neural cross-encoder model (all-mpnet-base-v2 from Reimers and Gurevych (2019)) but did not require them to. The annotation interface is in Appendix A. Through a series of initial pilot HITs, we found that annotators would typically quote verbatim from the passage and use the words that were only present in only one document. To prevent models from exploiting this shallow heuristic, we included a 5th constraint: not allowing workers to use any word in the query that was only present in one of the two documents. Note that this was an effective but not perfect constraint (as is shown by TF-IDF\u2019s 2% performance in Table 2), as any non-exact string match including subwords, plural versions, etc. would pass this validation check. We recruited annotators with greater than 99% HIT acceptance rate and greater than 5000 completed HITs. All annotators participated in two paid trial HITs where their work was assessed before moving on. Workers were paid $2.5 USD for approximately six minutes per HIT, for an average of $15 USD per hour. Overall, we had 28 unique annotators with an average of 91 query pairs each. 3.3 Dataset Statistics Dataset statistics are in Table 1, showing that the average number of words is around 11 for questions and 113 for documents. The average difference in word length between questions and documents is 1 and 4 respectively, showing that items in each pair are nearly the same length. The distribution of unique word differences between queries and documents is in Figure 3 and shows that most queries have small differences of 2 to 5 words, although some differ only by a single negation word and some differ by more than five. The difference between the two documents is much more variable, with about 5-10 different words between them. 3.4 Human Performance To verify that this dataset is trivial for humans, we asked three annotators to perform the ranking task on 10 randomly sampled test instances. In all three cases, all human annotators ranked all queries correctly, indicating the simplicity of the task. 4 Experimental Settings 4.1 Metric In early investigations we observed that IR models tended to rank one document above the other for both queries. This motivates our usage of a pairwise accuracy score to avoid score inflation when models don\u2019t actually understand the negation. We start by having the IR model rank both documents for each query. Then, if the model has correctly ranked the documents for both queries (flipping the order of the ranking when given the negated query) we know that the model has correctly understood the negation and the pair is marked as correct. 4.2 Models We evaluate a wide variety of models in order to show a comprehensive evaluation across common \f0 5 10 15 # of Different Words 0 100 200 300 400 Count 0 10 20 30 40 # of Different Words 0 25 50 75 100 125 Count Figure 3: The distribution of the number of different (e.g. unique) words between the queries (left) or documents (right) in each pair. The average length differences are shown in Table 1. neural IR model types. We note that although there are other models we do not use (as well as many different strategies for model training), all the major types of retrieval models are accounted for here. 
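The paired-accuracy metric can be written down in a few lines. The sketch below assumes a generic relevance scorer `score(query, doc)` (higher means more relevant) and that the first query in each pair was written for the first document; both names are illustrative rather than taken from the released code.

```python
# Sketch of NevIR's paired accuracy: a pair counts as correct only if the model
# ranks doc1 above doc2 for the query written about doc1 AND doc2 above doc1
# for the query written about doc2.

def pair_correct(score, q1, doc1, q2, doc2):
    return score(q1, doc1) > score(q1, doc2) and score(q2, doc2) > score(q2, doc1)

def pairwise_accuracy(score, pairs):
    """pairs: iterable of (q1, doc1, q2, doc2) tuples, with qi relevant only to doci."""
    pairs = list(pairs)
    if not pairs:
        return 0.0
    return sum(pair_correct(score, *p) for p in pairs) / len(pairs)

# A random ranking gets each pair right with probability 1/2 * 1/2 = 25%.
```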
We evaluate on the following IR model categories: Sparse We evaluate sparse IR models that use the bag-of-words representation during retrieval. This includes TF-IDF (the only non-neural IR method, here as a baseline), and two variants of SPLADE v2++ (Formal et al., 2022, 2021; Lassance and Clinchant, 2022), the ensemble distillation and selfdistillation methods. Note that other variants of SPLADE perform worse than these two methods. We do not include BM25 as implementations of BM25 perform similar to TF-IDF due to the small collection and lexical similarity within the pair. Late Interaction Late interaction models like ColBERT (Khattab and Zaharia, 2020; Santhanam et al., 2022b) embed documents and queries into one vector for each sub-word token. At inference time, these models need to compute a MaxSim operation between query vectors and document vectors to determine similarity. We use both ColBERT v1 and v2 in our experiments.3 Bi-Encoders Another common category of IR models are bi-encoders, which embed both documents and queries into a single vector representation. At inference time the similarity is computed via a simple dot product or cosine similarity. Due to the popularity of this category, we include a broad spectrum: models from Sen3We reproduce ColBERT v1 weights from their repository. We do not use PLAID (Santhanam et al., 2022a) or quantization as there are only two documents in the collection per query and thus no efficiency requirements. tenceTransformer (Reimers and Gurevych, 2019) trained on MSMarco and/or Natural Questions, DPR (Karpukhin et al., 2020), CoCondenser (Gao and Callan, 2022), and RocketQA (Qu et al., 2021; Ren et al., 2021). Note that these models span a wide variety of pre-training tasks, base models, and complex training/additional fine-tuning strategies like hard negative mining and distillation. Cross-Encoders Cross-encoders encode both the document and query at the same time, computing attention across both pieces of text. This type of representation is the most expressive but also the most time-intensive, especially for larger models. We use various SentenceTransformer cross-encoders including those trained on MSMarco and various NLI datasets (Demszky et al., 2018; Williams et al., 2018; Cer et al., 2017), RocketQAv2 cross-encoders (Qu et al., 2021; Ren et al., 2021), as well as MonoT5 cross-encoders (Nogueira et al., 2020). Note that MonoT5 models are significantly larger (up to 33x larger for 3B) and more expensive than the other cross-encoders.4 Random We include a baseline that randomly ranks the two documents. Since there are two pairs, the expected mean pairwise accuracy is 25% ( 1 2 \u22171 2). 5 Results 5.1 Main Results The main results are presented in Table 2. We see that the more expressive the representation, the better the models generally perform. 4T5 models are also typically used for generative retrieval (GR) (Tay et al., 2022); thus we do not evaluate GR methods since (1) T5 is evaluated with MonoT5 already and (2) GR has been shown to be unable to scale to standard-sized collections (Pradeep et al., 2023) and is not used in practice. 
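Because the analysis below hinges on ColBERT's MaxSim operator, a simplified numpy sketch of late-interaction scoring may be useful: each query token embedding is matched to its most similar document token embedding and the maxima are summed. The random vectors stand in for real contextualized token embeddings.

```python
import numpy as np

# Simplified late-interaction (ColBERT-style) scoring: every query token vector
# is matched to its most similar document token vector (MaxSim), and the maxima
# are summed over query tokens. If the document token carrying the negation
# never wins a MaxSim, the negation cannot influence the final score.

def maxsim_score(query_vecs, doc_vecs):
    # query_vecs: (num_query_tokens, dim); doc_vecs: (num_doc_tokens, dim)
    sims = query_vecs @ doc_vecs.T          # all pairwise token similarities
    return float(sims.max(axis=1).sum())    # best document match per query token

# Toy example with random unit vectors.
rng = np.random.default_rng(0)
q = rng.normal(size=(8, 128));   q /= np.linalg.norm(q, axis=1, keepdims=True)
d = rng.normal(size=(120, 128)); d /= np.linalg.norm(d, axis=1, keepdims=True)
print(maxsim_score(q, d))
```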
\fType Data Params Model Name Score Random N/A 0 Random 25% Sparse N/A N/A TF-IDF (Pedregosa et al., 2011) 2.0% MSMarco 110M SPLADEv2 ensemble-distill (Formal et al., 2022) 8.0% MSMarco 110M SPLADEv2 self-distill (Formal et al., 2022) 8.7% Late Interaction MSMarco 110M ColBERTv2 (Santhanam et al., 2022b) 13.0% MSMarco 110M ColBERTv1 (Khattab and Zaharia, 2020) 19.7% Bi-Encoders NQ 219M DPR (Karpukhin et al., 2020) 6.8% MSMarco 110M msmarco-bert-base-dot-v5 6.9% MSMarco 110M coCondenser (Gao and Callan, 2022) 7.7% MSMarco 85M RocketQA v2 (Ren et al., 2021) 7.8% NQ 66M nq-distilbert-base-v1 8.0% MSMarco 110M all-mpnet-base-v2 8.1% MSMarco 66M msmarco-distilbert-cos-v5 8.7% MSMarco 170M RocketQA v1 (Qu et al., 2021) 9.1% QA Data 110M multi-qa-mpnet-base-dot-v1 11.1% Cross-Encoders MSMarco 85M RocketQA v2 (Ren et al., 2021) 22.4% STSB 355M stsb-roberta-large 24.9% MSMarco 303M RocketQA v1 (Qu et al., 2021) 26.3% MSMarco 61M MonoT5 small (Nogueira et al., 2020) 27.7% MNLI 184M nli-deberta-v3-base 30.2% QNLI 110M qnli-electra-base 34.1% MSMarco 223M MonoT5 base (default) (Nogueira et al., 2020) 34.9% MSMarco 737M MonoT5 large (Nogueira et al., 2020) 45.8% MSMarco 2.85B MonoT5 3B (Nogueira et al., 2020) 50.6% Table 2: Results for pairwise contrastive evaluation using paired accuracy. All models are from sentence-transformers (Reimers and Gurevych, 2019) unless otherwise cited. Data indicates the main source of training data for the model, while score indicates Pairwise Accuracy (see Sec 4.1). Note that RocketQA includes both a cross-encoder and bi-encoder for both versions. TF-IDF scores were designed to be low in the task instruction (Section 3.2). No bi-encoder architecture scores higher than 12% paired accuracy despite the method of pretraining (e.g. CoCondenser) or the type of contrastive training data (MSMarco, NQ, etc.) with most models performing in the 5-10% range. In the sparse category, we see that TF-IDF scored only 2% paired accuracy. Since we did not allow annotators to use words that were in only one of the paragraphs, this is to be expected.5 For neural sparse models, all SPLADEv2++ models perform similarly to the bi-encoders, at around 8% paired accuracy. The late interaction style models perform significantly better than bi-encoders and sparse models, with ColBERTv1 scoring 19.7% and ColBERTv2 scoring 13.0%. Due to the nature of this model 5Note that the 2% performance, instead of 0%, is due to our annotation interface not restricting partial matches (e.g. \u2018version\" vs \u201cversions\", \u201cpart\" vs \u201cparting\" etc.). we are able to visualize the MaxSim operator to understand its performance (Section 5.3). The cross-encoder models performed the best, with MonoT5 (the default \u201cbase\" version) performing at 34.9% paired accuracy (and the largest version at 50.6%). Interestingly, the cross-encoders trained on NLI datasets generally performed better than cross-encoders trained on MSMarco, likely due to the fact that MSMarco contains little negation while NLI datasets typically do have negation. Overall, despite the strong scores of these models on various standard IR benchmarks, nearly all models perform worse than randomly ranking. Only a handful of cross-encoder models perform better, and they are the slowest and most expensive category of retrieval models. Even these models however, perform significantly below humans and have far from ideal performance. 
\fQNLI-Electra MonoT5-base MonoT5-3b NLI-DeBERTa ColBERTv2 ColBERTv1 RocketQA v2 CE SPLADEv2 DPR Multi-qa-mpnet RocketQA v2 BE 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 Proportion Incorrect Prediction for Pair Was Always Ranks Negative Document First Always Ranks non-Negative Document First Reversed Ranking for Both Figure 4: Error analysis of the model predictions, detailing whether models preferred (e.g. by ranking first for both queries) the document with negation (green), the edited non-negation document (orange), or predicted the reversed ranking for both queries (blue). Models that performed better generally preferred negation documents when they made incorrect predictions while bi-encoder models were more balanced in their errors. 5.2 How does model size affect the results? We note that Table 2 includes different sizes of MonoT5. We see that as model size increases, so does the accuracy (from around 28% with MonoT5small to around 51% for MonoT5-3B). This aligns with results shown in the natural language processing community about model size (McKenzie et al., 2022; Wei et al., 2022; Ravichander et al., 2022; Weller et al., 2023b). However, unlike NLP, IR is typically more latency constrained. Thus, models like MonoT5-3B are only feasible for re-ranking and not for firststage retrieval (c.f. Section 7 for more discussion). 5.3 ColBERT analysis As ColBERT models provide token-level vectors and use the MaxSim operator, we are able to visualize whether the max operator pays attention to the negation words (Figures 9 and 10 in the appendix, due to space constraints). We find in all sampled instances that the MaxSim operator in ColBERTv1 ignores negation words, not selecting them as the max for any query token. Thus, with default training this is a crucial flaw when it comes to processing negation, which causes its less-thanrandom performance. However, it is possible to fine-tune these representations to put more weight on the negation words so that the MaxSim correctly identifies them, as seen in Section 6. 5.4 Error Analysis We conduct an error analysis to determine which document models prefer for a given pair. Models can prefer (e.g. rank highest in both queries) the document with negation, the edited non-negation document, or predict the reversed rank for both queries. Figure 4 shows that the models trained on NLI (and cross-encoders) greatly preferred the document with negation, while bi-encoder models tended to prefer them equally. Reversed rankings are uncommon, with bi-encoder models having the highest percentage (e.g. RocketQA at \u223c20%). 6 Fine-Tuning on NevIR Table 2 shows that models trained on standard IR training datasets do not show strong results on NevIR. However, none of the standard IR datasets include much negation in their queries (potentially due to production systems biasing users, c.f. Section 7). Thus, in this section we fine-tune IR models on NevIR\u2019s training set to see how negationspecific training data improves performance. We use the top performing model from nonsparse categories: multi-qa-mpnet-base-dot-v1 from SentenceTransformers, ColBERTv1, and MonoT5-base from PyGaggle. We fine-tune them using SentenceTransformers, the original ColBERTv1 code, and the original PyGaggle code. We train for 20 epochs and evaluate them on NevIR test and MSMarco dev after each epoch. Figure 5 shows that fine-tuning on negation data improves performance significantly, but still leaves a large gap to perfect (and the human score of) 100% paired accuracy. 
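One plausible way to reproduce the bi-encoder portion of this fine-tuning, using the classic sentence-transformers training loop, is sketched below; the record field names are assumptions about how the NevIR pairs might be stored, and the authors' actual runs also fine-tune ColBERTv1 and MonoT5 with their original codebases.

```python
# A plausible recipe (not necessarily the authors' exact setup) for fine-tuning
# the bi-encoder on NevIR. Each contrastive pair yields two triplets:
# (q1, doc1, doc2) and (q2, doc2, doc1), with the other document as a hard negative.
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer("sentence-transformers/multi-qa-mpnet-base-dot-v1")

train_pairs = [  # toy record; the real training split has 948 such pairs
    {
        "q1": "What countries did not replace nickel with iron alloys in the 20th century?",
        "doc1": "... replaced by cheaper stainless steel alloys, except in the United States and Canada.",
        "q2": "What countries replaced nickel with iron alloys in the 20th century?",
        "doc2": "... replaced by cheaper stainless steel alloys throughout the United States, Canada, and elsewhere.",
    }
]

train_examples = []
for p in train_pairs:
    train_examples.append(InputExample(texts=[p["q1"], p["doc1"], p["doc2"]]))  # doc2 as hard negative
    train_examples.append(InputExample(texts=[p["q2"], p["doc2"], p["doc1"]]))  # doc1 as hard negative

train_loader = DataLoader(train_examples, shuffle=True, batch_size=32)
loss = losses.MultipleNegativesRankingLoss(model)  # in-batch negatives plus the paired hard negative

model.fit(train_objectives=[(train_loader, loss)], epochs=20, warmup_steps=100)
# After each epoch one would recompute NevIR pairwise accuracy and MS MARCO
# MRR@10 to track the trade-off shown in Figure 5.
```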
As would be expected, the large MonoT5 model quickly learns and then overfits to the data (while quickly losing perfor\f0.0 2.5 5.0 7.5 10.0 12.5 15.0 17.5 20.0 Epoch 0.2 0.4 0.6 Pairwise Accuracy Model ColBERTv1 MonoT5 base Multi-qa-mpnet 0.0 2.5 5.0 7.5 10.0 12.5 15.0 17.5 20.0 Epoch 0.1 0.2 0.3 MRR@10 Model ColBERTv1 MonoT5 base Multi-qa-mpnet Figure 5: How fine-tuning on NevIR\u2019s training set affects results on NevIR and MSMarco: upper shows NevIR\u2019s pairwise accuracy scores on test while training for up to 20 epochs, lower shows MSMarco dev MRR@10 scores. For QNLI-electra-base see Appendix E. mance on MSMarco). Interestingly, ColBERT takes much longer to learn (due to the MaxSim operator), slowly increasing over nearly 20 epochs to learn what the bi-encoder model quickly learned in less than 3. However, we find that ColBERT has a much lower and slower drop in ranking scores on MSMarco (Figure 5 lower). We show visualizations of the MaxSim operator before and after NevIR training in Appendix D, illustrating that before training the MaxSim operator ignores negation, while after training it learns to correctly include it. 7 Discussion and Implications Implication for Current Systems IR model\u2019s performance on NevIR indicates that first stage retrievers do not take negation into account when doing retrieval. Thus, to perform well on negation with current models, expensive cross-encoder rerankers are necessary but not sufficient to achieve good results. Furthermore, our analysis indicates that in order to best learn negation (and significantly improve their performance), models should incorporate negation into their training data. Thus, when high precision for negation retrieval is not needed (e.g. some first stage retrieval settings), current models may be effective, as they will retrieve lexically similar documents regardless of negation. However, in order to have high-precision retrieval with negation (and documents with both negation and non-negation have high lexical overlap), expensive cross-encoders are the only current models that perform better than random ranking. NevIR provides the only dataset for measuring and improving retrieval with negation. Implications for Current Users Anecdotally, most users tend to avoid using negation queries in production IR systems like Google Search. This may be a self-reinforcing problem, as users have found poor results when they use negation in search and hence avoid using negations in the future. For example, the webpage for the University of Utah article that is shown in Figure 1 has since been updated and currently includes no negation words. Thus, it is unclear whether queries with negation are less common because of people\u2019s actual information needs or because production systems have biased users (and content creators) into an avoidance of negation. We hope that by introducing a benchmark for IR evaluation we can help enable these types of queries in the future. 8" + }, + { + "url": "http://arxiv.org/abs/2212.10002v3", + "title": "Defending Against Disinformation Attacks in Open-Domain Question Answering", + "abstract": "Recent work in open-domain question answering (ODQA) has shown that\nadversarial poisoning of the search collection can cause large drops in\naccuracy for production systems. However, little to no work has proposed\nmethods to defend against these attacks. To do so, we rely on the intuition\nthat redundant information often exists in large corpora. 
To find it, we\nintroduce a method that uses query augmentation to search for a diverse set of\npassages that could answer the original question but are less likely to have\nbeen poisoned. We integrate these new passages into the model through the\ndesign of a novel confidence method, comparing the predicted answer to its\nappearance in the retrieved contexts (what we call Confidence from Answer\nRedundancy, i.e. CAR). Together these methods allow for a simple but effective\nway to defend against poisoning attacks that provides gains of nearly 20% exact\nmatch across varying levels of data poisoning/knowledge conflicts.", + "authors": "Orion Weller, Aleem Khan, Nathaniel Weir, Dawn Lawrie, Benjamin Van Durme", + "published": "2022-12-20", + "updated": "2024-02-26", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.IR" + ], + "main_content": "Introduction Open-domain question answering (ODQA) is the task of answering a given question based on evidence from a large corpus of documents. In order to do so, a system generally first retrieves a smaller subset of documents (typically between 5-100) and then answers the question based on those documents. Previous research in ODQA has resulted in many well-curated datasets that evaluate a model\u2019s ability to answer questions on a wide array of topics (Kwiatkowski et al., 2019; Joshi et al., 2017; Dunn et al., 2017; Yang et al., 2015). However, most internet users search across lesscarefully curated sources, where malicious actors are able to affect articles that may be used by an ODQA system (Figure 1). Furthermore, even in 1Code and data will be made public at https:// github.com/orionw/disinformation-defense * Authors contributed equally Retrieved Documents Obama was born in\u00a0Honolulu, Hawaii\u00a0Kenya.\u00a0 After\u00a0graduating from Columbia\u00a0University ... Obama was born at the Medical\u00a0Center for\u00a0 Women and Children in\u00a0Honolulu, Hawaii\u00a0Kenya\u00a0... Obama (born 1961 in\u00a0Hawaii) is an American\u00a0 politician who served as the 44th President\u00a0\u00a0... \u00a0Where was Barack Obama born? Q Q1 Q2 What is Barack Obama's city of birth? What hospital was Barack Obama born in?\u00a0 Poisoned ODQA\u00a0 Our Method CAR Hawaii Hawaii 1 2 3 Q 3 1 2 \u00a0Where was Barack Obama born? 5 2 3 3 1 4 Kenya Query Augmentation Kenya Hawaii Figure 1: An example of a poisoning attack on an opendomain question answering (ODQA) pipeline with our method (Lower) vs a standard system (Upper). The passages have been adversarially poisoned to change Obama\u2019s correct birthplace to be incorrect. Our proposed defense method uses query augmentation to find new contexts that are less likely to be poisoned (#4 and #5). It then uses a novel confidence-based aggregation method (CAR) to predict the correct answer. curated knowledge sources like Wikipedia, we frequently see attacks (e.g. malicious edits/fake pages) that have even impacted production QA systems.2 Recent work has recognized the potential for bad actors to influence automated knowledge-intensive NLP systems that involve retrieval: Du et al. (2022) explored how poisoned information affects automated fact verification systems using sparse nonneural information retrieval systems, while Chen et al. (2022); Longpre et al. (2021); Pan et al. (2023) 2For examples of disinformation attacks on popular entities that motivate our approach see Appendix A or the \u201cReliability of Wikipedia\" or \u201cVandalism on Wikipedia\" pages. 
arXiv:2212.10002v3 [cs.CL] 26 Feb 2024 \fhave studied the effect of knowledge conflicts and poisoning attacks on ODQA pipelines. All of these works have illustrated that poisoning attacks significantly decrease system performance, even when using state-of-the-art models; however, only Pan et al. (2023) has even briefly considered the task of defending against poisoning attacks (which are becoming increasingly common, see Appendix A for real-life examples) and their proposed method, majority voting over different documents, provides only minor gains. We seek to fill this gap by proposing a simple but effective defense against these attacks. Building on the intuition that information is usually available in multiple places and that it is unlikely that all sources (or pages) will be poisoned, we propose a novel query augmentation scheme to gather a larger set of diverse passages. We also propose a new confidence method to decide when to use the newly gathered contexts vs the original, which we call Confidence from Answer Redundancy (CAR). Our proposed approach involves no gradient updates, can easily be applied to existing frameworks, and uses a simple resolution approach to arrive at the predicted answer. Together, our methods can provide gains of nearly 20 points in exact match, helping to reduce the negative effects of data poisoning and disinformation attacks on ODQA. 2 Experimental Details We seek to mimic realistic disinformation attacks on a curated knowledge source; thus, for our experiments we use Wikipedia as the knowledge collection for both original and augmented queries, and simulate an attack on each question independently. We follow Du et al. (2022) and poison the entirety of each Wikipedia page that corresponds to each of the retrieved passages.3 We vary the amount of poisoned pages from 1 to 100.4 Note that we do not poison the entire corpus, as poisoning millions of pages is beyond the scope of common attacks. 2.1 Data For our experiments we use Natural Questions (NQ) (Kwiatkowski et al., 2019) and TriviaQA (Joshi et al., 2017), two popular datasets for open3e.g. if at least one of the 100 retrieved passages was from Obama\u2019s Wikipedia page, the rest of his page is poisoned 4As 100 passages are given to the models (so 100 is all passages see Appendix F for why scores are non-zero). We also experimented with poisoning random retrieved passages in the top 100 and found similar results (Appendix D) domain question answering. Furthermore, previous research on conflicts in ODQA has used these datasets in their experiments (Chen et al., 2022). The Natural Question dataset was gathered by collecting real-user queries typed into Google Search, while TriviaQA was collected by scraping question and answer pairs from trivia websites, and then matching the answers to Wikipedia passages. We simulate the data poisoning through the code available from Longpre et al. (2021), which introduced the problem in ODQA and has been used in subsequent work (Chen et al., 2022). Their method uses the answers to the questions to suggest an entity of the same type, using SpaCY NER (Honnibal and Montani, 2017), which is then used to replace the correct answer in the text. This allows for entity substitutions that keep the semantic order of the context, such as replacing dates with dates, people with people, numbers with numbers, etc. 2.2 Models We use two SOTA models: Fusion-in-Decoder (FiD) and ATLAS. 
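The entity-substitution poisoning described in Section 2.1 can be sketched as follows: locate the answer span with NER and swap in a different entity of the same type. This snippet is a simplified illustration of that idea, not the released code of Longpre et al. (2021); the replacement pool is a toy stand-in for same-type entities mined from the corpus.

```python
# Simplified sketch of same-type entity substitution for simulating poisoning.
# Assumes spaCy and the small English model ("en_core_web_sm") are installed.
# REPLACEMENTS is a made-up toy pool; the real pipeline draws substitutes from
# entities of the same NER type found elsewhere in the data.

import spacy

nlp = spacy.load("en_core_web_sm")

REPLACEMENTS = {"GPE": "Kenya", "PERSON": "John Smith", "DATE": "1850"}

def poison_passage(passage: str, answer: str) -> str:
    """Replace the answer span with a different entity of the same NER type."""
    doc = nlp(passage)
    for ent in doc.ents:
        if ent.text == answer and ent.label_ in REPLACEMENTS:
            return passage[: ent.start_char] + REPLACEMENTS[ent.label_] + passage[ent.end_char:]
    return passage  # no same-type substitution possible; leave the passage unpoisoned

if __name__ == "__main__":
    print(poison_passage("Barack Obama was born in Honolulu, Hawaii.", "Hawaii"))
    # -> "Barack Obama was born in Honolulu, Kenya."
```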
FiD is an encoder-decoder architecture that generates an answer by first retrieving and encoding N passages and then concatenating them and giving them to the decoder (Izacard and Grave, 2021). FiD uses DPR for retrieval (Karpukhin et al., 2020). ATLAS (Izacard et al., 2022b) is currently the state-of-the-art model on Natural Questions and TriviaQA. This model also uses fusion in the decoder and has a T5 backbone, but uses Contriever (Izacard et al., 2022a) for retrieval and does joint end-to-end training. For information on hyperparameters see Appendix B. 3 Proposed Method 3.1 Query Augmentation We hypothesize that in cases of conflicting evidence in large corpora for factoid based questions, there will generally be more evidence for the correct answer than for incorrect ones. For example, imagine the question \u201cWhere was Barack Obama born?\" with a corresponding attack to his Wikipedia page (see Figure 1). As Wikipedia contains redundant information, alternate questions that find contexts on other pages (e.g. his mother Ann Dunham\u2019s page) will still find the right answer. To create these alternate questions that will still find the correct answer but with more diverse passages, we propose a query augmentation scheme that has similarities to query expansion in informa\fNatural Questions 0 20 40 60 80 100 # of New Passages in Augmented Questions 0 200 400 600 Count TriviaQA 0 20 40 60 80 100 # of New Passages in Augmented Questions 0 50 100 Count Figure 2: Number of new passages retrieved per augmented question (e.g., a question in the 100 bin would have 100 new contexts not retrieved by the original). Natural Questions is on top and TriviaQA on bottom. tion retrieval (IR) (Singhal et al., 2001; Carpineto and Romano, 2012; Wei et al., 2022; Claveau, 2021). We generate these new questions for each original question by prompting GPT-3. We use davinci-002 from Brown et al. (2020), but one can alternatively use open-source language models for similar results: see Table 2 and Appendix K for results with Vicuna v1.5 (using Llama 2). These query augmentations are not necessarily paraphrases as they strive to be as different as possible while still leading to the correct answer. They are also not identical to classic query expansion from IR either, as they do not intend to solely broaden the query scope but rather to find diverse contexts from questions of any scope. For each query in the dataset, we prompt GPT-3 with the following: \"Write 10 new wildly diverse questions with different words that have the same answer as {Original Question}\", thus generating approximately 10 augmented questions per original question (c.f. Table 1 for three examples of generations). Finally, we retrieve the 100 most relevant contexts for those augmented questions. Note that if searching with the augmented questions retrieves a passage from a Wikipedia page that was already poisoned from the initial set of When was the last time anyone was on the moon? When was the last time anybody walked on the moon? When was the last manned mission to the moon? When was the last time a human was on the moon? In which year did Picasso die? When did Picasso die? How old was Picasso when he died? What was Picasso\u2019s cause of death? What is the largest city in Turkey? What city in Turkey has the most people? What is the most populous city in Turkey? What is the most urbanized city in Turkey? Table 1: Example question augmentations with the original question on top (see Appendix M for more). 
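The prompting step described above can be wrapped in a small helper that applies the instruction template and parses the returned list. `llm_generate` is a placeholder for whichever instruction-following model backs it (GPT-3 davinci-002 or Vicuna v1.5 in these experiments); no particular API client is assumed.

```python
# Sketch of the query-augmentation step. `llm_generate` is a stand-in for an LLM call;
# wiring up a specific API client is intentionally left out of this illustration.

import re
from typing import Callable, List

PROMPT = ("Write 10 new wildly diverse questions with different words "
          "that have the same answer as {question}")

def augment_query(question: str, llm_generate: Callable[[str], str]) -> List[str]:
    """Prompt an LLM for ~10 diverse reformulations of `question` and parse them."""
    raw = llm_generate(PROMPT.format(question=question))
    # Accept either numbered lists ("1. ...") or one question per line.
    lines = [re.sub(r"^\s*\d+[\.\)]\s*", "", ln).strip() for ln in raw.splitlines()]
    return [ln for ln in lines if ln.endswith("?")]

if __name__ == "__main__":
    def fake_llm(prompt: str) -> str:  # toy stand-in so the sketch runs end to end
        return "1. What city was Barack Obama born in?\n2. What is Obama's birthplace?"
    print(augment_query("Where was Barack Obama born?", fake_llm))
```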
100 (see Section 2) we return the poisoned text following Du et al. (2022). When we compare these newly retrieved passages to the passages retrieved by the original question we find that they do provide a more diverse set of passages. Figure 2 shows the distribution of new passages retrieved, with almost all retrieving at least 20 or more new passages and a substantial amount having an entirely new set of 100 passages. 3.2 Confidence from Answer Redundancy In order to identify the best augmented queries with their corresponding new passages, we derive a novel method, CAR, for measuring ODQA confidence. CAR measures how often the predicted answer string occurs in the retrieved contexts (usually 100 contexts). For example, if the predicted answer appears only once in all 100 contexts, this may mean that the retriever was not able to find many documents relevant to the query, especially as popular entities (those asked about in NQ and TriviaQA) are generally found in many articles. Overall, the more frequently the predicted answer appears in the contexts, the more likely that the retrieval was both successful and plentiful (e.g. redundant). In practice, given a set of documents D, we set a hyperparameter k to determine the cutoff for CAR (in practice we use k = 5, found by tuning on the dev set). If the model retrieves more than k unique passages that contain the predicted answer string, we classify the model as confident and vice versa. We use this as part of our resolution method below. 3.3 Answer Resolution We use the following methods to combine (or not combine) the original question with the augmented questions, with shortened names in italics. Note \f0 20 40 60 80 100 Number of Poisoned Articles 0.0 0.2 0.4 0.6 0.8 1.0 Exact Match FiD on TriviaQA 0 20 40 60 80 100 Number of Poisoned Articles 0.0 0.2 0.4 0.6 0.8 1.0 Exact Match FiD on Natural Questions Resolution Original Random Majority Vote Redundancy Data Type Original C New C 0 20 40 60 80 100 Number of Poisoned Articles 0.0 0.2 0.4 0.6 0.8 1.0 Exact Match ATLAS on TriviaQA 0 20 40 60 80 100 Number of Poisoned Articles 0.0 0.2 0.4 0.6 0.8 1.0 Exact Match ATLAS on Natural Questions Resolution Original Random Majority Vote Redundancy Data Type Original C New C Figure 4: Data poisoning and defense strategies using ATLAS (Lower Figure) and FiD (Upper Figure). See Appendix N for equivalent table version of these plots. Left shows TriviaQA, right shows Natural Questions. C stands for context. 100 poisoned articles indicates all contexts are poisoned; performance is non-zero because the models ignore the contexts or the poisoning failed to recognize all aliases (\u00a7G). Note that Redundancy greatly outperforms the majority vote baseline from Pan et al. (2023). Scores plateau after around 40 poisoned articles as that is around when all 100 retrieved passages are poisoned (see Appendix G for a discussion of article vs passage). that methods one through three are baselines for our newly proposed technique: (1) use the original question only, e.g. the \u201cdo-nothing\" baseline (2) randomly pick one new augmented question (3) take a majority vote of the augmented question\u2019s predictions (e.g. the method from Pan et al. (2023)) or (4) use answer redundancy, described in the following paragraph. We also attempted several variants of these options that underperformed and are not included for clarity (Appendix I). 
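Before turning to the redundancy method described in the next paragraph, note that the CAR check from Section 3.2 reduces to a few lines: count the retrieved passages that contain the predicted answer string and compare against the tuned cutoff k = 5. This is a minimal sketch of that check, not the experiments' exact implementation.

```python
# Minimal sketch of Confidence from Answer Redundancy (CAR): the prediction is
# "confident" when more than k of the retrieved passages contain the predicted
# answer string (k = 5 was tuned on the dev set). Passages are assumed distinct.

from typing import List

def car_is_confident(predicted_answer: str, passages: List[str], k: int = 5) -> bool:
    answer = predicted_answer.lower().strip()
    support = sum(1 for passage in passages if answer in passage.lower())
    return support > k

if __name__ == "__main__":
    ctxs = ["Obama was born in Honolulu, Hawaii."] * 7 + ["An unrelated passage."] * 93
    print(car_is_confident("Hawaii", ctxs))  # True: 7 supporting passages > k = 5
```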
Our proposed method for answer resolution, redundancy, uses CAR to effectively combine both the original question and the new augmented questions. We use CAR to decide whether to choose the original question\u2019s prediction, and if not, use a majority vote over the predictions from the augmented questions that are confident (filtered using CAR). By doing so, we retain performance from the original question and passage set when confident, while otherwise backing off to the augmentation. All methods except the baseline can use either the original (Original C) or new (New C) sets of passages as context and we show both options in our results. Further, majority vote and redundancy can choose between either the new or original questions during inference (we use original, after tuning, see Appendix B for more details). 4 Results Figure 4 highlights our key findings using FiD and ATLAS (for results in table form, see Appendix N). Following (Longpre et al., 2021; Chen et al., 2022), all results are filtered by those that the model originally predicted correctly, thus making the original method have by definition 100% EM at the 0-article poisoning level. We show results in EM, as is typically done in previous work (Izacard and Grave, 2021; Izacard et al., 2022b), however, F1 results are nearly identical and can be found in Appendix O. As expected and shown in previous work (Pan et al., 2023; Chen et al., 2022), we find that as the amount of poisoned data given to the model increases, performance decreases. We also find that resolution methods that use the new contexts (New C) outperform those that use the original contexts, confirming the intuition behind our proposed \fNumber of Poisoned Articles Context Type Resolution 1 2 3 5 10 20 40 50 100 Original C Majority Vote -0.6 -0.8 1.0 -0.7 -0.4 0.2 0.0 0.0 0.0 Original 1.0 -0.1 1.4 1.8 1.1 1.1 1.0 0.9 0.8 Random -5.6 -5.6 -4.9 -2.7 -1.9 -0.9 -0.3 -0.2 -0.4 Redundancy 0.2 -0.1 0.4 0.4 0.8 0.9 0.7 0.6 0.5 New C Majority Vote 4.7 3.2 2.8 2.9 2.3 1.9 2.5 2.3 2.3 Random 2.6 1.8 1.2 2.4 1.9 2.4 2.7 2.7 1.8 Redundancy 1.3 -0.4 1.7 3.4 2.7 3.0 3.1 2.9 2.9 Table 2: Difference between GPT-3 and Vicuna v1.5 (using Llama 2) generations as query augmenters for NQ with FiD (positive scores indicate GPT-3 is better). Results in EM. Results are comparable to GPT-3 DaVinci in Figure 4. method of finding diverse new contexts (e.g. 55.9 vs 65.1 EM for EM at 1 article poisoned). Furthermore, we see that the redundancy resolution strategy outperforms all other strategies (including the only published baseline, majority voting from Pan et al. (2023)), by up to 19.4% in the TQA setting (33.2% at 100 poisoned articles vs 13.8% baseline). Scores on NQ are lower than TQA, even with no poisoning, but still improve up to 14% EM using redundancy. Overall, we see that our proposed redundancy method outperforms all other methods on both datasets, at every level of poisoning and especially so when using the newly retrieved contexts. Can we use open-source LLMs as the query augmentation model? We replace GPT-3 with Vicuna v1.5 (using Llama 2) and repeat the experiments with FiD. The results are shown in Table 2 for NQ and in Appendix K in figure form. We see that Vicuna performs similar to GPT-3, in some cases even outperforming it. Thus, we see that our approach works with both open and closed-source models. How many augmented questions are needed for our approach to work well? 
To answer this, we show Figure 5 with the overall trend showing that as the number of augmented queries increases, so does the score. Furthermore, it shows that even one augmented query has gains over the baseline method, allowing for a more compute efficient method at the expensive of several points of performance. More computational analysis of our methods is in Appendix J. Why is performance not 0% at 100 poisoned documents? We also explore why performance is non-zero when the number of poisoned articles is equal to the number of contexts the model receives. We manually annotated 20 examples on TriviaQA that FiD got correct at the 100-article poisoning 2 4 6 8 10 Number of Augmented Queries 0.625 0.630 0.635 0.640 Exact Match Figure 5: An ablation on the number of augmented queries (and thus number of times retrieval is used) for the redundancy resolution method on Natural Questions 1-article FiD poisoning setting. As the number of augmented queries increases, so does the performance. Baseline performance is 50.1%, indicating that even just one augmented query provides significant gains. setting. We found that it is due to the model using its parametric knowledge to correctly answer (65% of the time), as the correct answer was not present in any of the input documents, or due to answer aliases (35%) that were not part of the answer set. Examples of cases can be found in Appendix F. 5" + }, + { + "url": "http://arxiv.org/abs/2206.02291v1", + "title": "Pretrained Models for Multilingual Federated Learning", + "abstract": "Since the advent of Federated Learning (FL), research has applied these\nmethods to natural language processing (NLP) tasks. Despite a plethora of\npapers in FL for NLP, no previous works have studied how multilingual text\nimpacts FL algorithms. Furthermore, multilingual text provides an interesting\navenue to examine the impact of non-IID text (e.g. different languages) on FL\nin naturally occurring data. We explore three multilingual language tasks,\nlanguage modeling, machine translation, and text classification using differing\nfederated and non-federated learning algorithms. Our results show that using\npretrained models reduces the negative effects of FL, helping them to perform\nnear or better than centralized (no privacy) learning, even when using non-IID\npartitioning.", + "authors": "Orion Weller, Marc Marone, Vladimir Braverman, Dawn Lawrie, Benjamin Van Durme", + "published": "2022-06-06", + "updated": "2022-06-06", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "main_content": "Introduction Federated learning (FL) is a machine learning technique that trains a model across multiple distributed clients holding local data samples, without ever storing client data in a central location (Kone\u02c7 cn` y et al., 2016; McMahan et al., 2017). These techniques are appealing for those who wish to learn from data in a privacy-preserving way, without ever transmitting the data off of a client device. FL becomes essential when data is especially sensitive, as is the case at hospitals, legal \ufb01rms, \ufb01nancial institutions, or in countries that enact legislation concerning data privacy (such as the EU\u2019s GDPR or the US\u2019s HIPAA). 
FL has been applied to problems in natural language processing (NLP) since its inception, particularly in use of the language modeling task (Yang et al., 2018; Hard et al., 2018; Ramaswamy et al., 2019; Chen et al., 2019a; Ji et al., 2019; Stremmel 1Our code and data are made publicly available at https://github.com/orionw/ Multilingual-Federated-Learning * Authors contributed equally En Fr Ru Zh Model En Fr Ru Zh En Fr Ru Zh En Fr Ru Zh Zh Zh Zh Zh Model Ru Ru Ru Ru Fr Fr Fr Fr En En En En Client 1 Client 2 Client 3 Client 4 En x4 Fr x4 Ru x4 Zh x4 Federated Learning with IID Data Federated Learning with non-IID Data Centralized (standard) Learning Client 1 Client 2 Client 3 Client 4 Model Figure 1: A depiction of different learning strategies with Federated Learning (FL) and multilingual data, with 4 clients and 16 instances from En, Fr, Ru, and Zh in this toy example. Black lines indicate gradient \ufb02ow. Centralized learning is the standard training method (no privacy), FL with IID data partitions the data into IID data subsets for each client, while FL with non-IID data has the languages separated across clients. and Singh, 2020). Another large area of FL research is focused on analyzing performance when the data is non identically independently distributed (non-IID). In these cases, many works have shown that FL performance is sub-par with respect to centralized learning methods (Kone\u02c7 cn` y et al., 2016; Hard et al., 2018; Lin et al., 2021). Despite the large amount of research in FL for NLP, how different languages impact the FL training process has yet to be explored (Liu et al., 2021). Furthermore, multilingual FL provides an interesting and natural setting to explore non-IID data, of which different languages are an obvious example. In this work, we explore multilingual federated learning across three multilingual language tasks and different stages of model pretraining. Our results show that \ufb01ne-tuning pretrained models with FL can perform similarly to pretrained models \ufb01netuned with the standard centralized method (the arXiv:2206.02291v1 [cs.CL] 6 Jun 2022 \fno privacy setting), despite having completely nonIID language partitioned data. This \ufb01nding shows that pretrained models provide an effective way for practitioners (and consumers) of multilingual data to gain the privacy bene\ufb01ts of FL at little or no cost to the \ufb01nal task performance. 2 Background and Related Work The term Federated Learning was \ufb01rst proposed in McMahan et al. (2017), who applied the FederatedAveraging algorithm to the tasks of language modeling and image classi\ufb01cation. Since then, much of the theoretical and applied work in FL (e.g. Chen et al. (2019b); Wu et al. (2020) among many others) has considered language modeling as a key task or benchmark. Concurrent with the growing interest in Federated Learning, NLP has rapidly shifted toward the use of pretrained language models (PLMs) (e.g., BERT Devlin et al. 2019; T5 Raffel et al. 2019; GPT-3 Brown et al. 2020). These PLMs are used for both the core task of next word prediction and as a starting point for learning other downstream NLP tasks. This pretrain-and-\ufb01ne-tune paradigm has since become ubiquitous in modern NLP and has inspired a large and active area of research in model pretraining. Multilingual versions of these pretrained models have since been developed and are often used with transfer learning techniques to increase performance for tasks where data is limited (e.g. mBERT from Devlin et al. 2019). 
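As a concrete reference point for the rest of the paper, one round of FederatedAveraging can be sketched as local SGD on each client followed by a size-weighted average of the resulting parameters on the server (the procedure is described in Section 3.1). This is an illustrative simplification, not the Flower-based training code used in the experiments.

```python
# Illustrative sketch of one FederatedAveraging round: local SGD per client, then a
# size-weighted parameter average on the server that becomes the next global model.

import copy
import torch

def local_update(global_model, loader, epochs=1, lr=5e-5):
    """Run local SGD on one client's shard; return its updated parameters and shard size."""
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model.state_dict(), len(loader.dataset)

def fedavg_round(global_model, client_loaders):
    """One round: local training on every client, then weighted averaging and broadcast."""
    updates = [local_update(global_model, loader) for loader in client_loaders]
    total = float(sum(n for _, n in updates))
    averaged = {key: sum(state[key].float() * (n / total) for state, n in updates)
                for key in updates[0][0]}
    global_model.load_state_dict(averaged)  # "broadcast": clients start next round from this
    return global_model

if __name__ == "__main__":
    from torch.utils.data import DataLoader, TensorDataset
    torch.manual_seed(0)
    clients = [DataLoader(TensorDataset(torch.randn(32, 10), torch.randint(0, 2, (32,))),
                          batch_size=8) for _ in range(4)]  # four simulated clients
    fedavg_round(torch.nn.Linear(10, 2), clients)
```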
The intersection of distributed learning from private data partitions and PLMs is still a nascent area. Several works have explored more ef\ufb01cient methods of federated communication with the purpose of enabling these larger NLP models for production situations (Sui et al., 2020; Wu et al., 2021). Our work is orthogonal to these (and could be combined in future work), as we explore the effects of multilingual data on PLM FL, rather than creating methods to enable their use. Other papers focus on the gap between federated learning performance and centralized performance, evaluating on a wide variety of English NLP tasks (Liu and Miller, 2020; Lin et al., 2021; Chen et al., 2021). Although they focus on differential privacy (DP) rather than FL, Li et al. (2021) \ufb01nd that direct PLM training is dif\ufb01cult with standard DP methods, but that \ufb01netuning PLMs on English data is possible with private learning techniques. We differ from all these works by studying private learning, speci\ufb01cally FL, for PLMs in the novel multilingual setting. 3 Experimental Design 3.1 Federated Learning Methods We use FederatedAveraging as the primary learning algorithm (McMahan et al., 2017). FederatedAveraging was introduced alongside the term Federated Learning and has been studied in both learning theory research (Stich, 2019) and applied work (Hard et al., 2018; Lin et al., 2021). In this algorithm, each client runs stochastic gradient descent (SGD) on its local data. After a speci\ufb01ed number of steps, the client transmits its local model to the server, which averages these updates into a single centralized set of parameters. The server then broadcasts the centralized parameters to each client and the process repeats. 3.2 Client Partitioning We consider three different training settings: standard training with no FL (e.g. centralized or C), FL with IID data (FL IID or I), where the data for each client is sampled randomly from all data, and FL with non-IID data (FL non-IID or N) where each client only sees data for one language (or for MT, one direction). See Figure 1 for a visual depiction of these three client partitioning schemes. 3.3 Data We study three multilingual language tasks, due to their common use in the community: language modeling (LM), machine translation (MT), and text classi\ufb01cation (TC). We note that the data we use for training is relatively small; however, this mirrors pratical FL, as each client will not have a large amount of data. We measure scores using perplexity (PPL) for LM, BLEU (Papineni et al., 2002) for MT, and accuracy for TC. Europarl We use the Europarl corpus (Koehn et al., 2005) taken from transcripts of European Union meetings. We sample data from eight languages: English, Spanish, Portuguese, French, German, Finnish, Polish, Lithuanian, and Czech. We sample 20k of each language for training and 5k for validation/testing, and use it for the LM task. MTNT We use the Machine Translation of Noisy Text (MTNT) dataset (Michel and Neubig, 2018), which was the testset for the 2019 WMT robustness challenge. MTNT was gathered from user \fCentralized IID FL Non-IID FL Method 0 5 10 15 20 PPL Europarl Initialization Random Pretrained Centralized IID FL Non-IID FL Method 0.0 2.5 5.0 7.5 10.0 PPL UN Figure 2: An overview of the language modeling results. Bars indicate the average language perplexity (PPL) over 8 languages for the Europarl dataset and 6 languages for the UN corpus. Lower is better. 
comments on Reddit discussion threads and contains noisy text including typos, casual language, and niche terminology. The dataset contains two non-English languages that we use: En → Fr and En → Ja. This dataset has been used to test MT systems for robustness to domain shift (Li et al., 2019) and is suitable for our experiments since FL deals with client data that is uniquely shifted from centralized data. For more details on MTNT data preprocessing for M2M-100, see Appendix C.

M | En | Cs | Lt | Es | Pl | Fi | Pt | De | Avg | En | Es | Fr | Ru | Zh | Ar | Avg
B | 26.2 | 34.8 | 40.1 | 20.0 | 20.0 | 26.6 | 25.5 | 22.1 | 26.9 | 22.3 | 15.0 | 17.2 | 9.8 | 18.1 | 14.7 | 16.2
C | 19.3 | 4.5 | 3.9 | 8.3 | 4.7 | 4.9 | 7.0 | 10.8 | 7.9 | 9.0 | 5.2 | 8.2 | *3.9 | *4.3 | *4.6 | *5.9
I | 26.6 | 5.4 | *4.3 | 11.2 | 5.8 | 5.7 | 8.9 | 15.1 | 10.4 | *9.1 | 5.2 | *8.4 | 3.7 | 3.9 | 4.5 | 5.8
N | 50.6 | 7.1 | 11.9 | 16.0 | 17.7 | 12.1 | 35.6 | 21.7 | 21.6 | 12.8 | 11.5 | 14.6 | 9.3 | 8.2 | 8.3 | 10.8
C | 12.1 | 3.7 | 3.3 | 13.9 | 4.7 | 4.0 | 4.8 | *6.8 | 6.7 | *7.0 | *4.1 | 4.9 | *2.9 | *3.3 | *3.6 | 4.3
I | 10.5 | *4.0 | 4.2 | *6.1 | 3.8 | 4.5 | *5.6 | *6.9 | *5.7 | 6.5 | 3.9 | 5.7 | 2.8 | 3.2 | 3.5 | 4.3
N | 8.8 | 3.7 | 3.9 | 6.0 | 3.8 | *4.4 | *5.6 | 6.7 | 5.4 | *7.1 | 4.5 | 6.2 | *3.2 | 4.2 | 4.0 | *4.9
Table 1: Results for FL experiments on the LM task. The first nine score columns are Europarl languages (with their average); the last seven are UN corpus languages (with their average). Bold scores indicate the best in the column for the given section. Scores are measured in perplexity (lower is better). The top row (B) is a baseline using the pretrained model with no fine-tuning. The middle rows are trained from randomly-initialized models while the bottom rows tune the pretrained model on task data. Due to space we abbreviate: C for Centralized, I for IID FL, and N for non-IID FL. We sample the mask distribution with 5 seeds and report the mean (standard deviations can be found in the Appendix, Tables 4 and 5). Asterisks indicate scores within 2 standard deviations of the best.

UN Corpus The UN Corpus (Ziemski et al., 2016) consists of official records from the UN proceedings over the years 1990 to 2014, in six languages: English, French, Spanish, Russian, Chinese, and Arabic. We use this data for LM (with 50k instances of training data per language and 5k for validation/testing) as well as three MT directions covering 6 languages (En → Fr, Ar → Es, Ru → Zh). Following previous work in MT adaption (see MTNT above) we sample 10k in each direction for training and 5k each for evaluation sets.

NC Corpus For text classification we use the News Classification (NC) dataset from the XGLUE benchmark for cross-lingual language understanding (Liang et al., 2020). This is a classification problem with 10 classes across 5 languages: English, Spanish, French, German, and Russian. We predict the article category given the article title and body (e.g. finance, sports, travel). Since only 10k annotated examples are available for each language (excluding the official test set), we sample 8k instances for training and 1k for evaluation sets. Note that although XGLUE is made for cross-lingual evaluation, we use it for multilingual evaluation.

3.4 Modeling For language modeling and text classification, we examine two different initialization settings: (1) fine-tuning from a pretrained multilingual model or (2) training the same multilingual model architecture but doing so with randomly initialized weights. For the MT experiments, we omit the randomly-initialized results as MT systems generally need large amounts of data to produce good results (see Appendix B for more details).
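To make the client partitioning of Section 3.2 (and Figure 1) concrete, the sketch below shows the two FL splits used here: IID clients receive a random mixture of all languages, while non-IID clients each hold a single language. It is an illustration under the simplifying assumption that examples are (text, language) pairs; the experimental code may differ in details.

```python
# Minimal sketch of the two client-partitioning schemes: IID (random mix of all
# languages per client) vs non-IID (one language per client).

import random
from collections import defaultdict
from typing import Dict, List, Tuple

Example = Tuple[str, str]  # (text, language code)

def iid_partition(data: List[Example], num_clients: int, seed: int = 0) -> List[List[Example]]:
    rng = random.Random(seed)
    shuffled = data[:]
    rng.shuffle(shuffled)
    return [shuffled[i::num_clients] for i in range(num_clients)]

def non_iid_partition(data: List[Example]) -> List[List[Example]]:
    by_lang: Dict[str, List[Example]] = defaultdict(list)
    for example in data:
        by_lang[example[1]].append(example)
    return list(by_lang.values())  # one client per language

if __name__ == "__main__":
    toy = [(f"sent-{i}", lang) for lang in ("en", "fr", "ru", "zh") for i in range(4)]
    print([len(client) for client in iid_partition(toy, 4)])                  # 4 mixed shards
    print([{ex[1] for ex in client} for client in non_iid_partition(toy)])    # one language each
```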
Our base model for the LM task is a distilled version of the mBERT model (134M parameters), \fMTNT UN Method En-Fr En-Ja Avg En-Fr Ar-Es Ru-Zh Avg No Training 30.7 14.1 22.4 31.4 27.4 27.9 28.9 Centralized 31.8 *15.4 23.6 37.3 35.9 34.1 35.8 IID FL 33.1 15.6 24.4 38.6 36.9 *35.6 37.0 non-IID FL *32.9 15.6 24.3 37.9 *36.6 35.7 36.7 Table 2: Results for FL experiments on the Machine Translation task. Bold scores indicate the best in the column, while asterisks indicate scores that are statistically similar to the best according to a paired bootstrap resampling test. Scores are measured with sacreBLEU (Post, 2018), higher is better. Method En Es Fr De Ru Avg Centralized 86.6 \u00b1 0.3 77.5 \u00b1 1.2 74.9 \u00b1 1.6 *82.3 \u00b1 1.6 80.7 \u00b1 0.7 80.4 \u00b1 0.6 IID FL 88.0 \u00b1 0.6 79.8 \u00b1 0.5 76.4 \u00b1 0.6 82.6 \u00b1 0.6 82.5 \u00b1 0.4 81.8 \u00b1 0.3 non-IID FL 81.0 \u00b1 0.9 69.3 \u00b1 1.6 73.7 \u00b1 1.0 76.0 \u00b1 0.3 71.9 \u00b1 1.1 74.4 \u00b1 0.5 Centralized 93.5 \u00b1 0.7 *86.3 \u00b1 0.5 82.9 \u00b1 0.3 89.6 \u00b1 0.1 *88.5 \u00b1 0.4 *88.1 \u00b1 0.2 IID FL 94.0 \u00b1 0.2 86.9 \u00b1 1.1 82.1 \u00b1 0.7 89.6 \u00b1 0.2 89.1 \u00b1 1.2 88.3 \u00b1 0.3 non-IID FL 92.5 \u00b1 0.1 *86.1 \u00b1 0.6 81.4 \u00b1 0.3 88.8 \u00b1 0.1 84.5 \u00b1 0.7 86.7 \u00b1 0.1 Table 3: Results for FL experiments on the Text Classi\ufb01cation task. Bold scores indicate the best in the column, while asterisks indicate scores within two standard deviations of the best. Scores are the mean of training with 3 different seeds, \u00b1 denotes the standard deviation. Scores are measured with accuracy, higher is better. The top rows are trained from random initialization while the bottom rows initialize from the pretrained model. shown to perform well across many languages (Sanh et al., 2019; Devlin et al., 2019) while being smaller than the full mBERT.2 For MT, we use the M2M-100 model (Fan et al., 2020) with 418M parameters, a many-to-many MT model that can translate between any pairing of 100 languages. For text classi\ufb01cation, we use the XLM-RoBERTa base sized model (270M parameters). We note that although there are other PLMs to consider, we focus on testing a varied set of commonly used, high-performing PLMs. 3.5 Training We use the Flower framework (Beutel et al., 2020) for federated training and evaluation due to its ease of use and strong community support. We use Hugging Face\u2019s transformers library (Wolf et al., 2019) for loading pretrained models and PyTorch as the underlying differentiation framework (Paszke et al., 2019). We train each LM model for 100 epochs if pretrained or 200 epochs if randomly initialized. For MT, we train for 25 epochs and for TC we train 2We note that mBERT uses masked language modeling (MLM) instead of standard language modeling, however, for the purposes of our analysis (as we do not seek to compare direct scores to previous work) MLM suf\ufb01ces. Furthermore, most multilingual PLMs train via some version of MLM. for 10 epochs if pretrained and 50 epochs if randomly initialized. For other hyperparameters and compute settings, see Appendix A. 4 Results Language Modeling In Figure 2 we see the overall results of the language modeling task across the two datasets. As expected, the randomly initialized models perform much worse than the pretrained models. The gap between between FL and centralized methods is smaller when using pretrained models, indicating that pretrained models are an effective initialization for federated learning. 
In Table 1 we show results broken down by language. Since the \ufb01ne-tuning task is the same as the pretraining objective (masked language modeling), we can use the pretrained model as a baseline (top row, B). In the randomly initialized category, the centralized model is the same or better than the FL methods in every single language, across both datasets. In the pretrained section the results are more mixed, with the centralized model winning or tying in 5 of the 8 Europarl languages and obtaining similar scores on the UN corpus. We also see that the randomly initialized non-IID model appears to diverge for some of the Europarl languages. \fExamining the difference between IID FL and non-IID FL, we see that IID FL performs better on average in three of the four settings. However, when initializing with a pretrained model, the performance gap narrows. Machine Translation Table 2 exhibits results on tuning a machine translation model on a domain speci\ufb01c dataset. We see that on the MTNT dataset, both FL algorithms actually outperform centralized learning (24.4 avg. BLEU for IID FL vs 23.6 for Centralized). The scores on Japanese are very similar for all models, possibly re\ufb02ecting the dif\ufb01culty of the task. On the UN corpus, we see again that the IID FL model performs best. Since the \ufb01ne-tuning task matches the original M2M-100 task, we can use the pretrained model directly as a baseline. In all cases, \ufb01ne-tuning shows an improvement (\ufb01rst row, No Training baseline). Note that our scores are not directly comparable to other work as we use a smaller training set. Text Classi\ufb01cation Table 3 shows results on text classi\ufb01cation. We see that when initialized randomly, non-IID FL shows a large drop in performance compared to the two other methods (i.e. more than 5 points worse than the Centralized method). Initializing with the pretrained model yields a modest though consistent improvement for all three models (80.4% accuracy vs 88.3% accuracy for Centralized).3 Furthermore, with a pretrained initialization the non-IID FL method scores become signi\ufb01cantly closer to the other two methods, with less than a two point difference between them (86.7% non-IID FL vs 88.3% IID FL). Discussion Our examination of multilingual FL indicates that performance is similar when pretrained models are used. Despite the fact that local models are averaged together, non-IID data partitioning (where each client sees only one language) has only a small impact on \ufb01nal multilingual performance, when using pretrained models. These \ufb01ndings suggest that, when possible, practitioners who need multilingual federated learning should employ pretrained models in order to gain the privacy bene\ufb01ts of federated learning, without taking much (if any) of a performance loss to do so. In several cases, we found that IID FL or nonIID FL could even outperform centralized learn3Note that although the setups are not the same (e.g. XGLUE is cross-lingual rather than multilingual) our scores are slightly higher than those reported in the original paper. ing. We leave investigation of this phenomena for future work but note a couple of possible explanations. First, FL with FederatedAveraging may have similar implicit regularization effects to checkpoint averaging, a common technique when using transformer models (noted in Vaswani et al. 2017, Edunov et al. 2018, etc.). 
Furthermore, there may be other regularization effects during federated \ufb01ne-tuning, as transformer training is known to be unstable and sensitive to optimization choices (Mosbach et al. 2020, Nguyen and Salazar 2019). Overall, our analysis shows that our conclusions hold for different multilingual models, on disparate NLP tasks, and across 13 different languages. We acknowledge that the languages used in this study are generally considered higher-resource, but expect that these conclusions will continue to hold as long as the pretrained model is effective on the target language (or language pairs, for MT). 5" + }, + { + "url": "http://arxiv.org/abs/2205.08124v1", + "title": "When to Use Multi-Task Learning vs Intermediate Fine-Tuning for Pre-Trained Encoder Transfer Learning", + "abstract": "Transfer learning (TL) in natural language processing (NLP) has seen a surge\nof interest in recent years, as pre-trained models have shown an impressive\nability to transfer to novel tasks. Three main strategies have emerged for\nmaking use of multiple supervised datasets during fine-tuning: training on an\nintermediate task before training on the target task (STILTs), using multi-task\nlearning (MTL) to train jointly on a supplementary task and the target task\n(pairwise MTL), or simply using MTL to train jointly on all available datasets\n(MTL-ALL). In this work, we compare all three TL methods in a comprehensive\nanalysis on the GLUE dataset suite. We find that there is a simple heuristic\nfor when to use one of these techniques over the other: pairwise MTL is better\nthan STILTs when the target task has fewer instances than the supporting task\nand vice versa. We show that this holds true in more than 92% of applicable\ncases on the GLUE dataset and validate this hypothesis with experiments varying\ndataset size. The simplicity and effectiveness of this heuristic is surprising\nand warrants additional exploration by the TL community. Furthermore, we find\nthat MTL-ALL is worse than the pairwise methods in almost every case. We hope\nthis study will aid others as they choose between TL methods for NLP tasks.", + "authors": "Orion Weller, Kevin Seppi, Matt Gardner", + "published": "2022-05-17", + "updated": "2022-05-17", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "main_content": "Introduction The standard supervised training paradigm in NLP research is to \ufb01ne-tune a pre-trained language model on some target task (Peters et al., 2018; Devlin et al., 2018; Raffel et al., 2019; Gururangan et al., 2020). When additional non-target supervised datasets are available during \ufb01ne-tuning, it is not always clear how to best make use of the supporting data (Phang et al., 2018, 2020; Liu et al., 2019b,a; Pruksachatkun et al., 2020a). Although 1We make our code publicly available at https:// github.com/orionw/MTLvsIFT. * Corresponding author, oweller2@jhu.edu there are an exponential number of ways to combine or alternate between the target and supporting tasks, three predominant methods have emerged: (1) \ufb01ne-tuning on a supporting task and then the target task consecutively, often called STILTs (Phang et al., 2018); (2) \ufb01ne-tuning on a supporting task and the target task simultaneously (here called pairwise multi-task learning, or simply MTL); and (3) \ufb01ne-tuning on all N available supporting tasks and the target tasks together (MTLAll, N > 1). 
Application papers that use these methods generally focus on only one method (S\u00f8gaard and Bingel, 2017; Keskar et al., 2019; Glavas and Vuli\u00b4 c, 2020; Sileo et al., 2019; Zhu et al., 2019; Weller et al., 2020; Xu et al., 2019; Chang and Lu, 2021), while a limited amount of papers consider running two. Those that do examine them do so with a limited number of con\ufb01gurations: Phang et al. (2018) examines STILTS and one instance of MTL, Changpinyo et al. (2018); Peng et al. (2020); Schr\u00f6der and Biemann (2020) compare MTL with MTLAll, and Wang et al. (2018a); Talmor and Berant (2019); Liu et al. (2019b); Phang et al. (2020) use MTLAll and STILTs but not pairwise MTL. In this work we perform comprehensive experiments using all three methods on the 9 datasets in the GLUE benchmark (Wang et al., 2018b). We surprisingly \ufb01nd that a simple size heuristic can be used to determine with more than 92% accuracy which method to use for a given target and supporting task: when the target dataset is larger than the supporting dataset, STILTS should be used; otherwise, MTL should be used (MTLAll is almost universally the worst of the methods in our experiments). To con\ufb01rm the validity of the size heuristic, we additionally perform a targeted experiment varying dataset size for two of the datasets, showing that there is a crossover point in performance between the two methods when the dataset sizes are equal. We believe that this analysis will help NLP researchers to make better decisions when choosing arXiv:2205.08124v1 [cs.CL] 17 May 2022 \fFigure 1: Results comparing intermediate \ufb01ne tuning (STILTs) vs multi-task learning (MTL). Numbers in cells indicate the absolute percent score difference on the primary task when using MTL instead of STILTs (positive scores mean MTL is better and vice versa). The colors indicate visually the best method, showing a statistically signi\ufb01cant difference from the other from using using a two-sided t-test with \u03b1 = 0.1. Numbers in red indicate the cells where the size heuristic does not work. Datasets are ordered in descending size (WNLI is the smallest). a TL method and will open up future research into understanding the cause of this heuristic\u2019s success. 2 Experimental Settings Dataset Suite To conduct this analysis, we chose to employ the GLUE dataset suite, following and comparing to previous work in transfer learning for NLP (Phang et al., 2018; Liu et al., 2019b). Training Framework We use Huggingface\u2019s transformers library (Wolf et al., 2019) for accessing the pre-trained encoder and for the base training framework. We extend this framework to combine multiple tasks into a single PyTorch (Paszke et al., 2017) dataloader for MTL and STILTs training. Many previous techniques have been proposed for how to best perform MTL (Raffel et al., 2019; Liu et al., 2019b), but a recent paper by Gottumukkala et al. (2020) compared the main approaches and showed that a new dynamic approach provides the best performance in general. We implement all methods described in their paper and experimented with several approaches (sampling by size, uniformity, etc.). Our initial results found that dynamic sampling was indeed the most effective on pairwise tasks. Thus, for the remainder of this paper, our MTL framework uses dynamic sampling with heterogeneous batch schedules. For consistency, we train the STILTs models using the same code, but include only one task in the dataloader instead of multiple. 
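As a rough illustration of the kind of multi-task dataloader described above, the sketch below yields heterogeneous batches (each batch drawn from a single task) with tasks sampled in proportion to their remaining data. This is a simplification for exposition only: the experiments use the dynamic sampling schedule of Gottumukkala et al. (2020) rather than this proportional rule.

```python
# Illustrative multi-task batch scheduler: each yielded batch comes from one task,
# and tasks with more remaining data are sampled more often. Not the exact dynamic
# sampling method used in the paper.

import random
from typing import Dict, Iterator, List, Tuple

def multitask_batches(task_data: Dict[str, List[dict]],
                      batch_size: int = 16,
                      seed: int = 0) -> Iterator[Tuple[str, List[dict]]]:
    """Yield (task_name, batch) pairs until every task's data is consumed once."""
    rng = random.Random(seed)
    queues = {task: rng.sample(examples, len(examples)) for task, examples in task_data.items()}
    while queues:
        tasks, weights = zip(*[(task, len(queue)) for task, queue in queues.items()])
        task = rng.choices(tasks, weights=weights, k=1)[0]   # proportional-to-size sampling
        batch, queues[task] = queues[task][:batch_size], queues[task][batch_size:]
        if not queues[task]:
            del queues[task]
        yield task, batch

if __name__ == "__main__":
    toy = {"MNLI": [{"idx": i} for i in range(64)], "RTE": [{"idx": i} for i in range(16)]}
    print(sum(1 for task, _ in multitask_batches(toy) if task == "MNLI"))  # ~4x more MNLI batches
```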
The MTLAll setup uses the same MTL code, but includes all 9 GLUE tasks. We train each model on 5 different seeds to control for randomness (Dodge et al., 2020). For the STILTs method, we train 5 models with different seeds on the supporting task and then choose the best of those models to train with 5 more random seeds on the target task. For our \ufb01nal reported numbers, we record both the average score and the standard deviation, comparing the MTL approach to the STILTs approach with a two-sample t-test. In total, we train 9 \u22178 \u22175 = 360 different MTL versions of our model, 5 MTLAll models, and 9 \u22175 + 9 \u22175 = 90 models in the STILTs setting. Model We use the DistilRoBERTa model (pretrained and distributed from the transformers library similarly to the DistilBERT model in Sanh et al. (2019)) for our experiments, due to its strong performance and ef\ufb01ciency compared to the full model. For details regarding model and compute parameters, see Appendix A. Our purpose is not to train the next state-of-the-art model on the GLUE task and thus the absolute scores are not immediately relevant; our purpose is to show how the different methods score relative to each other. We note that we conducted the same analysis in Fig\fFigure 2: Experiments validating the size heuristic on the (QNLI, MNLI) task pair. The right \ufb01gure shows training on 100% of the QNLI training set while the left \ufb01gure shows training with 50%. The x-axis indicates the amount of training data of the supporting task (MNLI) relative to the QNLI training set, arti\ufb01cially constrained (e.g. 0.33 indicates that the supporting task is a third of the size of the QNLI training set, etc.). The blue line indicates MTL results while the green line indicates the STILTs method. Error bars indicate a 90% CI using 5 random seeds. ure 1 for BERT and found the same conclusion (see Appendix D), showing that our results extend to other pre-trained transformers. 3 Results We provide three different analyses: a comparison of pairwise MTL vs STILTs, experiments varying dataset size to validate our \ufb01ndings, and a comparison of pairwise approaches vs MTLAll. MTL vs STILTs We \ufb01rst calculate the absolute score matrices from computing the MTL and STILTs method on each pair of the GLUE dataset suite, then subtract the STILTs average score matrix from the MTL one (Figure 1). Thus, this shows the absolute score gain for using the MTL method instead of the STILTs method (negative scores indicate that the STILTs method was better, etc.). However, this matrix does not tell us whether these differences are statistically signi\ufb01cant; for this we use a two-sample t-test to compare the mean and standard deviation of each method for a particular cell. Scores that are statistically signi\ufb01cant are color coded green (if STILTs is better) or blue (if MTL is better), whereas they are coded grey if there is no statistically signi\ufb01cant difference. We note that although some differences are large (e.g. a 9 point difference on (WNLI, STS-B)) the variance of these results is high enough that there is no statistically signi\ufb01cant difference between the STILTs and MTL score distributions. We order the datasets in Figure 1 by size, to visually illustrate the trend. The number of green cells in a row is highly correlated with the size of the dataset represented by that row. For example, MNLI is the largest and every cell in the MNLI row is green. 
QQP is the 2nd largest and every cell in its row is also green, except for (QQP, MNLI). The smallest dataset, WNLI, has zero green cells. We can summarize these results with the following size heuristic: MTL is better than STILTs when the target task has fewer training instances than the supporting task and vice versa. In fact, if we use this heuristic to predict which method will be better we find that it predicts 49/53 significant cells, which is equivalent to 92.5% accuracy. To more clearly visualize which cells it fails to predict accurately, those four cells are indicated with red text. We note that this approach does not hold on the cells that have no statistically significant difference between the two methods: but for almost every significant cell, it does. Unfortunately, there is no clear answer to why those four cells are misclassified. Three of the four misclassified cells come when using the MRPC dataset as the target task, but there is no obvious reason why it fails on MRPC. We recognize that this size heuristic is not an absolute law, but merely a good heuristic that predicts the better method with high accuracy: there are still other pieces to this puzzle that this work does not consider, such as dataset similarity.

Approach | Mean | WNLI | STS-B | SST-2 | RTE | QQP | QNLI | MRPC | MNLI | CoLA
MTLAll | 73.3 | 54.4 | 86.6 | 90.8 | 67.4 | 80.2 | 84.9 | 85.4 | 74.2 | 35.8
Avg. STILTs | 75.8 | 45.0 | 87.5 | 92.1 | 61.9 | 88.9 | 89.4 | 87.4 | 84.0 | 46.4
Avg. MTL | 77.3 | 56.1 | 87.4 | 91.9 | 66.0 | 85.6 | 87.5 | 87.4 | 80.8 | 52.7
Avg. S.H. | 78.3 | 56.1 | 87.7 | 92.3 | 66.5 | 89.0 | 89.6 | 87.3 | 84.0 | 52.1
Pairwise Oracle | 80.7 | 57.7 | 88.8 | 92.9 | 76.0 | 89.5 | 90.6 | 90.2 | 84.3 | 56.5
Table 1: Comparison of MTLAll to the pairwise STILTs or MTL approaches. "S.H." stands for size heuristic. Pairwise Oracle uses the best supplementary task for the given target task using the best pairwise method (STILTs or MTL). All scores are the average of 5 random seeds. We find that on almost every task, pairwise approaches are better than MTLAll. Bold scores indicate the best score in the column, excluding the oracle.

Dataset Size Experiments In order to validate the size heuristic further we conduct controlled experiments that alter the amount of training data of the supporting task to be above and below the target task. We choose to test QNLI primary with MNLI supporting, as they should be closely related and thus have the potential to disprove this heuristic. We subsample data from the supporting task so that we have a proportion K of the size of the primary task (where K ∈ {1/3, 1/2, 1, 2, 3}). By doing so, we examine whether the size heuristic holds while explicitly controlling for the supporting task's size. Other than dataset size, all experimental parameters are the same as in the original comparison (§2). We also test whether these results hold if the size of the primary dataset is changed (e.g., perhaps there is something special about the current size of the QNLI dataset). We take the same pair and reduce the training set of QNLI in half, varying MNLI around the new number of instances in the QNLI training set as above (e.g. 1/3rd, 1/2, etc.). The results of these two experiments are in Figure 2. We can see that as the size of the supporting dataset increases, MTL becomes more effective than STILTs. Furthermore, we find that when both datasets are equal sizes the two methods are statistically similar, as we would expect from the size heuristic (Support Task Proportion=1.0).
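Stated as code, the heuristic these controlled experiments probe is simply a comparison of training-set sizes (the GLUE counts in the example below are approximate and given only for illustration):

```python
# The size heuristic as a one-line decision rule: prefer pairwise MTL when the
# supporting dataset is larger than the target dataset, and STILTs otherwise.

def choose_transfer_method(num_target_examples: int, num_supporting_examples: int) -> str:
    if num_supporting_examples > num_target_examples:
        return "MTL"      # joint training lets the larger supporting task help the target
    return "STILTs"       # fine-tune on the supporting task first, then on the target

if __name__ == "__main__":
    # Approximate GLUE training sizes: RTE ~2.5k, MNLI ~393k.
    print(choose_transfer_method(2_500, 393_000))   # RTE target, MNLI support -> "MTL"
    print(choose_transfer_method(393_000, 2_500))   # MNLI target, RTE support -> "STILTs"
```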
Thus, the synthetic experiments corroborate our main \ufb01nding; the size heuristic holds even on controlled instances where the size of the training sets are arti\ufb01cially manipulated. Pairwise TL vs MTLAll We also experiment with MTLAll on GLUE (see Appendix B for implementation details). We \ufb01nd that the average pairwise approach consistently outperforms the MTLAll method, except for the RTE task (Table 1) and using the best supporting task outperforms MTLAll in every case (Pairwise Oracle). Thus, although MTLAll is conceptually simple, it is not the best choice w.r.t. the target task score: on a random dataset simply using STILTs or MTL will likely perform better. Furthermore, using the size heuristic on the average supplementary task increases the score by 5 points over MTLAll (78.3 vs 73.3). 4 Related Work A large body of recent work (S\u00f8gaard and Bingel, 2017; Vu et al., 2020; Bettgenh\u00e4user et al., 2020; Peng et al., 2020; Poth et al., 2021) exists that examines when these transfer learning methods are more effective than simply \ufb01ne-tuning on the target task. Oftentimes, these explanations involve recognizing catastrophic forgetting (Phang et al., 2018; Pruksachatkun et al., 2020b; Wang et al., 2018a) although recent work has called for them to be reexamined (Chang and Lu, 2021). This paper is orthogonal to those, as we examine when you should choose MTL or STILTs, rather than when they are more effective than the standard \ufb01ne-tuning case (in fact, these strategies could be combined to predict transfer and then use the best method). As our task is different, theoretical explanations for how these methods work in relation to each other will need to be explored in future work. Potential theories suggested by our results are discussed in Appendix C, and are left to guide those efforts. 5" + }, + { + "url": "http://arxiv.org/abs/2204.05076v1", + "title": "End-to-End Speech Translation for Code Switched Speech", + "abstract": "Code switching (CS) refers to the phenomenon of interchangeably using words\nand phrases from different languages. CS can pose significant accuracy\nchallenges to NLP, due to the often monolingual nature of the underlying\nsystems. In this work, we focus on CS in the context of English/Spanish\nconversations for the task of speech translation (ST), generating and\nevaluating both transcript and translation. To evaluate model performance on\nthis task, we create a novel ST corpus derived from existing public data sets.\nWe explore various ST architectures across two dimensions: cascaded (transcribe\nthen translate) vs end-to-end (jointly transcribe and translate) and\nunidirectional (source -> target) vs bidirectional (source <-> target). We show\nthat our ST architectures, and especially our bidirectional end-to-end\narchitecture, perform well on CS speech, even when no CS training data is used.", + "authors": "Orion Weller, Matthias Sperber, Telmo Pires, Hendra Setiawan, Christian Gollan, Dominic Telaar, Matthias Paulik", + "published": "2022-04-11", + "updated": "2022-04-11", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.SD", + "eess.AS" + ], + "main_content": "Introduction Over half of the world\u2019s population is estimated to be bilingual. 
2 Those that know multiple languages are prone to code switch, i.e., to interchangeably use words and phrases from two (or more) languages in situations such as casual dialog, while traveling abroad, or simply to use a word they \ufb01nd more \ufb01tting (Myers-Scotton and Ury, 1977; Heredia and Altarriba, 2001). In CS, the base language is referred to as the matrix language while the contributing language is called the embedded language (Myers-Scotton, 1995), where speakers often use the matrix language the majority of the time. Code switched language is challenging to both automatic speech recognition (ASR) and machine 1We make instructions and extra data needed to construct our CS data set available at https://github.com/apple/ ml-code-switched-speech-translation 2BBC: https://bbc.in/3jgwzZ2 * Work done as an intern. Transcript (CS) Translation (En) Ac\u00e1 te tiene como constantemente escribiendo papers y reviews no cierto Here they're like constantly writing papers and reviews right Audio Figure 1: An example instance of the joint speech recognition and translation task for code-switching (CS). Red indicates English words in the transcript and their corresponding words in the translation, whereas blue indicates Spanish words in the transcript and their corresponding translation. translation (MT) and therefore also to the composite task of speech translation (ST). While a rich amount of prior works exist on CS in the context of ASR (Lyu et al., 2006; Ahmed and Tan, 2012; Vu et al., 2012; Johnson et al., 2017; Yue et al., 2019) and MT (Sinha and Thakur, 2005; Winata et al., 2021; Zhang et al., 2021; Yang et al., 2020), there is little prior work in the context of ST. The aforementioned challenges to ASR, MT and ST arise largely due to the lack of CS data as well as the often monolingual nature of ASR systems, and of encoders of MT and ST systems. The lack of CS data is often addressed via synthetic data, e.g. as seen in Xu and Yvon (2021); Nakayama et al. (2019). Instead, in this work we derive two novel natural CS datasets from existing public corpora. CS is also dif\ufb01cult for modeling due to its mixed multilingual nature. In order to support multiple languages on the utterance level, automatic language identi\ufb01cation (LID) is often performed before applying monolingual systems on a per utterance basis. However, this does not address withinutterance CS, where embedded foreign words and phrases result in recognition errors for monolingual ASR systems, making multilingual models an attractive alternative. Furthermore, CS increases arXiv:2204.05076v1 [cs.CL] 11 Apr 2022 \fspeech recognition errors, signi\ufb01cantly increasing the problem of error propagation (Ruiz and Federico, 2014) in cascaded ST systems, where MT is then performed on the erroneous ASR output. Thus, multilingual end-to-end (E2E) ST systems may be especially appropriate to tackle CS speech. As both the transcript and translation are important in many CS ST use cases, we focus on the joint transcription and translation ST setting (Anastasopoulos and Chiang, 2018; Weller et al., 2021), extending it to CS data. We follow the methodology of these previous works and focus on the triangle E2E ST model to jointly generate both a transcript of the CS utterance and a translation of that utterance into text containing only one language (c.f. Figure 1 for an illustration). 
We perform a comparison along two axes: (1) comparing this E2E model to the standard cascaded ST systems, and (2) exploring the difference between bilingual systems and primarily monolingual systems gated by utterance-level LID. Following recent work that has shown the effectiveness of pre-trained models for ST (Li et al., 2020; G\u00b4 allego et al., 2021), we use Wav2Vec 2.0 (Baevski et al., 2020) as our encoder model and the multilingual mBART 50-50 (Tang et al., 2020) as our decoder model. We also make several modeling contributions in order to use these pre-trained models for joint transcription and translation. For the E2E ST model, we extend Li et al. (2020) to adapt the mBART decoder to jointly produce both transcription and translation. Furthermore, we introduce a triangle E2E ST model with a shared bilingual decoder and show that this improves transcription and translation accuracy. Our model analysis shows a surprising amount of robustness to CS speech, with the amount (or proportion) of CS words in a sentence not affecting model accuracy. Overall, we observe strong accuracy scores (WER, BLEU) on the CS task, both without CS training data and in the low-resource setting. We believe this opens the door to new and exciting progress in this area. 2 Related Work Code-switching in NLP has seen a rise of interest in recent years, including a dedicated workshop starting in 2014 (Diab et al., 2014) and still ongoing (Solorio et al., 2021). CS in machine translation also has a long history (Le F\u00b4 eal, 1990; Climent et al., 2003; Sinha and Thakur, 2005; Johnson et al., 2017; Elmadany et al., 2021; Xu and Yvon, 2021), but has seen a rise of interest with the advent of large multilingual models such as mBART (Liu et al., 2020) or mT5 (Xue et al., 2020; Gautam et al., 2021; Jawahar et al., 2021). Due to the lack of available CS data and the ease of single-word translation, most of these recent related MT works have synthetically created CS data for either training or testing by translating one or more of the words in a sentence (Song et al., 2019; Nakayama et al., 2019; Xu and Yvon, 2021; Yang et al., 2020). We differ from those works by using naturally occurring CS data (Section 3) which models the real-world CS distribution rather than arbitrary language mixing. For spoken input, as present in ASR and ST, synthetically creating realistic CS data is more challenging than it is for MT. However, dedicated ASR corpora that contain natural CS exist, including the Bangor Miami (Deuchar et al., 2014), SEAME (Zeng et al., 2018), and the recent largescale ASRU 2019 task (Shi et al., 2020). These corpora generally do not contain translations of the ASR annotations, since they were designed for the ASR task only. However, there exist two exceptions, which we leverage to derive our ST CS data set, described in Section 3. There also exists a wide range of prior modeling work on CS in ASR models, for a variety of strategies (Lyu et al., 2006; Ahmed and Tan, 2012; Seki et al., 2018; Luo et al., 2018; Lu et al., 2020; Du et al., 2021; Zhang et al., 2021). However, the recently introduced large multilingual models for speech, such as Wav2Vec, Wav2Vec 2.0, Schneider et al. (2019); Baevski et al. (2020) and HuBERT (Hsu et al., 2021), are still underexplored with regards to their CS performance. Handling mixed languages also requires understanding what languages are being spoken. 
Systems that support mixed language input therefore require some form of automatic LID \u2013 either as an explicit component on the utterance (Mabokela et al., 2014; Xu and Yvon, 2021) or word-level (Lyu and Lyu, 2008a; Nakayama et al., 2019), or implicitly learned by the underlying model(s) via a multi-task learning setup (Lyu and Lyu, 2008b; Watanabe et al., 2017; Hou et al., 2020). In our work, we leverage both, exploring utterance-level LID components as well as implicit learning of utterance and word level LID. In both MT and ASR, prior publications have \falso included the study of intra-word mixing of languages (Y\u0131lmaz et al., 2018; Mager et al., 2019), a phenomenon we do not explore in our work. Finally, our work builds off of advances made by G\u00b4 allego et al. (2021); Li et al. (2020) that show that combining large multilingual speech and text models provide consistent improvements. We differ however, by exploring ST in the novel CS setting. 3 Task Description & Data Used 3.1 Task Description We investigate systems suitable for bilingual English/Spanish conversational scenarios where some of the English and Spanish utterances may include some amount of words and phrases of the respective other language. That is, we are focusing on ST systems that can automatically and seamlessly handle utterances that are either purely English, purely Spanish, English with some Spanish words/phrases embedded or Spanish with some English words/phrases embedded. For transcription, we aim for models to generate the exact mixedlanguage transcript with each word written in its original spoken language. For translation, we aim to generate purely monolingual translations. See Figure 1 for an example. The experiments and results presented in this paper focus on translating into monolingual English only due to data availability, although we expect similar results for Spanish translations, due the bidirectional model training on standard ST data (Appendix D). We will leave it to future work to more closely examine translation into Spanish \u2013 or even a third language not present in the original utterance. It must be noted that word-level language categorization is sometimes ambiguous. A word in one language may also be considered part of a different language. That is for example true for loan words (Baugh, 1935), e.g., e-mail in many non-English languages such as German. This issue can be further complicated by attempting to categorize what language named entities fall under: is a Spanish speaker saying Joe Biden or New York code-switching? Although we acknowledge the complexity of separating words between languages, our work, following previous work (Modipa et al., 2013; Nakayama et al., 2018), uses data annotated by crowd-sourced workers, counting any sentence annotated as having a least one foreign word as being CS. This approach also makes intuitive sense for speech, as the CS words (classi\ufb01ed as foreign) will have phonemes that will align more with the embedded language, while the non-CS phonemes will align more with the matrix language. 3.2 Code-Switched Speech Datasets We use the Fisher (Cieri et al., 2004) and Bangor Miami3 (Deuchar et al., 2014) corpora for CS data, as they are the only publicly available corpora we are aware of that contains both annotated CS ASR transcripts, as well as translations of those transcripts (Table 1). Although these corpora contain the translations, to our knowledge they have not been used to study CS translation before. 
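A minimal sketch of this utterance-level convention, with a hypothetical tag format (one language code per word) and our own function names; the actual corpora use their own annotation schemes.

```python
from typing import List, Tuple

def is_code_switched(word_langs: List[str]) -> bool:
    """True if the word-level language tags mix more than one language."""
    return len(set(word_langs)) > 1

def split_corpus(tagged_utterances: List[List[str]]) -> Tuple[list, list]:
    """Partition utterances into a code-switched subset and a monolingual subset."""
    cs = [u for u in tagged_utterances if is_code_switched(u)]
    mono = [u for u in tagged_utterances if not is_code_switched(u)]
    return cs, mono

# Tags for an utterance like the Figure 1 example, where "papers" and
# "reviews" are English words embedded in a Spanish sentence.
example = ["es", "es", "es", "es", "es", "en", "es", "en", "es"]
print(is_code_switched(example))  # True: at least one foreign-tagged word
```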
The Miami corpus was collected for linguistic code-switching analysis and gathered from recorded conversations between bilingual English/Spanish speakers in casual settings, primarily in Miami, Florida. These conversations include a high proportion of naturally occurring CS speech. However, in order to collect these naturally occurring conversations, the participants were recorded throughout their day using a small digital recorder worn on belts and lapels. Due to this, the Miami audio contains lower audio quality and much noiser background conditions than standard ASR datasets. The Fisher dataset was collected for ASR and was gathered by pairing sets of Spanish speakers, located in the U.S. and Canada, to each other through phone calls. Although the Fisher dataset is not a CS focused dataset, we found that it contains a large amount of (annotated) CS utterances, due to the speakers being situated in English-speaking contexts. The recording method (phone recordings in 2004) makes this a noisy ASR dataset, although signi\ufb01cantly less so than Miami. To prepare the data for the joint ST CS task, we separate the data with CS utterances (utterances that contain at least one word annotated as CS) from those with none, creating a CS set and a monolingual set for each dataset. We note that for the Miami dataset the monolingual split contains both English-only and Spanish-only monolingual audio. As the Miami corpus was also annotated with both ambiguous and unambiguous code-switching, we only include utterances in the CS set if the annotations were tagged as unambiguously code-switched (i.e. excluding words such as ok, aha, and named entities). The Fisher CS dataset consists of majority (matrix4) Spanish 77% 3Online audio \ufb01les can be found at https://biling. talkbank.org/access/Bangor/Miami.html 4For simplicity, we use the terms majority/matrix language \fDataset Raw Transcript Clean Transcript Fisher un show <\\foreign>, a mi me gusta ver mucho estos shows <\\foreign> de la medicina forense un show, a mi me gusta ver mucho estos shows de la medicina forense Miami hay una [/] una que dice (.) it\u2019s@s:eng \ufb01ve@s:eng o\u2019clock@s:eng somewhere@s:eng hay una una que dice it\u2019s \ufb01ve o\u2019clock somewhere Table 1: Examples of the raw and clean data for Miami and Fisher. Text in red indicates English text while blue text indicates Spanish. The Miami dataset uses the CHAT annotation format (MacWhinney and Snow, 1990). Figure 2: Histogram of the proportions of code-switched words in a sentence for the CS test sets (Fisher on the left, Miami on the right). For example, 0.2 means that 20% of the words in the sentence are CS. Dataset Split Type Hours Instances Miami Train Mono 3.60 6,489 Test CS 2.82 3,296 Mono 3.61 6,490 Fisher Train CS 13.28 7,398 Mono 157.3 130,600 Dev CS 1.45 821 Test CS 1.63 986 Mono 12.15 10,595 Table 2: Dataset Statistics. CS stands for CodeSwitched and Mono for Monolingual. of the time, English-majority 17%, and 6% evenly split between English/Spanish. For the Miami CS dataset the languages are more evenly distributed, with 51% majority-Spanish, 35% majority-English, and 9% evenly split.5 The Fisher data consists of three evaluation sets (Dev/Dev2/Test) that together contain approxiand minority/embedded language interchangeably. 
5To make these CS datasets reproducible for the broader ST community, we provide a \ufb01le with instructions for gathering the data (as Fisher is part of the LDC library) as well as \ufb01les containing a mapping between the original dataset indices to the CS data splits. mately a thousand instances of CS with corresponding translations in monolingual English. We combine them into a Fisher CS Test set. The Fisher dataset also contains a large amount of CS utterances in the training set (appx. 8k or 15 hrs) which we use as \ufb01ne-tuning (90%) and validation data (10%). As the Miami dataset contains no splits, we use all CS data for the test set and split the monolingual data into even train/test sets. We include basic summary statistics in Table 2. Note that when compared to standard ST datasets, these CS ST datasets would be considered low-resource settings. In Figure 2, we see the proportion of CS words in a sentence for the CS test sets. We note that there are no sentences with more than 50% of the words CS since the minority language cannot be more than 50% by de\ufb01nition. For instances that are exactly 50% code switched their language identi\ufb01cation was chosen by randomly selecting either English or Spanish. We see that for the Fisher dataset there are more sentences with less than 15% CS with a small uptick around 50%. For Miami it is more uniform, with a large amount of sentences being approximately 25% CS. To prepare our models for Spanish-English CS, \fC) CASCADE BIDIRECT (1.1B) G) E2E BIDIRECT SHARED (0.6B) F) E2E BIDIRECT BY TASK (1.1B) A) CASCADE UNIDIRECT (2.4B) Transcript Translation Transcript Translation E) E2E BIDIRECT BY LANG (1.1B) LID Classi\ufb01er Wav2Vec2 a Wav2Vec2 b mBARTb mBARTd D) E2E UNIDIRECT (2.2B) LID Classi\ufb01er Wav2Vec2 a mBARTa mBARTb Transcript Translation Wav2Vec2 b mBARTc mBARTd Translation Transcript Transcript Translation Wav2Vec2 a mBARTb Wav2Vec2 a mBARTa mBARTb Transcript Translation LID Classi\ufb01er Wav2Vec2 a mBARTa mBARTb Transcript Translation Wav2Vec2 a mBARTb mBARTa Translation Transcript B) CASCADE UNI SHARED ENC (1.8B) Transcript Translation Transcript Translation LID Classi\ufb01er Wav2Vec2 a Wav2Vec2 a mBARTb mBARTd Wav2Vec2 a mBARTa mBARTa Transcript Translation More Shared Parameters Fewer Shared Parameters mBARTa mBARTc mBARTa mBARTc mBARTa Figure 3: Illustration of model architectures, with cascaded architectures on the top and E2E architectures on the bottom. Left to right shows the progression of models with the least and the most amount of shared parameters respectively. Subscripts are present to indicate shared modules within each model. Dotted lines indicate a decision where only one path is chosen using the LID. Note that there is no cascade equivalent to the BIDIRECTIONAL E2E SHARED model, as the cascaded model by de\ufb01nition generates transcript then translation separately. The numbers in parentheses stands for the number of model parameters in billions. we use the CoVoST (Wang et al., 2020a,b) and MuST-C (Cattoni et al., 2019) datasets for standard ST training, as CoVoST contains only Es\u2212 \u2192En and MuST-C contains only En\u2212 \u2192Es. Although high scores on these datasets are not our primary target, we note that our scores come close to or improve the state of the art (SoTA) on these tasks (see Appendix A, Table 9) albeit with different data used in training, showing that our base ST models are representative of current SoTA techniques. 
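The quantities behind Figure 2 can be made concrete with a short sketch: the CS proportion is the share of words in the embedded language, so it can never exceed 0.5, and exact 50/50 utterances have their matrix language drawn at random, as noted above. The tag format and names are our own assumptions.

```python
import random
from collections import Counter
from typing import List

def matrix_language(word_langs: List[str], rng: random.Random) -> str:
    """Majority language of the utterance; exact 50/50 ties broken at random."""
    counts = Counter(word_langs)
    top = max(counts.values())
    winners = sorted(lang for lang, c in counts.items() if c == top)
    return winners[0] if len(winners) == 1 else rng.choice(winners)

def cs_proportion(word_langs: List[str], matrix: str) -> float:
    """Share of words not in the matrix language (at most 0.5 by definition)."""
    embedded = sum(1 for lang in word_langs if lang != matrix)
    return embedded / len(word_langs)

rng = random.Random(0)
tags = ["es", "es", "en", "es"]      # one English word embedded in Spanish
m = matrix_language(tags, rng)
print(m, cs_proportion(tags, m))     # es 0.25
```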
4 Experimental Settings 4.1 Models Joint Transcript/Translation Models Many different types of E2E models exist for joint transcript/translation ST (Sperber and Paulik, 2020). Here, we focus on the triangle E2E architecture due to its strong performance in previous work (Anastasopoulos and Chiang, 2018; Sperber et al., 2020). Following recent work (G\u00b4 allego et al., 2021; Li et al., 2020) we use pre-trained modules as a starting place for our ST model, using a Wav2Vec 2.0 (Baevski et al., 2020) encoder and a mBART 50-50 (Liu et al., 2020; Tang et al., 2020) decoder. Because our task involves joint ASR and ST, we need to adapt the pre-trained decoder to work with the E2E triangle architecture. Speci\ufb01cally, the triangle model\u2019s second decoder computes cross attention separately over both the \ufb01rst decoder and the encoder states. We place an additional crossattention layer after each encoder-attention layer in mBARTs decoder blocks, initializing them with the pre-trained encoder-attention weights. To make sure these weights converge properly, we freeze the entire model for approximately the \ufb01rst epoch while training only the bridge and additional cross attention layers (c.f. Appendix A). As described in Section 3, our task involves modeling intra-sentence CS. This means that any model used for this task must either explicitly or implicitly learn to model the language of each word in the sentence. Furthermore, as more than one language is being modeled, each sub-component of the model can either be unidirectional or bidirectional. We \fcan thus categorize potential models by how much information is shared within the parameters: the least shared models would be unidirectional and joined together by explicit LID, whereas the most shared would be bidirectional models that learn the LID implicitly. Models and their categorization along this scale are shown in Figure 3. For cascade models, the most basic would be separate unidirectional cascaded models joined by an LID model. The LID model will explicitly decide what the matrix language is and send the utterance to the model that is best equipped to handle that language (Figure 3A). Note that this approach may suffer from error propagation issues due to incorrect LID. A more parameter-shared version of this model is to make the cascaded model encoder shared between both unidirectional models (Figure 3B). Finally, we can examine a bidirectional cascade model that shares each component across both languages. This architecture implicitly learns to model the language of the input, removing the need for an explicit LID model (Figure 3C). We also examine similar analogues for the E2E triangle model: unidirectional models joined by LID (Figure 3D) and a bidirectional model with LID and a shared encoder (Figure 3E). We can also use the standard triangle model (see Anastasopoulos and Chiang (2018) for implementation details) that includes one encoder and two decoders (one for each sub-task) (Figure 3F). Furthermore, we propose to alter the standard triangle model and share both decoder parameters for both languages with a joint bidirectional decoder (Figure 3G, note that the cascade model cannot do this due to the de\ufb01nition of the cascade). 
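The E2E triangle variants above all rely on a second (translation) decoder that attends both to the speech encoder states and to the transcript decoder's states, with the added cross-attention layer described in Section 4.1. The block below is a rough conceptual sketch, not the authors' implementation: dimensions are arbitrary, causal masking is omitted, and the initialization of the extra cross-attention from pre-trained encoder-attention weights is skipped.

```python
import torch
import torch.nn as nn

# One simplified translation-decoder block of a triangle model: self-attention,
# cross-attention over the speech encoder, then an added cross-attention over
# the transcript decoder's states. Masks and pre-trained initialization omitted.
class TriangleDecoderBlock(nn.Module):
    def __init__(self, d_model: int = 256, n_heads: int = 4):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.enc_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.first_dec_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.ReLU(),
                                 nn.Linear(4 * d_model, d_model))
        self.norms = nn.ModuleList([nn.LayerNorm(d_model) for _ in range(4)])

    def forward(self, x, encoder_states, transcript_states):
        x = self.norms[0](x + self.self_attn(x, x, x)[0])
        x = self.norms[1](x + self.enc_attn(x, encoder_states, encoder_states)[0])
        x = self.norms[2](x + self.first_dec_attn(x, transcript_states,
                                                  transcript_states)[0])
        return self.norms[3](x + self.ffn(x))

block = TriangleDecoderBlock()
speech = torch.randn(2, 50, 256)       # encoded audio frames
transcript = torch.randn(2, 12, 256)   # states from the transcript decoder
target = torch.randn(2, 9, 256)        # translation decoder input
print(block(target, speech, transcript).shape)  # torch.Size([2, 9, 256])
```

In the E2E BIDIRECT SHARED variant (Figure 3G), a single decoder stack of this kind serves both languages rather than being duplicated per direction.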
By doing so, we hope to provide an inductive bias for the model to more easily handle code-switched data, as the weights of that decoder will already be used to handling multiple languages for both tasks (compared to the bidirectional cascade model, which only shares multilingual parameters for each task of transcript and translation). Language Identi\ufb01cation Model We train the language identi\ufb01cation (LID) model to identify the matrix language. For consistency with our other models (and similar to concurrent work, e.g. Tjandra et al. (2021)), we use a pre-trained Wav2Vec2 along with a classi\ufb01er layer to predict whether the utterance is majority Spanish or majority English. We train the model in the same fashion as the joint transcription and translation models (Section 4.1 and Appendix A) but train on the LID data instead. The data for the LID model was gathered by taking the CS data6 from the training set of the Fisher corpus and combining it with randomly sampled data from several different datasets in order to help the model learn despite the domain of the audio. We use MuST-C English audio, CoVoST English audio, CoVoST Spanish audio, and the monolingual Spanish audio from the training sets of Fisher and Miami. We found that upsampling the CS training set by 2 and using the same amount of data (2x the number of the CS set) for CoVoST and MuST-C provided the best results: 98%+ accuracy on CoVoST and MuST-C, 89% on the Fisher CS validation and test sets, and 72% on the Miami CS test set (due to the noisy data). As a large proportion of the CS data is close to 50% code-switched (see Figure 2), it becomes more dif\ufb01cult for the model to predict the matrix language correctly. 4.2 Training Process and Evaluation For all dataset evaluations, we use word error rate (WER) and character error rate (CER) for the transcript and Charcut (CCT) (Lardilleux and Lepage, 2017) and sacreBLEU (Post, 2018) for the translation. However, we found that there was no difference in conclusions between each of the two metrics (WER vs CER and BLEU vs Charcut) and thus we only report BLEU/WER in the main text (see Appendix A for implementation details). For tables showing all metrics, see Appendix E. We evaluate our models on the Fisher and Miami test sets (with both CS-only and monolingual-only test sets) in two different settings: (1) without \ufb01netuning them on CS data (No-FT) and (2) after \ufb01netuning the already trained ST models on the Fisher CS Training set (FT). For models consisting of two monolingual sub-models we \ufb01ne-tune both on the CS data. During \ufb01ne-tuning we employ the same hyperparameters as in the original experiment, but perform early stopping on the Fisher CS Dev set. We use signi\ufb01cance tests to verify the reliability of our results (Koehn, 2004). We run bootstrap resampling tests against the best performing model, using \u03b1 = 0.05. More training parameters such as learning rates, etc. can be found in Appendix A. 6For the No-FT case (Section 4.2), we exclude the CS data when training the LID model. \fNot Fine-Tuned Fine-Tuned CS Mono. CS Mono. 
Models \u2193WER \u2191BLEU \u2193WER \u2191BLEU \u2193WER \u2191BLEU \u2193WER \u2191BLEU CASCADE UNIDIRECT 37.1 22.5 26.6 24.7 33.5 24.6 24.8 25.5 (-0.8) (-0.4) (-3.1) (+0.9) (-0.4) (0.0) (-1.0) (+0.2) CASCADE UNI SHARED ENC 36.0 21.6 25.6 24.3 31.2 25.4 25.6 24.8 (0.0) (+0.6) (0.0) (+0.5) (+0.1) (+0.2) (-0.3) (+0.1) E2E UNIDIRECT 36.6 22.3 26.7 25.0 33.4 24.4 25.3 25.5 (-0.9) (-0.1) (-3.5) (+1.0) (-0.2) (+0.1) (-1.4) (+0.4) E2E BIDIRECT BY LANG 37.0 23.4 27.2 25.0 36.7 22.8 27.3 25.0 (-0.9) (-0.1) (-1.9) (+0.5) (-0.8) (+0.2) (-2.0) (+0.4) Table 3: Comparison of Oracle vs Predicted LID results on the Fisher dataset. Numbers in parenthesis are the difference to the corresponding model with oracle LID. Note that the Oracle LID improves upon the Predicted LID in most cases. Conclusions are similar for the Miami corpus (see Appendix B Table 7) Not Fine-Tuned Fine-Tuned CS Mono. CS Mono. Model \u2193WER \u2191BLEU \u2193WER \u2191BLEU \u2193WER \u2191BLEU \u2193WER \u2191BLEU Fisher CASCADE UNIDIRECT 37.1 22.5 26.6 24.7 33.5 24.6 24.8 25.5 CASCADE UNI SHARED ENC 36.0 21.6 25.6 24.3 31.2 *25.4 25.6 24.8 CASCADE BIDIRECT 37.2 21.8 26.5 24.1 33.2 23.2 28.1 23.2 E2E UNIDIRECT 36.6 22.3 26.7 25.0 33.4 24.4 25.3 25.5 E2E BIDIRECT BY LANG 37.0 23.4 27.2 25.0 36.7 22.8 27.3 25.0 E2E BIDIRECT BY TASK *34.1 *23.0 23.6 26.0 *30.1 25.6 *24.3 25.6 E2E BIDIRECT SHARED 33.8 *23.3 23.2 26.2 30.0 *25.4 24.1 26.1 Miami CASCADE UNIDIRECT 65.2 8.8 52.3 16.8 64.8 10.8 51.5 16.8 CASCADE UNI SHARED ENC 60.2 9.7 53.8 15.7 55.0 14.7 55.6 15.3 CASCADE BIDIRECT 61.4 9.3 54.0 14.8 57.4 10.6 58.2 14.0 E2E UNIDIRECT 65.6 10.1 53.0 17.2 65.1 11.7 *51.4 17.6 E2E BIDIRECT BY LANG 69.5 12.4 55.2 16.5 69.3 11.5 54.5 16.6 E2E BIDIRECT BY TASK 59.9 11.0 *50.0 *18.1 *53.6 *13.8 52.6 *17.5 E2E BIDIRECT SHARED 58.9 *11.8 49.9 18.3 53.0 *14.1 52.1 *17.4 Table 4: Test set scores, with results from the Fisher corpus on the top half and the Miami corpus on the bottom half. Bold scores indicate the best score in the column, while asterisks indicate results that are statistically similar to the best score in the column group using a bootstrap resampling test with \u03b1 = 0.05. 5 Results 5.1 Scores on Test Sets In this section, we explore the results of doing ST for CS data along the two axes of unidirectional vs bidirectional and end-to-end vs cascade. We see results for models using explicit LID prediction in Table 3, showing that models that use the predicted LID perform worse than those that use Oracle LID (e.g. 36.6 vs 35.7 WER for the E2E UNIDIRECT). This provides a slight advantage for the bidirectional models that learn LID implicitly. However, the predicted LID case is the realistic setting, and thus we use it for the remainder of our experiments. When we examine the models along the scale of unidirectional to bidirectional, we see that higher amounts of shared parameters are correlated with higher scores, e.g. bidirectional is better. We see that on all datasets and evaluation settings (TaFigure 4: Accuracy of the models in generating the CS spans. Note that this excludes all non-exact matches and is a lower bound on performance. ble 4) that the E2E BIDIRECT SHARED model is either statistically similar or outperforms all other models, except for the Miami Monolingual FT case, where it comes in 3rd. 
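The Predicted LID rows above come from the utterance-level classifier of Section 4.1, a pre-trained Wav2Vec2 encoder with a classification layer on top. The sketch below is our own rough rendering of that idea: the mean pooling over time and all names are assumptions, and a stand-in tensor replaces real Wav2Vec2 features so the snippet runs on its own.

```python
import torch
import torch.nn as nn

# Rough sketch of an utterance-level LID head: frame-level features from a
# pre-trained speech encoder are pooled over time and mapped to a binary
# majority-language decision (Spanish vs. English).
class UtteranceLID(nn.Module):
    def __init__(self, feature_dim: int = 768, n_langs: int = 2):
        super().__init__()
        self.classifier = nn.Linear(feature_dim, n_langs)

    def forward(self, frame_features: torch.Tensor) -> torch.Tensor:
        # frame_features: (batch, time, feature_dim), e.g. Wav2Vec2 outputs
        pooled = frame_features.mean(dim=1)   # collapse the time axis
        return self.classifier(pooled)        # logits over {es, en}

lid = UtteranceLID()
dummy_features = torch.randn(4, 200, 768)     # stand-in for encoder output
logits = lid(dummy_features)
predicted_matrix_lang = logits.argmax(dim=-1)  # 0 = es, 1 = en (arbitrary)
print(logits.shape, predicted_matrix_lang.tolist())
```

Errors made by such an explicit classifier propagate to the downstream models, which is exactly the failure mode the bidirectional variants avoid by learning language identity implicitly.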
Thus, the inductive bias of sharing the multilingual task parameters provides \fModel Transcript Translation Reference si entonces volv\u00b4 \u0131 aqu\u00b4 \u0131 a la casa si el fall break yes so I returned here to the house yes the fall break Cascade si entonces volv\u00b4 \u0131 aqu\u00b4 \u0131 a la casa si es folvereak yes then I returned here at home yes its folvereak E2E si entonces volv\u00b4 \u0131 aqu\u00b4 \u0131 a la casa si es fallbreak yes so I came back to the house yes its fallbreak Table 5: Example generated output from the CASCADE BIDIRECT and E2E BIDIRECT SHARED models. Note the error propagation in the cascade model. a gain of approximately 3.5 WER points (33.8 vs 37.3) and 1.5 BLEU points (23.3 vs 21.9) for the E2E BIDIRECT SHARED model over the E2E UNIDIRECT model on the Fisher dataset, with similar performance on the Miami dataset. We can also examine Table 4 to see how the cascade models compare to the E2E models. The results show that the cascaded models perform the same or worse than the E2E models they compare to w.r.t. parameter sharing, with the best overall model being the E2E BIDIRECT SHARED, beating the CASCADE BIDIRECT (e.g. 33.8 vs 37.2 WER or 23.3 vs 21.8 BLEU on Fisher No-FT). Table 4 also illustrates that \ufb01ne-tuning models on CS data improves scores on CS test sets (33.8 vs 30.0 WER for the E2E BIDIRECT SHARED on Fisher, 58.9 vs 53.0 for Miami). These gains are consistent for the Fisher dataset, which is the domain of the CS training set, however there are still gains for the out-of-domain Miami CS data. These results suggest that additional pre-training on natural or synthetic data (in both audio/text modalities) would likely be fruitful future work. When we examine how \ufb01ne-tuning on CS data changes the model\u2019s monolingual scores, we \ufb01nd that they generally improve the monolingual results for the unidirectional models, but tend to make bidirectional models slightly worse, perhaps due to interference between the languages and tasks in the same weights. However, overall we \ufb01nd that \ufb01netuning provides large gains for CS with only minor decreases in monolingual performance. 5.2 Model Analysis We also provide further analysis of the CS output of the best model and its cascaded counterpart (BIDIRECT CASCADE and E2E BIDIRECT SHARED). We perform three analyses: (1) comparing utterance level scores vs the proportion of CS words in the utterance, (2) computing the exact match accuracy of the CS spans in the model\u2019s output, and (3) qualitatively examining model output. We check the correlation between the proportion of CS words in a sentence and the model\u2019s score, using a linear model to \ufb01nd the R2 values. We found that surprisingly, there was no correlation between the proportion of CS words and the models score for any of the different models or metrics (R2 < 0.025 for all models and metrics). A graphical depiction of the model\u2019s scores over CS proportions is in the Appendix, Figure 5. We note that this \ufb01nding was the same for comparing the number of CS words instead of the proportion. This \ufb01nding implies that the models are surprisingly robust to the amount of CS in a sentence. Although BLEU and WER scores show how well the models do on the CS data, we can further isolate the performance of these models on only the code-switched parts of the utterances. To do so, we isolate all CS spans in the sentences and check to see if the model\u2019s output contains the exact-match of those spans. 
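One way to implement that check, hedged and simplified (whitespace tokenisation, case-folding, and all names are our own):

```python
from typing import List, Tuple

# Illustrative sketch of the exact-match span check: for each annotated
# code-switched span, test whether it appears verbatim in the model output.
def cs_span_accuracy(examples: List[Tuple[List[str], str]]) -> float:
    """examples: (list of CS spans in the reference, model output string)."""
    hits, total = 0, 0
    for spans, hypothesis in examples:
        hyp = f" {' '.join(hypothesis.lower().split())} "
        for span in spans:
            total += 1
            hits += int(f" {span.lower()} " in hyp)
    return hits / max(total, 1)

examples = [
    (["fall break"], "si entonces volvi aqui a la casa si es fallbreak"),
    (["shows"], "un show a mi me gusta ver mucho estos shows"),
]
print(f"exact-match span accuracy: {cs_span_accuracy(examples):.2f}")  # 0.50
```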
We note that this metric does not take into account synonyms or different tenses of the same word, making it a stricter metric serving as a lower bound of absolute performance. We see in Figure 4 that the E2E model still outperforms the cascade on CS spans, with Fisher No-FT scores around 20-30% and Fisher FT scores around 45%. Finally, we can also examine the model\u2019s outputs. We inspected 200 output sentences for the monolingual subsets and found that both models generated the correct language in every case, indicating that they correctly learned the implicit LID. However, we can see that the cascade model does struggle with error propagation (especially so in the CS setting, Table 5), likely causing part of the difference between the E2E and cascade models. Although the CS WER and BLEU scores are not as high as they are on cleaner monolingual datasets such as CoVoST (Appendix A), their performance is competitive with their respective monolingual performance on Miami and Fisher, even in the NoFT setting. We believe that with additional data and improvements ST models will be well-equipped to handle CS in practical situations and that overall, models show strong CS performance. \f6" + }, + { + "url": "http://arxiv.org/abs/2101.09149v1", + "title": "Streaming Models for Joint Speech Recognition and Translation", + "abstract": "Using end-to-end models for speech translation (ST) has increasingly been the\nfocus of the ST community. These models condense the previously cascaded\nsystems by directly converting sound waves into translated text. However,\ncascaded models have the advantage of including automatic speech recognition\noutput, useful for a variety of practical ST systems that often display\ntranscripts to the user alongside the translations. To bridge this gap, recent\nwork has shown initial progress into the feasibility for end-to-end models to\nproduce both of these outputs. However, all previous work has only looked at\nthis problem from the consecutive perspective, leaving uncertainty on whether\nthese approaches are effective in the more challenging streaming setting. We\ndevelop an end-to-end streaming ST model based on a re-translation approach and\ncompare against standard cascading approaches. We also introduce a novel\ninference method for the joint case, interleaving both transcript and\ntranslation in generation and removing the need to use separate decoders. Our\nevaluation across a range of metrics capturing accuracy, latency, and\nconsistency shows that our end-to-end models are statistically similar to\ncascading models, while having half the number of parameters. We also find that\nboth systems provide strong translation quality at low latency, keeping 99% of\nconsecutive quality at a lag of just under a second.", + "authors": "Orion Weller, Matthias Sperber, Christian Gollan, Joris Kluivers", + "published": "2021-01-22", + "updated": "2021-01-22", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG" + ], + "main_content": "Introduction Speech translation (ST) is the process of translating acoustic sound waves into text in a different language than was originally spoken in. This paper focuses on ST in a particular setting, as described by two characteristics: (1) We desire models that translate in a streaming fashion, where *Work done during an internship with Apple users desire the translation before the speaker has \ufb01nished. 
This setting poses additional dif\ufb01culties compared to consecutive translation, forcing systems to translate without knowing what the speaker will say in the future. (2) Furthermore, the speaker may want to verify that their speech is being processed correctly, intuitively seeing a streaming transcript while they speak (F\u00a8 ugen, 2008; Hsiao et al., 2006). For this reason, we consider models that produce both transcripts and translation jointly.1 Previous approaches to streaming ST have typically utilized a cascaded system that pipelines the output of an automatic speech recognition (ASR) system through a machine translation (MT) model for the \ufb01nal result. These systems have been the preeminent strategy, taking the top place in recent streaming ST competitions (Pham et al., 2019; Jan et al., 2019; Elbayad et al., 2020; Ansari et al., 2020). Despite the strong performance of these cascaded systems, there are also some problems: error propagation from ASR output to MT input (Ruiz and Federico, 2014); ASR/MT training data mismatch and loss of access to prosodic/paralinguistic speech information at the translation stage (Sperber and Paulik, 2020); and potentially sub-optimal latencies in the streaming context. End-to-end (E2E) models for ST have been proposed to remedy these problems, leveraging the simplicity of a single model to sidestep these issues. E2E models are also appealing from computational and engineering standpoints, reducing model complexity and decreasing parameter count. Although initial research has explored E2E models for joint speech recognition and translation, no previous works have examined them in the streaming case, a crucial step in using them for many real-world applications. To understand this area more fully, we develop an E2E model to compare 1This corresponds to the mandatory transcript case in the proposed categorization by Sperber and Paulik (2020). arXiv:2101.09149v1 [cs.CL] 22 Jan 2021 \fwith its cascading counterpart in this simultaneous joint task. We build off the models proposed by Sperber et al. (2020) in the consecutive case, extending them for use in the streaming setting. We also use the re-translation technique introduced by Niehues et al. (2018) to maintain simplicity while streaming. To reduce model size, we introduce a new method for E2E inference, producing both transcript and translation in an interleaved fashion with one decoder. As this task requires a multi-faceted evaluation along several axes, we provide a suite of evaluations to highlight the differences of these major design decisions. This suite includes assessing translation quality, transcription quality, lag of the streaming process, output \ufb02icker, and consistency between the transcription and translation. We \ufb01nd that our E2E model performs similarly to the cascaded model, indicating that E2E networks are a feasible and promising direction for streaming ST. 2 Proposed Method Network Architecture In the ST survey provided by Sperber et al. (2020), they introduce several E2E models that could be used for the joint setting. As our work focuses on providing a simple but effective approach to streaming ST, we focus on the CONCAT model, which generates both the transcript and translation in a concatenated fashion. We compare this E2E model against the standard cascading approach, following the architecture and hyperparameter choices used in Sperber et al. (2020). 
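The re-translation technique referenced above (and detailed in the next subsection) can be captured in a few lines: each time more audio arrives, the full prefix is re-decoded and the displayed hypothesis is simply replaced. The decoder below is a toy stand-in, not a real ST model, and the names are ours.

```python
from typing import Callable, Iterable, List

# Minimal sketch of a re-translation loop: grow the audio prefix, re-decode
# from scratch, and replace the previous hypothesis at every step.
def retranslate_stream(audio_chunks: Iterable[list],
                       decode_fn: Callable[[list], str]) -> List[str]:
    prefix: list = []
    hypotheses = []
    for chunk in audio_chunks:
        prefix = prefix + chunk               # growing audio prefix
        hypotheses.append(decode_fn(prefix))  # re-decode the whole prefix
    return hypotheses                         # one (possibly revised) output per step

# Toy decoder: "translates" by reporting how many frames it has seen so far.
decode_fn = lambda frames: f"[hypothesis over {len(frames)} frames]"
chunks = [[0.1] * 10, [0.2] * 10, [0.3] * 5]
for hyp in retranslate_stream(chunks, decode_fn):
    print(hyp)
```

Both the cascaded and the CONCAT systems are run inside a loop of this kind; the rest of this section describes the underlying architectures.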
All audio input models use the same multi-layer bidirectional LSTM architecture, stacking and downsampling the audio by a factor of three before processing. We note that although bidirectional encoders are unusual with standard ASR architectures, re-translation makes them possible. The cascaded model\u2019s textual encoder follows the architecture described in Vaswani et al. (2017) but replaces self-attention blocks with LSTMs. Decoder networks are similar, but use unidirectional LSTMs. More implementation details can be found in Appendix A. In order to reduce model size and inference time for E2E networks, we introduce a novel method for interleaving both transcript and translation in generation, removing the need to use separate decoders. This method extends the CONCAT model proposed by Sperber et al. (2020) to jointly decode according to the ratio given by the parameter \u03b3 (FigOutputs Do Wollen you Sie want gehen to EOS2 go EOS1 Language tokens src trg src trg src trg src trg src src Target Inputs Do Wollen you Sie want gehen to go Interleaving at \ud6fe = 0.5 Outputs Do you want to go EOS1 Wollen Sie gehen EOS2 Language tokens src src src src src src trg trg trg trg Target Inputs Do you want to go Wollen Sie gehen Interleaving at \ud6fe = 0.0 Outputs Wollen Sie gehen EOS2 Do you want to go EOS1 Language tokens trg trg trg trg src src src src src src Target Inputs Wollen Sie gehen Do you want to go Interleaving at \ud6fe = 1.0 Figure 1: Example token representations (En:De) for three different interleaving parameters (Section 2). Language tokens indicate whether the data corresponds to the source transcript or the target translation and are used with a learned embedding that is summed with the word embeddings, as described in Sperber et al. (2020). ure 1). When \u03b3 = 0.0, we generate the transcript tokens until completion, followed by the translation tokens (vice versa for \u03b3 = 1.0). At \u03b3 = 0.0, our model is equivalent to the previously proposed model. De\ufb01ning counti as the count of i tokens previously generated, transcription tokens as st and translation tokens as tt, we generate the next token as a transcription token if: (1.0 \u2212\u03b3) \u2217(1 + counttt) > \u03b3 \u2217(1 + countst) This approach enables us to produce tokens in an interleaving fashion, given the hyperparameter \u03b3. Re-translation We use the re-translation method (Niehues et al., 2018; Arivazhagan et al., 2020a,b) as it provides a simple way to handle the streaming case. This method works by simply re-translating the utterance as new data arrives, updating its former prediction. As we are generating both transcript and translation, this avoids the challenging issue of combining the requirements for both components: streaming speech models need to manage the audio signal variability across time while streaming translation models need to overcome issues with reordering and lack of future context. Alternative strategies to the re-translation approach include the chunk-based strategy explored by Liu et al. (2020), which commits to all previous output chunks and Ren et al. (2020) who utilize an additional segmenter model trained via CTC \fFigure 2: Left: average lag in seconds vs BLEU score. Right: average lag in seconds vs WER score. All points are the mean of each con\ufb01guration\u2019s score across the eight target languages. Con\ufb01gurations are the cross product of the values for K and F, see Section 2: Inference. Note that points near 1.0 AL have appx. 
99% of the unconstrained BLEU score. Results for the E2E model use \u03b3 = 0.5. . Metric Params Model De Es Fr It Nl Pt Ro Ru Average BLEU \u2191 217M Cascade 18.8 22.7 27.0 18.9 22.5 21.9 17.9 13.0 20.3 107M E2E \u03b3=0.0 18.1 23.1 27.0 18.7 22.3 22.2 17.6 12.2 20.2 107M E2E \u03b3=0.3 17.7 22.6 26.3 18.0 21.5 21.5 17.0 12.1 19.6 107M E2E \u03b3=0.5 18.2 22.8 27.0 18.6 21.9 21.9 17.1 12.0 19.9 107M E2E \u03b3=1.0 18.2 22.8 27.1 18.9 22.2 22.3 17.6 12.7 20.2 WER \u2193 217M Cascade 25.9 24.0 23.1 25.6 28.5 26.4 24.4 23.1 25.1 107M E2E \u03b3=0.0 24.2 23.5 23.3 23.0 23.4 25.3 24.1 23.6 23.8 107M E2E \u03b3=0.3 24.1 23.6 22.9 23.8 23.4 25.7 24.1 24.1 24.0 107M E2E \u03b3=0.5 24.5 23.9 22.9 23.8 23.4 25.7 24.3 23.6 24.0 107M E2E \u03b3=1.0 23.6 22.9 22.3 23.0 22.4 24.7 23.4 22.7 23.1 Table 1: BLEU and WER scores for models trained on different target languages. Bold scores indicate results that are statistically similar to the best score using a bootstrap permutation test with \u03b1 = 0.05. (Graves et al., 2006) to create segments that are translated via wait-k (Ma et al., 2019). Although these approaches show effective results, they add additional complexity without addressing issues particular to streaming transcription. Inference In order to generate quality-latency curves, we use several techniques to reduce latency and \ufb02icker at the cost of quality. The \ufb01rst is the mask-k method proposed by Arivazhagan et al. (2020b), masking the last K output tokens. The second method is a form of constrained decoding: we de\ufb01ne a hyperparameter F that sets the number of free tokens allowed to change in the next re-translation. Thus, we constrain future output to match the \ufb01rst len(tokens) \u2212 F tokens of the current output. All models use values {0, 1, 2, 3, 4, 5, 7, 10, 100} for K and {0, 1, 2, 3, 4, 5, 7, 10, 15, 20, 25, 100} for F. For interleaving models, we set K and F on both transcript and translation tokens. 3 Experimental Settings Data We use the MuST-C corpus (di Gangi et al., 2019) since it is the largest publicly available ST corpus, consisting of TED talks with their English transcripts and translations into eight other language pairs. The dataset consists of at least 385 hours of audio for each target language. We utilize the log Mel \ufb01lterbank speech features provided with the corpus as input for the ASR and E2E models. To prepare the textual data, we remove non-speech artifacts (e.g. \u201c(laughter)\u201d and speaker identi\ufb01cation) and perform subword tokenization using SentencePiece (Kudo and Richard\fModel En:De Incr. En:De Full En:Es Incr. En:Es Full Mean Incr. Mean Full Cascade 13.8 13.2 12.2 11.6 14.1 13.4 Concat \u03b3=0.0 17.6 16.7 14.9 13.8 17.0 16.0 Concat \u03b3=0.3 17.2 16.6 14.3 13.7 16.6 15.8 Concat \u03b3=0.5 17.8 16.5 14.8 13.3 17.3 15.7 Concat \u03b3=1.0 17.3 16.8 14.9 13.7 16.9 15.8 Table 2: Consistency scores for En:De, En:Es, and average results over all languages; lower is better (see Sperber et al. (2020)). Incr. stands for the incremental consistency score, or the average consistency throughout retranslation. Bold scores indicate results that are statistically similar to the best score using a bootstrap permutation test with \u03b1 = 0.05. son, 2018) on the unigram setting. Following previous work for E2E ST models, we use a relatively small vocabulary and share transcription and translation vocabularies. We use MuST-C dev for validation and report results on tst-COMMON, utilizing the segments provided (Appendix D). 
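Table 1 above reports scores for several interleaving rates γ. As a concrete reading of the decision rule from Section 2, the sketch below emits the next token from the transcript stream whenever (1 − γ)(1 + count_tt) > γ(1 + count_st). We follow the strict inequality as written; note that if ties are instead broken toward the transcript, γ = 0.5 reproduces the Figure 1 ordering exactly. The handling of finished streams is our own addition so the sketch terminates.

```python
from typing import List

def next_stream(gamma: float, count_st: int, count_tt: int,
                st_done: bool, tt_done: bool) -> str:
    # After a stream has emitted its EOS, only the other stream remains.
    if st_done:
        return "translation"
    if tt_done:
        return "transcript"
    # Rule from Section 2: transcript token if (1-g)(1+count_tt) > g(1+count_st)
    if (1.0 - gamma) * (1 + count_tt) > gamma * (1 + count_st):
        return "transcript"
    return "translation"

def interleave(transcript: List[str], translation: List[str],
               gamma: float) -> List[str]:
    st = tt = 0
    out = []
    while st < len(transcript) or tt < len(translation):
        stream = next_stream(gamma, st, tt,
                             st_done=st >= len(transcript),
                             tt_done=tt >= len(translation))
        if stream == "transcript":
            out.append(transcript[st]); st += 1
        else:
            out.append(translation[tt]); tt += 1
    return out

src = ["Do", "you", "want", "to", "go"]
tgt = ["Wollen", "Sie", "gehen"]
print(interleave(src, tgt, gamma=0.0))  # transcript first, then translation
print(interleave(src, tgt, gamma=1.0))  # translation first, then transcript
print(interleave(src, tgt, gamma=0.5))  # roughly 1:1 interleaving
```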
Pre\ufb01x Sampling We implement techniques developed by Niehues et al. (2018); Arivazhagan et al. (2020b) for improving streaming ST, sampling a random proportion of each training instance as additional data to teach our models to work with partial input. See Appendix C for implementation details. Metrics We evaluate these models on a comprehensive suite of metrics: sacrebleu (BLEU, Post (2019)) for translation quality, word error rate (WER, (Fiscus, 1997)) for transcription quality, average lag (AL, Ma et al. (2019)) for the lag between model input and output, and normalized erasure (NE, Arivazhagan et al. (2020a)) for output \ufb02icker. Measuring consistency is a nascent area of research; we use the robust and simple lexical consistency metric de\ufb01ned by Sperber et al. (2020), which uses word-level translation probabilities. To show how consistent these results are while streaming, we compute an incremental consistency score, averaging the consistency of each re-translation. 4 Results Results for the quality-latency curves created by the use of constrained decoding and mask-k (Section 3) are shown in Figure 2. Unconstrained settings are used for all results in table form. For convenience, bold scores indicate the highest performing models in each metric according to a bootstrap permutation test. Translation Quality We see in Table 1 that the cascaded model slightly outperforms some E2E models, while achieving statistically similar performance to the \u03b3 = 1.0 model. We note however, that the cascaded model has nearly twice as many parameters as the E2E models (217M vs 107M). When we examine these models under a variety of different inference conditions (using constrained decoding and mask-k as in Arivazhagan et al. (2020a)), we further see this trend illustrated through the quality vs latency trade-off (left of Figure 2), with both models retaining 99% of their BLEU at less than 1.0 AL. Transcription Quality Conversely, Table 1 and the right of Figure 2 show that the \u03b3 = 1.0 E2E model performs similarly or slightly better than the cascaded model across all inference parameters and all target languages. With an AL of 1.5, the E2E model loses only 3% of its performance. Consistency The E2E models perform worse than the cascaded on consistency, with the best models being approximately 18% less consistent (Table 2). The cascaded model also maintains better scores through each re-translation (Incr.).2 Flicker We note that the \ufb02icker scores for cascade and E2E models are similar, with both having normalized erasure scores of less than 1 and the majority of inference settings having less than the \u201cfew-revision\u201d threshold of 0.2 (proposed by Arivazhagan et al. (2020a)). More NE details are found in Appendix B. Interleaving Rate Table 1 also shows us the overall results for different interleaving rates. We see that interleaving at a rate of 1.0 has the best 2Initial experiments indicate that the triangle E2E architecture (Sperber et al., 2020) model may perform better on consistency in our streaming setting, but due to time constraints we were not able to explore this further. Future work exploring alternative architectures or decoding techniques (Le et al., 2020) may provide fruitful avenues of research. \fquality scores (0.7 less WER than the next best rate, the base \u03b3 = 0.0 model) but the worst consistency (Table 2). Conversely, \u03b3 = 0.3 has the worst quality scores but the best consistency. 
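The quality-latency trade-offs just discussed come from sweeping the two inference controls defined in Section 2. A rough sketch of both, using our own names and whole words standing in for model tokens:

```python
from typing import List

def mask_k(tokens: List[str], k: int) -> List[str]:
    """Withhold the last k tokens of a partial hypothesis before display."""
    return tokens if k == 0 else tokens[:-k]

def forced_prefix(current_output: List[str], f: int) -> List[str]:
    """Tokens the next re-translation must reproduce (all but the last f)."""
    return current_output[: max(len(current_output) - f, 0)]

hyp = "Wollen Sie heute Abend ins Kino gehen".split()
print(mask_k(hyp, k=2))         # hide the 2 most recent tokens
print(forced_prefix(hyp, f=3))  # next decode may only change the last 3 tokens
```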
5" + }, + { + "url": "http://arxiv.org/abs/2011.08115v1", + "title": "Learning from Task Descriptions", + "abstract": "Typically, machine learning systems solve new tasks by training on thousands\nof examples. In contrast, humans can solve new tasks by reading some\ninstructions, with perhaps an example or two. To take a step toward closing\nthis gap, we introduce a framework for developing NLP systems that solve new\ntasks after reading their descriptions, synthesizing prior work in this area.\nWe instantiate this framework with a new English language dataset, ZEST,\nstructured for task-oriented evaluation on unseen tasks. Formulating task\ndescriptions as questions, we ensure each is general enough to apply to many\npossible inputs, thus comprehensively evaluating a model's ability to solve\neach task. Moreover, the dataset's structure tests specific types of systematic\ngeneralization. We find that the state-of-the-art T5 model achieves a score of\n12% on ZEST, leaving a significant challenge for NLP researchers.", + "authors": "Orion Weller, Nicholas Lourie, Matt Gardner, Matthew E. Peters", + "published": "2020-11-16", + "updated": "2020-11-16", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "main_content": "Introduction The dominant paradigm in supervised NLP today is learning from examples, where machine learning algorithms are trained using a large set of taskspeci\ufb01c input-output pairs. In contrast, humans learn to perform the same task by reading a description, after which they are able to perform the task in a zero-shot manner\u2014indeed, this is how crowd-sourced NLP datasets are constructed. In this paper, we argue that learning from task descriptions in this way is a necessary attribute of a general purpose NLP system, and we propose it as a new paradigm to train and test NLP systems. Recent work in NLP has shown signi\ufb01cant progress in learning tasks from examples. Large pretrained language models have dramatically improved performance on standard benchmarks (Peters et al., 2018; Devlin et al., 2019; Raffel et al., *Work done while at the Allen Institute for AI. 1Data, evaluation code, baseline models, and leaderboard at https://allenai.org/data/zest X Y N X Y N M dT (a) (b) T T Figure 1: Comparison of (a) supervised learning from examples with observed input X, output Y , corresponding to an unobserved task \u03c4 (b) our proposed method of learning from task descriptions where systems can make inferences about unseen tasks \u03c4 given a natural language description d\u03c4. 2019) and have shown promising results in zero shot prediction by leveraging their language understanding capabilities (Levy et al., 2017; Zhou et al., 2018; Yin et al., 2019). Despite this progress, there are many serious issues that come with learning from examples. There is an almost in\ufb01nite number of tasks that a person might wish to solve with a general-purpose NLP system. Learning to solve these tasks by reading a description instead of observing a collection of examples would solve the problem of having to create training sets for each language task. Such a system would also be more accessible to practitioners and domain experts in other \ufb01elds, who could describe their tasks and solve them, opening up new avenues of research where it is expensive or infeasible to gather training data. 
Additionally, we \ufb01nd that current supervised learning techniques partly achieve their success due to memorizing uninteresting aspects of the training distribution (Gururangan et al., 2018; Geva et al., 2019; Gardner et al., 2020). Teaching a system to learn a task from the description alone would alleviate these biases, as new training data would not be needed to learn a novel task. arXiv:2011.08115v1 [cs.CL] 16 Nov 2020 \fIn this paper, we synthesize prior approaches to zero-shot learning in NLP and provide a formal framework for thinking about the zero-shot prediction problem. We show that previous zero-shot approaches are limited in both scope of application and rigour of evaluation. For example, while prior work has used zero-shot prediction for text classi\ufb01cation, entity typing, and relation extraction, we push this to the more complex task of slot \ufb01lling. We instantiate our formalism in an English language dataset, ZEST (ZEro Shot learning from Task descriptions), that is formatted similarly to reading comprehension datasets, in that we formulate task descriptions as questions and pair them with paragraphs of text. We choose this format as it provides a natural way to crowdsource data. This zero-shot dataset differs from typical reading comprehension datasets, however, in that each task description is paired with twenty different passages, and we evaluate a model\u2019s ability to solve the task, not just give the correct answer for a single (question, passage) pair. That is, given a question, a model produces some decision function f, and it is this function which we comprehensively evaluate on many different inputs. We also carefully select axes on which to evaluate the generalization of a model to different kinds of task descriptions, changing task descriptions in speci\ufb01c ways to systematically push the \ufb01eld towards more interesting and complex task descriptions. We evaluate models based on recent state-ofthe-art sequence to sequence architectures, which seem most suited to the task of zero shot prediction in this setting. We \ufb01nd that our best model based on T5 (Raffel et al., 2019) achieves a score of only 12% on this data, leaving a signi\ufb01cant gap to our human performance estimate of 42%. Zero shot learning from complex task descriptions remains a signi\ufb01cant challenge for current NLP systems. 2 Learning from task descriptions This section describes our framework for enabling zero-shot generalization to unseen tasks, and relates it to prior work. 2.1 Learning from examples Consider the supervised learning setting2 where the goal is to learn a function y = f\u03b8(x), with 2This setting also includes popular self-supervised objectives such as autoregressive or masked language modeling. trainable parameters \u03b8, for a particular task. We de\ufb01ne the task \u03c4 as: \u2022 a de\ufb01nition for the sets of allowable inputs x \u2208X, outputs y \u2208Y, and, \u2022 a probability distribution p\u03c4(x, y). In text classi\ufb01cation, for example, X is natural language text and Y is a categorical label from one of C classes. In the single task setting, the function f is learned by collecting a dataset of labeled examples D = {(x1, y1), . . . (xN, yN)} sampled from p\u03c4(x, y) (see Fig. 1a). We call this \u201clearning from examples\u201d. Crucially, once D is constructed, the underlying task de\ufb01nition is discarded, assumed to be captured in the labeled (xi, yi) pairs. 
There are many ways to sample from p\u03c4(x, y) to create a dataset. One approach, in cases such as language modeling where p\u03c4 is de\ufb01ned by a set of rules, just applies the rules to raw text. Another popular approach uses human annotation. In this case, the most common strategy factorizes p\u03c4(x, y) = p\u03c4(y|x)p\u03c4(x), samples from p\u03c4(x) via some method (e.g. collecting text from the domain of interest), and uses a natural language task description, d\u03c4, to describe p\u03c4(y|x). The description is shown to human annotators who use it to compute arg maxy\u2208Y p(y|x0) for a given x0. 2.2 Learning from task descriptions The largest downside to learning from examples is that every new task requires collecting a new dataset to learn a new function f\u03b8(x) for the task. This approach also discards the task de\ufb01nition after the labeled dataset is constructed, despite the fact that the task de\ufb01nition carries all of the information necessary for a human to solve the task. Moreover, it holds the task constant at test time (except in certain limited cases, see Sec. 2.4). Our proposed framework, which we call \u201clearning from task descriptions\u201d, removes these restrictions. First, instead of discarding the task de\ufb01nition, we provide a natural language description of it to the model, in addition to the input x. Second, by providing the model with the task description, we expect it to generalize to unseen tasks at test time in a zero-shot way. These modi\ufb01cations shift the learning problem from \ufb01tting a probability distribution in the learning from examples approach, to understanding the semantics of a task description in order to apply it \fto a given input in the learning from task descriptions approach. Successfully building a model to perform in this manner would open up a wide range of NLP applications whereby one could simply construct an NLP system by describing the desired output in natural language. Our proposed framework is illustrated in Fig. 1b. In contrast to learning from examples, we assume the task description d\u03c4 is observed for M different tasks, and that each of these tasks has some number N of observed (xi, yi) pairs. 2.3 Task competence In order to test whether a system can adequately perform an unseen task, we propose a new evaluation metric as follows. Traditional evaluation metrics in supervised learning are averages over instance-level metrics, that is, they perform independent computation on individual (x, y) pairs and aggregate them across a dataset to produce a summary score. As we are interested in assessing whether a model can competently perform a task from its description, we instead \ufb01rst evaluate whether a model can perform each individual task using the entire set of (x, y) pairs for a given task, and then report averages over all tasks. Formally, a dataset with M tasks can be viewed as the concatenation of M different Nj sized datasets, Dj = {(x1, y1), . . . (xNj, yNj)}, one for each task. We assume each task has an associated metric \u00b5j(Dj, f\u03b8) \u2208R, which is used to compute the model performance for task \u03c4j on Dj for the model represented by f\u03b8. For simplicity, we assume each metric is such that larger values indicate better performance3. Then, for a given level of competence cj for task \u03c4j, we say that the model can perform the task if \u00b5j \u2265cj. 
The \ufb01nal model competence metric is the average individual task competence over the dataset, c = 1 M P j 1(\u00b5j \u2265cj), where 1 is the indicator function. In the special case where cj has the same threshold T for all j, we write \u201cC@T\u201d to represent the competence at T. As a concrete example of this metric, consider the simple case where all M tasks are binary classi\ufb01cation (so that unseen classes correspond to unseen tasks). If we adopt accuracy as the metric for all tasks, and set cj to 90% for all j then a C@90 of 72% indicates that the model is able to successfully classify unseen inputs x into a set of unseen 3This can be achieved by rescaling if necessary. classes Y with at least 90% accuracy, for 72% of the unseen tasks \u03c4. 2.4 Discussion Prior researchers have recognized the limitations of learning from examples, and have worked to address some of them. Our proposed framework builds upon and generalizes much of this work. Zero-shot learning (Chang et al., 2008; Socher et al., 2013; Norouzi et al., 2013) asks systems to generalize to unseen classes at test time. In this approach, the task is the same at both train and test time\u2014models are only asked to generalize to new classes. In terms of the graphical model in Fig. 1, prior work attaches a natural language description to some new yi at test time. In contrast, our approach asks models to generalize to entire unseen tasks, attaching the natural language description to the task variable \u03c4. Zero-shot learning has been widely adopted including for classi\ufb01cation (Dauphin et al., 2013), entity typing (Ma et al., 2016; Zhou et al., 2018) and relation extraction (Levy et al., 2017; Shi and Lin, 2019). More closely related to our approach are the zero-shot experiments in Radford et al. (2019); Brown et al. (2020) that provide a generative language model with a prompt (that could be viewed as a type of task description) and asks for a completion. This is similar to the observation in Petroni et al. (2019) that it is possible to extract knowledge graph relationships from large language models with an appropriate prompt. ZEST provides a benchmark dataset for systematically measuring how well models can generalize to many tasks in the zero-shot setting. Multitask learning (Caruana, 1997; Collobert and Weston, 2008) seeks to learn a single model that can solve multiple tasks simultaneously, similar to our framework that seeks to learn a model that can solve many tasks. However, in multitask learning each task is learned from examples, and the model is not able to generalize to unseen tasks. This is also the case for newer control code type approaches (Raffel et al., 2019; Keskar et al., 2019) to multitask learning, where the task is encoded as short string, often containing no information other than a largely meaningless identi\ufb01er. There are also connections between our proposed framework and tasks such as natural language inference (NLI) or reading comprehension (RC), where two natural language inputs (a \fpremise and a hypothesis for NLI, and a question and passage for RC) are used to predict some output. In our case, we have two observed variables, x and d\u03c4, which in\ufb02uence the prediction of the output y (Fig. 1). Indeed, the baseline model that we discuss in Section 5 takes a similar approach to NLI and RC and jointly models the two textual inputs. This correspondence has been used in prior work, where Yin et al. 
(2019) used a model pretrained on MNLI (Williams et al., 2018) to perform zero-shot text classi\ufb01cation. A key difference, however, is that hypotheses in NLI and questions in RC are typically only paired with single inputs. In fact, they typically only make sense for a single input, and thus it is hard to characterize these narrow questions as \u201ctask descriptions\u201d. Lastly, the problem of learning from task descriptions is fundamentally one of translating a natural language description into some executable function that can operate on arbitrary inputs. This problem has been well-studied for narrow domains in the semantic parsing literature (Zelle and Mooney, 1996; Zettlemoyer and Collins, 2005; Liang et al., 2011; Andreas et al., 2013), though the input is typically a single static database, not arbitrary natural language text. Attempts to generalize semantic parsing to more open domains are still nascent (Chen et al., 2020; Gupta et al., 2020). 3 Instantiating the Framework Section 2 showed a framework for training and testing a general purpose system that could perform unseen NLP tasks. An ideal system in this framework would be able to read the descriptions of the tasks in the GLUE suite (Wang et al., 2019) and perform well with no additional training. However, this goal is far beyond the current capabilities of today\u2019s models. In order to make progress, we must break down the problem into manageable steps. In this section we outline the scope that we envision for a reasonable NLPfocused dataset that can push forward the current state of learning from task descriptions, without being so challenging as to be out of reach. Sec. 4 describes the data collection process for ZEST, our new English benchmark built following this scope. To de\ufb01ne the scope, we begin by considering the types of applications a model that could successfully learn from task descriptions might enable. The largest bottleneck in building NLP applications today is collecting labeled data. Our framework would eliminate this step, making it possible to build ad hoc NLP applications to easily \ufb01lter, categorize, or extract structured information from corpora. For example, when planning a camping trip, one might want to know \u201cWhat are the names of all the campgrounds and their locations?\u201d that are listed in a collection of documents, which speci\ufb01es an ad hoc request to return all examples of the located at relationship between the campground and location entity types. Accordingly, it\u2019s important to include examples of the basic task building blocks of such a system: classi\ufb01cation, typed entity extraction, and relation extraction in a benchmark dataset. In doing so, it would unify the prior work in zero-shot NLP (Sec. 2.4) that has focused on just a single task, and require a single model to be able to handle any of these tasks at test time, instead of separate models for each task. More concretely, as each task \u03c4 de\ufb01nes a set of allowable outputs y \u2208Y, we can mix multiple output sets Y in a single dataset as long as the output set is speci\ufb01ed in the task description. ZEST includes the most common output sets: discrete classes, lists of (optionally) typed spans from the input, and relationships between spans. Examples of each are shown in Table 1, where it is clear from the task description which output Y is expected. In addition, we also include the NA output (Rajpurkar et al., 2018), signifying that it is not possible to solve the task given the input x. 
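To make the three output sets concrete, the following hand-written instances (field names are illustrative assumptions rather than the released ZEST schema) show a discrete class, a list of extracted spans, and a JSON-style relation structure, each fully determined by its task description.

```python
# Hand-written illustration of the output sets described above; the field
# names and exact strings are assumptions, not the released data format.
instances = [
    {   # discrete class
        "question": "Can I hike to a waterfall at this national park?",
        "passage": "...Whiskeytown has four major waterfalls...",
        "answer": "Yes",
    },
    {   # list of (optionally typed) spans from the input
        "question": "What kind of trout can be found at this national park?",
        "passage": "...brown trout has the potential to impact brook trout...",
        "answer": ["brown trout", "brook trout"],
    },
    {   # relationships between spans, serialised as a JSON-like structure
        "question": "What waterfall hikes are there in this national park "
                    "and are they wheelchair accessible?",
        "passage": "...Bridalveil Fall ... this trail is not wheelchair accessible...",
        "answer": [{"waterfall hike": "Bridalveil Fall", "wheelchair accessible": "No"}],
    },
]
```

Any of these tasks may also be paired with a passage that does not support an answer, in which case the expected output is NA.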
For example, if the task asks a model to extract campground names but the input is an unrelated news article, the output is NA. Being able to correctly identify unsolvable tasks is important in a practical setting where it is not reasonable to expect every possible task to be solvable with every possible input. To move beyond aggregating existing approaches into a single dataset, recall that in our framework observing the task description d\u03c4 in addition to the input x allows us to test a model\u2019s generalization relative to four variables: x, y, \u03c4, and d\u03c4 (Fig. 1). Motivated by this observation, we propose an approach that systematically varies the task descriptions and inputs while controlling for other sources of variability in order to test whether a system can generalize in multiple ways. To implement this idea, we begin by collecting a set of task descriptions, d\u03c4, inputs x, and associated outputs, y. This base group of instances already allows us to test performance of unseen tasks on un\fGeneralization Question Input Passage (shortened) Answer Base Can I hike to a waterfall at this national park? ... Yet here at Whiskeytown NRA, we encourage you to chase waterfalls go visit them! Whiskeytown has four major waterfalls ... Yes Paraphrase Is there a waterfall to hike to at this national park? (same as above) Yes Semantic Flips Can I hike to a canyon at this national park? ... descending 1,300 feet (396 m) past a large alcove, the trail meanders in a wide canyon ... Yes Composition What time of year is best to see the popular waterfalls in this national park? ... Two viewing platforms provide the best view of Great Falls. This overlook is the last place that the Falls can be viewed ... NA Output Structure What waterfall hikes are there in this national park and are they wheelchair accessible? ... Bridalveil Fall is often the \ufb01rst waterfall you\u2019ll see when entering ... Although paved, this is trail is not wheelchair accessible due to its grade. [{\u201cwaterfall hike\u201d:\u201cBridalveil Fall\u201d, \u201cwheelchair accessible\u201d: \u201cNo\u201d}] Table 1: Example instances from ZEST. The composition question is combined with \u201cWhat are the popular tourist spots in this national park?\u201d We chose to format the relation extraction questions as JSON, see Section 5.2 for details. seen input. We further augment it with four types of controlled generalization: paraphrase, semantic \ufb02ips, composition, and output structure. Examples of each type of generalization are given in Table 1. Paraphrase We can test generalization to changes in the task description d\u03c4 while keeping the task \u03c4 \ufb01xed by paraphrasing the description. By also \ufb01xing x, we can use these paraphrases to test whether a model consistently predicts the correct output given the same input and underlying task. As we collect applicable inputs x for a task using a retrieval mechanism given the task description (Section 4), this also adds some lexical distance between the input and the description, to avoid simple lexical shortcuts to solving the task (Gardner et al., 2019). Semantic \ufb02ips Closely contrasting examples have long provided an effective means of evaluation in NLP (Levesque et al., 2012; Sennrich, 2017), forcing a model to understand how small changes in inputs correspond to large changes in expected outputs. We take inspiration from this idea to include task description semantic \ufb02ips, where a given task is modi\ufb01ed in a minimal way (e.g. 
by changing a single word) to semantically change the meaning of the task. As the description is largely unchanged (including the output set Y), this tests whether systems can distinguish between descriptions that are minimally changed. Composition To further test whether systems can understand a task description, we can compose base tasks into new tasks with operators such as \u201cand\u201d and \u201cor\u201d. By examining the performance difference between the base group of tasks and the compositionally generated group of tasks we can estimate the extent to which a system can compose tasks in a novel way. Output structure We can also test whether models can generalize to unseen structured outputs y1 \u2208Y where y1 is not seen in the training set. Among the many ways to accomplish this, we chose a method that asks models to produce output equivalent to slot \ufb01lling or n-ary relationship extraction in the zero-shot setting. In this case, \ftask descriptions correspond to a speci\ufb01cation of an output structure that includes typed entity and relationship extraction where the entity types and relationships have not been seen in training. 4 Collecting ZEST To illustrate our novel way of evaluating and framing the \u201clearning from task descriptions\u201d problem, we provide an empirical demonstration of where current systems fail by collecting a challenge dataset. We hope this will serve as a starting point for making progress towards this goal of learning from descriptions. In this section we describe our annotation efforts, which consist of our design for the dataset, as well as three crowdsourcing steps: collecting tasks (in question form), gathering relevant documents, and annotating answers for the (task, document) pairs. 4.1 Dataset Design Our dataset consists of base task descriptions which are varied along the four areas of generalization found in Section 3, allowing us to systematically control for generalization across the different base tasks. We collect annotations for approximately 20 different input documents for each task so that we can calculate the competency metric. The framework described in Section 2.4 applies to any task description, thus, it is agnostic to the speci\ufb01c format. In deciding how to format the task descriptions in ZEST we chose to use a question format for the tasks, as crowdsourcing annotations for questions is well established, and a QA format may potentially allow transfer from existing question answering datasets. We note however, that a declarative task description such as \u201creturn a list of hikes in the national park described in the document\u201d fundamentally asks for the same information as the question \u201cwhat are the hikes in this national park?\u201d As a result, we will use the terms task description and question interchangeably when discussing our creation of ZEST. 4.2 Task Generation As each question should apply to numerous documents, we used Mechanical Turk4 to crowdsource common questions that someone might ask 4We initially opened our crowdsourcing pipeline to the U.S. population on Mechanical Turk that had above a 99% acceptance rate with over 5000 completed HITs, but reduced this pool to only include workers who performed well on initial HITs. Statistic Train Dev Test (task, passage) pairs 10,766 2,280 11,980 Avg. passage words 121 122 122 Number of tasks 538 114 599 Avg. task len [words] 12.3 12.2 11.8 NA percent 0.62 0.67 0.62 Classi\ufb01cation Percent 0.46 0.49 0.44 Table 2: Summary Statistics for ZEST. 
Note that NA is the most frequent answer. about three different domains: U.S. presidents, dog breeds, and U.S. national parks. We use multiple domains to include diversity in our tasks, choosing domains that have a multitude of entities to which a single question could be applied. Workers were asked to generate questions that could apply to any entity in that domain and we manually removed questions that contained duplicate meanings to maintain a rich semantic space. This left us with approximately 100 base task descriptions for each domain. These tasks were generated before gathering input documents, alleviating biases from having workers who had already seen the input passages. We split these tasks into 50% test, 40% train, and 10% development. We then employed other workers to alter them along one of the four areas of generalization. For the paraphrase generation, we asked workers to paraphrase the text so that it retained its original meaning but had a different wording. For the semantic \ufb02ip questions we asked the workers to keep as much of the task description the same as possible, but to make a slight change that would alter the meaning of the task. Composition tasks were created by randomly sampling three tasks from within each dataset split to combine, letting the worker choose two out of the three. Tasks for the output structure were created by expanding the base tasks to include multiple structured sub-tasks, using a custom built UI that automatically compiled workers\u2019 responses into JSON format. Each task description created for a particular area of generalization followed its base task to the corresponding dataset split. Hence the test set contains its own unique base questions as well the derived questions for each area of generalization. \f4.3 Passage Retrieval In order to gather a unique set of passages that pertain to a given question, we used Bing and Google Custom Search engines, focusing the results on a narrow subset of webpages. For U.S. Presidents, our queries were limited to results from Wikipedia pages (for all 45 presidents) as well as information contained on Whitehouse.gov, containing biographies and accomplishments for each President and First Lady. Similarly, we limited our queries of dog breeds to all 524 pages of Dog Breeds on Wikipedia. The U.S. National Park passages were retrieved from sub-pages of the National Parks website. On each of these domains, we ensured that no single entity garnered more than 5% of the total input documents. Details on how we used these search engines to gather the passages can be found in Appendix A and in our code. 4.4 Document Annotations We paired the gathered task descriptions with their respective passages and employed our expert workers from Mechanical Turk to annotate the answers. We had three workers annotate each (task, document) pair. For the tasks that could be answered with a yes or no response, \ufb01nal answers were chosen by taking the majority answer. For tasks that involved extracting information from the passage, we used the answer that was the subset of the other answers, preferring shorter responses over longer responses. 25,026 (task, input, answer) triples, with a total of 1251 task descriptions split across the three domains. These tasks were distributed as 45% extraction, 45% classi\ufb01cation and 10% mixed (due to the output structure tasks). More summary statistics can be found in Table 2. Our annotation costs were approximately 9,000 USD. 
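The aggregation rules above reduce to a few lines of code; the sketch below (simplified, and not the authors' implementation) illustrates the majority vote for yes/no tasks and the shortest-subset preference for extraction tasks.

```python
# Simplified sketch of the answer aggregation described above.
from collections import Counter
from typing import List

def aggregate_yes_no(annotations: List[str]) -> str:
    # Majority vote over the worker labels (e.g. "Yes" / "No" / "NA").
    return Counter(annotations).most_common(1)[0][0]

def aggregate_extraction(annotations: List[str]) -> str:
    # Prefer the answer that is contained in the other answers, i.e. the
    # shortest response that the other annotations subsume.
    for cand in sorted(annotations, key=len):
        if all(cand.lower() in other.lower() for other in annotations):
            return cand
    # Fall back to the shortest answer if no candidate is a subset of all others.
    return min(annotations, key=len)

print(aggregate_yes_no(["Yes", "Yes", "No"]))                    # -> Yes
print(aggregate_extraction(["Bridalveil Fall",
                            "the Bridalveil Fall trail",
                            "Bridalveil Fall, Yosemite"]))       # -> Bridalveil Fall
```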
5 Establishing a Baseline This section describes our baseline model results. 5.1 Evaluation Due to class imbalance, we adopt F1 as the metric when computing the task competency (Sec. 2.3). However, to account for partial overlap between model and gold answers, we modify the precision P and recall R as follows. Each task \u03c4 has a number of instances (xi, yi). For each instance, we compute a partial overlap score si that includes an output-type aware5 best alignment between the model and gold answers and scores individual elements with a word overlap based method. This is similar to common practice in QA evaluation, extended to handle ZEST\u2019s output types. Then, with NA as the negative class, we compute P = P i si/m+, R = P i si/g+ where m+ and g+ are the total model predicted positive (not-NA) and gold positive instances. We take each task\u2019s F1 score and evaluate the competency metric for each task, reporting these scores in our \ufb01nal results. Additionally, when tasks are closely related we use a more stringent consistency metric (Gardner et al., 2020) that computes whether a model is competent in both tasks at the same time. For paraphrases and semantic \ufb02ips, our C@T metrics only count a model as competent for a task if it is competent for both the base task description and the changed task description. This helps to avoid giving the model credit for arti\ufb01cially simple decision boundaries that only accidentally solve the task. 5.2 Modeling For baselines, we adopt two recent state-of-the-art models, T5 (Raffel et al., 2019) and BART (Lewis et al., 2020), both because of their positions on top of popular NLP leaderboards and their text-to-text nature. Beyond training on ZEST alone, we also trained T5 using multitask learning (MTL) with a combination of other QA datasets to test transfer to ZEST: BoolQ (Clark et al., 2019), MultiRC (Khashabi et al., 2018), ReCoRD (Zhang et al., 2018), and SQuAD (Rajpurkar et al., 2016). Data Preprocessing To prepare each task\u2019s instances for the model, we prepended \u201czeroshot question: \u201d to the task description and \u201czeroshot context: \u201d to the document, then joined these two parts together with whitespace. For output structure generalization, we formatted the answers as JSON to enable more complex zero-shot relation extraction tasks. Thus, the models output answers as both text and JSON, in a seq-to-seq fashion, depending on the question type. When the question calls for JSON, we deserialize and evaluate it, counting deserialization failures as errors. See Appendix B for more on data preprocessing. Training & Hyper-parameters For T5 11B, our best baseline, training used input and output 5ZEST includes strings, sets of strings, lists of dicts, and three discrete classes (Yes/No/NA) as valid output types. \fDev Test Mean C@75 C@90 Mean C@75 C@90 BART-large ZEST only 40 13 8 38 11 4 T5-11B ZEST only 56 32 12 55 28 11 T5-11B ZEST w/MTL 56 35 14 56 28 12 Human Estimate 74 61 42 Table 3: Overall performance of baseline models showing the mean F1 and competency at 75% and 90%. Our best model, a T5 model with multi-task learning from other QA datasets (Section 5.2), is only able to perform 12% of unseen tasks at 90% F1, compared to a human estimate of 42% of tasks at 90% competency. 
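The task-level scoring of Section 5.1 can be summarised as follows; this sketch (not the authors' evaluation code) assumes the per-instance partial-overlap scores s_i have already been produced by the output-type-aware alignment described above, and treats NA as the negative class.

```python
# Sketch of the modified precision/recall described in Section 5.1.
from typing import List

def task_f1(s: List[float], pred_not_na: List[bool], gold_not_na: List[bool]) -> float:
    m_pos = sum(pred_not_na)   # m+: model predictions that are not NA
    g_pos = sum(gold_not_na)   # g+: gold answers that are not NA
    s_total = sum(s)           # sum of per-instance partial-overlap scores
    precision = s_total / m_pos if m_pos else 0.0
    recall = s_total / g_pos if g_pos else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Each task's F1 is then thresholded exactly as in the C@T metric of Section 2.3.
```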
Dev Test Generalization Type Mean C@75 C@90 Mean C@75 C@90 Base 71 48 16 63 43 22 Paraphrase 64 36 12 56 32 16 Composition 66 44 22 65 41 15 Semantic Flips 54 27 9 47 18 5 Output Structure 33 20 10 47 10 3 Overall w/MTL 56 35 14 56 28 12 Table 4: Detailed T5-11B results for ZEST with multi-task learning using other QA datasets (Section 5.2). Input Mean C@75 C@90 Full data 56 32 12 Question only 12 10 7 Context only 1 1 1 Table 5: T5-11B ablation results on the development set using the full dataset, question only and context only. Only the overall results are shown. The context only model predicted NA for each instance. sequence lengths of 512, a batch size of 32, and grid searched four different learning rates (5e-4, 1e-3, 2e-3, and 4e-3). See Appendix C for BART and other T5 details. 5.3 Results We present our overall results on ZEST, an ablation using T5 to probe for annotation artifacts (Gururangan et al., 2018), and an error analysis breaking down common mistakes. Baseline Performance Table 3 shows the performance of the baselines on ZEST, as well as an estimate of human performance.6 We report mean F1 across all instances in the data, ignoring their 6Computed by having an author label answers to 55 tasks from the test set. grouping into tasks, as well as our proposed C@T metric, for T \u2208{75, 90}. The best T5-11B model has mean performance of 56% on the development set, while the BART model has lower scores. Moreover, when we evaluate task competence, we see these models only rarely successfully solve the whole task well. For C@90, the T5 model\u2019s overall score is only 12% on the test set. Multitasking ZEST with other QA datasets only slightly improved results. Table 4 shows a detailed breakdown of performance across generalization type for the T5 model with multi-tasking. Detailed results for BART are in the Appendix. Model performance decreases as the generalization dif\ufb01culty increases from the Base level to Output Structure. Consistently recovering models from task descriptions alone remains a signi\ufb01cant challenge. Annotation Artifacts & Ablations Table 5 shows ablations on the dev set using T5, illustrating that both the question and context are needed for the model to perform well, as one would expect. We see that in the context only ablation, the model predicted NA (majority class) for all instances, showing that there were not any systematic biases in the passages alone that the model could exploit. The context only F1 is non-zero due the fact that one task had all NA answers, which is \fError Question Input Passage (shortened) Predicted Correct Recall (30%) Did this president get a graduate degree? ... at Harvard University, where he earned an M.A. in economics ... N/A Yes Precision (37%) Are the volcanoes in this national park dormant? ... Dormant: A volcano that is inactive or resting, but is likely to erupt again in the near future. Extinct: A volcano that has stopped erupting ... Yes NA Partial (9%) What kind of trout can be found at this national park? ... The presence of non-native brown trout has the potential to impact brook trout and other native \ufb01sh populations within several of the park\u2019s premier large streams ... Brown trout Brown trout,brook trout Other (24%) Was this dog breed accepted in the american kennel club in the last twenty years? ... The Cavalier would go on to be recognized by the American Kennel Club in 1995 ... No Yes Table 6: Error distribution of the baseline model. 
Recall errors are when the model incorrectly predicts N/A; precision errors are when the model should have predicted N/A, but didn\u2019t; partial answers are when the model failed to predict all of the members of a list. Other common errors included failing to apply reasoning to answer a question, and predicting the wrong key names when producing JSON outputs. counted as competent by convention. Error Analysis In order to more clearly understand where these models fail, we examined 100 instances of model errors and categorized them. The most frequent errors were when the model failed to recognize the answer (30% of the time) or predicted something when the answer was NA (37%). We provide detailed examples and descriptions in Table 6. Interestingly, the model failed to output parseable JSON on only 1.5% of all structure questions in the test set and generated a JSON structure format for only 0.008% of non-structure questions, showing strong results for learning the format for outputting the complex relationships. 6" + }, + { + "url": "http://arxiv.org/abs/1909.00252v1", + "title": "Humor Detection: A Transformer Gets the Last Laugh", + "abstract": "Much previous work has been done in attempting to identify humor in text. In\nthis paper we extend that capability by proposing a new task: assessing whether\nor not a joke is humorous. We present a novel way of approaching this problem\nby building a model that learns to identify humorous jokes based on ratings\ngleaned from Reddit pages, consisting of almost 16,000 labeled instances. Using\nthese ratings to determine the level of humor, we then employ a Transformer\narchitecture for its advantages in learning from sentence context. We\ndemonstrate the effectiveness of this approach and show results that are\ncomparable to human performance. We further demonstrate our model's increased\ncapabilities on humor identification problems, such as the previously created\ndatasets for short jokes and puns. These experiments show that this method\noutperforms all previous work done on these tasks, with an F-measure of 93.1%\nfor the Puns dataset and 98.6% on the Short Jokes dataset.", + "authors": "Orion Weller, Kevin Seppi", + "published": "2019-08-31", + "updated": "2019-08-31", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG" + ], + "main_content": "Introduction Recent advances in natural language processing and neural network architecture have allowed for widespread application of these methods in Text Summarization (Liu et al., 2018), Natural Language Generation (Bahuleyan, 2018), and Text Classi\ufb01cation (Yang et al., 2016). Such advances have enabled scientists to study common language practices. One such area, humor, has garnered focus in classi\ufb01cation (Zhang and Liu, 2014; Chen and Soo, 2018), generation (He et al., 2019; Valitutti et al., 2013), and in social media (Raz, 2012). The next question then is, what makes a joke humorous? Although humor is a universal construct, there is a wide variety between what each individual may \ufb01nd humorous. We attempt to focus on a subset of the population where we can quantitatively measure reactions: the popular Reddit r/Jokes thread. This forum is highly popular with tens of thousands of jokes being posted monthly and over 16 million members. Although larger joke datasets exist, the r/Jokes thread is unparalleled in the amount of rated jokes it contains. To the best of our knowledge there is no comparable source of rated jokes in any other language. 
These Reddit posts consist of the body of the joke, the punchline, and the number of reactions or upvotes. Although this type of humor may only be most enjoyable to a subset of the population, it is an effective way to measure responses to jokes in a large group setting.1 What enables us to perform such an analysis are the recent improvements in neural network architecture for natural language processing. These breakthroughs started with the Convolutional Neural Network (LeCun et al., 1998) and have recently included the inception (Bahdanau et al., 2015) and progress of the Attention mechanism (Luong et al., 2015; Xu et al., 2015), and the Transformer architecture (Vaswani et al., 2017). 2 Related Work In the related work of joke identi\ufb01cation, we \ufb01nd a myriad of methods employed over the years: statistical and N-gram analysis (Taylor and Mazlack, 2004), Regression Trees (Purandare and Litman, 2006), Word2Vec combined with K-NN Human Centric Features (Yang et al., 2015), and Convolutional Neural Networks (Chen and Soo, 2018). This previous research has gone into many settings where humor takes place. Chen and Soo (2018) studied audience laughter compared to textual transcripts in order to identify jokes in conversation, while much work has also gone into us1See the thread (of varied and not safe for work content) at this link. We do not endorse these jokes. arXiv:1909.00252v1 [cs.CL] 31 Aug 2019 \fBody Punchline Score Man, I was so tired last night; I had a dream I was a muf\ufb02er... and I woke up exhausted 276 I told my teenage niece to go get me a newspaper... She laughed at me, and said, \u201dOh uncle you\u2019re so old. Just use my phone.\u201d So I slammed her phone against the wall to kill a spider. 28315 Table 1: Example format of the Reddit Jokes dataset ing and creating datasets like the Pun of the Day (Yang et al., 2015), 16000 One-liners (Mihalcea and Strapparava, 2005), and even Ted Talks (Chen and Soo, 2018). 3 Data We gathered jokes from a variety of sources, each covering a different type of humor. These datasets include jokes of multiple sentences (the Short Jokes dataset), jokes with only one sentence (the Puns dataset), and more mixed jokes (the Reddit dataset). We have made our code and datasets open source for others to use. 2 3.1 Reddit Our Reddit data was gathered using Reddit\u2019s public API, collecting the most recent jokes. Every time the scraper ran, it also updated the upvote score of the previously gathered jokes. This data collection occurred every hour through the months of March and April 2019. Since the data was already split into body and punchline sections from Reddit, we created separate datasets containing the body of the joke exclusively and the punchline of the joke exclusively. Additionally, we created a dataset that combined the body and punchline together. Some sample jokes are shown in Table 1, above. The distribution of joke scores varies wildly, ranging from 0 to 136,354 upvotes. We found that there is a major jump between the 0-200 upvote range and the 200 range and onwards, with only 6% of jokes scoring between 200-20,000. We used this natural divide as the cutoff to decide what quali\ufb01ed as a funny joke, giving us 13884 notfunny jokes and 2025 funny jokes. 3.2 Short Jokes The Short Jokes dataset, found on Kaggle, contains 231,657 short jokes scraped from various joke websites with lengths ranging from 10 to 200 2Our code and datasets are publicly available at this link. characters. 
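The construction of length-matched negatives from the news crawl is not spelled out in detail; one plausible realisation (an assumption for illustration, matching only the word-length histogram, with character lengths handled analogously) is sketched below.

```python
# Illustrative sketch of building length-matched negative samples from a news
# corpus; the authors' exact matching procedure is not specified, so this is
# only one plausible realisation.
import random
from collections import Counter, defaultdict
from typing import Dict, List

def length_matched_negatives(jokes: List[str], news: List[str], seed: int = 0) -> List[str]:
    # Build the word-length histogram of the jokes, then draw news sentences
    # so the negatives follow the same histogram; assumes a large news pool.
    rng = random.Random(seed)
    target = Counter(len(j.split()) for j in jokes)
    pool: Dict[int, List[str]] = defaultdict(list)
    for sent in news:
        pool[len(sent.split())].append(sent)
    negatives: List[str] = []
    for length, count in target.items():
        candidates = pool.get(length, [])
        rng.shuffle(candidates)
        negatives.extend(candidates[:count])
    return negatives
```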
The previous work by Chen and Soo (2018) combined this dataset with the WMT162 English news crawl. Although their exact combined dataset is not publicly available, we used the same method and news crawl source to create a similar dataset. We built this new Short Jokes dataset by extracting sentences from the WMT162 news crawl that had the same distribution of words and characters as the jokes in the Short Jokes dataset on Kaggle3. This was in order to match the two halves (jokes and non-jokes) as closely as possible. 3.3 Pun of the Day This dataset was scraped by Yang et al. (2015) and contains 16001 puns and 16002 not-punny sentences. We gratefully acknowledge their help in putting together and giving us use of this dataset. These puns were constructed from the Pun of the Day website while the negative samples were gathered from news websites. 4 Methods In this section we will discuss the methods and model used in our experiments. 4.1 Our Model We have chosen to use the pre-trained BERT (Devlin et al., 2018) as the base of our model. BERT is a multi-layer bidirectional Transformer encoder and was initially trained on a 3.3 billion word corpus. The model can be \ufb01ned-tuned with another additional output layer for a multitude of other tasks. We chose to use this Transformer based model as our initial platform because of its success at recognizing and attending to the most important words in both sentence and paragraph structures. In Figure 1, originally designed by Vaswani et al. (2017), we see the architecture of a Transformer model: the initial input goes up through an encoder, which has two parts: a multi-headed 3The Short Jokes dataset from Kaggle is available here. \fFigure 1: Transformer Model Architecture self attention layer, followed by a feed-forward network. It then outputs the information into the decoder, which includes the previously mentioned layers, plus an additional masked attention step. Afterwords, it is transformed through a softmax into the output. This model\u2019s success is in large part due to the Transformer\u2019s self-attention layers. We chose a learning rate of 2e-05 and a max sequence length of 128. We trained the model for a maximum of 7 epochs, creating checkpoints along the way. 4.2 Training Since our data was unbalanced we decided to upsample the humorous jokes in training. We split the dataset into a 75/25 percent split, stratifying with the labels. We then upsampled the minority class in the training set until it reached an even 50 percent. This helped our model learn in a more balanced way despite the uneven amount of nonhumorous jokes. Our validation and test sets were composed of the remaining 25%, downsampling the data into a 50/50 class split so that the accuracy metric could be balanced and easily understood. To show how our model compares to the previous work done, we also test on the Short Joke and Pun datasets mentioned in the Data section. For these datasets we will use the metrics (Accuracy, Precision, Recall, and F1 Score) designated in Chen and Soo (2018) as a comparison. We use Method Body Punchline Full CNN 0.651 0.684 0.688 Transformer 0.661 0.692 0.724 Human (General) 0.493 0.592 0.663 Table 2: Results of Accuracy on Reddit Jokes dataset the same model format as previously mentioned, trained on the Reddit dataset. We then immediately apply the model to predict on the Short Joke and Puns dataset, without further \ufb01ne-tuning, in order to compare the model. 
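The class-balancing scheme of Section 4.2 (a stratified 75/25 split, upsampling the minority humorous class for training, and downsampling the held-out data to a 50/50 class split for evaluation) could be realised as in the following sketch; this is an illustration under those assumptions rather than the authors' code, and it folds the validation and test portions into a single held-out set for brevity.

```python
# Sketch of the balancing scheme described in Section 4.2 (not the authors' code).
import random
from typing import List, Tuple

def split_and_balance(pos: List[str], neg: List[str], seed: int = 0
                      ) -> Tuple[List[Tuple[str, int]], List[Tuple[str, int]]]:
    rng = random.Random(seed)
    rng.shuffle(pos)
    rng.shuffle(neg)
    # Stratified 75/25 split: the class ratio is preserved in both partitions.
    p_cut, n_cut = int(0.75 * len(pos)), int(0.75 * len(neg))
    train_pos, eval_pos = pos[:p_cut], pos[p_cut:]
    train_neg, eval_neg = neg[:n_cut], neg[n_cut:]
    # Upsample the minority (humorous) class until training is 50/50;
    # assumes both classes are non-empty.
    while len(train_pos) < len(train_neg):
        train_pos.append(rng.choice(train_pos))
    # Downsample the held-out data to a balanced 50/50 evaluation set.
    k = min(len(eval_pos), len(eval_neg))
    train = [(x, 1) for x in train_pos] + [(x, 0) for x in train_neg]
    held_out = [(x, 1) for x in eval_pos[:k]] + [(x, 0) for x in eval_neg[:k]]
    rng.shuffle(train)
    rng.shuffle(held_out)
    return train, held_out
```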
However, because both the Puns and Short Joke datasets have large and balanced labels, we do so without the upsampling and downsampling steps used for the Reddit dataset. 5 Experiments In this section we will introduce the baselines and models used in our experiments. 5.1 Baselines In order to have fair baselines, we used the following two models: a CNN with Highway Layers as described by Chen and Soo (2018) and developed by Srivastava et al. (2015), and human performance from a study on Amazon\u2019s Mechanical Turk. We wanted to have the general population rate these same jokes, thus showing the difference between a general audience and a speci\ufb01c subset of the population, in particular, Reddit r/Jokes users. Since the Reddit users obviously found these jokes humorous, this experiment would show whether or not a more general population agreed with those labels. We had 199 unique participants rate an average of 30 jokes each with the prompt \u201ddo you \ufb01nd this joke humorous?\u201d If the participant was evaluating a sample from a body or punchline only dataset we prefaced our question with a sentence explaining that context, for example: \u201dBelow is the punchline of a joke. Based on this punchline, do you think you would \ufb01nd this joke humorous?\u201d Taking these labels, we used the most frequently chosen tag from a majority vote to calculate the percentages found in the Human section of Table 2. 5.2 Results In Table 2, we see the results of our experiment with the Reddit dataset. We ran our models on \fPrevious Work: Accuracy Precision Recall F1 Word2Vec+HCF 0.797 0.776 0.836 0.705 CNN 0.867 0.880 0.859 0.869 CNN+F 0.892 0.886 0.907 0.896 CNN+HN 0.892 0.889 0.903 0.896 CNN+F+HN 0.894 0.866 0.940 0.901 Our Methods: Accuracy Precision Recall F1 Transformer 0.930 0.930 0.931 0.931 Table 3: Comparison of Methods on Pun of the Day Dataset. HCF represents Human Centric Features, F for increasing the number of \ufb01lters, and HN for the use of highway layers in the model. See (Chen and Soo, 2018; Yang et al., 2015) for more details regarding these acronyms. the body of the joke exclusively, the punchline exclusively, and both parts together (labeled full in our table). On the full dataset we found that the Transformer achieved an accuracy of 72.4 percent on the hold out test set, while the CNN was in the high 60\u2019s. We also note that the general human classi\ufb01cation found 66.3% of the jokes to be humorous. In order to understand what may be happening in the model, we used the body and punchline only datasets to see what part of the joke was most important for humor. We found that all of the models, including humans, relied more on the punchline of the joke in their predictions (Table 2). Thus, it seems that although both parts of the joke are needed for it to be humorous, the punchline carries higher weight than the body. We hypothesize that this is due to the variations found in the different joke bodies: some take paragraphs to set up the joke, while others are less than a sentence. Our experiment with the Short Jokes dataset found the Transformer model\u2019s accuracy and F1 score to be 0.986. This was a jump of 8 percent from the most recent work done with CNNs (Table 4). The results on the Pun of the Day dataset are shown in Table 3 above. It shows an accuracy of 93 percent, close to 4 percent greater accuracy than the best CNN model proposed. 
Although the CNN model used a variety of techniques to extract the best features from the dataset, we see that the self-attention layers found even greater success in pulling out the crucial features. 6 Discussion Considering that a joke\u2019s humor value is subjective, the results on the Reddit dataset are surprisMethod Accuracy Precision Recall F1 CNN+F+HN 0.906 0.902 0.946 0.924 Transformer 0.986 0.986 0.986 0.986 Table 4: Results on Short Jokes Identi\ufb01cation ing. The model has used the context of the words to determine, with high probability, what an average Reddit r/Jokes viewer will \ufb01nd humorous. When we look at the general population\u2019s opinion as well, we \ufb01nd a stark difference between their preferences and those of the Reddit users (Table 2). We would hypothesize that our model is learning the speci\ufb01c type of humor enjoyed by those who use the Reddit r/Jokes forum. This would suggest that humor can be learned for a speci\ufb01c subset of the population. The model\u2019s high accuracy and F1 scores on the Short Jokes and Pun of the Day dataset show the effectiveness of the model for transfer learning. This result is not terribly surprising. If the model can \ufb01gure out which jokes are funny, it seems to be an easier task to tell when something isn\u2019t a joke at all. Although these results have high potential, de\ufb01ning the absolute truth value for a joke\u2019s humor is a challenging, if not impossible task. However, these results indicate that, at least for a subset of the population, we can \ufb01nd and identify jokes that will be most humorous to them. 7" + } + ], + "Sean Macavaney": [ + { + "url": "http://arxiv.org/abs/2306.09657v1", + "title": "Online Distillation for Pseudo-Relevance Feedback", + "abstract": "Model distillation has emerged as a prominent technique to improve neural\nsearch models. To date, distillation taken an offline approach, wherein a new\nneural model is trained to predict relevance scores between arbitrary queries\nand documents. In this paper, we explore a departure from this offline\ndistillation strategy by investigating whether a model for a specific query can\nbe effectively distilled from neural re-ranking results (i.e., distilling in an\nonline setting). Indeed, we find that a lexical model distilled online can\nreasonably replicate the re-ranking of a neural model. More importantly, these\nmodels can be used as queries that execute efficiently on indexes. This second\nretrieval stage can enrich the pool of documents for re-ranking by identifying\ndocuments that were missed in the first retrieval stage. Empirically, we show\nthat this approach performs favourably when compared with established pseudo\nrelevance feedback techniques, dense retrieval methods, and sparse-dense\nensemble \"hybrid\" approaches.", + "authors": "Sean MacAvaney, Xi Wang", + "published": "2023-06-16", + "updated": "2023-06-16", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.CL" + ], + "main_content": "INTRODUCTION Search result ranking is a critical component of information retrieval systems, and recent advancements in neural networks, especially pre-trained language models, have shown great promise in improving its effectiveness [3, 12]. Despite their potential, optimal ranking outcomes often require extensive model training, particularly for large-scale parameter models. Re-rankers, which reorder a set of documents (i.e., pseudo-relevant documents) retrieved by another component, can be a cost-effective solution [19, 26]. 
However, one significant drawback of re-rankers is that they completely discard documents that were not identified in the first stage of retrieval. To overcome this recall problem, various strategies have been proposed, including dense retrieval (e.g., [29]), learned sparse retrieval (e.g., [7]), and document re-writing (e.g., [23]). Nonetheless, these approaches require considerable computation and storage overheads, which can be particularly burdensome for large document collections. Meanwhile, there has been considerable attention paid to model distillation for ranking. This involves training a smaller neural model, referred to as the \u201cstudent\u201d model, to mimic the behaviour of a larger neural \u201cteacher\u201d model [10]. The primary focus of the student model is to emulate the teacher model\u2019s generalization capabilities. As a result, we often observe that the student models perform just as well, if not better, than their teacher models [9]. However, \u2217Both authors contributed equally to this research. Authors\u2019 addresses: Sean MacAvaney, sean.macavaney@glasgow.ac.uk, University of Glasgow, United Kingdom; Xi Wang, xi-wang@ucl.ac.uk, University College London, United Kingdom. (a) Offline Distillation (b) Online Distillation (ODIS) Re-Ranker Teacher, Neural Query Model Student re-ranks distils executes against Index Re-Ranked Results First-Stage Results (Re-)Ranker Student, Neural Dataset (Re-)Ranker Teacher, Neural generates Training Targets distils Figure 1: Comparison between (a) Offline Distillation and (b) Online DIStillation (ODIS, ours). existing efforts in this area have primarily been conducted offline, meaning that the distillation occurs over a large set of queries from a training dataset. This can introduce additional costs to deploy the distilled student model. Further, the costs are paid each time a new teacher model is trained, e.g., for updates to match recent search trends. Figure 1(a) provides a graphical overview of the offline distillation process. It shows that two separate stages are required: the first optimizes the heavy teacher model to achieve promising results, and the second conducts the distillation to obtain a lighter student model. In this work, we bridge the use of pseudo-relevance feedback and model distillation by adding a new twist to both settings: using model distillation to provide pseudo-relevance feedback. Instead of ranking all documents for each query using an expensive model, we distil a lightweight single-query model. This model approximates the ranking scores for pseudo-relevant documents and generates a new ranked list over the entire document corpus for the specific query. This direction has several advantages: it is efficient enough to train during query processing itself, and with a suitable distilling process, the resulting student model itself is able to be executed as a query over the index. In contrast with existing pseudo-relevance feedback approaches, this setting allows for optimisation over arbitrary relevance functions, rather than relying on top-\ud835\udc58heuristics. Indeed, in this work, we demonstrate that distilling a student model comprised of a linear combination of term weights via minimising the rank-biased score differences can be effective. We find this setting outperforms the traditional pseudo relevance feedback technique of assuming a fixed top-\ud835\udc58documents are relevant. Figure 1(b) shows an overview of our approach, Online DIStillation (ODIS). 
Specifically, ODIS distils a lightweight model of the current result set at query-time, rather than prior work that has explored the distillation of a larger model prior to indexing. arXiv:2306.09657v1 [cs.IR] 16 Jun 2023 \fMacAvaney and Wang Table 1: Table of Symbols. Symbol Description Q Query D Document C Corpus of documents C\ud835\udc65 A subset of C, filtered and scored by function \ud835\udc65 S Scoring function (e.g., a cross-encoder) R Retrieval function (e.g., BM25) O Distilled scoring model \ud835\udc65\u00bb\ud835\udc66 \ud835\udc65re-ranked by \ud835\udc66 \ud835\udf03 Model parameters of O L Loss function for distilling O \ud835\udc64D1,D2 Weight of document pair D1 and D2 \ud835\udc5f L1 regularisation rate for distilling O In summary, our formulation of distillation differs from past efforts in several significant ways: (1) the distillation occurs online rather than offline, (2) the distillation targets improvement in recall for a specific query rather than building a model capable of scoring a variety of queries, and (3) the student model itself can be efficiently executed over an index. We experimentally demonstrate the effectiveness of our Online Distillation (ODIS) approach. We observe that ODIS has a strong ability to approximate various leading retrieval pipelines in ranking relevant documents. Moreover, by applying the learned lightweight scoring function (i.e., distilled student model) to rank the entire corpus efficiently, we observe consistent and significant performance improvements, especially with respect to the recall of missing relevant documents. Overall, our results highlight the potential of ODIS as a valuable technique for improving retrieval effectiveness.1 2 RELATED WORK In this section, we provide a relevant context from prior work for our online distillation approach, ODIS, and elaborate on past works on Pseudo-Relevance Feedback (PRF) and Model Distillation (MD). Pseudo Relevance Feedback. PRF techniques are commonly employed to enhance search results by assuming the relevance of the top-ranked documents in the initial search results to the user\u2019s query. The existing PRF methods can be categorised as per their used types of feedback, learning strategies and relevance criteria of documents. For instance, Bo1 [1] and RM3 [11] leveraged the statistical information of pseudo-relevant documents for query expansion, which also serves as a strong baseline (BM25 with RM3) for newly introduced retrievers [2, 27]. With the increasing trend of neural techniques, many neural PRF approaches are emerging, such as CEQE [21], which leverages a language model and pseudorelevant documents for query expansion through contextualised semantic similarity. Other similar techniques are NPRF [14], BERTQE [32], ColBERT-PRF [27] and ANCE-PRF [15], but they differ in applying neural techniques for updating pseudo-relevance documents [32], identifying meaningful expansions [27] or updating representations [15] for a given query. Despite their success, the 1 During peer review of this work, a contemporaneous preprint [25] was published that proposed a similar strategy to ours. 
Algorithm 1 Online Distillation Input: Q query, C corpus, S scoring function, R retrieval function CR \u2190top results from C for Q using R CR\u00bbS \u2190score CR for Q using S O \u2190distil CR\u00bbS CO \u2190top results from C for Q using O CO\u00bbS \u2190score CO for Q using S \u22b2Note: documents in CO \u2229CR need only be scored once by S \u22b2Note: the total number of documents scored by S can be capped Output: CR\u00bbS \u222aCO\u00bbS major limitations of these neural approaches are their time and memory inefficiency [31]. As a result, it is challenging to apply these techniques to evaluate the relevance of all documents for each query and instead, they tend to focus only on pseudo-relevant documents, which may not guarantee the retrieval of the most related documents for a given query. In contrast, our proposed method, ODIS, distils and approximates the relevance scores between queries and the top-ranked documents and extends the computation to all documents for an overall evaluation so as to address the above concern. The most similar prior work to ours is Lin [16], which proposed training linear classifiers to re-rank search results at runtime. Our work differs substantially from this, both in motivation and function. First, ODIS is designed to function as a query over an inverted index (i.e., re-write the query), rather than simply re-rank results. Special considerations were made for this \u2013 particularly by controlling the sparsity of the model. Further, rather than a classification approach, which assumes a certain number of documents are relevant and non-relevant, we take a distillation approach, wherein the model training objective aims to mimic the complete ranking. Model Distillation (MD). MD is a popular method that operates in a teacher-student mode, where a large neural model (teacher) transfers its learned knowledge to a smaller, more efficient model (student) [9]. Due to its efficiency and complexity-saving nature, it has been widely used in addressing multiple tasks [8, 24, 33], including our focus on document retrieval [6, 10, 17]. For example, TCT [17] distils the learned knowledge from the ColBERT model [12] to generate a similar relevance distribution \u2013 judged by KL divergence \u2013 on documents for queries. Unlike TCT, Choi et al. [3] encouraged the student model to generate similar representation embeddings for the \u2018[CLS]\u2019 token, queries and documents for the ad hoc retrieval. In contrast with existing approaches that train student models capable of handing a variety of queries, ODIS distils single-query student models, aimed at improving the recall for individual queries. 3 ODIS: ONLINE DISTILLATION Consider a scoring function S : Q \u00d7 D \u21a6\u2192R, which maps query Q and document D to a real-valued relevance score (notation in Table 1). When S is expensive to run (such as a cross encoder model), it is impractical to score all documents in corpus C for a given query. Therefore, a subset of the corpus CR \u2282C is selected for scoring, using the top results of efficient retrieval R : Q\u00d7D \u21a6\u2192R (e.g., BM25). \fOnline Distillation for Pseudo-Relevance Feedback This \u201cre-ranking\u201d or \u201ctelescoping\u201d approach is the predominant approach for using expensive models like S for ranking. However, CR will not necessarily contain all the documents that S would have scored the highest across all of C. 
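Algorithm 1 reads naturally as a short pipeline; the sketch below mirrors its steps using placeholder callables (an assumption for illustration, not the released implementation), with the scoring-budget cap noted in the algorithm modelled as an even split between the first-stage results and the documents newly retrieved by O.

```python
# Sketch of Algorithm 1 (Online Distillation); `retrieve`, `score`, `distil`,
# and `run_query` are placeholders for the first-stage retriever R, the
# expensive scorer S (e.g. a cross-encoder), the distillation of the
# lightweight model O, and executing O's term weights against a lexical index.
from typing import Callable, Dict, List

def odis(query: str,
         retrieve: Callable[[str, int], List[str]],
         score: Callable[[str, List[str]], Dict[str, float]],
         distil: Callable[[str, Dict[str, float]], Dict[str, float]],
         run_query: Callable[[Dict[str, float], int], List[str]],
         budget: int = 1000) -> Dict[str, float]:
    c_r = retrieve(query, budget // 2)                 # C_R from R (e.g. BM25)
    scored_r = score(query, c_r)                       # C_{R >> S}
    term_weights = distil(query, scored_r)             # O, distilled from S's scores
    c_o = run_query(term_weights, budget // 2)         # C_O: O executed over the corpus
    new_docs = [d for d in c_o if d not in scored_r]   # docs in both sets are not re-scored
    scored_o = score(query, new_docs)                  # C_{O >> S}
    return {**scored_r, **scored_o}                    # union of the two scored sets
```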
Online Distillation (ODIS) aims to better approximate the full ranking results of S over C for a given Q by distilling a lightweight, single-query model O : D \u21a6\u2192R. For a well-designed O, we can efficiently execute it over the entire corpus to produce a new ranked list CO \u2282C. Since O is distilled to mimic the rankings of S, we can expect CO to contain new documents that are potentially relevant to S. In other words, we expect CO \\ CR to contain documents that S will score highly. Given this, we can then score CO \u222aCR using S as the final output, in a typical PRF fashion. An overview of this process is given in Algorithm 1, and we detail the distillation procedure in the following section. 3.1 Distillation Procedure O can be any learnable function, but for efficiency, we consider it as a linear combination of features, parameterised by \ud835\udf03. The goal of the distillation process is to find a set of parameters that minimise the loss L between the document scores produced by S and those for O: min\ud835\udf03L\u0000S, Q, CR\u0001. An approximate solution can be found using stochastic gradient descent approaches, such as Adam [13]. Finding a linear combination of field values that reconstructs the (often arbitrary) scale of scores from S could be challenging, depending on the nature of the fields. We therefore propose using a weighted pairwise preference loss function: L\u0000S, Q, CR\u0001 = CR \u00d7CR \u2211\ufe01 (D1,D2) \ud835\udc64D1,D2 (O(D2) \u2212O(D1)) (1) where \ud835\udc64is a weighting score for the pair of documents. For \ud835\udc64, we use the difference in reciprocal ranks (from S) between the two documents, which prioritises pairs with at least one document scored near the top of the ranked list. In our experiments, we use token-level TF-IDF features,2 allowing the parameters \ud835\udf03to define the weights of query terms that can be executed against a lexical index for efficient retrieval. We introduce two practical modifications to ODIS to account for this setting. First, since most retrieval engines limit token weights to be positive, we include a ReLU activation function over \ud835\udf03, which clips negative weights up to 0. Second, since queries with many tokens are costly to execute, we add a \u21131 regularisation component to the loss, which pushes parameter weights towards zero. The weight of this component with respect to the data loss is controlled by hyperparameter \ud835\udc5f. We default \ud835\udc5f= 1, but automatically increase \ud835\udc5fby a factor of 10 if the model converges without achieving a target level of sparsity \ud835\udc61, the maximum number of non-zero token weights. Put another way, if the model converges without the desired sparsity, the model will continue to train, but with a higher regularisation rate, encouraging a model that is more sparse the next time it converges. 4 EXPERIMENTAL SETUP We run experiments to answer the following questions about ODIS: 2 ODIS could be applied in other settings as well, such as over dense document embedding features to find a strong query vector for dense retrieval; we leave this exploration to future work. RQ1 Can models distilled in an online fashion mimic the rankings from relevance models? The replication of the rankings is likely necessary for models to perform well as queries, and the answer to this question is non-obvious, considering the complexity of neural relevance models. 
RQ2 Do online distilled models generalise enough to identify new relevant documents from the corpus? This question is at the core of our method, since to goal is to improve the recall of re-ranking pool. RQ3 What is the computational and storage overheads of the approach? Overheads need to be low enough to allow for practical use. Datasets and Measures. We test our approach on the TREC Deep Learning 2019 and 2020 (DL193 and DL204) passage ranking datasets [4, 5]. We report nDCG and Recall (with a minimum relevance score of 2) cut off at 1000 results.5 We also measure the Rank Biased Overlap (RBO, \ud835\udc5d= 0.99) [28] between given result sets and exhaustive rankings of the scorer over the entire corpus to check how well the approach approximates the complete rankings. Pipelines. We test various pipelines that re-rank a cross-encoder model (CE6), which have shown promise as a teacher model with rich interactions between transformer-encoded queries and documents [18]. As initial result sets, we test: a lexical model (BM25), two dense models (BE, a bi-encoder7, and BECE, a bi-encoder distilled from the CE8), and an ensemble sparse-dense hybrid system of BM25 and BE (via reciprocal rank fusion at \ud835\udc58= 60). For context, we also include the results of CE after an exhaustive search over the entire corpus. Baselines. We compare ODIS with several competitive generalpurpose pseudo-relevance feedback methods. RM3 [11] and Bo1 [1] are lexical PRF methods that identify and weight salient terms from the top documents, as re-ranked by CE. Graph Adaptive Reranking (GAR, [20]) is a recent PRF approach incorporated as a step that pulls in documents nearby the top results during the re-ranking process. Several other neural PRF techniques, such as ANCE-PRF [30] and ColBERT-PRF [27], have been proposed in the literature. However, these approaches are not general \u2014 they are tied to a particular classes of models (single-representation or multi-representation dense retrieval, respectively.) We, therefore, also focus on only general-purpose PRF methods, including GAR [20], which represents a strong, recent advance. We plan to extend ODIS to include dense features in future work and compare it with neural techniques as the next stage of this work. In all pipelines, we score a maximum of 1000 documents using CE. To validate the effectiveness of various PRF approaches, for RM3, Bo1, and ODIS, we consume up to half the budget (500 documents) on the first-stage results and the remaining budget on the documents retrieved during PRF. Similarly, we use the \u201calternate\u201d variant of GAR, which consumes half the budget from the initial retrieval and half from nearby documents, in an iterative fashion. Parameters and Tuning. ODIS was developed on the BM25 pipeline over DL19 using a different cross-encoder model (MonoT5 [22]), 3 msmarco-passage/trec-dl-2019 4 msmarco-passage/trec-dl-2020 5 The official measure for the datasets are nDCG cut off at 10, but we observed virtually no difference among systems at this depth, so we explore deeper ranking quality. 6 cross-encoder/ms-marco-MiniLM-L-6-v2 7 sentence-transformers/msmarco-distilbert-base-v2 8 sentence-transformers/msmarco-distilbert-dot-v5 \fMacAvaney and Wang Table 2: Distillation quality of ODIS over various retrieval pipelines on DL19. Each row compares the pipeline to one with ODIS, either re-ranking the original results or using the ODIS model to retrieve from the corpus. 
* indicates significant \u0394nDCG values (Student\u2019s paired t-test, \ud835\udc5d< 0.05). Re-Rank Retrieve Pipeline RBO \u0394nDCG RBO Overlap +Rel/q BM25 \u00bb CE 0.584 *\u22120.017 0.377 0.269 10.2 BE \u00bb CE 0.623 \u22120.006 0.385 0.197 15.7 BECE \u00bb CE 0.593 \u22120.008 0.415 0.250 9.7 BM25+BE \u00bb CE 0.571 \u22120.006 0.407 0.300 7.3 while the DL20 dataset and other pipelines were held out for final evaluation only. This experimental setup was developed to ensure that neither the method itself nor the hyper-parameters are over-fit to our evaluation data; we demonstrate that the method and settings transfer zero-shot to other pipelines and datasets. To facilitate a fair comparison among methods, we equally tuned the number of feedback terms for RM3, Bo1, and ODIS on the BM25 pipeline over DL19. We observed stable and strong effectiveness for all three models at 50 feedback terms, which lead to a consistent application of this setting across all pipelines. As is common for PRF, we also tuned the \ud835\udf06parameter for all applicable methods (including ODIS), which represents the weight of the new query terms WRT the original query terms. 5 RESULTS AND ANALYSIS We begin by testing whether an ODIS lexical model is capable of adequately fitting the cross-encoder model (RQ1). To test this, we train ODIS models on the re-ranked results from the cross-encoder model and then compare the effectiveness of the model as per its ranking results to that of the original ones. Table 2 presents the results, both in re-raking and retrieval settings. When re-ranking, the ODIS model achieves RBO scores of between 0.571 and 0.623, when compared with the cross encoder\u2019s ranking \u2013 a reasonably strong overlap.9 Not all differences matter in terms of ranking result quality, however. We therefore also measure the difference in nDCG score (\u0394nDCG) between the original and ODIS results, and find that ODIS only degrades the quality by up to 0.017 nDCG. In three of the four cases, the differences are not statistically significant. These results answer RQ1: the rankings of a cross-encoder can be successfully distilled down into a linear combination of up to 50 TF-IDF-weighted terms. This conclusion is remarkable, given the complexity of cross-encoder models and the simplicity of the distilled model. Still, to achieve the highest-quality results in the final ranking, it\u2019s the teacher model should be used. However, the ODIS models might be overfitting to the CE rankings \u2013 the models are not valuable unless they can identify new relevant documents from the corpus (RQ2). The retrieval results from Table 2 show that the ODIS models indeed identify new potentiallyrelevant documents. The retrieval RBO is markedly lower than in the re-ranking setting, and the overlap ratio between the newly 9 For reference, two rankings that are identical aside from the items at rank 1 and 12 swapped have an RBO of 0.6. Table 3: Effectiveness of ODIS and baselines over various pipelines. Significant differences between ODIS and corresponding baselines are indicated as superscripts: CE\ud835\udc50, RM3\ud835\udc5f, Bo1\ud835\udc4f, GAR\ud835\udc54(Student\u2019s paired t-test, \ud835\udc5d< 0.05). RBO compares each combined result list with the exhaustive CE search. 
DL19 DL20 System nDCG R@1k nDCG R@1k RBO Lexical BM25 0.602 0.755 0.596 0.805 0.347 \u00bb CE 0.703 0.755 0.717 0.805 0.711 \u00bb RM3CE 0.759 0.845 0.770 0.887 0.761 \u00bb Bo1CE 0.745 0.815 0.757 0.857 0.749 \u00bb GARCE 0.753 0.839 0.757 0.878 0.772 \u00bb ODISCE \ud835\udc50\ud835\udc4f\ud835\udc540.768 \ud835\udc50\ud835\udc4f0.859 \ud835\udc50\ud835\udc4f\ud835\udc540.785 \ud835\udc50\ud835\udc4f0.909 \ud835\udc50\ud835\udc4f0.769 Dense BE 0.607 0.734 0.594 0.773 0.467 \u00bb CE 0.674 0.734 0.679 0.773 0.822 \u00bb RM3CE 0.768 0.882 0.781 0.925 0.876 \u00bb Bo1CE 0.770 0.877 0.772 0.899 0.885 \u00bb GARCE 0.746 0.849 0.740 0.867 0.867 \u00bb ODISCE \ud835\udc50\ud835\udc540.777 \ud835\udc50\ud835\udc540.894 \ud835\udc50\ud835\udc4f\ud835\udc540.779 \ud835\udc50\ud835\udc540.911 \ud835\udc50\ud835\udc5f\ud835\udc540.889 Dense (Distilled) BECE 0.698 0.818 0.696 0.843 0.632 \u00bb CE 0.728 0.818 0.728 0.843 0.916 \u00bb RM3CE 0.777 0.904 0.784 0.930 0.918 \u00bb Bo1CE 0.777 0.897 0.775 0.907 0.922 \u00bb GARCE 0.758 0.880 0.754 0.884 0.918 \u00bb ODISCE \ud835\udc50\ud835\udc540.783 \ud835\udc500.909 \ud835\udc50\ud835\udc540.781 \ud835\udc50\ud835\udc540.917 \ud835\udc50\ud835\udc5f\ud835\udc4f\ud835\udc540.924 Ensemble BM25+BE 0.725 0.856 0.697 0.871 0.534 \u00bb CE 0.755 0.856 0.752 0.871 0.886 \u00bb RM3CE 0.776 0.887 0.784 0.921 0.869 \u00bb Bo1CE 0.769 0.873 0.774 0.898 0.867 \u00bb GARCE 0.769 0.881 0.767 0.902 0.889 \u00bb ODISCE \ud835\udc50\ud835\udc4f0.780 \ud835\udc50\ud835\udc4f0.894 \ud835\udc50\ud835\udc4f\ud835\udc540.790 \ud835\udc50\ud835\udc4f0.927 \ud835\udc50\ud835\udc5f\ud835\udc4f\ud835\udc540.875 CE (Exhaustive) 0.768 0.894 0.765 0.906 1.000 retrieved documents with the original results only reach up to 0.3\u2014 both of which demonstrate that new documents are introduced in the rankings. Among such new documents retrieved, between 7.3 and 15.7 of them are relevant documents that were missed in the first stage. These newly-retried documents yield a marked improvement in overall system recall; e.g., from 0.805 R@1000 to 0.909 R@1000 for the BM25 pipeline, as shown in Table 3. These newly retrieved relevant documents are not particularly helpful unless they can be successfully incorporated into the rankings by the cross-encoder model. To address this concern, Table 3 presents the overall effectiveness of the ODIS-augmented retrieval pipelines and baselines. We observe that ODIS significantly improves over the base pipeline in both benchmarks and all four pipelines. ODIS also performs favourably compared to relevant PRF baselines. RM3 is overall the strongest baseline in terms of retrieval effectiveness; indeed, there are never significant differences between ODIS and RM3 in terms of nDCG or R@1k. However, in three out of four pipelines, ODIS provides results that are significantly closer to the exhaustive CE rankings (RBO), meaning that it provides a more \u201cfaithful\u201d approximation of the full results. Meanwhile, it provides a significant improvement over Bo1, another lexical PRF technique in both pipelines where lexical signals are already present in the original rankings (BM25 and BM25+BE), but not in pipelines where only dense signals are used for the initial \fOnline Distillation for Pseudo-Relevance Feedback ranking. Finally, ODIS typically outperforms GAR in terms of nDCG (7 of 8 tests), while not requiring pre-computed nearest neighbours. 
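Because much of the comparison above rests on Rank Biased Overlap, a minimal sketch of the measure may be helpful. The function below implements the standard truncated (lower-bound) form of RBO [28] over two ranked lists of document ids; it is an illustration rather than the evaluation code used in these experiments.

def rbo(run_a, run_b, p=0.99):
    # truncated Rank Biased Overlap between two duplicate-free ranked lists
    seen_a, seen_b = set(), set()
    overlap, score = 0, 0.0
    for d in range(min(len(run_a), len(run_b))):
        a, b = run_a[d], run_b[d]
        if a == b:
            overlap += 1
        else:
            if a in seen_b:
                overlap += 1
            if b in seen_a:
                overlap += 1
        seen_a.add(a)
        seen_b.add(b)
        score += (p ** d) * overlap / (d + 1)   # agreement at depth d+1, geometrically discounted
    return (1 - p) * score

With p = 0.99 the geometric tail extends well beyond rank 1000, so values computed over finite runs are conservative lower bounds on the full measure.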
To answer RQ2, online distilled models identify new relevant documents that enable effective rankings of the scoring model. Further, ODIS outperforms strong baseline PRF approaches in terms of ranking effectiveness (or produces results that are more faithful to the complete ranking when there is no significant difference in ranking effectiveness), making ODIS an attractive alternative to existing techniques. Next, we consider the overheads of performing ODIS (RQ3). In terms of storage, ODIS incurs no extra overhead compared to existing lexical PRF approaches (RM3 and Bo1); they all make use of a direct index to look up the tokens that occur in documents and a inverted index to perform retrieval. The computational overhead can be broken down into training and retrieval stages. When we tested running ODIS on an NVIDIA 3090 GPU, we found that ODIS training only takes 98ms per query on average, albeit with only 20% GPU utilisation. Given that such pipelines involve a cross-encoder scorer, requiring a GPU for ODIS is reasonable, and only represents a fraction of the time spent scoring documents. However, RM3 and Bo1 are far less costly, requiring only 4ms on average to rewrite the query, without the need for a hardware accelerator. Future work could investigate techniques to reduce the computational overhead of ODIS distillation, or make better use of hardware acceleration to increase utilisation. On the other hand, both RM3 and Bo1 also involve a secondary retrieval stage, so no computational overhead with respect to baselines in incurred at that stage. Meanwhile, GAR incurs very low query-time overhead (around 20ms/query total), but requires an expensive offline nearest neighbour computation. In summary, to answer RQ3, ODIS adds computational overhead compared to baselines in the query rewriting process but does not add relative overhead in terms of storage or retrieval. 6" + }, + { + "url": "http://arxiv.org/abs/2302.11266v2", + "title": "One-Shot Labeling for Automatic Relevance Estimation", + "abstract": "Dealing with unjudged documents (\"holes\") in relevance assessments is a\nperennial problem when evaluating search systems with offline experiments.\nHoles can reduce the apparent effectiveness of retrieval systems during\nevaluation and introduce biases in models trained with incomplete data. In this\nwork, we explore whether large language models can help us fill such holes to\nimprove offline evaluations. We examine an extreme, albeit common, evaluation\nsetting wherein only a single known relevant document per query is available\nfor evaluation. We then explore various approaches for predicting the relevance\nof unjudged documents with respect to a query and the known relevant document,\nincluding nearest neighbor, supervised, and prompting techniques. We find that\nalthough the predictions of these One-Shot Labelers (1SL) frequently disagree\nwith human assessments, the labels they produce yield a far more reliable\nranking of systems than the single labels do alone. Specifically, the strongest\napproaches can consistently reach system ranking correlations of over 0.86 with\nthe full rankings over a variety of measures. Meanwhile, the approach\nsubstantially increases the reliability of t-tests due to filling holes in\nrelevance assessments, giving researchers more confidence in results they find\nto be significant. 
Alongside this work, we release an easy-to-use software\npackage to enable the use of 1SL for evaluation of other ad-hoc collections or\nsystems.", + "authors": "Sean MacAvaney, Luca Soldaini", + "published": "2023-02-22", + "updated": "2023-07-11", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "main_content": "INTRODUCTION Cranfield-style test collections [8]\u2014consisting of a document corpus, a set of queries, and manually-assessed relevance judgements between queries and documents\u2014are the bedrock of most information retrieval research. Since it is often prohibitively expensive to assess every document for every query (for a non-trivially sized corpus), a subset of documents are chosen for each query for assessment. This is typically accomplished through a pooling process, wherein the top results from one or more systems are selected for assessment. Decisions about pooling (e.g., how many systems, how many results per system, etc.) have a direct effect on the cost of assessment in terms of human labor. On one extreme, some collections only pool from the very top results (e.g., top 10) of a single system. This makes the cost of assessing a single query low, but results in very incomplete assessments [1] and survivorship bias [14]. At the other extreme, some collections assess the tens to hundreds of documents from dozens of systems. This comes at a much higher cost, but results in test collections that are more complete, improving test collection reusability since documents returned from new systems are more likely to already be assessed [46]. Given an identical assessment budget, arguments have been made in favor of both techniques. While reusability of deeper pools is an appealing advantage, shallower assessment means that more queries can be assessed, improving coverage of various information needs and making statistical significance easier [3]. Shallow assessments can also be more easily collected, since they do not require a diverse set of input systems, and can be inferred from query logs (e.g., [34]), making them popular in industry and in large-scale collections like MS MARCO [30]. Further, even with deep assessments, \u201choles\u201d (missing assessments) still exist, especially for large corpora, where many documents can satisfy a query\u2019s information need [45]. In this work, we explore the extent to which we can fill holes in relevance assessments using models over query and document text. To reign in the scope, we focus on a setting where only a single relevant document per query, sampled from the top results of a baseline system, is known. This setup is akin to the MS MARCO dataset, and is often used in industry. We explore several \u201coneshot\u201d approaches1 (henceforth one-shot labelers, or 1SL) for predicting the relevance of holes, including a \ud835\udc58nearest neighbor search of the known relevant document, a supervised relative relevance scorer (DuoT5 [32]), and an instruction-tuned model (FlanT5 [7]). Through experiments on the TREC DL 2019, 2020, and 2021 datasets, we find that 1SL approaches are not yet able to sufficiently identify all relevant documents (best F1 score: 0.63). Therefore, 1SL is likely inadequate for estimating system recall. However, we find 1 Zero-shot methods are simply relevance models, which could be used as ranking models directly. Since these would clearly favor systems that use them for ranking, we only consider approaches that make use of a known relevant document. 
arXiv:2302.11266v2 [cs.IR] 11 Jul 2023 \fSIGIR \u201923, July 23\u201327, 2023, Taipei, Taiwan Sean MacAvaney & Luca Soldaini that when incorporating the estimated relevance scores into recallagnostic C/W/L measures [27], we are able to achieve high correlation with the ranking of systems from full TREC assessments (consistently \u22650.86 correlation, often \u22650.98). Further, we find that using these approaches yields more reliable statistical tests when comparing system performance. These results suggest that automatic 1SL techniques could potentially replace expensive deep assessments when evaluating with precision-oriented measures. 2 RELATED WORK The Cranfield evaluation paradigm [8] was introduced as a more cost-effective alternative to a user study: instead of asking a population of users to interact with IR systems, use expert assessors to judge whether documents retrieved by IR systems are relevant for a set of predefined topics, each representing an hypothetical information need. Cranfield evaluations rely on a series of assumptions [44] to justify this simplification: (i) relevance can be approximated as similarity between documents and topics, (ii) the topics are a representative approximation of user information need, and, most pertinent to our work, (iii) all relevant documents for a query are known. The latter requirement is often violated, as it is unfeasible to judge every document in a collection containing up to billions of items [44, 54]. Instead, pooling techniques are often used to create a test collection: all systems part of a shared task contribute2 to a pool of documents [41]; items in the pool get judged by annotators; any document not in the pool is assumed to be irrelevant. Metrics for Partially Assessed Collections. In response to the inevitable violation of completeness assumption in pooled evaluation, researchers have proposed metrics that take this limitation into account. Early on, Buckley and Voorhees [4] measured the effect of this violation in common metrics, such as mean Average Precision, Precision@k retrieved results, and R-Precision; further, they proposed Bpref, a metric that exhibits fewer errors in ranking systems that contributed to the evaluation pool. Follow up work proposed rank-biased precision (RBP) [28, 29] and inferred precision [36, 52]; the latter has been argued to be more effective than Bpref and RBP thanks to its superior discriminative power [37]. Fr\u00f6be et al. [13] proposed using bootstrapping to overcome holes. Despite these efforts, the community has largely not adopted these measures;3 instead, measures that simply treat unjudged documents as irrelevant\u2014such as nDCG, MAP, and P@k\u2014remain in common use. Meanwhile, it has also become increasingly popular to simply report the proportion of documents in the top results that have relevance assessments (Judged@k, or conversely Hole@k) alongside these measures (e.g., [23, 43]). Recognizing the preference for traditional metrics that rely on full judgments, our 1SLs can effectively fill holes in the evaluation pool, thus mitigating violations of the completeness assumption. Automated Assessment of Documents. Given the limitations arising from incomplete judgments, automated tools have been proposed to assist or automate the evaluation of retrieval systems. 
Some researchers have focused on completely automating IR evaluation [31, 40, 42, 50, 51]; however, these works have been generally 2 Many strategies can be employed to sample documents from the output of a IR system in order to build a pool. 3 As a few examples, none of the TREC 2021 tracks reported measures that were robust to unjudged documents as official evaluation measures, nor did any of papers at SIGIR 2022 \u201cSearch and Ranking\u201d session. Table 1: Comparison of proposed gain estimators. \u2717* indicates a system that was not directly supervised on the data, but may seen some during instruction tuning. Considers MS MARCO \ud835\udc3a \ud835\udc5e \ud835\udc51+ \ud835\udc51? Supervised # Params MaxRep-BM25 \u2717 \u2713 \u2713 \u2717 n/a MaxRep-TCT \u2717 \u2713 \u2713 \u2713 110M DuoT5 \u2713 \u2713 \u2713 \u2713 3B DuoPrompt \u2713 \u2713 \u2713 \u2717* 3B criticized as evaluating retrieval systems by \u201cpopularity as opposed to performance\u201d [2]. Further, recent studies of fully-automated relevance estimators found that careful selection of topics [15] and ensembling of estimators [35] are necessary to best employ these otherwise unstable techniques. Rather that completely relying on models, our work instead focuses on how to best exploit a very small number of relevance judgments provided by human assessors to label an otherwise unusable collection. Previous work in this area is built upon the cluster hypothesis [19]: that is, it assumes that documents that are similar to relevant documents must also be relevant. For example, Carterette and Allan [6] used tfidf -weighted vector space similarity to estimate the probability of a unjudged document. B\u00fcttcher et al. [5] experimented with predicting relevance of documents using models that estimate unigram distribution over relevant and nonrelevant documents. Another direction involves human-written criteria, which can be automatically assessed by QA systems [38]. Hui and Berberich [17] evaluates different strategies to measure document similarity for MaxRep, concluding that bag of words and paragraph embedding approaches to be most suitable. In this manuscript, we extend their approach by using modern contextualized embedders, and compare it with our proposed approaches. Outside adhoc retrieval, others have proposed methods for completing partially annotated pools for temporal summarization [25], novelty-oriented retrieval [18], and question answering [47] tasks. 3 AUTOMATIC ONE-SHOT LABELING (1SL) Consider the case where only a single relevant document \ud835\udc51+ per query \ud835\udc5eis known. Many common evaluation measures (e.g., those in the C/W/L framework [27]) begin by mapping a system ranking [\ud835\udc511,\ud835\udc512, ...,\ud835\udc51\ud835\udc5b] to gain values [\ud835\udc541,\ud835\udc542, ...,\ud835\udc54\ud835\udc5b]. It is common practice to treat documents that are not assessed as having no gain, i.e., \ud835\udc54\ud835\udc56= ( 1 if \ud835\udc51\ud835\udc56= \ud835\udc51+ 0 otherwise (1) In this work, we explore techniques for estimating the gain for an unknown document in the ranking \ud835\udc51? (i.e., \u201choles\u201d) with respect to \ud835\udc51+ and \ud835\udc5eusing a relevance estimator function \ud835\udc3a(\ud835\udc5e,\ud835\udc51+,\ud835\udc51?) 
\u2208[0, 1]:4 \ud835\udc54\ud835\udc56= ( 1 if \ud835\udc51\ud835\udc56= \ud835\udc51+ \ud835\udc3a(\ud835\udc5e,\ud835\udc51+,\ud835\udc51\ud835\udc56) otherwise (2) 4 Without loss of generality, we assume the outputs are in the range of 0 to 1; in cases where they are not, the outputs could simply be normalized using various approaches. \fOne-Shot Labeling for Automatic Relevance Estimation SIGIR \u201923, July 23\u201327, 2023, Taipei, Taiwan These gains can then be used by various evaluation measures. We refer to \ud835\udc3aas a One-Shot Labeler (1SL), given that it provides labels for a query based on a single known relevance label. We now describe several 1SL implementations, summarized in Table 1. MaxRep. We adapt MaxRep by Hui and Berberich [16] to a one-shot setting: given \ud835\udc51+, we retrieve the \ud835\udc58nearest neighboring documents and also treat them as relevant, with a linearly degrading gain (i.e., the \ud835\udc56th nearest neighbor receives a gain of (\ud835\udc58\u2212\ud835\udc56)/\ud835\udc58). Note that this approach does not use the query directly, except insofar as that \ud835\udc51+ is known to be relevant to \ud835\udc5e. In line with prior work [17], we explore both lexical similarity and semantic similarity. For lexical similarity, we use the top \ud835\udc58= 128 BM25 results for the given document, and for semantic similarity, we use the top \ud835\udc58= 128 nearest neighbors over TCT-ColBERT [20] embeddings5 \u2014 both of which have been recently shown as an effective signal to identify additional relevant documents when re-ranking [24]. DuoT5. The DuoT5 model [32] is a sequence-to-sequence model trained to provide a relative relevance signal between two documents, with respect to a query. It was proposed for use as a final stage re-ranker, where it can help refine the top documents in a ranked list by comparing them with one another. We recognize that it may also be suitable estimating the relevance in a one-shot setting. Specifically, by presenting the model with \ud835\udc5eand \ud835\udc51+, the model estimates the probability that \ud835\udc51? is more relevant to \ud835\udc5ethan \ud835\udc51+. Note that in contrast with when DuoT5 is used as a re-ranker, this case makes use of a known relevant document, and uses the information to predict the relevance of additional documents. DuoPrompt. Recent advances in instruction-tuned models [39, 48, 49] have lead to the rise of models that can be prompted \u201cin context\u201d to complete a task. Practically, this consists of prefixing each sample one wishes to perform inference on with instructions describing the task. In our case, we prepend query and passage with a short text prompting the model to estimating relevance of passages for the given query.6 Akin to DuoT5, DuoPrompt prompts the model to assess whether\ud835\udc51? is as relevant as\ud835\udc51+ for\ud835\udc5e. We teacherforce the model to generate either \u201cyes\u201d or \u201cno\u201d; the probability values obtained by computing the softmax over the logits of these two tokens is used as the relevance score.7 We leverage the pretrained Flan-T5 [7], a variant of T5 [33] that has been fine-tuned on a large set of instruction datasets.8 4 EXPERIMENTS AND RESULTS In this section, we establish two research questions and evaluate how one-shot modelers perform in these two settings. First, RQ1 asks: can 1SLs be used to directly label unjudged documents in a shared task pool? 
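Before reporting results, the listing below sketches how DuoPrompt gains can be produced and plugged into Equation (2). It is a minimal illustration that assumes the public google/flan-t5-xl checkpoint and the Hugging Face transformers API; the prompt follows the template given in footnote 6, and helper names such as duoprompt_gain are ours rather than the released package's.

import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("google/flan-t5-xl")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-xl").eval()

PROMPT = ("Determine if passage B is as relevant as passage A. "
          "Passage A: {dpos} Passage B: {dunk} Query: {query} "
          "Is passage B as relevant as passage A?")

def duoprompt_gain(query, d_pos, d_unk):
    # G(q, d+, d?): probability of 'yes' over {'yes', 'no'} at the first decoded token
    enc = tok(PROMPT.format(dpos=d_pos, dunk=d_unk, query=query),
              return_tensors="pt", truncation=True)
    start = torch.tensor([[model.config.decoder_start_token_id]])
    with torch.no_grad():
        logits = model(**enc, decoder_input_ids=start).logits[0, -1]
    yes_id = tok("yes", add_special_tokens=False).input_ids[0]
    no_id = tok("no", add_special_tokens=False).input_ids[0]
    return torch.softmax(logits[[yes_id, no_id]], dim=0)[0].item()

def one_shot_gains(ranking, query, d_pos_id, texts):
    # Equation (2): the known relevant document keeps gain 1; holes receive G(q, d+, d?)
    return [1.0 if d == d_pos_id else duoprompt_gain(query, texts[d_pos_id], texts[d])
            for d in ranking]

Documents other than the known relevant one thus receive a graded gain in [0, 1] rather than a binary label, which is what the downstream evaluation measures consume.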
While appealing, we show that proposed techniques cannot reliably provide binary relevance labels. Then, RQ2 asks (despite the limitations described above): can 1SLs be used 5 castorini/tct_colbert-v2-hnp-msmarco-r2 6 Prompt: Determine if passage B is as relevant as passage A. Passage A: Passage B: Query: Is passage B as relevant as passage A? 7 Inspired by its success in prior work [21, 26, 53], we also experimented with adding examples of the tasks to the prompt, a practice known as in-context learning. However, we found it to be worse than not including any labeled samples. This might be due to the large variance in passages, but we leave the problem of exploring in-context learning to future work. 8 We note that the instruction data includes 500 samples from the MS MARCO QnA train dataset, resulting in a model that is supervised on a small amount of data from our target domain. However, the instruction prompts do not make use of the relevance labels; only a set of candidates, the queries and assessor-written answers. 0.0 0.2 0.4 0.6 0.8 1.0 Recall 0.4 0.5 0.6 0.7 0.8 Precision BM25-MR: f1=0.39 AP=0.36 TCT-MR: f1=0.45 AP=0.40 DuoT5: f1=0.59 AP=0.56 DuoP: f1=0.55 AP=0.59 0 1 0 1 detail Figure 1: Precision-recall curve of one-shot labelers (1SLs) on the qrels from the 2019 TREC Deep Learning Track. to rank IR systems a shared task? Crucially, we find that 1SLs are very accurate relative performance estimators across three adhoc passage retrieval benchmarks and three metrics. To answer both questions, we use a simulated shallow document pool, based on the task\u2019s official BM25 baseline.9 We choose the first relevant document from the run, and treat it as the single \ud835\udc51+ for the query.10 4.1 RQ1: 1SLs as Relevance Estimators Setup. In order to establish whether any of the methods described in Section 3 can function as automatic binary relevance assessors, we measure their efficacy at inferring relevance labels for unknown documents. Given the simulated shallow pools, we predict the binary labels of qrels from the TREC DL 2019 track [11]. Results. We report the precision-recall curves of these predictions in Figure 1. We note clear differences among the methods introduced in Section 3. First, MaxRep methods achieve notably worse F1 and Average Precision (AP) than other methods. This is likely due to MaxRep not considering query text when estimating relevance, instead only relying on intra-document similarity. On the other hand, DuoPrompt and DuoT5 are moderately effective at the tasks, achieving an F1-score of at most 0.55 and 0.59 (AP: 0.59 and 0.56) respectively. However, these results are far from meeting necessary accuracy to use DuoPrompt or DuoT5 as fully automatic labelers, as they are unable to reliably identify the long tail of relevant documents. Further, we note that their PR curves are very different, making difficult to tune a reliable operating point for the binary labels. This observation confirms that the difficulties noted by Hauff et al. [15] and Roitero et al. [35] in tuning zero-shot labelers applies to our 1SLs. 4.2 RQ2: 1SLs for System Evaluation Setup. In order for 1SLs to be a viable alternative to highly skilled assessors, they should lead to similar evaluation outcomes. More specifically, given a set of systems submitting runs to a shared track, results using 1SLs should result in a ranking of participating systems that is comparable to that of human labels and for similar outcomes from statistical significance tests. 
To study if that is the case, we calculate various evaluation measures for the submissions to the TREC 2019, 2020, and 2021 Deep 9 bm25base_p for DL 2019, p_bm25 for DL 2020 and 2021 10 On average, this requires 6 passages to be \u201cexamined\u201d per query. \fSIGIR \u201923, July 23\u201327, 2023, Taipei, Taiwan Sean MacAvaney & Luca Soldaini Table 2: 1SLs as system evaluators. Correlations of the TREC submissions are reported in terms of Kendall\u2019s \ud835\udf0f, Spearman\u2019s \ud835\udf0c, Rank Biased Overlap (\ud835\udc5d= 0.9), and the \ud835\udc61-test (\ud835\udc5d< 0.05 w/ Bonferroni) false positive/negative rates of the top identified system. msmarco-passage/trec-dl-2019 msmarco-passage/trec-dl-2020 msmarco-passage-v2/trec-dl-2021 Measure Holes \ud835\udf0f \ud835\udf0c RBO \ud835\udc61-FNR \ud835\udc61-FPR \ud835\udf0f \ud835\udf0c RBO \ud835\udc61-FNR \ud835\udc61-FPR \ud835\udf0f \ud835\udf0c RBO \ud835\udc61-FNR \ud835\udc61-FPR SDCG@10 Non-relevant -0.204 -0.248 0.420 0.000 0.857 0.419 0.498 0.731 0.000 0.510 0.402 0.531 0.564 0.125 0.714 MaxRep-BM25 0.240 0.320 0.690 0.000 0.229 0.468 0.597 0.710 0.000 0.059 0.455 0.576 0.609 0.200 0.489 MaxRep-TCT 0.829 0.958 0.818 0.208 0.083 0.793 0.933 0.920 0.263 0.053 0.578 0.761 0.817 0.222 0.300 DuoT5 0.889 0.972 0.812 0.000 0.417 0.837 0.944 0.939 0.000 0.895 0.859 0.963 0.880 0.000 0.571 DuoPrompt 0.904 0.980 0.830 0.160 0.000 0.909 0.986 0.863 0.184 0.000 0.910 0.983 0.925 0.040 0.143 P@10 Non-relevant -0.033 -0.003 0.511 0.000 0.571 0.429 0.526 0.695 0.143 0.120 0.387 0.520 0.528 0.182 0.739 MaxRep-BM25 0.362 0.452 0.722 0.000 0.314 0.442 0.565 0.660 0.000 0.118 0.425 0.579 0.571 0.167 0.467 MaxRep-TCT 0.870 0.971 0.874 0.208 0.083 0.792 0.928 0.769 0.214 0.067 0.579 0.767 0.781 0.154 0.290 DuoT5 0.891 0.974 0.811 0.000 0.333 0.868 0.970 0.781 0.000 0.867 0.858 0.962 0.916 0.000 0.700 DuoPrompt 0.891 0.981 0.872 0.000 0.250 0.907 0.986 0.817 0.143 0.000 0.868 0.972 0.903 0.000 0.200 RBP(p=0.8) Non-relevant -0.177 -0.220 0.445 0.000 0.857 0.437 0.510 0.840 0.000 0.353 0.387 0.516 0.535 0.167 0.706 MaxRep-BM25 0.246 0.332 0.713 0.000 0.200 0.452 0.583 0.715 0.000 0.059 0.446 0.562 0.601 0.182 0.565 MaxRep-TCT 0.853 0.963 0.829 0.167 0.083 0.791 0.930 0.917 0.333 0.000 0.569 0.761 0.766 0.259 0.333 DuoT5 0.892 0.975 0.854 0.000 0.667 0.806 0.929 0.939 0.000 0.889 0.863 0.970 0.882 0.000 0.625 DuoPrompt 0.919 0.986 0.889 0.040 0.000 0.889 0.980 0.964 0.179 0.000 0.890 0.980 0.897 0.000 0.250 Learning tracks [9\u201311] (37, 58, and 58 runs, respectively) using our simulated shallow pools. We treat holes as either non-relevant (0-gain, the baseline), or by replacing the relevance gains using 1SL scores. Since we found that 1SLs are not yet capable of providing a reliable measure of recall, we instead focus on three recall-agnostic evaluation measures from the C/W/L framework: SDCG@10, P@10, and RBP(p=0.8) using ir-measures [22].11 We use variants of P@10 (weighted-precision) and RBP that make use of partial gains [27]. Results. We report our main findings in Table 2. Here the first row in each section corresponds to estimating system ranks using only the one labeled document from BM25; subsequent rows show how 1SLs perform. Overall, we observe that DuoPrompt usually leads to more accurate estimation across all years and metrics: it achieves a correlation of 0.87 to 0.92, and a rank correlation of 0.97 to 0.98. 
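For concreteness, the partial-gain variants of these measures reduce to simple computations over the estimated gains. The sketch below is illustrative only (it is not the ir-measures implementation), taking a list of per-rank gains in [0, 1] as produced by a 1SL.

import math

def sdcg_at_k(gains, k=10):
    # scaled DCG: DCG over the top k, normalised by k maximally relevant documents
    dcg = sum(g / math.log2(i + 2) for i, g in enumerate(gains[:k]))
    return dcg / sum(1.0 / math.log2(i + 2) for i in range(k))

def weighted_precision_at_k(gains, k=10):
    # P@k with partial gains: the mean gain over the top k
    return sum(gains[:k]) / k

def rbp(gains, p=0.8):
    # rank-biased precision with partial gains
    return (1 - p) * sum(g * p ** i for i, g in enumerate(gains))

Treating holes as non-relevant corresponds to a gain of 0 at the unjudged ranks, while the 1SL rows replace those zeros with the estimated G values.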
More importantly, \ud835\udc61-tests conducted using the method are far more reliable than other methods; it rarely gives false positive results (as compared to the full qrels), and infrequently yields false negatives. DuoT5 also often achieves high correlations with the full qrels, but results in more false positive \ud835\udc61-tests. On the other hand, MaxRep [16] does not offer reliable estimation of labels, leading to inferior correlations when compared to DuoT5 and DuoPrompt. MaxRep-BM25 is particularly unreliable, with all its correlation < 0.5; MaxRep-TCT, while more accurate than MaxRepTCT, still suffers from higher false positive rates in most cases than DuoPrompt. Finally, while unsurprising, we note that, ordering of systems induced by using only one positive labels (\u201cnot relevant\u201d in Table 2) is a very inaccurate approach, leading to very low or even negative correlations and a high proportion of false positive tests. Clearly, in this case, the single system that contributed to the qrels is unfairly favored above the other systems, leading to artificially higher evaluation measures than it would under deeper pools. 5" + }, + { + "url": "http://arxiv.org/abs/2208.08942v1", + "title": "Adaptive Re-Ranking with a Corpus Graph", + "abstract": "Search systems often employ a re-ranking pipeline, wherein documents (or\npassages) from an initial pool of candidates are assigned new ranking scores.\nThe process enables the use of highly-effective but expensive scoring functions\nthat are not suitable for use directly in structures like inverted indices or\napproximate nearest neighbour indices. However, re-ranking pipelines are\ninherently limited by the recall of the initial candidate pool; documents that\nare not identified as candidates for re-ranking by the initial retrieval\nfunction cannot be identified. We propose a novel approach for overcoming the\nrecall limitation based on the well-established clustering hypothesis.\nThroughout the re-ranking process, our approach adds documents to the pool that\nare most similar to the highest-scoring documents up to that point. This\nfeedback process adapts the pool of candidates to those that may also yield\nhigh ranking scores, even if they were not present in the initial pool. It can\nalso increase the score of documents that appear deeper in the pool that would\nhave otherwise been skipped due to a limited re-ranking budget. We find that\nour Graph-based Adaptive Re-ranking (GAR) approach significantly improves the\nperformance of re-ranking pipelines in terms of precision- and recall-oriented\nmeasures, is complementary to a variety of existing techniques (e.g., dense\nretrieval), is robust to its hyperparameters, and contributes minimally to\ncomputational and storage costs. For instance, on the MS MARCO passage ranking\ndataset, GAR can improve the nDCG of a BM25 candidate pool by up to 8% when\napplying a monoT5 ranker.", + "authors": "Sean MacAvaney, Nicola Tonellotto, Craig Macdonald", + "published": "2022-08-18", + "updated": "2022-08-18", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "main_content": "INTRODUCTION Deep neural ranking models \u2013 especially those that use contextualised language models like BERT [6] \u2013 have brought significant benefits in retrieval effectiveness across a range of tasks [18]. The most effective techniques tend to be those that first retrieve a pool of candidate documents2 using an inexpensive retrieval approach and then re-score them using a more expensive function. 
This process is called re-ranking, since the documents from the candidate pool are given a new ranked order. Re-ranking enables the use of sophisticated scoring functions (such as cross-encoders, which jointly model the texts of the query and document) that are incompatible with inverted indexes or vector indexes. Since the scoring function can be computationally expensive, re-ranking is often limited to a predefined maximum number documents that the system is willing to re-rank for each query (i.e., a re-ranking budget, such as 100). The performance of a re-ranking pipeline is limited by the recall of the candidate pool, however. This is because documents that were not found by the initial ranking function have no chance of being re-ranked. Consequently, a variety of techniques are employed to improve the recall of the initial ranking pool, including documentrewriting approaches that add semantically-similar terms to an inverted index [30], or dense document retrieval techniques that enable semantic searching [12]. In this work, we explore a complementary approach to overcoming the recall limitation of re-ranking based on the long-standing clustering hypothesis [11], which suggests that closely-related documents tend to be relevant to the same queries. During the re-ranking process, our approach, called Graph-based Adaptive Re-Ranking (Gar), prioritises the scoring of the neighbours of documents that have received high scores up to this point. An overview of Gar is shown in Figure 1. The Gar feedback mechanism allows for 2Or passages; we often simply use the term \u201cdocument\u201d for ease of reading. arXiv:2208.08942v1 [cs.IR] 18 Aug 2022 \fCIKM \u201922, October 17\u201321, 2022, Atlanta, GA, USA MacAvaney, et al. documents to be retrieved that were not present in the initial reranking pool, which can improve system recall. It also allows for the re-ranking of the documents that may have otherwise been skipped from the pool when the re-ranking budget is low. Finally, by including the feedback within the re-ranking itself (as opposed to post-scoring feedback mechanisms, such as PRF), our approach can find documents that are multiple hops away (i.e., neighbours of neighbours). Gar achieves low online overhead through offline computation of a corpus graph that stores the nearest neighbours of each document. On experiments over the TREC Deep Learning datasets, we find that Gar significantly improves precisionand recall-oriented evaluation measures. Gar can improve virtually any re-ranking pipeline, with the results largely holding across a variety of initial retrieval functions (lexical, dense retrieval, document expansion, and learned lexical), scoring functions (cross-encoding, late interaction), document similarity metrics (lexical, semantic), and re-ranking budgets (high, low). Impressively, a Gar pipeline that uses only BM25 for both the initial retrieval and the document similarity is able to achieve comparable or improved performance in terms of reranked precision and recall over the competitive TCT-ColBERT-v2HNP [19] and DocT5Query [30] models \u2013 both of which have far higher requirements in terms of offline computation and/or storage capacities. We find that the online overhead of Gar is low compared to a typical re-ranking, usually only adding around 2-4ms per 100 documents re-ranked. We also find that Gar is largely robust to its parameters, with major deviations in performance only occurring with extreme parameter values. 
Finally, we find that despite using document similarity, Gar does not significantly reduce the diversity among the relevant retrieved documents. In summary, we propose a novel approach to embed a feedback loop within the neural re-ranking process to help identify un-retrieved relevant documents through application of the clustering hypothesis. Our contributions can therefore be summarised as follows: (1) We demonstrate a novel application of the clustering hypothesis in the context of neutral re-ranking; (2) We show that our proposed approach can successfully improve both the precision and the recall of re-ranking pipelines with minimal computational overhead; (3) We demonstrate that the approach is robust across pipeline components and the parameters it introduces. The remainder of the paper is organised as follows: We first provide additional background and related work, positioning Gar in context with past work in neural retrieval, relevance feedback, and the clustering hypothesis (Section 2); We then briefly demonstrate that the clustering hypothesis still holds on a recent dataset to motivate our approach (Section 3); We formally describe our method (Section 4) and present our experiments that demonstrate its effectiveness (Sections 5 & 6); We wrap up with final conclusions and future directions of this promising area (Section 7). 2 BACKGROUND AND RELATED WORK The recent advancements in deep neural ranking models have brought significant improvements on the effectiveness of ad-hoc ranking tasks in IR system [18]. In particular, pre-trained language models such as BERT [6] and T5 [33] are able to lean semantic representations of words depending on their context, and these representations are able to better model the relevance of a document w.r.t. a query, with notable improvements w.r.t. classical approaches. However, these improvements have an high computational costs; BERT-based rankers [22, 28] are reported to be slower than classical rankers such as those based on BM25 by orders of magnitude [10, 22]. Therefore, it is still usually infeasible to directly use pre-trained language models to rank all documents in a corpus for each query (even using various to reduce the computational cost [12, 20, 21].) Deep neural ranking models are typically deployed as re-rankers in a pipeline architecture, where a first preliminary ranking stage is deployed before the more expensive neural re-ranker, in a cascading manner. During query processing, the first ranking stage retrieves from the whole document corpus a candidate pool of documents using a simple ranking function, with the goal of maximising the recall effectiveness. The following re-ranking stage processes the documents in the candidate pool, reordering them by focusing on high precision results at the top positions, whose documents will be returned to the user [31, 36]. In this setting, there is an efficiency-effectiveness tradeoff on the number of documents retrieved by the first ranker. From the efficiency perspective, a smaller number of documents in the candidate pool will allow the re-ranker to reduce the time spent on re-ranking the documents, since the execution time is proportional to the candidate set size. From the effectiveness perspective, the larger the candidate pool, the higher the number of potentially relevant documents to be retrieved from the document corpus. In fact, relevant documents can be retrieved from the corpus only during first-stage processing. 
The recall effectiveness of the candidate pool has been investigated in previous IR settings, in particular in learning-to-rank pipelines. Tonellotto et al. [35] studied how, given a time budget, dynamic pruning strategies [36] can be use in first-stage retrieval to improve the candidate pool size on a per-query basis. Macdonald et al. [23] studied the minimum effective size of the document pool, i.e., when to stop ranking in the first stage, and concluded that the smallest effective pool for a given query depends, among others, on the type of the information need and the document representation. In the context of neural IR, learned sparse retrieval focuses on learning new terms to be included in a document before indexing, and the impact scores to be stored in the inverted index, such that the resulting ranking function approximates the effectiveness of a full transformer-based ranker while retaining the efficiency of the fastest inverted-index based methods [5, 7, 26]. In doing so, first-stage rankers based on learned impacts are able to improve the recall w.r.t. BM25, but the end-to-end recall is still limited by the first-stage ranker. Pseudo-Relevance Feedback (PRF) involves the reformulation of a query based on the top results (e.g., by adding distinctive terms from the top documents). This query is then re-issued to the engine, producing a new ranked result list. Adaptive Re-Ranking also makes use of these top-scoring documents, but differs in two important ways. First, the query remains unmodified, and therefore, ranking scores from the model need not be re-computed. Second, the top scores are used in an intermediate stage of the scoring process; the process is guided by the highest-scoring documents known up until a given point, which may not reflect the overall top results. Finally, we note that the output of an adaptive re-ranking operation could be fed as input into a PRF operation to perform query reformulation. \fAdaptive Re-Ranking with a Corpus Graph CIKM \u201922, October 17\u201321, 2022, Atlanta, GA, USA This work can be seen as a modern instantiation of the clustering hypothesis, which Jardine and van Rijsbergen [11] stated as \u201cClosely associated documents tend to be relevant to the same requests\". Many works have explored the clustering hypothesis for various tasks in information retrieval, such as for visualisation of the corpus (e.g., [17]), visualisation of search results (e.g., [4]), enriching document representations [16] and fusing rankings (e.g., [15]). Most related to our application is the usage of the clustering hypothesis for first-stage retrieval (i.e., document selection), in which the documents to rank are identified by finding the most suitable cluster for a query [13]. However, these works focus on identifying the most suitable clusters for a given query and transforming the constituents into a ranking. Moreover, while our approach also takes a soft clustering approach [14] where each \u2018cluster\u2019 is represented by a document and its neighbours, instead of ranking clusters, we identify \u201cgood\u201d clusters as when the representative document is scored highly by a strong neural scoring function. We also address the problem of transforming the documents into a ranking by letting the neural scoring function do that job as well. Overall, our novel approach is the first to embed a feedback loop within the re-ranking process to help identify un-retrieved relevant documents. 
3 PRELIMINARY ANALYSIS We first perform a preliminary check to see whether the clustering hypothesis appears to hold on a recent dataset and using a recent model. Namely, we want to check whether the passages from the MS MARCO corpus [2] are more likely to distributed closer to those with the same relevance labels than those with differing grades. We explore two techniques for measuring similarity: a lexical similarity score via BM25, and a semantic similarity via TCT-ColBERT-HNP [19]. For the queries in the TREC DL 2019 dataset [3], we compute similarity scores between each pair of judged documents. Then, akin to the Voorhees\u2019 cluster hypothesis test [37], we calculate the distribution of the relevance labels of the nearest neighbouring passage by relevance label (i.e., we calculate \ud835\udc43\u0000\ud835\udc5f\ud835\udc52\ud835\udc59(\ud835\udc5b\ud835\udc52\ud835\udc56\ud835\udc54\u210e\ud835\udc4f\ud835\udc5c\ud835\udc62\ud835\udc5f(\ud835\udc5d)) = \ud835\udc66|\ud835\udc5f\ud835\udc52\ud835\udc59(\ud835\udc5d) = \ud835\udc65\u0001 for all pairs of relevance labels \ud835\udc65and \ud835\udc66.) Table 1 presents the results of this analysis. We observe a clear trend: passages with a given relevance label are far more likely to be closer to the passages with the same label (among judged passages) than those with other labels (in the same row). This holds across both lexical (BM25) and semantic (TCT-ColBERT) similarity measures, and across all four relevance labels (ranging from nonrelevant to perfectly relevant). Table 1: Distribution of nearest neighbouring passages, among pairs of judged passages in TREC DL 2019, based on BM25 and TCT-ColBERT-HNP similarity scores. Each cell represents the percentage that a passage with a given relevance label (\ud835\udc65) has a nearest neighbour with the column\u2019s relevance label (\ud835\udc66); each row sums to 100%. BM25 neighbour\u2019s rel \ud835\udc66 0 1 2 3 \ud835\udc65= 0 67 11 16 7 1 14 47 31 8 2 8 12 71 9 3 8 7 12 73 TCT-ColBERT-HNP neighbour\u2019s rel \ud835\udc66 0 1 2 3 \ud835\udc65= 0 76 10 10 4 1 17 46 29 8 2 8 11 72 8 3 6 7 12 75 Algorithm 1 Graph-based Adaptive Re-Ranking Input: Initial ranking \ud835\udc450, batch size \ud835\udc4f, budget \ud835\udc50, corpus graph \ud835\udc3a Output: Re-Ranked pool \ud835\udc451 \ud835\udc451 \u2190\u2205 \u22b2Re-Ranking results \ud835\udc43\u2190\ud835\udc450 \u22b2Re-ranking pool \ud835\udc39\u2190\u2205 \u22b2Graph frontier do \ud835\udc35\u2190Score(top \ud835\udc4ffrom \ud835\udc43, subject to \ud835\udc50) \u22b2e.g., monoT5 \ud835\udc451 \u2190\ud835\udc451 \u222a\ud835\udc35 \u22b2Add batch to results \ud835\udc450 \u2190\ud835\udc450 \\ \ud835\udc35 \u22b2Discard batch from initial ranking \ud835\udc39\u2190\ud835\udc39\\ \ud835\udc35 \u22b2Discard batch from frontier \ud835\udc39\u2190\ud835\udc39\u222a(Neighbours(\ud835\udc35,\ud835\udc3a) \\ \ud835\udc451) \u22b2Update frontier \ud835\udc43\u2190 ( \ud835\udc450 if \ud835\udc43= \ud835\udc39 \ud835\udc39 if \ud835\udc43= \ud835\udc450 \u22b2Alternate initial ranking and frontier while |\ud835\udc451| < \ud835\udc50 \ud835\udc451 \u2190\ud835\udc451 \u222aBackfill(\ud835\udc450, \ud835\udc451) \u22b2Backfill remaining items This analysis suggests that the clustering hypothesis holds on TREC DL 2019. 
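(The analysis is simple to reproduce; the sketch below, with illustrative names and assuming a pairwise similarity function sim such as BM25 scores or TCT-ColBERT-HNP dot products, computes the distributions reported in Table 1.)

from collections import Counter, defaultdict

def neighbour_label_distribution(qrels, sim):
    # qrels: {qid: {docid: grade}}; sim(a, b): similarity between two judged passages
    counts = defaultdict(Counter)
    for qid, judged in qrels.items():
        docs = list(judged)
        if len(docs) < 2:
            continue
        for p in docs:
            # nearest judged neighbour of p for this query, excluding p itself
            nn = max((d for d in docs if d != p), key=lambda d: sim(p, d))
            counts[judged[p]][judged[nn]] += 1   # tally of rel(neighbour)=y given rel(p)=x
    return {x: {y: c / sum(row.values()) for y, c in row.items()}
            for x, row in counts.items()}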
Therefore, it follows that the neighbours of passages that a scoring function considers most relevant are a reasonable place to look for additional relevant passages to be scored \u2013 which is the core motivation of our proposed method. 4 GRAPH-BASED ADAPTIVE RE-RANKING We now introduce the document re-ranking scenario, and we present a description of our proposed re-ranking algorithm. Let \ud835\udc450 denote an initial ranked pool of |\ud835\udc450| documents produced by a first-stage ranker, and let \ud835\udc451 denote a subsequent re-ranked pool of |\ud835\udc451| = |\ud835\udc450| documents. A certain number of top ranked documents from the \ud835\udc451 pool will subsequently be returned to the user who issued the query. In re-ranking, we assume that the documents from \ud835\udc450 are processed in batches of \ud835\udc4fdocuments at maximum (the size of the last batch depends on the re-ranking budget). A scoring function Score() takes as input a batch of documents, e.g., the top scoring \ud835\udc4fdocuments in \ud835\udc450 and re-scores them according to a specific re-ranking stage implementation. The re-scored batch is added the final re-ranked pool \ud835\udc451, and then removed from the initial ranked pool \ud835\udc450. Note that by setting \ud835\udc4f= 1, we are re-ranking one document at a time, as in classical learning-to-rank scenarios; in contrast, when \ud835\udc4f> 1, we allow for more efficient re-ranking function implementations leveraging advanced hardware, such as GPUs and TPUs. Since the time available for re-ranking is often small, and given that it is directly proportional to the number of documents reranked, the re-ranking process can be provided with a budget \ud835\udc50, denoting the maximum number of documents to be re-ranked given the proportional time constraint. If the budget does not allow to re-rank all the document in the initial ranked pool, the Backfill function returns the documents in \ud835\udc450 that have not been re-ranked, i.e., not in \ud835\udc451, that are used to fill up the final re-ranked pool \ud835\udc451 to contain all the documents initially included in \ud835\udc450. For example, if \ud835\udc450 contains 1000 documents and, due to the budget, only 100 documents can be re-scored, the 900 top ranked documents in \ud835\udc450 but not re-ranked in \ud835\udc451 are appended to\ud835\udc451 in the same order as in\ud835\udc450, to obtain a re-ranked list of 1000 documents. The uncoloured lines in Alg. 1 illustrate this re-ranking algorithm, which corresponds to the common re-ranking adopted in a pipelined cascading architecture. \fCIKM \u201922, October 17\u201321, 2022, Atlanta, GA, USA MacAvaney, et al. In our adaptive re-ranking algorithm, we leverage a corpus graph \ud835\udc3a= (\ud835\udc49, \ud835\udc38). This directed graph encodes the similarity between documents, and can be computed offline, using lexical or semantic similarity function between two documents. Every node in \ud835\udc49represents a document in the corpus, and every pair of documents may be connected with an edge in \ud835\udc38, labelled with the documents\u2019 similarity. To address the graph\u2019s quadratic space (and time) complexity, we limit to a small value \ud835\udc58the number of edges for each node in the corpus graph, i.e., |\ud835\udc38| = \ud835\udc58|\ud835\udc49|. The top \ud835\udc58edges are selected according to their similarity scores, in decreasing order. Our adaptive re-ranking algorithm, illustrated in Alg. 
1, receives an initial ranking pool of documents \ud835\udc450, a batch size \ud835\udc4f, a budget \ud835\udc50, and the corpus graph\ud835\udc3aas input. We consider a dynamically updated re-ranking pool \ud835\udc43, initialised with the contents of \ud835\udc450 (\ud835\udc43\u2190\ud835\udc450), and a dynamically updated graph frontier \ud835\udc39, initially empty (\ud835\udc39\u2190\u2205). After the re-ranking of the top \ud835\udc4fdocuments selected from \ud835\udc43and subject to the constraint \ud835\udc50(called batch \ud835\udc35, where \ud835\udc4f= |\ud835\udc4f|), we update the initial and re-ranked pools \ud835\udc450 and \ud835\udc451. The documents in the batch are removed from the frontier \ud835\udc39because there is no need to re-rank them again. Now we consider the documents in the batch \ud835\udc35, and we look up in the corpus graph for documents whose nodes are directly connected to the documents in \ud835\udc35. These documents (except any that have already been scored) are added to the frontier (\ud835\udc39\u222a(Neighbours(\ud835\udc35,\ud835\udc3a)\\\ud835\udc451)), prioritised by the computed ranking score of the source document. Note that the neighbours may occur later in the ranking list. Next, instead of using the current contents of the initial pool \ud835\udc450 for the next batch evaluation, we alternate between \ud835\udc450 and the current frontier \ud835\udc39. In doing so, we ensure that \ud835\udc451 contains documents from \ud835\udc450 and newly identified documents not included in \ud835\udc450. The algorithm proceeds alternating between these two options, populating the frontier at each step, until the budget allows, then backfills the final pool of initial candidates as before. We note that alternating between the initial ranking and the frontier is somewhat na\u00efve; perhaps it is better to score more/fewer documents from the frontier, or to dynamically decide whether to select batches from the frontier or the initial ranking based on recent scores. Indeed, we investigated such strategies in pilot experiments but were unable to identify a strategy that consistently performed better than the simple alternating technique. We therefore decided to leave the exploration of alternative techniques to future work. 5 EXPERIMENTAL SETUP We experiment to answer the following research questions: RQ1 What is the impact of Gar on retrieval effectiveness compared to typical re-ranking? RQ2 What is the computational overhead introduced by Gar? (Section 6.2) RQ3 How sensitive is Gar to the parameters it introduces: the number of neighbours included in the corpus graph \ud835\udc58and the batch size \ud835\udc4f? (Section 6.3) RQ4 What is the impact of Gar on retrieval effectiveness compared to state-of-the-art neural IR systems? Finally, because Gar is based on scoring similar documents, we recognise that it has the potential to reduce the diversity of the retrieved passages (i.e., it could make the retrieved passages more homogeneous). Therefore, we ask: RQ5 Does Gar result in more homogeneous relevant passages than existing techniques? 5.1 Datasets and Evaluation Our primary experiments are conducted using the TREC Deep Learning 2019 (DL19) and 2020 (DL20) test collections [3]. DL19 is used throughout the development and for the analysis of Gar, and therefore acts as our validation set. DL20 is held out until the final evaluation, allowing us to confirm that our approach has not over-fit to DL19. 
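As a point of reference for the experiments that follow, the adaptive loop of Algorithm 1 can be rendered compactly in Python. The sketch below is illustrative rather than the released implementation; score and neighbours stand in for the expensive neural scorer (e.g., monoT5) and the offline corpus-graph lookup, and ties in the frontier ordering are broken arbitrarily.

def gar_rerank(initial_ranking, score, neighbours, budget=1000, batch_size=16):
    # initial_ranking: docids from the first-stage ranker R0, best first
    # score(docids) -> {docid: score}; neighbours(docid) -> list of graph neighbours
    r1 = {}                                   # scored results
    r0 = list(initial_ranking)                # remaining initial candidates, in rank order
    frontier = []                             # (source score, docid), best first
    take_frontier = False                     # alternate between R0 and the frontier
    while len(r1) < budget and (r0 or frontier):
        if take_frontier and frontier:
            batch = [d for _, d in frontier[:batch_size]]
            frontier = frontier[batch_size:]
        else:
            batch, r0 = r0[:batch_size], r0[batch_size:]
        take_frontier = not take_frontier
        batch = [d for d in batch if d not in r1][:budget - len(r1)]
        if not batch:
            continue
        r1.update(score(batch))               # re-score with the neural model
        for d in batch:                       # expand the frontier with unscored neighbours
            for n in neighbours(d):
                if n not in r1:
                    frontier.append((r1[d], n))
        frontier = [(s, d) for s, d in frontier if d not in r1]
        frontier.sort(key=lambda x: -x[0])    # prioritise by the source document's score
    ranked = sorted(r1, key=r1.get, reverse=True)
    return ranked + [d for d in initial_ranking if d not in r1]   # backfill in R0 order

The returned list places scored documents first and backfills the unscored remainder of the initial pool in its original order, as described above.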
Both datasets use the MS MARCO passage ranking corpus, which consists of 8.8M passages [2]. DL19 consists of 43 queries and an average of 215 relevance assessments per query; DL20 has 54 queries with 211 assessments per query. We evaluate our approach using nDCG, MAP, and Recall at rank 1000. For the binary measures (MAP and Recall), we use the standard practice of setting a minimum relevance score of 2, which counts answers that are highly or perfectly relevant. In our experiments we are concerned with both precision and recall, so we focus on nDCG without a rank cutoff, though we also report the official task measure of nDCG with a rank cutoff of 10 (nDCG@10) to provide meaningful comparisons with other works. We select DL19 and DL20 because they provide more complete relevance assessments than the MS MARCO development set; this is especially important given that Gar is designed to retrieve documents that were not necessarily in the initial re-ranking pool. For completeness, we also report performance on the small subset of MS MARCO dev, which consists of 6980 queries, each with 1.1 relevance assessments per query on average. For this dataset, we report the official measure of Mean Reciprocal Rank at 10 (MRR@10) and the commonly-reported value of Recall at 1000. 5.2 Retrieval and Scoring Models To test the effect of Gar under a variety of initial ranking conditions, we conduct experiments using four retrieval functions as first stage rankers, each representing a different family of ranking approaches. \u2022 BM25, a simple and long-standing lexical retrieval approach. We retrieve the top 1000 BM25 results from a PISA [27] index using default parameters. \u2022 TCT, a dense retrieval approach. We conduct exact (i.e., exhaustive) retrieval of the top 1000 results using a TCT-ColBERT-HNP model [19] trained on MS MARCO.3 This is among the most effective dense retrieval models to date. \u2022 D2Q, a document expansion approach. We retrieve the top 1000 BM25 results from a PISA index of documents expanded using a docT5query model [30] trained on MS MARCO. We use the expanded documents released by the authors. This is the most effective document expansion model we are aware of to date. \u2022 SPLADE, a learned sparse lexical retrieval model. We retrieve the top 1000 results for a SPLADE++ model [7] trained on MS MARCO (CoCondenser-EnsembleDistil version). We use code released by the authors for indexing and retrieval.4 This is the most effective learned lexical retrieval model we are aware of to date. Similarly, we experiment with the following neural re-ranking models to test the effect of the scoring function on Gar. \u2022 MonoT5, a sequence-to-sequence scoring function. We test two versions of the MonoT5 model [29] trained on MS MARCO from two base language models: MonoT5-base, and MonoT5-3b. The 3b model has the same structure as the base model, but has more 3Hugging Face ID: castorini/tct_colbert-v2-hnp-msmarco 4https://github.com/naver/splade \fAdaptive Re-Ranking with a Corpus Graph CIKM \u201922, October 17\u201321, 2022, Atlanta, GA, USA parameters (13\u00d7 more; 2.9B, compared to base\u2019s 223M) so it is consequently more expensive to run. These models are among the most effective scoring functions reported to date.5 \u2022 ColBERT (scorer only), a late interaction scoring function. Although ColBERT [12] can be used in an end-to-end fashion (i.e., using its embeddings to perform dense retrieval), we use it as a scoring function over the aforementioned retrieval functions. 
The model represents two paradigms: one where representations are pre-computed to reduce the query latency, and another where the representations are computed on-the-fly. We use the implementations of the above methods provided by PyTerrier [24]. Following PyTerrier notation, we use \u00bb to denote a re-ranking pipeline. For instance, \u201cBM25\u00bbMonoT5-base\u201d retrieves using BM25 and re-ranks using MonoT5-base. 5.3 Corpus Graphs In our experiments, we construct and exploit two corpus graphs, namely a lexical similarity graph and a semantic similarity graph. The lexical graph (denoted as GarBM25) is constructed by retrieving the top BM25 [34] results using the text of the passage as the query. We use PISA to perform top \ud835\udc58+ 1 lexical retrieval (discarding the passage itself). Using a 4.0 GHz 24-core AMD Ryzen Threadripper Processor, the MS MARCO passage graph takes around 8 hours to construct. The semantic similarity graph (denoted as GarTCT) is constructed using the TCT-ColBERT-HNP model. We perform an exact (i.e., exhaustive) search over an index to retrieve the top \ud835\udc58+ 1 most similar embeddings to each passage (discarding the passage itself). Using an NVIDIA GeForce RTX 3090 GPU to compute similarities, the MS MARCO passage graph takes around 3 hours to construct. We construct both graphs using \ud835\udc58= 8 neighbours, and explore the robustness to various values of \ud835\udc58in Section 6.3. Because the number of edges (i.e., neighbours) per node (i.e., passage) is known, the graphs are both stored as a uncompressed sequence of docids. Using unsigned 32-bit integer docids, only 32 bytes per passage are needed, which amounts to 283 MB to store an MS MARCO graph.6 We note that there are likely approaches that reduce the computational overhead in graph construction by making use of approximate searches; we leave this for future work. The two graphs differ substantially in their content.7 We release these graphs through our implementation to aid other researchers and enable future works. 5.4 Other Parameters and Settings We use a Gar batch size of \ud835\udc4f= 16 by default, matching a typical batch size for a neural cross-encoder model. We explore the robustness of Gar to various values of \ud835\udc4fin Section 6.3. We explore two budgets: \ud835\udc50= 100 (a reasonable budget for a deployed re-ranking system, e.g., [9]) and \ud835\udc50= 1000 (the de facto default threshold commonly used in shared tasks like TREC). 6 RESULTS AND ANALYSIS We now present the results of our experiments and conduct associated analysis to answer our research questions. 5We also experiment with applying DuotT5 [32] as a final re-ranker in Section 6.4. 6For context, the compressed document source is 1035MB, and the compressed PISA index of MS MARCO is 647MB. 7Only 3% of passages agree on seven or eight neighbours across graphs, and 43% of passages have no agreement on neighbours across graphs. 6.1 Effectiveness To understand whether Gar is generally effective, it is necessary to test the effect it has on a variety of retrieval pipelines. Therefore, we construct re-ranking pipelines based on every pair of our initial ranking functions (BM25, TCT, D2Q, and SPLADE) and scoring functions (MonoT5-base, MonoT5-3b, and ColBERT). These 12 pipelines collectively cover a variety of paradigms. Table 2 presents the results of Gar on these pipelines for TREC DL 2019 and 2020 using both the lexical BM25-based graph and the semantic TCT-based corpus graph. 
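Before examining Table 2 in detail, note that pipelines of this form can be composed quite directly. The sketch below shows a BM25»MonoT5-base pipeline with and without adaptive re-ranking; it is a minimal illustration assuming PyTerrier with the pyterrier_t5 plugin and a pyterrier_adaptive-style GAR/CorpusGraph interface (class and parameter names are indicative rather than exact, and the index/graph paths are placeholders).

```python
import pyterrier as pt
if not pt.started():
    pt.init()

from pyterrier_t5 import MonoT5ReRanker          # MonoT5 scoring function
from pyterrier_adaptive import GAR, CorpusGraph  # assumed adaptive re-ranking plugin

index = pt.IndexFactory.of("./msmarco-passage-index")    # placeholder path
bm25 = pt.BatchRetrieve(index, wmodel="BM25") % 1000     # initial ranking pool (R0)
get_text = pt.text.get_text(index, "text")               # fetch passage text for scoring
monot5 = MonoT5ReRanker()

# Typical re-ranking: BM25»MonoT5-base
rerank = bm25 >> get_text >> monot5

# Adaptive re-ranking: wrap the scorer with GAR, which traverses a pre-built corpus graph
graph = CorpusGraph.load("./msmarco-passage.gar-tct.k8")  # placeholder path (k=8 graph)
gar = bm25 >> get_text >> GAR(monot5, graph, batch_size=16, budget=1000)
```

Analogous pipelines substitute TCT, D2Q, or SPLADE for the first stage and MonoT5-3b or ColBERT for the scorer.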
We report results using both re-ranking budgets c = 100 and c = 1000. Each box in Table 2 allows the reader to inspect the effect on retrieval effectiveness that Gar has on a particular re-ranking pipeline and re-ranking budget. In general, we see the greatest improvements when the initial retrieval pool is poor. In particular, BM25 only provides an R@1k of 0.755 and 0.805 on DL19 and DL20, respectively, while improved retrieval functions offer up to 0.872 and 0.899, respectively (SPLADE). Gar enables the pipelines to find additional relevant documents. Using BM25 as the initial pool, our approach reaches an R@1k of up to 0.846 and 0.892, respectively (BM25»MonoT5-3b w/ GarTCT and c = 1000). Perhaps unsurprisingly, this result is achieved using both a corpus graph (GarTCT) that differs substantially from the technique used for initial retrieval (BM25) and the most effective re-ranking function (MonoT5-3b). However, we also note surprisingly high recall in this setting when using the GarBM25 corpus graph: up to 0.831 (DL19) and 0.881 (DL20). These results are on par with the recall achieved by TCT and D2Q, an impressive feat considering that this pipeline only uses lexical signals and a single neural model trained with a conventional process (D2Q is trained as a sequence-to-sequence model and involves a lengthy inference stage during indexing, while TCT employs a complex, multi-stage training process involving another trained scoring model that is challenging to fully reproduce [38]; MonoT5-3b, meanwhile, is simply trained using MS MARCO's training triples). The pipelines that use a BM25 initial ranker also benefit greatly in terms of nDCG, which is likely due in part to the improved recall. Significant improvements are also observed in all other pipelines, particularly in terms of nDCG when there is a low re-ranking budget available (c = 100) and in recall when a high budget is available (c = 1000). In general, the corpus graph that is least similar to the initial ranker is most effective (e.g., the BM25 graph when using a TCT ranking). However, we note that both corpus graphs improve every pipeline, at least in some settings. For instance, the GarTCT corpus graph consistently improves the nDCG of pipelines that use TCT as an initial ranker, but rarely the recall. We also note that Gar can nearly always improve the precision of the top results, as measured by nDCG, in settings with a limited re-ranking budget (c = 100), even when R@1k remains unchanged. This is likely because Gar is able to pick out documents from lower depths of the initial ranking pool to score within the limited available budget. For instance, in the case of the strong SPLADE»MonoT5-base pipeline with c = 100, which offers high recall to begin with (0.872 on DL19 and 0.899 on DL20), GarBM25 improves the nDCG from 0.750 to 0.762 (DL19) and from 0.748 to 0.757 (DL20), while leaving the R@1k unchanged. In a few rare cases, we observe that Gar can yield a lower mean performance than the baseline (e.g., MAP for the D2Q»MonoT5-base pipeline with c = 1000). However, these differences are never
Table 2: Effectiveness of Gar on TREC DL 2019 and 2020 in a variety of re-ranking pipelines and re-ranking budgets (c). The top result for each pipeline is in bold.
Significant differences with the baseline (typical re-ranking) are marked with *, while insignificant differences are in grey (paired t-test, \ud835\udc5d< 0.05, using Bonferroni correction). DL19 (valid.) \ud835\udc50= 100 DL19 (valid.) \ud835\udc50= 1000 DL20 (test) \ud835\udc50= 100 DL20 (test) \ud835\udc50= 1000 Pipeline nDCG MAP R@1k nDCG MAP R@1k nDCG MAP R@1k nDCG MAP R@1k BM25\u00bbMonoT5-base 0.665 0.417 0.755 0.699 0.483 0.755 0.672 0.421 0.805 0.711 0.498 0.805 w/ GarBM25 * 0.697 * 0.456 * 0.786 0.727 0.490 * 0.827 * 0.695 0.439 * 0.823 * 0.743 0.501 * 0.874 w/ GarTCT *0.722 *0.491 *0.800 *0.743 0.511 *0.839 *0.714 *0.472 *0.831 *0.749 0.501 *0.892 BM25\u00bbMonoT5-3b 0.667 0.418 0.755 0.700 0.489 0.755 0.678 0.442 0.805 0.728 0.534 0.805 w/ GarBM25 * 0.693 0.454 * 0.790 * 0.741 0.517 * 0.831 * 0.715 * 0.469 * 0.829 * 0.772 0.556 * 0.881 w/ GarTCT *0.715 *0.484 *0.806 *0.746 0.522 *0.846 *0.735 *0.512 *0.837 *0.787 *0.564 *0.899 BM25\u00bbColBERT 0.663 0.409 0.755 0.681 0.458 0.755 0.667 0.421 0.805 0.697 0.469 0.805 w/ GarBM25 * 0.690 * 0.442 * 0.783 * 0.720 0.480 * 0.825 * 0.695 * 0.446 * 0.823 * 0.732 0.479 * 0.870 w/ GarTCT *0.716 *0.475 *0.798 *0.727 0.482 *0.841 *0.707 *0.463 *0.829 *0.740 0.481 *0.887 TCT\u00bbMonoT5-base 0.708 0.472 0.830 0.704 0.473 0.830 0.698 0.488 0.848 0.693 0.471 0.848 w/ GarBM25 *0.728 0.484 0.852 *0.733 0.480 *0.883 *0.719 *0.501 0.861 *0.719 0.473 *0.881 w/ GarTCT 0.722 0.481 0.847 * 0.724 0.474 0.866 * 0.712 0.494 0.856 * 0.710 0.471 0.871 TCT\u00bbMonoT5-3b 0.720 0.498 0.830 0.725 0.513 0.830 0.723 0.534 0.848 0.733 0.544 0.848 w/ GarBM25 *0.748 *0.521 *0.857 *0.759 0.521 *0.885 *0.743 0.546 *0.864 *0.771 *0.555 *0.890 w/ GarTCT * 0.742 * 0.517 0.849 * 0.749 0.516 * 0.868 * 0.741 * 0.545 * 0.861 * 0.759 0.551 * 0.880 TCT\u00bbColBERT 0.708 0.464 0.830 0.701 0.452 0.830 0.698 0.476 0.848 0.697 0.470 0.848 w/ GarBM25 *0.729 *0.480 0.853 *0.727 0.459 0.876 *0.715 0.485 0.857 *0.722 *0.477 *0.877 w/ GarTCT * 0.722 0.474 0.845 * 0.715 0.452 0.852 * 0.711 * 0.484 *0.857 * 0.713 0.473 0.864 D2Q\u00bbMonoT5-base 0.736 0.503 0.830 0.747 0.531 0.830 0.726 0.499 0.839 0.731 0.508 0.839 w/ GarBM25 * 0.748 0.506 0.848 0.757 0.519 *0.880 * 0.734 0.497 * 0.847 0.748 0.504 * 0.880 w/ GarTCT *0.760 *0.528 0.850 *0.766 0.533 * 0.879 0.740 0.508 *0.856 0.748 0.499 *0.895 D2Q\u00bbMonoT5-3b 0.737 0.506 0.830 0.751 0.542 0.830 0.738 0.531 0.839 0.753 0.557 0.839 w/ GarBM25 0.744 0.512 * 0.850 0.772 0.549 *0.880 * 0.751 0.535 * 0.852 * 0.781 0.561 * 0.887 w/ GarTCT 0.755 0.524 *0.857 0.769 0.544 *0.880 *0.764 0.550 *0.860 *0.790 0.565 *0.905 D2Q\u00bbColBERT 0.724 0.475 0.830 0.733 0.501 0.830 0.718 0.483 0.839 0.717 0.479 0.839 w/ GarBM25 0.734 0.484 0.845 0.753 0.505 * 0.876 * 0.731 0.487 * 0.849 * 0.737 0.482 * 0.872 w/ GarTCT *0.744 *0.496 0.849 * 0.752 0.503 *0.878 *0.735 0.488 *0.856 *0.746 0.485 *0.893 SPLADE\u00bbMonoT5-base 0.750 0.506 0.872 0.737 0.487 0.872 0.748 0.505 0.899 0.731 0.480 0.899 w/ GarBM25 *0.762 0.509 0.888 0.745 0.487 0.893 *0.757 0.509 0.902 0.737 0.479 0.909 w/ GarTCT * 0.759 0.512 0.878 0.737 0.481 0.875 0.751 0.506 0.903 0.734 0.475 0.908 SPLADE\u00bbMonoT5-3b 0.761 0.526 0.872 0.764 0.533 0.872 0.774 0.559 0.899 0.775 0.560 0.899 w/ GarBM25 *0.775 0.532 *0.891 0.774 0.533 0.896 *0.780 0.559 0.903 *0.788 0.562 *0.919 w/ GarTCT * 0.773 0.539 0.884 0.769 0.531 0.881 *0.780 0.561 0.905 0.783 0.559 0.910 SPLADE\u00bbColBERT 0.741 0.479 0.872 0.727 0.456 0.872 0.747 0.495 0.899 0.733 0.474 0.899 w/ GarBM25 *0.753 0.490 0.885 0.730 0.456 
0.875 *0.755 0.501 0.902 *0.742 *0.477 0.914 w/ GarTCT * 0.750 0.489 0.876 0.727 0.455 0.868 * 0.752 0.500 0.903 0.740 * 0.476 0.911 statistically significant and are usually accompanied by significant improvements to other measures (e.g., the R@1k improves). We note that the same trends appear for both our validation set (DL19) and our held-out test set (DL20), suggesting that Gar is not over-fitted to the data that we used during the development of Gar. Finally, we test Gar on the MS MARCO dev (small) set. This setting differs from the TREC DL experiments in that each of the queries has only a few (usually just one) passages that are labeled as relevant, but has far more queries (6,980 compared to 43 in DL19 and 54 in DL20). Thus, experiments on this dataset test a pipeline\u2019s capacity to retrieve a single (and somewhat arbitrary) relevant passage for a query.9 Due to the cost of running multiple versions of 9The suitability of this dataset for evaluation is debated in the community (e.g., [1, 25]), but we include it for completeness. highly-expensive re-ranking pipelines, we limit this study to a low re-ranking budget \ud835\udc50= 100 and to the two less expensive scoring functions (MonoT5-base and ColBERT). Table 3 presents the results. We find that Gar offers the most benefit in pipelines that suffer from the lower recall \u2013 namely, the BM25-based pipelines. In this setting, the improved R@1k also boosts the RR@10. In the TCT, D2Q, and SPLADE pipelines, R@1k often significantly improved, but this results in non-significant (or marginal) changes to RR@10. To answer RQ1, we find that Gar provides significant benefits in terms of precisionand recall-oriented measures. The results hold across a variety of initial retrieval functions, re-ranking functions, and re-ranking budgets. The most benefit is apparent when the initial pool has low recall, though we note that Gar also improves over systems with high initial recall \u2013 particularly by enabling \fAdaptive Re-Ranking with a Corpus Graph CIKM \u201922, October 17\u201321, 2022, Atlanta, GA, USA Table 3: Effectiveness of Gar on the MS MARCO dev (small) set with a re-ranking budget of \ud835\udc50= 100. The top result for each pipeline is in bold. Significant differences with the baseline (typical re-ranking) are marked with * (paired t-test, \ud835\udc5d< 0.05, using Bonferroni correction). \u00bbMonoT5-base \u00bbColBERT Pipeline RR@10 R@1k RR@10 R@1k BM25\u00bb 0.356 0.868 0.323 0.868 GarBM25 0.358 * 0.881 0.323 * 0.882 GarTCT *0.369 *0.903 *0.333 *0.902 TCT\u00bb 0.388 0.970 0.345 0.970 GarBM25 0.389 *0.973 *0.346 *0.973 GarTCT 0.388 *0.973 0.346 * 0.972 D2Q\u00bb 0.386 0.936 0.345 0.936 GarBM25 0.386 * 0.941 0.345 * 0.941 GarTCT 0.386 *0.949 0.344 *0.948 SPLADE\u00bb 0.389 0.983 0.345 0.983 GarBM25 0.389 0.984 *0.346 0.984 GarTCT 0.388 *0.984 *0.346 0.984 higher precision at a lower re-ranking budget. Overall, we find that Gar is safe to apply to any re-ranking pipeline (i.e., it will not harm the effectiveness), and it will often improve performance (particularly when the re-ranking budget is limited or when a lowcost first stage retriever is used). To illustrate the ability of Gar to promote low-ranked documents under limited ranking budgets, Figure 2 plots the initial rank (x-axis) of documents and their final rank (y-axis), for a particular query. Each point represents a retrieved document, with colour/size indicative of the relevance label. Lines between points indicate links followed in the corpus graph. 
It can be seen that by leveraging the corpus graph, Gar is able to promote highly relevant documents that were lowly scored in the initial ranking, as well as retrieve 'new' relevant documents that are not retrieved in the initial BM25 pool. For instance, Gar is able to select five rel=2 documents from around initial ranks 250-300 and ultimately score them within the top 40 documents. Meanwhile, it retrieves two rel=2 and one rel=3 documents that were not found in the first stage.
Figure 2: Plot of the initial and final rankings of BM25»MonoT5-base using GarTCT with c = 100 for the DL19 query 'how long is life cycle of flea'. The colour/size of dots indicates the relevance label. Lines between points indicate links followed in the corpus graph.
6.2 Computational Overhead Gar is designed to have a minimal impact on query latency. By relying on a pre-computed corpus graph that will often be small enough to fit into memory (283MB with k = 8 for MS MARCO), neighbour lookups are performed in O(1) time. With the frontier F stored in a heap, insertions take only O(1), meaning that finding neighbours and updating the frontier adds only a constant time for each scored document. Sampling the top b items from the heap takes O(b log c), since the number of items in the heap never needs to exceed the budget c. To obtain a practical sense of the computational overhead of Gar, we conduct latency tests. To isolate the effect of Gar itself, we find it necessary to factor out the overhead from the re-ranking model itself, since the variance in latency between neural scoring runs often exceeds the overhead introduced by Gar. To this end, we pre-compute and store all the needed query-document scores and simply look them up as the documents would be scored. We then test various re-ranking budgets (c) for DL19, and take 10 latency measurements of the typical re-ranking and Gar processes. Table 4 reports the differences between the latency of Gar and the typical re-ranking results, isolating the overhead of Gar itself. We find that Gar introduces less than 37.37ms of overhead per 1000 documents scored (i.e., 2.68-3.73ms per 100 documents scored), on average, using 16 documents per batch. We report results using the semantic TCT-based corpus graph, though we find little difference when using the lexical BM25-based corpus graph. The overhead can be further reduced (down to 3.1ms per 100 documents) by using a larger batch size, i.e., 64 documents per batch; we explore the effect of the batch size parameter on effectiveness in Section 6.3. When compared to the cost of monoT5 scoring (rightmost column in Table 4), the Gar process adds negligible overhead, typically amounting to less than a 2% increase in latency, and falls within the variance of the scoring function's latency for low re-ranking budgets. This experiment answers RQ2: the online computational overhead of Gar is minimal. It can be efficiently implemented using a heap, and adds only around 3-4ms per 100 documents in the re-ranking budget. This overhead is negligible when compared with the latency of a leading neural scoring function, though it will represent a higher proportion for more efficient scoring functions.
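To make this bookkeeping concrete, the following is a minimal sketch of the adaptive re-ranking loop, assuming a generic score(query, docs) scoring function and a neighbours structure where neighbours[d] lists the k corpus-graph neighbours of document d. The frontier is a heap keyed on the score of the document that introduced each neighbour, and batches alternate between the initial pool and the frontier. It is a simplification for illustration, not the released implementation.

```python
import heapq

def gar_rerank(query, initial_pool, score, neighbours, b=16, c=1000):
    """Simplified adaptive re-ranking (Gar) loop.
    initial_pool: doc ids in first-stage order (R0)
    score(query, docs): scoring function returning one relevance score per doc
    neighbours[d]: the k corpus-graph neighbours of doc d (an O(1) lookup)
    b: batch size, c: re-ranking budget
    """
    scores = {}                 # R1: doc id -> relevance score
    frontier = []               # F: max-heap of (-source_score, doc id)
    pool = list(initial_pool)   # P, consumed front to back
    from_initial = True         # alternate between R0 and F

    while len(scores) < c:
        use_initial = (from_initial and len(pool) > 0) or len(frontier) == 0
        batch = []
        while len(batch) < b and len(scores) + len(batch) < c:
            if use_initial:
                if not pool:
                    break
                d = pool.pop(0)
            else:
                if not frontier:
                    break
                d = heapq.heappop(frontier)[1]   # already-scored docs are skipped below
            if d not in scores and d not in batch:
                batch.append(d)
        if not batch:
            break               # both sources exhausted
        for d, s in zip(batch, score(query, batch)):
            scores[d] = s
            for n in neighbours[d]:              # add unscored neighbours to the frontier,
                if n not in scores:              # prioritised by the source document's score
                    heapq.heappush(frontier, (-s, n))
        from_initial = not from_initial          # alternate R0 / F for the next batch

    ranked = sorted(scores, key=scores.get, reverse=True)
    ranked += [d for d in initial_pool if d not in scores]   # backfill unscored candidates
    return ranked
```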
6.3 Robustness to Parameters Recall that Gar introduces two new parameters: the number of nearest neighbours in the corpus graph, k, and the batch size, b. In this section, we conduct experiments to test whether Gar is robust to the settings of these parameters. Due to the number of pipelines and parameter settings, an exhaustive grid search over both parameters is prohibitively expensive; we therefore separately sweep k ∈ [1, 16] and b ∈ [1, 512] (by powers of 2) over DL19 with c = 1000 for all Gar pipelines, and present the different effectiveness metrics in Figure 3. With regard to the number of graph neighbours k, the nDCG, MAP and recall metrics are relatively stable from around k = 6 to k = 16 for almost all pipelines. The MAP performance appears to be the least stable in this range, with some fluctuations in performance between k = 7 and k = 13. Recall appears to be the most affected, with sharp gains for some pipelines between k = 1 and k = 4. This trend is also present for nDCG. The batch size b is remarkably stable from b = 1 to b = 128, with only a blip in effectiveness for the BM25 graph at b = 16. The most prominent shift in performance occurs at large batch sizes, e.g., b = 512. We note that, when b = 512, the corpus graph can only be traversed for a single hop: the neighbours of the top-scoring documents from the frontier batch cannot be fed back into the re-ranking pool. This validates our technique of incorporating the feedback mechanism into the re-ranking process itself, which gives the model more chances to traverse the graph. While it may be tempting to prefer the stability of the system with very low batch sizes, we note that this has an effect on performance: as seen in Section 6.2, lower batch sizes reduce the speed of Gar itself. Further, and more importantly, b imposes a maximum batch size on the scoring function itself; given that neural models benefit considerably in terms of performance from larger batch sizes (since the operations on the GPU are parallelised), larger values of b (e.g., b = 16 to b = 128) should be preferred for practical reasons. To answer RQ3, we find that the performance of Gar is stable across various pipelines when the number of neighbours is sufficiently large (k ≥ 6) and the batch size is sufficiently low (b ≤ 128). 6.4 Baseline Performance Section 6.1 established the effectiveness of Gar through ablations over a variety of re-ranking pipelines. We now explore how the approach fits into the broader context of approaches proposed for passage retrieval and ranking. We explore two classes of pipelines: 'Kitchen Sink' approaches that combine numerous approaches and models together, and 'Single-Model' approaches that only involve a single neural model at any stage. We select representative Gar variants based on the nDCG@10 performance on DL19 (i.e., as a validation set), with DL20 again treated as the held-out test set. All systems use a re-ranking budget of c = 1000. In Table 5, we report nDCG@10 to allow comparisons against prior work. We also report the judgment rate at 10 to provide context about how missing information in the judgments may affect the nDCG@10 scores.
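The sweeps of Section 6.3 reduce to two one-dimensional loops. A rough sketch, assuming a hypothetical make_gar_pipeline(k, b) factory that builds one of the pipelines above for a given neighbour count k and batch size b, and using PyTerrier's pt.Experiment for evaluation on the DL19 topics and qrels (for brevity, the minimum-relevance handling for the binary measures is omitted):

```python
import pyterrier as pt

def sweep(make_gar_pipeline, topics, qrels):
    # make_gar_pipeline(k, b) is a hypothetical factory for a Gar pipeline with a
    # k-neighbour corpus graph and batch size b (budget c=1000, as in Section 6.3).
    results = []
    for k in range(1, 17):                                # neighbours: k in [1, 16]
        results.append(pt.Experiment([make_gar_pipeline(k=k, b=16)], topics, qrels,
                                     eval_metrics=["ndcg", "map", "recall_1000"],
                                     names=[f"k={k}"]))
    for b in [1, 2, 4, 8, 16, 32, 64, 128, 256, 512]:     # batch size: powers of 2
        results.append(pt.Experiment([make_gar_pipeline(k=8, b=b)], topics, qrels,
                                     eval_metrics=["ndcg", "map", "recall_1000"],
                                     names=[f"b={b}"]))
    return results
```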
Table 4: Mean latency overheads (ms/query) for Gar with 95% confidence intervals. The latency of MonoT5-base scoring (with a model batch size of 64) is presented for context.
c | GarTCT (b = 16) | GarTCT (b = 64) | MonoT5-base scoring
100 | 2.68 ± 0.02 | 0.57 ± 0.01 | 267.06 ± 6.12
250 | 8.10 ± 0.05 | 4.34 ± 0.01 | 652.30 ± 7.53
500 | 17.38 ± 0.07 | 13.66 ± 0.02 | 1,362.14 ± 5.27
750 | 26.96 ± 0.12 | 22.29 ± 0.07 | 2,047.20 ± 6.71
1000 | 37.37 ± 0.07 | 30.82 ± 0.04 | 2,631.75 ± 6.28
Figure 3: Performance of Gar when the number of neighbours in the corpus graph k and the batch size b vary. Each line represents a system from Table 2. The dashed blue (solid green) lines are for the BM25 (TCT) graph.
The Kitchen Sink results are reported in the top section of Table 5. All systems involve three ranking components: an initial retriever R0, a Mono-scorer R1 (which assigns a relevance score to each document), and a Duo-scorer R2 (which scores and aggregates pairs of documents). The Duo-style models are known to improve the ranking of the top documents [32]. Although we leave the exploration of how Gar can be used to augment the Duo process directly for future work, we still want to check what effect Gar has on these pipelines. We ablate two Duo systems (based on either D2Q or SPLADE) using Gar for the first-stage re-ranker and a DuoT5-3b-based second-stage re-ranker (the second stage uses the suggested cutoff of 50 from [32]). We observe that there is no significant difference in terms of precision of the top 10 results. However, Gar can still provide a significant improvement in terms of nDCG later in the ranking and in terms of recall. These results suggest that although Gar identifies more relevant documents, the Duo models are not capable of promoting them to the top ranks. We next explore Single-Model systems, which are shown in the bottom section of Table 5. Using only a single model likely has some practical advantages: pipelines that use a single model tend to be simpler, and practitioners only need to train a single model. Here, we compare with a variety of systems that fall into this category, most notably the recently-proposed ColBERT-PRF approaches that operate over dense indexes [39]. A GarBM25 pipeline that operates over BM25 results also falls into this category, since only a single neural model (the scorer) is needed. Among this group, Gar performs competitively, outmatched only by ColBERT-PRF [39] and the recent SPLADE [7] model (though the differences in performance are not statistically significant). Compared to these methods, though, Gar requires far less storage: the corpus graph for Gar is only around 283MB, while the index for SPLADE is 8GB, and the vectors required for ColBERT-PRF are 160GB. To answer RQ4: we observe that Gar can be incorporated into a variety of larger, state-of-the-art re-ranking pipelines. It frequently boosts the recall of the systems it is applied to, though the scoring functions we explore tend to have difficulty making use of the additional relevant passages. This motivates exploring further improvements to re-ranking models.
For instance, cross-encoder \fAdaptive Re-Ranking with a Corpus Graph CIKM \u201922, October 17\u201321, 2022, Atlanta, GA, USA Table 5: Performance of Gar, compared to a variety of other baselines. Significant differences are computed within groups, with significance denoted as superscript letters \ud835\udc4e\u2212\ud835\udc50(paired t-test, \ud835\udc5d< 0.05, Bonferroni correction). Rows marked with \u2020 are given to provide additional context, but the metrics were copied from other papers so do not include statistical tests. DL19 (validation) DL20 (test) \ud835\udc450 \ud835\udc451 \ud835\udc452 RR nDCG@10 nDCG R@1k Judged@10 nDCG@10 nDCG R@1k Judged@10 Kitchen Sink Systems D2Q \u00bbMonoT5-3b \u00bbDuoT5-3b 0.771 0.756 \ud835\udc4e\ud835\udc4f0.830 0.958 0.785 \ud835\udc4e\ud835\udc4f0.754 \ud835\udc4e\ud835\udc4f0.839 0.996 SPLADE \u00bbMonoT5-3b \u00bbDuoT5-3b 0.768 0.772 0.872 0.953 0.787 \ud835\udc4e0.781 \ud835\udc4e0.899 0.987 a SPLADE \u00bbMonoT5-3b \u00bbDuoT5-3b GarBM25 0.767 0.781 0.896 0.951 0.787 0.794 0.919 0.989 b D2Q \u00bbMonoT5-3b \u00bbDuoT5-3b GarTCT 0.766 0.775 0.880 0.953 0.788 0.793 0.905 0.993 \u2020 TAS-B+D2Q [8] \u00bbMonoT5-3b \u00bbDuoT5-3b 0.759 0.882 0.783 0.895 Single-Model Systems ColBERT ANN \u00bbColBERT \u00bbColBERT-PRF 0.739 0.764 0.871 0.907 0.715 0.746 0.884 0.946 SPLADE 0.731 0.755 0.872 0.926 0.720 0.750 0.899 0.970 c BM25 \u00bbMonoT5-3b GarBM25 0.729 0.741 0.831 0.947 0.756 0.772 0.881 0.972 BM25 \u00bbMonoT5-3b 0.722 0.700 \ud835\udc500.755 0.944 0.749 \ud835\udc500.728 \ud835\udc500.805 0.980 TCT 0.721 0.708 0.830 0.914 \ud835\udc500.686 \ud835\udc500.689 0.848 0.931 ColBERT ANN \u00bbColBERT 0.693 0.687 0.789 0.884 \ud835\udc500.687 \ud835\udc500.711 0.825 0.937 \u2020 ANCE 0.648 0.755 0.851 0.646 0.776 0.865 D2Q \ud835\udc500.615 \ud835\udc500.678 0.830 0.916 \ud835\udc500.608 \ud835\udc500.676 0.839 0.956 Table 6: Intra-List Similarity (ILS) among retrieved relevant documents. Since the set of retrieved documents does not change using typical Re-Ranking (RR), each value in this column is only listed once. ILS scores that are statistically equivalent to the RR setting are indicated with * (procedure described in Section 6.5). GarBM25 GarTCT Pipeline RR \ud835\udc50=100 \ud835\udc50=1k \ud835\udc50=100 \ud835\udc50=1k BM25\u00bbMonoT5-base 0.947 * 0.946 * 0.946 * 0.947 * 0.946 BM25\u00bbMonoT5-3b * 0.946 * 0.946 * 0.946 * 0.946 BM25\u00bbColBERT * 0.946 * 0.946 * 0.947 * 0.946 TCT\u00bbMonoT5-base 0.969 * 0.969 * 0.968 * 0.969 * 0.969 TCT\u00bbMonoT5-3b * 0.969 * 0.968 * 0.969 * 0.969 TCT\u00bbColBERT * 0.969 * 0.969 * 0.969 * 0.969 D2Q\u00bbMonoT5-base 0.969 * 0.968 * 0.968 * 0.969 * 0.968 D2Q\u00bbMonoT5-3b * 0.968 * 0.968 * 0.968 * 0.968 D2Q\u00bbColBERT * 0.968 * 0.968 * 0.969 * 0.968 SPLADE\u00bbMonoT5-base 0.969 * 0.968 * 0.968 * 0.969 * 0.969 SPLADE\u00bbMonoT5-3b * 0.968 * 0.968 * 0.968 * 0.969 SPLADE\u00bbColBERT * 0.968 * 0.969 * 0.969 * 0.969 models have largely relied on simple BM25 negative sampling (e.g., from the MS MARCO triples file) for training. Techniques like hard negative sampling [40] and distillation [19] (employed to train models like SPLADE and TCT) have so far been largely unexplored for cross-encoder models; these techniques may help them recognise more relevant documents. 6.5 Diversity of Retrieved Passages Next, we test whether Gar results in a more homogeneous set of retrieved relevant passages, compared to typical re-ranking. 
Among the set of relevant passages each system retrieved,11 we compute the Intra-List Similarity (ILS) [41] using our TCT embeddings. ILS is the average cosine similarity between all pairs of items in a set, 11We are only concerned with the diversity among the relevant passages (\ud835\udc5f\ud835\udc52\ud835\udc59= 2 or \ud835\udc5f\ud835\udc52\ud835\udc59= 3) because non-relevant passages are inherently dissimilar from relevant ones. so a higher ILS values here indicate that the relevant documents are more similar to one another. Table 6 compares the ILS of each initial ranking function (BM25, TCT, D2Q, and SPLADE) with the GarBM25 and GarTCT counterparts. Using two-one-sided t-tests (TOSTs) with bounds of 0.005 and \ud835\udc5d< 0.05 (including a Bonferonni correction), we find that Gar yields statistically equivalent diversity to the typical re-ranking system. These results answer RQ5: despite using document similarity to help choose additional documents to score, Gar does not result in the system retrieving a more homogeneous set of relevant passages. 7" + }, + { + "url": "http://arxiv.org/abs/2108.04026v1", + "title": "IntenT5: Search Result Diversification using Causal Language Models", + "abstract": "Search result diversification is a beneficial approach to overcome\nunder-specified queries, such as those that are ambiguous or multi-faceted.\nExisting approaches often rely on massive query logs and interaction data to\ngenerate a variety of possible query intents, which then can be used to re-rank\ndocuments. However, relying on user interaction data is problematic because one\nfirst needs a massive user base to build a sufficient log; public query logs\nare insufficient on their own. Given the recent success of causal language\nmodels (such as the Text-To-Text Transformer (T5) model) at text generation\ntasks, we explore the capacity of these models to generate potential query\nintents. We find that to encourage diversity in the generated queries, it is\nbeneficial to adapt the model by including a new Distributional Causal Language\nModeling (DCLM) objective during fine-tuning and a representation replacement\nduring inference. Across six standard evaluation benchmarks, we find that our\nmethod (which we call IntenT5) improves search result diversity and attains\n(and sometimes exceeds) the diversity obtained when using query suggestions\nbased on a proprietary query log. Our analysis shows that our approach is most\neffective for multi-faceted queries and is able to generalize effectively to\nqueries that were unseen in training data.", + "authors": "Sean MacAvaney, Craig Macdonald, Roderick Murray-Smith, Iadh Ounis", + "published": "2021-08-09", + "updated": "2021-08-09", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "main_content": "INTRODUCTION Although contextualized language models (such as BERT [19] and T5 [42]) have been shown to be highly effective at adhoc ranking [30, 36, 37], they perform best with queries that give adequate context, such as natural-language questions [16]. Despite the rise of more expressive querying techniques (such as in conversational search systems [41]), keyword-based querying remains a popular choice for users.1 However, keyword queries can often be underspecified, giving rise to multiple possible interpretations or intents [8]. 
1 https://trends.google.com
(a) monoT5 (no diversity)
1 (A) Penguins (order Sphenisciformes, family Spheniscidae) are a group of aquatic, flightless birds living almost exclusively in the Southern Hemisphere, especially in Antarctica...
2 (A) Penguins are a group of aquatic, flightless birds. They live almost exclusively in the Southern Hemisphere, with only one species, the Galapagos penguin, found north of the equator...
3 (A) Penguins are flightless birds that are highly adapted for the marine environment. They are excellent swimmers, and can dive to great depths (emperor penguins can dive to over...
4 (A) Penguins are an iconic family of aquatic, flightless birds. Although we think of penguins as Antarctic birds, some, like the Galapagos penguin, live in much warmer climates near...
5 (A) Penguins are torpedo-shaped, flightless birds that live in the southern regions of the Earth. Though many people imagine a small, black-and-white animal when they think of penguins...
(b) monoT5 (using IntenT5 + DCLM + RS + xQuAD)
1 (A) Penguins (order Sphenisciformes, family Spheniscidae) are a group of aquatic, flightless birds living almost exclusively in the Southern Hemisphere, especially in Antarctica...
2 (H) Television coverage of the event will be provided by WPXI beginning at 12 p.m. ET and running through the entire event. You can also stream the parade live on the Penguins'...
3 (A) The most abundant species of penguin is the "Macaroni" with an approximate population of 20 to 25 Million individuals while there are only around 1,800 Galapagos penguins left...
4 (H) Marc-Andre Fleury will get the start in goal for Pittsburgh on Saturday. The Penguins are in Buffalo for the season finale and need a win to clinch a playoff spot. Fleury was in goal...
5 (H) It is the home of the Pittsburgh Penguins, who moved from their former home across the street prior to the 2010-11 NHL season. The CONSOL Energy Center has 18,387 seats for...
Figure 1: Top results for the query "penguins" re-ranked using monoT5 without diversity measures and using our IntenT5 approach on the MS MARCO passage corpus. We manually removed near-duplicate passages from the results. Note that the IntenT5 model has a better variety of top-ranked passages related to both the (A)nimal and (H)ockey senses.
Unlike prior lexical models, which do not account for word senses or usage in context, contextualized language models are prone to scoring based on a single predominant sense, which can hinder search result quality for under-specified queries. For instance, the results in Figure 1(a) are all similar and do not cover a variety of information needs. Under-specified queries can be considered ambiguous and/or faceted [12]. For ambiguous queries, intents are distinct and often correspond to different word senses. For example, the query "penguins" may refer to either the animal or the American ice hockey team (among other senses). In Figure 1, we see that the monoT5 model [37] only identifies passages for the former sense in the top results.
In fact, the first occurrence of a document about the hockey team is ranked at position 158, likely meaning that users with this query intent would need to reformulate their query to satisfy their information need. In a faceted query, a user may be interested in different aspects of a given topic. In the example of "penguins", a user may be looking for information about the animal's appearance, habitat, life cycle, etc., or the hockey team's schedule, roster, score, etc. Here again, the monoT5 results also lack diversity in terms of facets, with the top results all focusing on the habitat and appearance. Search result diversification approaches aim to overcome this issue. In this setting, multiple potential query intents are predicted and the relevance scores for each intent are combined to provide diversity among the top results (e.g., using algorithms like IA-Select [1], xQuAD [43] or PM2 [18]). Intents can be inferred from manually-constructed hierarchies [1] or from interaction data [43], such as popular searches or reformulations. Although using interaction data is possible for large, established search engines, it is not a feasible approach for search engines that do not have massive user bases, nor for academic researchers, as query logs are proprietary data. Researchers have instead largely relied on search result suggestions from major search engines, which are black-box algorithms, or on the "gold" intents used for diversity evaluation [18]. Thus, an effective approach for generating potential query intents without needing a massive amount of interaction data is desirable.
Figure 2: Overview of our search result diversification system using IntenT5 to generate potential query intents. (In the overview, the user query, e.g., "penguins", feeds both an initial retrieval stage, e.g., DPH or BM25, and IntenT5, which generates query intents such as "schedule", "population", or "tickets"; candidates are then scored per intent, e.g., with monoT5 or ColBERT, and aggregated, e.g., with xQuAD or PM2, into the final search result list.)
Given the recent success of Causal Language Models (CLMs) such as T5 [42] in a variety of text generation tasks, we propose using these models to generate potential intents for under-specified queries. Figure 2 provides an overview of our approach (IntenT5). We fine-tune the model on a moderately-sized collection of queries (ORCAS [15]), and evaluate using 6 TREC diversity benchmark datasets. We find that our approach improves the search result diversity of both lexical and neural re-ranking models, and can even exceed the diversity performance obtained when using Google query suggestions and the gold TREC intent descriptions. We also find that our approach has the biggest gains on queries that occur infrequently (or never) in the collection of training queries, showing that the approach is able to generalize effectively to unseen queries. Through analysis of our proposed IntenT5 model, we find that it has difficulty improving over plain adhoc ranking models for ambiguous queries. Indeed, we find this to be challenging for other approaches as well. In an attempt to better handle ambiguity, we explore two novel techniques for improving the variety of generated intents. First, we propose a Distributional Causal Language Modeling (DCLM) objective.
This approach targets the observation that a typical CLM trained for this task tends to over-predict general terms that are not specific to the query (e.g., \u2018information\u2019, \u2018meaning\u2019, \u2018history\u2019) since these are highly-represented in the training data. This approach simultaneously optimizes the model to generate all subsequent tokens that can follow a prefix, rather than just a single term, which should help the model better learn the variety of senses that terms can exhibit. We also introduce a clustering-based Representation Swapping (RS) approach that replaces the internal term representations with a variety of possible alternate senses. Qualitatively, we find that these approaches can help improve the diversity of ambiguous queries in isolated cases. For instance, in Figure 1(b), multiple senses are identified and accounted for. However, in aggreagte, we found insufficient evidence that they improve an unmodified IntenT5 model. Nevertheless, our study opens the door for more research in this area and motivates the creation of larger test sets with ambiguous queries. In summary, our contributions are: \u2022 We propose using causal language models for predicting query intents for search result diversification. \u2022 Across 6 TREC diversity benchmarks, we show that this approach can outperform query intents generated from massive amount of interaction data, and that the model effectively generalizes to previously unseen queries. \u2022 We introduce a new distributional causal language modeling objective and a representation replacement strategy to better handle ambiguous queries. \u2022 We provide an analysis that investigates the situations where IntenT5 is effective, and qualitatively assess the generated intents. 2 BACKGROUND AND RELATED WORK In this section, we cover background and prior work related to search result diversification (Section 2.1), causal language modeling (Section 2.2), and neural ranking (Section 2.3). 2.1 Search Result Diversification Search result diversification techniques aim to handle ambiguous queries. Early works aimed to ensure that the retrieved documents addressed distinct topics for instance, Maximal Marginal Relevance (MMR) [4] can be used to promote documents that are relevant to the user\u2019s query, but are dissimilar to those document already retrieved. In doing so, the typical conventional document independence assumption inherent to the Probability Ranking Principle is relaxed. Indeed, by diversifying the topics covered in the top-ranked documents, diversification approaches aim to address the risk that there are no relevant documents retrieved for the user\u2019s information need [48]. Other approaches such as IA-Select used category hierarchies to identify documents with different intents [1]. Given an under-specified (ambiguous or multi-faceted) query \ud835\udc5eand a candidate set of documents \ud835\udc51, the potential query intents {\ud835\udc561,\ud835\udc562, ..\ud835\udc56\ud835\udc57} can be identified as sub-queries, i.e., query formulations that more clearly identify relevant documents about a particular interpretation of the query. These intents are usually identified from interaction data, such as query logs [43]. In academic research, query suggestions from major search engines often serve as a standin for this process [18, 43]. The candidate documents are re-scored for each of the intents. 
The scores from the individual intents are then aggregated into a final re-ranking of documents, using an algorithm such as xQuAD or PM2. Aggregation strategies typically attempt to balance relevance and novelty. xQuAD [43] iteratively selects documents that exhibit \fhigh relevance to the original query and are maximally relevant to the set of intents. As documents are selected, the relevance scores of documents to intents already shown are marginalized. The balance between relevance to the original query and the intents are controlled with a parameter \ud835\udf06. PM2 [18] is an aggregation strategy based on a proportional representation voting scheme. This aggregation strategy ignores relevance to the original query, and iteratively selects the intent least represented so far. The impact of the selected intent and the intents that are not selected when choosing the next document is controlled with a parameter \ud835\udf06(not to be confused with xQuAD\u2019s \ud835\udf06). More recently, there has been a family of work addressing learned models for diversification [20, 49, 51]; we see this as orthogonal to our work here, since we do not consider learned diversification approaches. Indeed, in this work, we study the process of query intent generation for search result diversification, rather than aggregation strategies. For further information about search result diversification, see [44]. 2.2 Causal Language Modeling Causal Language Models (CLMs) predict the probability of a token \ud835\udc64\ud835\udc58given the prior tokens in the sequence: \ud835\udc43(\ud835\udc64\ud835\udc58|\ud835\udc64\ud835\udc58\u22121,\ud835\udc64\ud835\udc58\u22122, ...,\ud835\udc641). This property makes a CLM able to generate text: by providing a prompt, the model can iteratively predict a likely sequence of following tokens. However, a complete search of the space is exponential because the probability of each token depends on the preceding generated tokens. Various strategies exist for pruning this space. A popular approach for reducing the search space is a beam search, where a fixed number of high-probability sequences are explored in parallel. Alternative formulations, such as Diverse Beam Search [46] have been proposed, but we found these techniques unnecessary for short texts like queries. We refer the reader to Meister et al. [32] for more further details about beam search and text generation strategies. While CLMs previously accomplished modeling with recurrent neural networks [24, 33], this modeling has recently been accomplished through transformer networks [17]. Networks pre-trained with a causal language modeling objective, for instance T5 [42], can be an effective starting point for further task-specific training [42]. In the case of T5, specific tasks are also cast as sequence generation problems by encoding the source text and generating the model prediction (e.g., a label for classification tasks). In this work, we explore the capacity of CLMs (T5 in particular) for generating a diverse set of possible query intents. This differs from common uses of CLMs due to the short nature of the text (keyword queries rather than natural-language sentences or paragraphs) and the focus on the diversity of the predictions. 2.3 Neural Ranking Neural approaches have been shown to be effective for adhoc ranking tasks, especially when the intents are expressed clearly and in natural language [16]. 
Pre-trained contextualized language models, such as BERT [19] and ELECTRA [6] have been particularly effective for adhoc ranking [30, 36]. A simple application of these models is the \u201cvanilla\u201d setting (also called CLS, mono, and cross), where the query and document text is jointly encoded and the model\u2019s classification component is tuned to provide a ranking score. Due to the expense of such approaches, neural models are typically used as re-rankers; that is, an initial ranking model such as BM25 is used to provide a pool of documents that can be re-ranked by the neural method. Neural approaches have also been applied as firststage rankers [25, 50, 52] (also called dense retrieval). The ColBERT model [25] scores documents based on BERT-based query and document term representations. The similarity between the query and document representations are summed to give a ranking score. This model can be used as both a first-stage dense ranker (through an approximate search over its representations) as well as a re-ranker (to produce precise ranking scores). We focus on re-ranking approaches in this work, which is in line with prior work on diversification [44]. CLMs have been used in neural ranking tasks. Nogueira et al. [37] predicted the relevance of a document to a query using the T5 model (monoT5). Pradeep et al. [40] further explored this model and showed that it can be used in conjunction with a version that scores and aggregates pairs of documents (duoT5). Both these models only use CLMs insofar as predicting a single token (\u2018true\u2019 or \u2018false\u2019 for relevance) given the text of the query and document and a prompt. Here, the probability of \u2018true\u2019 is used as the ranking score. Doc2query models [38, 39] generate possible queries, conditioned on document text, to include in an inverted index. Unlike prior neural ranking efforts, we focus on diversity ranking, rather than adhoc ranking. Specifically, we use a neural CLM to generate possible query intents given a query text, which differs from prior uses of CLMs in neural ranking. These query intents are then used to score and re-rank documents. To our knowledge, this is the first usage of neural ranking models for diversity ranking. For further details on neural ranking and re-ranking models, see [27, 34]. 3 GENERATING QUERY INTENTS In this section, we describe our proposed IntenT5 model for query intent generation (Section 3.1), as well as two model adaptations intended to improve the handling of ambiguous queries: distributional causal language modeling (Section 3.2) and representation swapping (Section 3.3). 3.1 IntenT5 Recall that we seek to train a model that can be used to generate potential query intents. We formulate this task as a sequence-tosequence generation problem by predicting additional terms for the user\u2019s initial (under-specified) query. We first fine-tune a T5 [42] model using a causal language modeling objective over a collection of queries. Note that this training approach does not require search frequency, session, or click information; it only requires a collection of query text. This makes a variety of data sources available for training, such as the ORCAS [15] query collection. This is a desirable quality because releasing query text poses fewer risks to personal privacy than more extensive interaction information. 
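For concreteness, the sketch below fine-tunes t5-base on a file of query strings and then generates candidate intents with a beam search (the decoding settings are discussed in the remainder of this section). The split of each training query into an encoder prefix and a decoder continuation is one plausible formulation chosen for illustration, since only the causal language modeling objective over queries is specified above; the file path, batch size, and beam width are placeholders, and the intent filtering described below is omitted.

```python
import random
import torch
from transformers import T5ForConditionalGeneration, T5TokenizerFast

tok = T5TokenizerFast.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")
optim = torch.optim.AdamW(model.parameters(), lr=5e-5)

queries = [q.strip() for q in open("orcas_queries.txt") if q.strip()]  # placeholder file

def split_query(query):
    # Illustrative CLM formulation: condition on a leading prefix of the query's terms
    # and learn to generate the remaining terms.
    terms = query.split()
    cut = random.randint(1, max(1, len(terms) - 1))
    return " ".join(terms[:cut]), " ".join(terms[cut:]) or tok.eos_token

model.train()
for epoch in range(3):
    random.shuffle(queries)
    for i in range(0, len(queries), 32):
        prefixes, targets = zip(*(split_query(q) for q in queries[i:i + 32]))
        enc = tok(list(prefixes), return_tensors="pt", padding=True, truncation=True)
        labels = tok(list(targets), return_tensors="pt", padding=True, truncation=True).input_ids
        labels[labels == tok.pad_token_id] = -100        # ignore padding in the loss
        loss = model(**enc, labels=labels).loss
        loss.backward()
        optim.step()
        optim.zero_grad()

model.eval()
def generate_intents(query, beams=20, n=10):
    # Beam search over short continuations of the user query; the filtering steps
    # described later in this section are omitted here.
    enc = tok(query, return_tensors="pt")
    with torch.no_grad():
        out = model.generate(**enc, num_beams=beams, num_return_sequences=beams,
                             max_new_tokens=10)
    continuations = [tok.decode(seq, skip_special_tokens=True) for seq in out]
    return [f"{query} {cont}".strip() for cont in continuations][:n]
```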
Recall that for a sequence \ud835\udc64consisting of \ud835\udc58tokens, causal language models optimize for \ud835\udc43(\ud835\udc64\ud835\udc58|\ud835\udc64\ud835\udc58\u22121,\ud835\udc64\ud835\udc58\u22122, ...,\ud835\udc641). To generate intents, we use a beam search to identify highly-probable sequences. No length penalization is applied, but queries are limited to 10 generated tokens.2 We apply basic filtering techniques to remove generated intents that do not provide adequate additional context. In particular, we first remove terms that appear in the original query. 2We found that this covers the vast majority of cases for this task, in practice. \fDocuments (MS MARCO passages) Penguins are a group of aquatic, flightless birds. They live... Penguins are birds, not mammals. Many bird species are... Penguins are carnivores with piscivorous diets, getting all... Penguins are especially abundant on islands in colder climates... Keyword Queries (ORCAS) penguins hockey penguins adaptations penguins hockey game penguins animals penguins hockey game tonight penguins animals facts penguins hockey live streaming Figure 3: Comparison of document language and query language. Queries naturally lend themselves to a tree structure, motivating our DCLM approach. Since neural retrieval models are sensitive to word morphology [29], we only consider exact term matches for this filter. We also discard intents that are very short (less than 6 characters, e.g., \u201c.com\u201d), as we found that these usually carry little valuable context. Among the filtered intents, we select the top \ud835\udc5bmost probable sequences. Note that this generation process is fully deterministic, so the results are entirely replicable. Each retrieved document is scored for each of the generated intents, and the intent scores are aggregated using an established diversification algorithm, like xQuAD [43] or PM2 [18]. Instead of T5, our approach could also be applied to other pretrained causal language models, such as BART [26], however, we leave such a study for future work. 3.2 Distributional Causal Language Modeling Typical natural language prose, such as the type of text found in documents, lends itself well to CLM because the text quickly diverges into a multitude of meanings. For instance, in Figure 3, we see that the prefix \u201cPenguins are\u201d diverges into a variety of sequences (e.g., \u201ca group of\u201d, \u201cbirds, not\u201d, \u201ccarnivores with\u201d, etc.). If structured by prefixes, this results in long chains of tokens. Keyword queries, on the other hand, typically have a hierarchical prefixbased nature. When structured as a tree, it tends to be shallow and dense. For instance, in Figure 3, a distribution of terms is likely to follow the prefix \u2018penguins\u2019 (e.g., adaptations, animals, hockey, etc.). Similarly, a distribution of terms follows the prefix \u2018penguins hockey\u2019 (e.g., game, live, news, score, etc.). Based on this observation, we propose a new variant of Causal Language Modeling (CLM) designed for keyword queries: Distributional Causal Language Modeling (DCLM). In contrast with CLM, DCLM considers other texts in the source collection when building the learning objectives through the construction of a prefix tree. In other words, while CLM considers each sequence independently, DCLM builds a distribution of terms that follow a given prefix. Visually, the difference between the approaches are shown in Figure 4. 
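Concretely, the prefix tree underlying DCLM can be built directly from a query collection, and the multi-target training examples fall out of it. The sketch below illustrates this with the example queries of Figure 3; it is a simplified illustration of data preparation only, and the full training procedure is given as Algorithm 1 below.

```python
import random
from collections import defaultdict

def build_prefix_tree(queries):
    """Map every term-level prefix to the set of terms observed to follow it."""
    children = defaultdict(set)
    for q in queries:
        terms = q.split()
        for i in range(len(terms)):
            children[tuple(terms[:i])].add(terms[i])
    return children

def sample_dclm_examples(children):
    """Randomly descend the tree from the root, yielding (prefix, target-term set)
    pairs; the model is optimised to generate every term in the target set."""
    prefix = (random.choice(sorted(children[()])),)
    while children.get(prefix):
        yield " ".join(prefix), sorted(children[prefix])
        prefix = prefix + (random.choice(sorted(children[prefix])),)

queries = ["penguins hockey", "penguins adaptations", "penguins hockey game",
           "penguins animals", "penguins hockey game tonight",
           "penguins animals facts", "penguins hockey live streaming"]
tree = build_prefix_tree(queries)
# e.g., tree[("penguins",)] == {"hockey", "adaptations", "animals"}
# and tree[("penguins", "hockey")] == {"game", "live"}
for prefix, targets in sample_dclm_examples(tree):
    print(prefix, "->", targets)
```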
When training, the prefix tree is used to find all subsequent tokens across the collection given a prefix and optimizes the output of the model to generate all these tokens (with equal probability). The training process for DCLM is given in Algorithm 1. 3.3 Representation Swapping In a transformer model \u2013 which is the underlying neural architecture of T5 \u2013 tokens are represented as contextualized vector representations. These representations map tokens to a particular sense \u2013 this is exploited by several neural ranking models (like Algorithm 1 DCLM Training Procedure tree \u2190BuildPrefixTree(corpus) repeat prefix \u2190{RandomSelect(tree.children)} targets \u2190prefix.children while |prefix[\u22121].children| > 0 do Optimize \ud835\udc43(prefix[\u22121].children|prefix) prefix = {prefix, RandomSelect(prefix[\u22121].children)} end while until converged (a) Causal Language Modeling (b) Distributional Causal Language Modeling \u2026 \u2026 \u2026 penguins adaptation animal animal facts adaptation animal wikipedia facts wikipedia animal penguins penguins penguins penguins penguins Figure 4: Graphical distinction between Causal Language Modeling (CLM, (a)) and our proposed Distributional Causal Language Modeling (DCLM, (b)). Given a prompt (e.g., \u2018penguins\u2019), a DCLM objective optimizes for all possible subsequent tokens (e.g., adaptation, animal, antarctica, etc.) rather than implicitly learning this distribution over numerous training samples. CEDR [30], TK [22], and ColBERT [25]) to match particular word senses. Normally, the surrounding words in a piece of text offer adequate context to disambiguate word senses. However, short queries inherently lack such context. For instance, in the case where a query contains only a single term, we find that transformer models simply choose a predominant sense (e.g., the animal sense for the query penguins). When used with the IntenT5 model, we find that this causes the generated intents to lack diversity of word senses (we demonstrate this in Section 6). We introduce an approach we call Representation Swapping (RS) to overcome this issue. RS starts by building a set of \ud835\udc58prototype representations for each term in a corpus.3 For a given term, a random sample of passages from a corpus that contain the term are selected. Then, the term\u2019s internal representations are extracted for each passage.4 Note that because of the context from other terms in the passage, 3In practice, terms are filtered to only those meeting a frequency threshold, as infrequent terms are less prone to being ambiguous. In an offline experimentation setting, only the terms that appear in the test queries need to be considered. 4Note that terms are often broken down into subwords by T5\u2019s tokenizer. In this case, we concatenate the representations of each of the constituent subwords. Although the size of the concatenated representations across terms may differ, the representations for a single term are the same length. \fthese representations include the sense information. All of the these representations for a given term are then clustered into \ud835\udc58clusters. A single prototype representation for each cluster is selected by finding the representation that is closest to the median value across the representations in the cluster. Using this approach, we find that the sentences from which the prototypes were selected often express different word senses. 
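The prototype selection step just described can be sketched as follows. It assumes the term's contextualised representations have already been extracted from the sampled passages into a single array (with sub-word representations concatenated), and it uses scikit-learn's AgglomerativeClustering to mirror the agglomerative clustering noted later in Section 4.1; other details are illustrative.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

def select_prototypes(term_reprs, k=5):
    """term_reprs: array of shape (num_passages, dim), one row per sampled passage
    containing the term. Returns k prototype rows, one per cluster, each chosen as
    the member representation closest to its cluster's median."""
    labels = AgglomerativeClustering(n_clusters=k).fit_predict(term_reprs)
    prototypes = []
    for c in range(k):
        members = term_reprs[labels == c]
        median = np.median(members, axis=0)
        prototypes.append(members[np.argmin(np.linalg.norm(members - median, axis=1))])
    return np.stack(prototypes)
```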
Finally, when processing a query, the IntenT5 model is executed multiple times: once with the original representation (obtained from encoding the query alone), and then \ud835\udc58additional times for each term. In these instances, the internal representations of a given term are swapped with the prototype representation. This allows the T5 model to essentially inherit the context from the prototype sentence for the ambiguous query, and allows the model to generate text based on different senses. This approach only needs to be applied for shorter queries, since longer queries provide enough context on their own. This introduces a parameter \ud835\udc59, as the maximum query length. The final intents generated by the model are selected using a diversity aggregation algorithm like xQuAD, which ensures a good mix of possible senses in the generated intents. 4 EXPERIMENTAL SETUP We experiment to answer the following research questions: RQ1: Can the intents generated from IntenT5 be used to diversify search results? RQ2: Do queries that appeared in the IntenT5 training data perform better than those that did not? RQ3: Does IntenT5 perform better at ambiguous or multi-faceted queries? RQ4: Does training with a distributional causal language modeling objective or performing representation swapping improve the quality of the generated intents? 4.1 IntenT5 Training and Settings Although IntenT5 can be trained on any moderately large collection of queries, we train the model using the queries from the ORCAS [15] dataset. With 10.4M unique queries, this dataset is both moderately large5 and easily accessible to researchers without signed data usage agreements. The queries in ORCAS were harvested from the Bing logs, filtered down to only those where users clicked on a document found in the MS MARCO [3] document dataset. Queries in this collection contain an average of 3.3 terms, with the majority of queries containing either 2 or 3 terms. The IntenT5 model is fine-tuned from t5-base using default parameters (learning rate: 5 \u00d7 10\u22125, 3 epochs, Adam optimizer). When applying RS, we use \ud835\udc58= 5, \ud835\udc59= 1, xQuAD with \ud835\udf06= 1, and use agglomerative clustering, based on qualitative observations during pilot studies. We select 1,000 passages per term from the MS MARCO document corpus, so as not to potentially bias our results to our test corpora. 4.2 Evaluation We evaluate the effectiveness of our approach on the TREC Web Track (WT) 2009\u201314 diversity benchmarks [8\u201311, 13], consisting of 300 topics and 1,090 sub-topics in total. Table 1 provides a summary 5It is widely known that Google processes over 3B queries daily, with roughly 15% being completely unique. Table 1: Dataset statistics. # Terms # Subtopics # Pos. Qrels Dataset # Topics per topic (per topic) (per topic) WT09 [8] 50 2.1 243 (4.9) 6,499 (130.0) WT10 [9] 50 2.1 218 (4.4) 9,006 (180.1) WT11 [10] 50 3.4 168 (3.4) 8,378 (167.6) WT12 [11] 50 2.3 195 (3.9) 9,368 (187.4) WT13 [13] 50 3.3 134 (2.7) 9,121 (182.4) WT14 [14] 50 3.3 132 (2.6) 10,629 (212.6) Total 300 2.8 1,090 (3.6) 53,001 (176.7) of these datasets. These benchmarks span two corpora: WT09\u201312 use ClueWeb09-B (50M documents) and WT13\u201314 use ClueWeb12B13 (52M documents). We use the keyword-based \u201ctitle\u201d queries, simulating a setting where the information need is under-specified. To the best of our knowledge, these are the largest and most extensive public benchmarks for evaluating search result diversification. 
We measure system performance with three diversification-aware variants of standard evaluation measures: \ud835\udefc-nDCG@20 [7], ERRIA@20 [5], and NRPB [12]. These are the official task evaluation metrics for WT10\u201314 (WT09 used \ud835\udefc-nDCG@20 and P-IA@20). \ud835\udefcnDCG is a variant of nDCG [23] that accounts for the novelty of topics introduced. We use the default \ud835\udefcparameter (the probability of an incorrect positive judgment) of 0.5 for this measure. ERR-IA is a simple mean over the Expected Reciprocal Rank of each intent, since WT09\u201314 weight the gold query intents uniformly. NRBP (Noveltyand Rank-Biased Precision) is an extension of RBP [35], measuring the average utility gained as a user scans the search results. Metrics were calculated from the official task evaluation script ndeval.6 Furthermore, as these test collections were created before the advent of neural ranking models, we also report the judgment rate among the top 20 results (Judged@20) to ascertain the completeness of the relevance assessment pool in the presence of such neural models. To test the significance of differences, we use paired t-tests with \ud835\udc5d< 0.05, accounting for multiple tests where appropriate with a Bonferroni correction. In some cases, we test for significant equivalences (i.e., that the means are the same). For these tests, we use a two one-sided test (TOST [47]) with \ud835\udc5d< 0.05. Following prior work using a TOST for retrieval effectiveness [31], we set the acceptable equivalence range to \u00b10.01. 4.3 Baselines To put the performance of our method in context, we include several adhoc and diversity baselines. As adhoc baselines, we compare with DPH [2], a lexical model (we found it to perform better than BM25 in pilot studies), Vanilla BERT [30], monoT5 [37], and a ColBERT [25] re-ranker.7 Since the neural models we use have a maximum sequence length, we apply the MaxPassage [16] scoring approach. Passages are constructed using sliding windows of 150 tokens (stride 75). For Vanilla BERT, we trained a model on MS MARCO using the original authors\u2019 released code. For monoT5 and ColBERT, we use versions released by the original authors that were trained on the MS MARCO dataset [3]. This type of zero-shot transfer 6https://trec.nist.gov/data/web/10/ndeval.c 7We only use ColBERT in a re-ranking setting due to the challenge in scaling its space intensive dense retrieval indices to the very large ClueWeb datasets. \ffrom MS MARCO to other datasets has generally shown to be effective [28, 37], and reduces the risk of over-fitting to the test collection. Google Suggestions. We compare with the search suggestions provided by the Google search engine through their public API. Though the precise details of the system are private, public information states that interaction data plays a big role in the generation of their search suggestions [45], meaning that this is a strong baseline for an approach based on a query log. Furthermore, this technique was used extensively in prior search result diversification work as a source of query intents [18, 43]. Note that the suggestions are sensitive to language, geographic location and current trends; we use the suggestions in English for United States (since the TREC assessors were based in the US); we will release a copy of these suggestions for reproducibility. Gold Intents. We also compare with systems that use the \u201cGold\u201d intents provided by the TREC task. 
Note that this is not a realistic system, as these intents represent the evaluation criteria and are not known a priori. Furthermore, the text of these intents are provided in natural language (similar to TREC description queries), unlike the keyword-based queries they elaborate upon. Hence, these Gold intents are often reported to represent a potential upperbound on diversification effectiveness, however, later we will show that intents generated by IntenT5 can actually outperform these Gold intents.8 4.4 Model Variants and Parameter Tuning We aggregate the intents from IntenT5, Google suggestions, and the Gold intents using xQuAD [43] and PM2 [18], representing two strong unsupervised aggregation techniques.9 For all these models, we tune the number of generated intents and the aggregation \ud835\udf06 parameter using a grid search over the remaining collections (e.g., WT09 parameters are tuned in WT10\u201314). We search between 1\u201320 intents (step 1) and \ud835\udf06between 0\u20131 (step 0.1). Neural models rerank the top 100 DPH results. In summary, an initial pool of 100 documents is retrieved using DPH. Intents are then chosen using IntenT5 (or using the baseline methods). The documents are then re-scored for each intent using DPH, Vanilla BERT, monoT5, or ColBERT. The scores are then aggregated using xQuAD or PM2. 5 RESULTS In this section, we provide results for research questions concerning the overall effectiveness of IntentT5 (Section 5.1), the impact of queries appearing in the training data (Section 5.2), the types of under-specification (Section 5.3) and finally the impact on ambiguous queries in particular (Section 5.4). Later in Section 6, we provide a qualitative analysis of the generated queries. 5.1 RQ1: IntenT5 Effectiveness We present the diversification results for WT09\u201314 in Table 2. We generally find that our IntenT5 approach can improve the search result diversity for both lexical and neural models, when aggregating using either PM2 or xQuAD. In fact, there is only one setting (monoT5 scoring with xQuAD aggregation) where diversity is 8For instance, this may be caused by an intent identified by an IntenT5 model resulting in a more effective ranking than the corresponding Gold intent. 9Although \u201clearning-to-diversify\u201d approaches, such as M2Div [21], can be effective, our work focuses on explicit diversification. The explicit setting is advantageous because it allows for a greater degree of model interpretability and transparency with the user. Table 2: Results over the combined TREC WebTrack 2009\u201314 diversity benchmark datasets. \ud835\udefc-nDCG, ERR-IA, and Judged are computed with a rank cutoff of 20. The highest value in each section is listed in bold. PM and xQ indicate the PM2 and xQuAD aggregators, respectively. Statistically significant differences between each value and the corresponding non-diversified baseline are indicated by * (paired t-test, Bonferroni correction, \ud835\udc5d< 0.05). System Agg. \ud835\udefc-nDCG ERR-IA NRBP Judged DPH 0.3969 0.3078 0.2690 62% + IntenT5 PM * 0.4213 * 0.3400 * 0.3053 56% + Google Sug. PM * 0.4232 * 0.3360 * 0.3000 58% + Gold PM * 0.4566 * 0.3629 * 0.3254 53% + IntenT5 xQ 0.4049 0.3213 0.2845 58% + Google Sug. xQ * 0.4203 * 0.3326 0.2963 59% + Gold xQ * 0.4545 * 0.3616 * 0.3230 56% Vanilla BERT 0.3790 0.2824 0.2399 54% + IntenT5 PM * 0.4364 * 0.3475 * 0.3104 60% + Google Sug. 
PM * 0.4110 * 0.3119 0.2689 57% + Gold PM * 0.4214 * 0.3214 * 0.2803 50% + IntenT5 xQ * 0.4328 * 0.3448 * 0.3085 59% + Google Sug. xQ * 0.4120 * 0.3140 * 0.2722 59% + Gold xQ * 0.4228 * 0.3236 * 0.2813 55% monoT5 0.4271 0.3342 0.2943 58% + IntenT5 PM * 0.4510 0.3589 0.3213 58% + Google Sug. PM * 0.4506 0.3567 0.3181 58% + Gold PM * 0.4722 * 0.3777 * 0.3409 58% + IntenT5 xQ 0.4444 0.3549 0.3183 58% + Google Sug. xQ * 0.4492 * 0.3625 * 0.3276 58% + Gold xQ * 0.4574 * 0.3696 * 0.3353 58% ColBERT 0.4271 0.3334 0.2953 57% + IntenT5 PM * 0.4711 * 0.3914 * 0.3616 58% + Google Sug. PM * 0.4561 0.3654 0.3316 57% + Gold PM 0.4548 0.3520 0.3106 51% + IntenT5 xQ * 0.4707 * 0.3890 * 0.3584 63% + Google Sug. xQ * 0.4552 * 0.3638 * 0.3287 61% + Gold xQ * 0.4560 0.3608 0.3226 56% not significantly improved when using IntenT5. The overall bestperforming result uses IntenT5 (ColBERT scoring with PM2 aggregation). These results also significantly outperform the corresponding versions that use Google suggestions and the Gold intents. Similarly, when using Vanilla BERT, IntenT5 also significantly outperforms the model using Google suggestions. For DPH and monoT5, the diversity effectiveness of IntenT5 is similar to that of the Google suggestions; the differences are not statistically significant. However, through equivalence testing using a TOST we find that there is insufficient evidence that the means are equivalent across all evaluation metrics. Curiously, both BERT-based models (Vanilla BERT and ColBERT) are more receptive to the IntenT5 queries than Google suggestions or Gold intents. This suggests that some underlying language models (here, BERT), may benefit more from artificially-generated intents than others. \fTable 3: Diversification performance stratified by the frequency of the query text in ORCAS. Ranges were selected to best approximate 3 even buckets. Statistically significant differences between the unmodified system are indicated by * (paired t-test, Bonferroni correction, \ud835\udc5d< 0.05). \ud835\udefc-nDCG@20 System Agg. 0\u20131 2\u201337 38+ DPH 0.4930 0.3876 0.3048 + IntenT5 PM 0.5217 * 0.4283 0.3091 + Google Sug. PM 0.5026 * 0.4435 0.3204 + Gold PM * 0.5311 * 0.4648 * 0.3704 + IntenT5 xQ 0.4970 * 0.4177 0.2960 + Google Sug. xQ 0.5004 * 0.4379 0.3192 + Gold xQ * 0.5259 * 0.4708 * 0.3640 Vanilla BERT 0.4467 0.3756 0.3112 + IntenT5 PM * 0.5043 * 0.4403 0.3615 + Google Sug. PM 0.4634 0.4186 0.3487 + Gold PM * 0.4831 * 0.4290 0.3492 + IntenT5 xQ * 0.5041 * 0.4308 0.3598 + Google Sug. xQ * 0.4853 0.4027 0.3439 + Gold xQ * 0.4770 * 0.4362 0.3531 monoT5 0.5080 0.4331 0.3364 + IntenT5 PM 0.5262 * 0.4707 0.3530 + Google Sug. PM 0.5305 0.4598 0.3577 + Gold PM * 0.5421 0.4712 * 0.3997 + IntenT5 xQ 0.5180 * 0.4648 0.3474 + Google Sug. xQ 0.5217 0.4509 * 0.3714 + Gold xQ 0.5270 0.4657 * 0.3762 ColBERT 0.4938 0.4241 0.3600 + IntenT5 PM * 0.5484 * 0.4934 0.3685 + Google Sug. PM 0.5067 0.4625 0.3968 + Gold PM 0.5254 0.4726 0.3635 + IntenT5 xQ * 0.5440 * 0.4868 0.3783 + Google Sug. xQ 0.5141 * 0.4631 0.3857 + Gold xQ 0.5121 * 0.4874 0.3669 (Count) (105) (96) (99) These results provide a clear answer to RQ1: the intents generated from IntenT5 can be used to significantly improve the diversity of search results. Further, they can also, surprisingly, outperform Google suggestions and Gold intents. 
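For reference, the paired significance test and the TOST equivalence test used throughout these results (Section 4.2) can be sketched as follows. This is a generic illustration over per-query metric scores for two systems, not the exact evaluation script; the function names and the Bonferroni handling are ours.

```python
# Illustrative sketch: paired t-test (with Bonferroni-corrected alpha) and a
# two one-sided test (TOST) for equivalence within +/- 0.01, over per-query
# metric scores (e.g., alpha-nDCG@20) of two systems.
import numpy as np
from scipy import stats

def significantly_different(scores_a, scores_b, alpha=0.05, n_comparisons=1):
    """Paired t-test; the significance threshold is divided by the number of comparisons."""
    _, p = stats.ttest_rel(scores_a, scores_b)
    return p < alpha / n_comparisons

def significantly_equivalent(scores_a, scores_b, bound=0.01, alpha=0.05):
    """TOST: conclude the paired means are equivalent within +/- bound."""
    diff = np.asarray(scores_a) - np.asarray(scores_b)
    n = len(diff)
    se = diff.std(ddof=1) / np.sqrt(n)
    p_lower = stats.t.sf((diff.mean() + bound) / se, df=n - 1)   # H0: mean diff <= -bound
    p_upper = stats.t.cdf((diff.mean() - bound) / se, df=n - 1)  # H0: mean diff >= +bound
    return max(p_lower, p_upper) < alpha  # both one-sided nulls must be rejected
```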
5.2 RQ2: Effect of Queries in Training Data It is possible that the IntenT5 model simply memorizes the data that is present in the training dataset, rather than utilizing the language characteristics learned in the pre-training process to generalize to new queries. To investigate this, we stratify the dataset into three roughly equal-sized buckets representing the frequency that the query appears in ORCAS. We use simple case-insensitive string matching, and count matches if they appear anywhere in the text (not just at the start of the text). We find that roughly one third of the WebTrack diversity queries either do not appear at all in ORCAS, or only appear once. For these queries, IntenT5 is forced to generalize. The next bucket (2\u201337 occurrences in ORCAS) contains roughly the next third of the WebTrack queries, and 38 or more queries forms the final bucket. We present the results of this experiment in Table 3. Here, we find that our IntenT5 model excels at cases where it either needs to generalize (first bucket) or where memorization is manageable (second bucket); in 11 of the 16 cases, IntenT5 scores higher than the Google suggestions. Further, IntenT5 boasts the overall highest effectiveness in both buckets: 0.5484 and 0.4934, respectively (for ColBERT + IntenT5 + PM2). In the final case, where there are numerous occurrences in the training data, IntenT5 never significantly outperforms the baseline system. Unsurprisingly, Google suggestions score higher than IntenT5 for these queries (6 out of 8 cases). Since frequent queries in ORCAS are likely frequent in general, Google suggestions can exploit frequency information from their vast interaction logs (which are absent from ORCAS). To gain more insights into the effects of training data frequency, we qualitatively evaluate examples of generated intents. For instance, the query \u201cgmat prep classes\u201d (which only occurs once in ORCAS, as a verbatim match), generates intents such as \u201crequirements\u201d, \u201cregistration\u201d, and \u201ctraining\u201d. Although these are not perfect matches with the Gold intents (companies that offer these courses, practice exams, tips, similar tests, and two navigational intents), they are clearly preferable to the Google suggestions, which focus on specific locations (e.g., \u201cnear me\u201d, \u201conline\u201d, \u201cchicago\u201d, etc.), and demonstrate the ability for the IntenT5 model to generalize. For the query \u201cused car parts\u201d, which occurs in ORCAS 13 times, IntenT5 generates some of the queries found in ORCAS (e.g., \u201cnear me\u201d) but not others (e.g., \u201ccatalog\u201d). For the query \u201ctoilets\u201d, which occurs 556 times in ORCAS, IntenT5 again generates some queries present in the training data (e.g., \u201creviews\u201d) and others that are not (e.g., \u201cinstallation cost\u201d). These results answer RQ2: IntenT5 effectively generalizes beyond what was seen in the training data. However, it can struggle with cases that occur frequently. This suggests that an ensemble approach may be beneficial, where intents for infrequent queries are generated from IntenT5, and intents for frequent queries are mined from interaction data (if available). We leave this to future work. 5.3 RQ3: Types of Under-specification Recall that under-specified queries can be considered multi-faceted or ambiguous. To answer RQ3, we investigate the performance of IntenT5 on different types of queries, as indicated by the TREC labels. 
Note that WT13\u201314 also include a total of 49 queries that are fully specified (\u201csingle\u201d, e.g., \u201creviews of les miserables\u201d). Table 4 provides these results. We find that IntenT5 excels at handling faceted queries, often yielding significant gains. When it comes to ambiguous queries, however, IntenT5 rarely significantly improves upon the baseline. Note that all intent strategies, including when using the Gold intents, struggle with ambiguous queries. However, we acknowledge that the ambiguous query set is rather small (only 62 queries). This could motivate the creation of a larger ambiguous web search ranking evaluation dataset in the future to allow further study of this interesting and challenging problem. Finally, we notice that IntenT5 can also improve the performance of the fully-specified queries, most notably for Vanilla BERT and ColBERT where the non-diversified models otherwise significantly underperform DPH. \fTable 4: Diversification performance by query type. Significant differences between the unmodified system are indicated by * (paired t-test, Bonferroni correction, \ud835\udc5d< 0.05). \ud835\udefc-nDCG@20 System Agg. Faceted Ambiguous Single DPH 0.3804 0.2525 0.6399 + IntenT5 PM * 0.4093 0.2576 0.6711 + Google Sug. PM * 0.4130 0.2629 0.6621 + Gold PM * 0.4626 0.2908 0.6399 + IntenT5 xQ 0.3922 0.2433 0.6550 + Google Sug. xQ * 0.4070 0.2724 0.6553 + Gold xQ * 0.4565 0.2997 0.6399 Vanilla BERT 0.3809 0.2501 0.5323 + IntenT5 PM * 0.4244 0.3087 * 0.6415 + Google Sug. PM * 0.4149 0.2993 0.5353 + Gold PM * 0.4392 0.2831 0.5253 + IntenT5 xQ * 0.4214 * 0.2997 * 0.6421 + Google Sug. xQ 0.4086 0.2970 0.5681 + Gold xQ * 0.4409 0.2850 0.5253 monoT5 0.4184 0.3012 0.6172 + IntenT5 PM 0.4445 0.3165 0.6429 + Google Sug. PM 0.4405 0.3342 0.6340 + Gold PM * 0.4802 0.3307 0.6177 + IntenT5 xQ 0.4363 0.3214 0.6285 + Google Sug. xQ * 0.4385 * 0.3327 0.6353 + Gold xQ * 0.4607 0.3182 0.6177 ColBERT 0.4237 0.3267 0.5655 + IntenT5 PM * 0.4629 0.3246 * 0.6848 + Google Sug. PM 0.4558 0.3198 0.6268 + Gold PM * 0.4740 0.3068 0.5655 + IntenT5 xQ * 0.4596 0.3355 * 0.6816 + Google Sug. xQ * 0.4536 0.3355 0.6102 + Gold xQ * 0.4775 0.3016 0.5655 (Count) (189) (62) (49) Curiously, we do not observe similar behavior for monoT5, suggesting that this behavior may depend on the underlying language model (BERT vs. T5). These results answer RQ3: IntenT5 improves the diversity of multi-faceted queries and even improves ColBERT\u2019s performance for fully-specified queries. However, like alternative approaches, it struggles to generate effective intents for ambiguous queries. 5.4 RQ4: Handling Ambiguous Queries Given that ambiguous queries appear to be difficult to handle, we investigate two proposed approaches for overcoming this problem: Distributional Causal Language Modeling (DCLM, introduced in Section 3.2) and Representation Swapping (RS, introduced in Section 3.3). Since monoT5 and ColBERT most effectively use IntenT5 on ambiguous queries, we focus our investigation on these models. Table 5 presents the effectiveness of these approaches, stratified by query type. In general, we observe only marginal differences by using combinations of these approaches. The most effective combination for ambiguous queries (monoT5 + IntenT5 + DCLM Table 5: Diversification performance by query type when using distributional causal language modeling (DCLM) and representation swapping (RS). 
Statistically significant differences between the unmodified IntenT5 approach are indicated by * (paired t-test, Bonferroni correction, \ud835\udc5d< 0.05). \ud835\udefc-nDCG@20 System Agg. Faceted Ambiguous Single monoT5 + IntenT5 PM 0.4445 0.3165 0.6429 + DCLM PM 0.4424 0.3219 0.6628 + RS PM 0.4481 0.3279 0.6297 + DCLM + RS PM 0.4457 0.3213 0.6173 monoT5 + IntenT5 xQ 0.4363 0.3214 0.6285 + DCLM xQ 0.4333 0.3421 0.6469 + RS xQ 0.4341 0.3348 0.6404 + DCLM + RS xQ 0.4324 0.3129 0.6249 ColBERT + IntenT5 PM 0.4629 0.3246 0.6848 + DCLM PM * 0.4339 0.3255 0.6584 + RS PM 0.4645 0.3185 0.6848 + DCLM + RS PM 0.4420 0.3174 0.6356 ColBERT + IntenT5 xQ 0.4596 0.3355 0.6816 + DCLM xQ 0.4469 0.3260 0.6795 + RS xQ 0.4564 0.3263 0.6816 + DCLM + RS xQ 0.4478 0.3250 0.6795 + xQuAD) is not significantly more effective than the monoT5 + IntenT5 + xQuAD. Digging deeper into the queries generated for each approach, we find that there are indeed cases where the generated intents using DCLM and RS are substantially more diverse than the base IntenT5 model. The top intents generated for the query penguins by IntenT5 are meaning, history, habitat, information, and definition; in fact, all of the top 20 intents either relate to the animal (rather than the hockey team) or are very general. Meanwhile, DCLM overcomes many of the general intents, but the queries skew heavily toward the hockey team: schedule, website, wikipedia, highlights, and merchandise. This problem is addressed when applying both DCLM and RS, which generates: wikipedia, tickets, population, schedule, and website; it covers both senses. Despite the clear benefits for some queries, the approach can cause drift on other queries, and sometimes does not pick up on important intents. For instance, the intents generated for the query iron with IntenT5 + DCLM + RS focus heavily on the nutrient sense, and do not identify the element or appliance sense. To answer RQ4, although approaches like DCLM and RS can improve the diversity in isolated cases, there is insufficient evidence that these approaches can improve ranking diversity overall. We also find no significant differences in effectiveness between the DCLM and RS approaches. 6 ANALYSIS One advantage of performing search result diversification explicitly is that the generated intents are expressed in natural language and can be interpreted. In Table 6, we present the top 5 intents generated by our models, as well as the top query suggestions from Google. For the running example of penguins, we see that Google \fTable 6: Top intents generated using our IntenT5 model and Google search suggestions. 
IntenT5 IntenT5 + DCLM + RS Google Suggestions penguins meaning wikipedia of madagascar history tickets schedule habitat population score information schedule hockey definition website game mitchell college football tuition baseball meaning football covid vaccine address athletics athletics basketball admissions basketball website bookstore of business wendelton college address tuition (none) football athletics website bookstore tuition faculty application address electoral college meaning wikipedia map definition meaning definition florida definition map 2020 history articles definition government michigan election college votes solar panels meaning installation for sale explained installed for home calculator installation cost cost installation on sale for rv home depot for home for house condos in florida for sale rentals on the beach meaning beachfront for rent near me for sale on the beach for sale reviews near me keys by owner reservations keys for sale condos in new york meaning for sale for rent near me to rent zillow chicago address manhattan florida weather state for sale nyc ny identifies two senses (an animated film and the hockey team) while our model can identify the animal and the hockey team. For the query mitchell college, our model identifies several salient facets, as do the Google search suggestions. Note that this is not due to memorization; the only queries with the text mitchell college in the training collection are william mitchell college of law and william mitchell college of law ranking. This quality is appealing because it shows that the model is capable of generalizing beyond its training data. On the other hand, our model can be prone to constructing information, such as for the (fictitious) wendleton college. We see that these generalizations are not baked entirely into the prompt of college, however, given that the prefix electoral college (a process of the United States government) does not generate similar queries. These results provide qualitative evidence for our observations in Section 5.2; IntenT5 is able to effectively generalize beyond what is seen in the training data. However, we acknowledge that this quality may be undesirable in some circumstances. For the query solar panels, we see that our model can generate multi-word intents (which can be beneficial to neural models [16]), but can sometimes get stuck on common prefixes (e.g., \u201cinstall\u201d). We also find that our model can struggle with providing valuable recommendations based on a specified location. Although IntenT5 with DCLM and RS can predict salient intents like beachfront and nyc for the queries condos in florida and condos in new york, respectively, the base IntenT5 model relies primarily on generic intents, or even suggests alternative locations. Meanwhile, the Google suggestions are able to consistently provide location-specific intents. Overall, this analysis shows that IntenT5 generates intents that exhibit awareness of the query at hand, and that the DCLM and RS approaches can change the output of the model substantially. The intents are often comparable with those provided by a commercial search engine from interaction data. 7" + }, + { + "url": "http://arxiv.org/abs/2103.02280v2", + "title": "Simplified Data Wrangling with ir_datasets", + "abstract": "Managing the data for Information Retrieval (IR) experiments can be\nchallenging. 
Dataset documentation is scattered across the Internet and once\none obtains a copy of the data, there are numerous different data formats to\nwork with. Even basic formats can have subtle dataset-specific nuances that\nneed to be considered for proper use. To help mitigate these challenges, we\nintroduce a new robust and lightweight tool (ir_datasets) for acquiring,\nmanaging, and performing typical operations over datasets used in IR. We\nprimarily focus on textual datasets used for ad-hoc search. This tool provides\nboth a Python and command line interface to numerous IR datasets and\nbenchmarks. To our knowledge, this is the most extensive tool of its kind.\nIntegrations with popular IR indexing and experimentation toolkits demonstrate\nthe tool's utility. We also provide documentation of these datasets through the\nir_datasets catalog: https://ir-datasets.com/. The catalog acts as a hub for\ninformation on datasets used in IR, providing core information about what data\neach benchmark provides as well as links to more detailed information. We\nwelcome community contributions and intend to continue to maintain and grow\nthis tool.", + "authors": "Sean MacAvaney, Andrew Yates, Sergey Feldman, Doug Downey, Arman Cohan, Nazli Goharian", + "published": "2021-03-03", + "updated": "2021-05-10", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.CL" + ], + "main_content": "INTRODUCTION The datasets and benchmarks we use are a cornerstone of Information Retrieval (IR) research. Unfortunately, many of these datasets remain frustrating to find and manage. Once obtained, the variety of data formats can be a challenge to work with. Even data formats that seem simple can hide subtle problems. For example, the TSV files used by the MS-MARCO [66] has a double-encoding problem that affects special characters in roughly 20% of documents. Recently, several tools have begun to incorporate automatic dataset acquisition. These include Capreolus [93], PyTerrier [58] and OpenNIR [55]. These reduce the user burden of finding the dataset source files and figuring out how to parse them correctly. However, the dataset coverage of each individually is patchy, as shown in Table 1. Further, using the dataset interfaces outside of these tools can be difficult, as they are often tightly coupled with the tool\u2019s primary functionality. Finally, each of these tools keep their own copy of data, leading to wasted storage. Thus, it is advantageous to have a lightweight tool that focuses on data acquisition, management, and typical operations like lookups. Many tools rely on manual instructions for downloading, extracting, and processing datasets.1 We believe providing a tool to automatically perform as much of this work as possible is clearly preferable to this approach since it ensures proper processing of data. A common automatic tool has additional advantages, such as reducing redundant copies of datasets and easily allowing tools to be run on alternative or custom datasets with little effort. Anserini [91] and its Python interface Pyserini [53] use a hybrid approach by distributing copies of queries and relevance judgments in the package itself and primarily relying on manual instructions for document processing. Sometimes Anserini provides document content via downloadable indices. Other dataset distribution tools are not well-suited for IR tasks. 
For instance, packages like HuggingFace Datasets [90] and TensorFlow Datasets [3] take a record-centric approach that is not well-suited for relational data like documents, queries, and querydocument relevance assessments. Furthermore, IR work involves additional important use cases when working with datasets, such as efficiently looking up a document by ID, for which the designs of prior libraries is not conducive. Dataset schemata, such as DCAT and schema.org, provide a common format machine-readable dataset documentation, which could be supported in the future. 1Such as https://github.com/castorini/anserini/blob/master/docs/experimentsmsmarco-passage.md, https://github.com/thunlp/OpenMatch/blob/master/docs/ experiments-msmarco.md, https://github.com/microsoft/ANCE#data-download, etc. arXiv:2103.02280v2 [cs.IR] 10 May 2021 \fSIGIR \u201921, July 11\u201315, 2021, Virtual Event, Canada MacAvaney, et al. Table 1: Dataset support in Capreolus [93] (Cap.), PyTerrier [58] (PT), OpenNIR [55] (ONIR), Anserini [91] (Ans.), and ir_datasets (IRDS). \" indicates built-in support that automatically provides documents, queries, and query relevance judgments (i.e., as an automatic download). \u2662indicates support for a dataset with some manual effort (e.g., specifying the document parser and settings to use). Datasets marked with * have licenses that require manual effort (e.g., requesting from NIST), and therefore can at most have \u2662. Dataset/Benchmark Cap. PT ONIR Ans. IRDS News NYT* [57, 75] \u2662 \u2662 \u2662 \u2662 TREC Arabic* [33\u201335] \u2662 \u2662 \u2662 \u2662 TREC Common Core* [7] \u2662 \u2662 \u2662 TREC Mandarin* [73, 78, 89] \u2662 \u2662 \u2662 \u2662 TREC News* [79, 80] \u2662 \u2662 \u2662 TREC Robust* [83, 85] \u2662 \u2662 \u2662 \u2662 \u2662 TREC Spanish* [36, 37, 74] \u2662 \u2662 \u2662 \u2662 Question Answering ANTIQUE [38] \" \u2662 \" \u2662 \" MS-MARCO Doc. [66] \" \u2662 \" \" MS-MARCO Pass. [66] \" \" \" \" \" MS-MARCO QnA [66] \" Natural Questions [48, 50] \u2662 \" TREC CAR [28, 29] \" \u2662 \" TREC DL [25, 26] \" \" \" \" \" TREC DL-Hard [59] \u2662 \u2662 \u2662 \" TriviaQA [47, 48] \u2662 \" Scientific, Bio-medical, Health Cranfield [1] \" CLEF eHealth* [64, 94] \u2662 \u2662 \u2662 NFCorpus [9] \" \" TREC CDS [71, 72, 77] \" TREC COVID [84, 88] \" \" \" \" \" TREC Genomics [40\u201343] \" TREC Health Misinfo.* [4] \u2662 \u2662 \u2662 TREC PM [68\u201370] \u2662 \" TripClick* [67] \u2662 \u2662 \u2662 Vaswani [2] \" \" Web NTCIR WWW* [54, 62] \u2662 \u2662 \u2662 ORCAS [21] \u2662 \u2662 \u2662 \" TREC Million Query* [5, 6, 11] \u2662 \u2662 \u2662 TREC Terabyte* [10, 12, 13] \u2662 \u2662 \u2662 TREC Web* [14\u201317, 19, 20, 22\u201324] \u2662 \u2662 \u2662 Other/Miscellaneous BEIR [8, 9, 18, 30, 39, 44, 50, 60, 66, 81, 82, 84, 86\u201388, 92] \" CodeSearchNet [45] \" \" TREC Microblog [51, 52, 76] \u2662 \" WikIR [31, 32] \" \" In this work, we present ir_datasets, a tool to aid IR researchers in the discovery, acquisition, and management of a variety of IR datasets. The tool provides a simple and lightweight Python and command line interface (see Figure 1) allowing users to iterate the documents, queries, relevance assessments, and other relations provided by a dataset. This is useful for indexing, retrieval, and evaluation of ad-hoc retrieval systems. A document lookup API provides fast access to source documents, which is useful for recent text-based ranking models, such as those that use BERT [27]. 
PyTerrier [58], Capreolus [93], and OpenNIR [55] recently added support for ir_datasets, greatly expanding the number of datasets they support, and other tools like Anserini [91] can utilize our tool using the command line interface. Finally, the ir_datasets catalog2 acts as a documentation hub, making it easy to find datasets and learn about their characteristics. We intend to continue to backfill prior datasets and add support for new datasets as they are released. The package is open source,3 and we welcome contributions. 2 IR_DATASETS ir_datasets is a lightweight tool focused on providing easy access to a variety of IR datasets and benchmarks. It provides both a Python and command line interface (see Figure 1), allowing it to be easily used by a variety of toolkits, or simply for ad-hoc data exploration. To achieve these goals, ir_datasets adheres to several design principles. First, to stay lightweight, the tool is focused on core dataset operations, such as downloading content, iterating through queries or documents, and performing document lookups by ID. This policy explicitly leaves functionality like full-text indexing or neural network processing to other tools. Further, to be practical in a variety of environments, ir_datasets attempts to keep a low memory footprint by using inexpensive data structures and iterators. Finally, in order to leave maximum flexibility to the tool\u2019s users, we attempt to perform \u201cjust enough\u201d processing of the data to account for various formats, while not removing information that is potentially useful. We hope that this commitment to being lightweight and flexible makes ir_datasets an attractive tool to jump-start or enhance other tools for doing IR research. 2.1 Dataset Identifiers Since no standard identifiers (IDs) exist for datasets in IR, we propose hierarchical dataset IDs. These IDs allow datasets to be looked up in the Python API, command line interface, and online documentation. IDs are usually in the format of corpus/benchmark. For instance, the TREC COVID [84] benchmark uses the CORD-19 [88] document corpus and is given an ID of cord19/trec-covid. In this case, cord19 provides documents, while cord19/trec-covid provides queries and relevance judgments for those documents. 2.2 Simple & Memorable Python API A dataset object can be obtained simply by calling: import ir_datasets ds = ir_datasets.load(\"dataset-id\") Each dataset objects provides access to a number of entity types (see Table 2). Dataset objects are stateless; they simply define the 2https://ir-datasets.com/ 3https://github.com/allenai/ir_datasets/ \fSimplified Data Wrangling with ir_datasets SIGIR \u201921, July 11\u201315, 2021, Virtual Event, Canada Table 2: Entity types in ir_datasets. Entity Type Python API Example Description docs ds.docs_iter() A document (or passage for passage retrieval). Contains a doc_id and one or more text fields. queries ds.queries_iter() A query (topic). Contains a query_id and one or more text fields. qrels ds.qrels_iter() A query relevance assessment. Maps a query_id and doc_id to a relevance score or other human assessments. scoreddocs ds.scoreddocs_iter() (uncommon) A scored document (akin to a line from a run file). Maps a query_id and doc_id to a ranking score from a system. Available for datasets that provide an initial ranking (for testing reranking systems). docpairs ds.docpairs_iter() (uncommon) A pair of documents (useful for training). Maps a query_id to two or more doc_ids. 
Available for datasets that provide suggested training pairs.

    import ir_datasets
    dataset = ir_datasets.load('msmarco-passage/train')

    for doc in dataset.docs_iter():  # documents
        print(doc)
    # GenericDoc(doc_id='0', text='The presence of commun...
    # GenericDoc(doc_id='1', text='The Manhattan Project...
    # ...

    for query in dataset.queries_iter():  # queries
        print(query)
    # GenericQuery(query_id='121352', text='define extreme')
    # GenericQuery(query_id='634306', text='what does chatt...
    # ...

    for qrel in dataset.qrels_iter():  # relevance judgments
        print(qrel)
    # TrecQrel(query_id='1185869', doc_id='0', relevance=1)
    # TrecQrel(query_id='1185868', doc_id='16', relevance=1)
    # ...

    # Look up documents by ID
    docs_store = dataset.docs_store()
    docs_store.get("16")
    # GenericDoc(doc_id='16', text='The approach is based...

    $ ir_datasets export msmarco-passage docs
    # 0  The presence of communication amid scientific...
    # 1  The Manhattan Project and its atomic bomb hel...
    # ...

    $ ir_datasets export msmarco-passage/train queries
    # 121352  define extreme
    # 634306  what does chattel mean on credit history
    # ...

    $ ir_datasets export msmarco-passage/train qrels
    # 1185869 0 0 1
    # 1185868 0 16 1
    # ...

    # Look up documents by ID
    $ ir_datasets lookup msmarco-passage/train 16
    # 16  The approach is based on a theory of justice...

Figure 1: Parallel examples of common use cases in ir_datasets using Python and the command line interface.

capabilities and the procedures for obtaining and processing the data. Most ad-hoc retrieval datasets consist of 3 main entity types (see Table 2): documents (docs), queries/topics (queries), and query relevance assessments (qrels). In the spirit of being simple, lightweight, and low-memory, entities are provided as namedtuple instances from iterators. For each entity type provided by a particular dataset, there is a corresponding ds.{entity}_iter() function that returns an iterator (e.g., ds.docs_iter()). Since the particular attributes returned for an entity differ between datasets (e.g., some provide only an ID and text for a document, while others also include a title field), type definitions can be accessed via ds.{entity}_cls(). The type definitions include type annotations for each field and try to adhere to conventions when possible (e.g., the ID of documents is the first field and named doc_id). The iterator approach is versatile. In some cases, it is only necessary to operate over a single entity at a time, minimizing the memory overhead. In other cases, particularly in neural networks, operations happen in batches, which can also be accomplished trivially through an iterator. And finally, in cases where all data needs to be loaded, all entities can easily be loaded, e.g., by passing the iterator into the Python list constructor or the dataframe constructor in Pandas [65].
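As a small illustration of these iterator patterns (the dataset ID and batch size are arbitrary examples), documents can be streamed in fixed-size batches and qrels loaded into a DataFrame as follows:

```python
# Usage illustration of the iterator-based API described above.
import itertools
import pandas as pd
import ir_datasets

dataset = ir_datasets.load("msmarco-passage/train")

def batched(iterable, size=256):
    """Yield lists of `size` items without loading the whole collection in memory."""
    it = iter(iterable)
    while True:
        batch = list(itertools.islice(it, size))
        if not batch:
            return
        yield batch

for batch in batched(dataset.docs_iter(), size=256):
    pass  # e.g., encode the batch with a neural model

# load all qrels at once; namedtuple fields become DataFrame columns
qrels_df = pd.DataFrame(dataset.qrels_iter())
```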
Some datasets provide other entity types, such as sample document rankings or training sequences. For the former, we have a scoreddocs entity type, which by default is a tuple containing a query ID, a document ID, and a score. For the latter, we have a docpairs entity, which consists of a query and a pair of contrasting document IDs (e.g., one relevant and one non-relevant). 2.3 Command Line Interface ir_datasets also provides a Command Line Interface (CLI) for performing basic operations over supported datasets. This is helpful for integration with tools not written in Python, or simply for ad-hoc data exploration. The primary operations of the CLI are export (corresponding to Python\u2019s dataset.*_iter() functions) and lookup (corresponding to Python\u2019s docstore.get_many_iter()). Examples of these operations are shown in right-hand side of Figure 1. The command line interface supports multiple output formats, including TSV and JSON lines. The output fields can also be specified, if only certain data is desired. 2.4 Data Acquisition When possible, ir_datasets downloads content automatically from the original public sources as needed. In cases where a data usage agreement exists, the user is notified before the file is downloaded. The download process is robust; it verifies the integrity of the downloaded content via a hash and is resilient to interrupted downloads by re-issuing the request if the connection is broken (using Range HTTP requests, if supported by the server). Further, \fSIGIR \u201921, July 11\u201315, 2021, Virtual Event, Canada MacAvaney, et al. the access to and integrity of downloadable content is automatically checked periodically using a continuous integration job so that if access to some resources are lost (e.g., a file is moved) the problem can be quickly investigated and fixed. There are nearly 350 downloadable files supporting the current datasets in ir_datasets, each validated weekly. Some data are not publicly available. For instance, due to its size, the ClueWeb 2009 and 2012 collections (used for tasks like the TREC WebTrack and NTCIR WWW tasks) are obtained via hard drives. Other datasets, like the Arabic Newswire collection (used for the TREC Arabic tasks) contain copyrighted material and are only available with a usage agreement and subscription to the Linguistic Data Consortium. In these cases, the user is presented with instructions on how to acquire the dataset and where to put it. Once acquired by the user, ir_datasets will take care of any remaining processing. There are currently 12 document collections that require a manual process to acquire. 2.5 Supported datasets ir_datasets supports a wide variety of datasets (see Table 1). These include some of the most popular evaluation benchmarks (e.g., TREC Robust [83]), large-scale shallow datasets (e.g., MSMARCO [66]), biomedical datasets (e.g., TREC CDS [71, 72, 77]), multiand cross-lingual datasets (e.g., TREC Arabic [33, 34]), a content-based weak supervision dataset (NYT [57]), a large-scale click dataset (ORCAS [21]), and a ranking benchmark suite (BEIR [81]). To our knowledge, this represents the largest collection and variety of IR datasets supported by any tool. 
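The robust download behaviour described in Section 2.4 can be approximated with a short, generic helper like the one below (ours, for illustration; the package's internal downloader differs): resume interrupted transfers with an HTTP Range request and verify an expected hash afterwards.

```python
# Generic sketch of an integrity-checked, resumable download.
import hashlib
import os
import requests

def robust_download(url, path, expected_sha256, chunk=1 << 20, max_retries=10):
    for _ in range(max_retries):
        resume_at = os.path.getsize(path) if os.path.exists(path) else 0
        headers = {"Range": f"bytes={resume_at}-"} if resume_at else {}
        try:
            with requests.get(url, headers=headers, stream=True, timeout=30) as r:
                r.raise_for_status()
                mode = "ab" if resume_at and r.status_code == 206 else "wb"
                with open(path, mode) as f:
                    for block in r.iter_content(chunk):
                        f.write(block)
            break  # completed without the connection dropping
        except requests.exceptions.ConnectionError:
            continue  # retry, resuming from the bytes already on disk
    else:
        raise IOError(f"could not download {url}")
    digest = hashlib.sha256(open(path, "rb").read()).hexdigest()
    if digest != expected_sha256:
        raise IOError(f"integrity check failed for {path}")
```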
To facilitate experiments with custom datasets, the Python API provides an easy mechanism to build a dataset object from files that use simple data formats: ds = ir_datasets.create_dataset(docs_tsv=\"path/docs.tsv\", queries_tsv=\"path/queries.tsv\", qrels_trec=\"path/qrels\") 2.6 Document lookups It is a common task to look up documents by their ID. For instance, when training or running a neural IR model, it is often necessary to fetch the text of the current document to perform processing. Another example would be a researcher who is looking into cases in which their model fails may want to see the text of the offending documents. One option is to load all documents into an in-memory hashmap. This may be appropriate in some cases, such a long-running process where the large upfront cost is negligible and memory is plentiful (enough for the entire collection). Building an in-memory hashmap for a collection is trivial with the Python interface: doc_map = {doc.doc_id: doc for doc in dataset.docs_iter()} To support other cases, ir_datasets provides a docs_store API that simplifies the process of looking up documents from disk. This API supports fetching individual or multiple documents by their ID: docs_store = dataset.docs_store() docs_store.get_many(['D1', 'D2']) # {'D1': GenericDoc('D1', ...), 'D2': GenericDoc('D2', ...)} it = docs_store.get_many_iter(['D1', 'D2']) # An iterator of D1 and D2 (order not guaranteed) Table 3: Document lookup benchmarks on small datasets. Time/query System HDD SSD Warm Size msmarco-passage/trec-dl-2019 (avg. 949 docs/query) ir_datasets 2.34 s 66 ms 7 ms 2.8 GB MongoDB 3.62 s 130 ms 14 ms 2.7 GB SQLite 3.72 s 94 ms 27 ms 4.1 GB Pyserini 2.34 s 85 ms 51 ms 2.4 GB PyTerrier 3.40 s 138 ms 68 ms 2.3 GB cord19/fulltext/trec-covid (avg. 1,386 docs/query) ir_datasets 1.19 s 0.11 s 36 ms 1.3 GB MongoDB 3.65 s 0.19 s 65 ms 1.8 GB SQLite 5.99 s 0.19 s 50 ms 2.8 GB Pyserini 2.05 s 0.32 s 51 ms 1.5 GB PyTerrier 3.72 s 1.70 s 1,620 ms 4.2 GB Table 4: Document lookup benchmarks on large datasets. Storage costs are listed as space beyond the source files. Time/query Strategy HDD Warm Size clueweb12/trec-web-2014 (avg. 289 docs/query) ir_datasets 44.4 s 14 ms 4.5 GB (w/o checkpoints) 369.4 s 14 ms 0.3 GB Pyserini 19.7 s 1,210 ms 6,041.5 GB tweets2013-ia/trec-mb-2013 (avg. 1,188 docs/query) ir_datasets 23.3 s 24 ms 120 GB Pyserini 17.6 s 115 ms 323 GB The implementation of docs_store() varies based on the dataset. For many small datasets (those with up to a few million documents), we build a specialized lookup structure for the entire collection on disk as needed. A specialized structure was built for this package to provide a good trade-off between lookup speed and storage costs. All documents are compressed using lz4 and stored in sequence. A separate sorted document ID and corresponding index offset structure is also built on disk. Although simple, we found that this structure enables lookups that exceed the performance of leading indexes and databases (see Table 3). In this experiment, we used the metadata lookup functionality of Anserini [91] and Terrier [63] and key-value storage with SQLite and MongoDB. The average duration was computed per query for TREC DL 2019 passage task [26] (with the official set of reranking documents), and for TREC COVID complete [84] (using the judged documents). 
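The lookup structure described above can be illustrated with a small sketch (ours, simplified): documents are lz4-compressed and written sequentially, while a sorted (doc_id, offset, length) index supports binary-search lookups. The real implementation stores the ID/offset structure on disk and handles many document formats; this version keeps the index in memory purely for illustration.

```python
# Simplified sketch of a compressed sequential document store with a sorted ID index.
import bisect
import lz4.frame

class DocStore:
    def __init__(self, path):
        self.path = path
        self.index = []              # sorted list of (doc_id, offset, length)

    def build(self, docs):           # docs: iterable of objects with .doc_id and .text
        with open(self.path, "wb") as f:
            for doc in sorted(docs, key=lambda d: d.doc_id):
                blob = lz4.frame.compress(doc.text.encode("utf-8"))
                self.index.append((doc.doc_id, f.tell(), len(blob)))
                f.write(blob)

    def get(self, doc_id):
        i = bisect.bisect_left(self.index, (doc_id,))   # binary search on doc_id
        if i == len(self.index) or self.index[i][0] != doc_id:
            raise KeyError(doc_id)
        _, offset, length = self.index[i]
        with open(self.path, "rb") as f:
            f.seek(offset)
            return lz4.frame.decompress(f.read(length)).decode("utf-8")
```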
We also find that the storage cost is reasonable, with a total storage size comparable to MongoDB for the MS-MARCO passage collection and smaller than all others for the CORD19 collection. For large collections, it is impractical and undesirable to make a copy of all documents. For instance, the ClueWeb09 and ClueWeb12 collections (for TREC Web Track) are several TB in size, even when \fSimplified Data Wrangling with ir_datasets SIGIR \u201921, July 11\u201315, 2021, Virtual Event, Canada heavily compressed. Luckily, for these datasets, their directory structure mimics the structure of the document IDs, which allows the desired source file containing a given document ID to be easily identified. To speed up lookups within a given file, we use zlib-state 4 to take periodic checkpoints of the zlib decoding state of the source files. This eliminates the need to read all the source file contents up to the desired document and greatly speeds up lookups of documents that appear late in the source files. The pre-built checkpoints are automatically downloaded and used when appropriate. Furthermore, we cache fetched documents on disk for even faster subsequent lookups. Different approaches are taken for other large collections, such as Tweets2013-ia [76] (for the TREC Microblog task [51, 52]). See Table 4 for a comparison between document lookup times using ir_datasets and Pyserini (from stored document source). Even though ir_datasets is slower than Pyserini on the first lookup, the cache greatly speeds up subsequent fetches (see \u201cWarm\u201d). Since experiments in neural IR frequently only work with a small subset of documents, this is very beneficial for these pipelines. We also observe that the checkpoint files for ClueWeb12 speed up lookups considerably, without adding much overhead in terms of storage; since Anserini keeps a copy of all documents, it accumulates around 6TB of storage overhead, compared to 4.5GB using ir_datasets. Note that the other approaches explored in Table 1 would accumulate similar storage overheads, as they also copy the data. Tweets2013-ia accumulates considerable storage costs, as the source hierarchy is not conducive to document lookups. In this case, ir_datasets builds an ID-based lookup file hierarchy. 2.7 Fancy slicing In many cases, it is beneficial to select a segment of a document collection. For instance, some techniques involve pre-computing neural document representations to speed up reranking [56] or for performing first-stage retrieval [49]. In this case, dividing the operation over multiple GPUs or machines can yield substantial speed gains, as the process is embarrassingly parallel. To divide up the work, it is helpful to be able to select ranges of the document collection for processing. The Python standard library islice function is not ideal for this task because I/O and processing of documents would be performed for skipped indices. Instead, all objects returned form doc_iter can themselves be sliced directly. The implementation of the slicing depends on the particular dataset, but in all implementations avoid unnecessary I/O and processing by seeking to the appropriate location in the source file. 
This fancy slicing implementation mostly follows typical Python slicing semantics, allow for different workers to be assigned specific ranges of documents: dataset.docs_iter()[:10] # the first 10 docs dataset.docs_iter()[-10:] # the last 10 docs dataset.docs_iter()[100:110] # 10 docs starting at index 100 dataset.docs_iter()[3::5] # every 5 docs, starting at index 3 dataset.docs_iter()[:1/3] # the first third of the collection 2.8 Documentation Documentation about datasets are available from the ir_datasets catalog.4 An overview list shows all available datasets and their 4https://github.com/seanmacavaney/zlib-state Figure 2: Example from the ir_datasets catalog. Users can easily check which datasets are available for automatic downloads (green checkbox) and those that require obtaining data from a third party (yellow triangle). Figure 3: Example documentation for cord19/trec-covid. capabilities (Figure 2). The documentation page for each individual dataset includes a brief description, relevant links (e.g., to shared task website and paper), supported relations, citations, and code samples. An example is shown in Figure 3 for the TREC COVID dataset [84]. 2.9 Automated Testing ir_datasets includes several suites of automated tests to ensure the package works as expected, functionality does not regress as changes are made, and to ensure that downloaded content remains available and unchanged. The automated testing suite includes include unit tests, integration/regression tests, and tests to ensure downloadable content remains available and unchanged. 3 INTEGRATION WITH OTHER TOOLS The CLI makes ir_datasets easy to use with various tools (e.g., the PISA engine [61] can index using the document export). However, deeper integration can provide further functionality, as we demonstrate in this section with four tools. Note that ir_datasets does not depend on any of these tools; instead they use ir_datasets. \fSIGIR \u201921, July 11\u201315, 2021, Virtual Event, Canada MacAvaney, et al. Capreolus [93] is a toolkit for training and evaluating neural learning-to-rank models through Python and command line interfaces. In terms of data, it includes components for \u201ccollections\u201c (sets of documents) and \u201cbenchmarks\u201d (sets of queries and qrels). Though it has some built-in datasets, it also supports all datasets available from ir_datasets in its pipelines: import capreolus as cap collection, benchmark = cap.get_irds(\"pmc/v2/trec-cds-2016\", fields=[\"abstract\"], query_type=\"summary\") index = cap.AnseriniIndex({\"stemmer\": None}, collection) index.create_index() benchmark.qrels benchmark.queries PyTerrier [58] is a Python interface to the Terrier search engine [63] that enables the creation of flexible retrieval pipelines. It has a native dataset API, but it now also automatically adds all datasets from ir_datasets, expanding the number of available datasets. They can be accessed via the dataset ID with an irds: prefix, and then used seamlessly with the rest of PyTerrier: import pyterrier as pt pt.init() ds = pt.get_dataset('irds:cord19/trec-covid') indexer = pt.index.IterDictIndexer('./cord19') indexer.index(ds.get_corpus_iter(), fields=('abstract',)) topics = ds.get_topics(variant=\"description\") qrels = ds.get_qrels() OpenNIR [55] provides a command line neural reranking pipeline for several standard IR benchmarks. OpenNIR supports ir_datasets for its training, validation, and testing dataset components. 
Queries and qrels are trivially fed into the training and validation processes. Documents are automatically indexed with Anserini for first-stage retrieval, and document lookups are used to fetch the text when training and scoring. Here is an example testing on the TREC COVID dataset: $ scripts/pipeline.sh test_ds=irds test_ds.ds=cord19/trec-covid Anserini [91], and its Python-wrapper counterpart Pyserini [53] focus on reproducibility in IR. They provide a wrapper and suite of tools around a Lucene index. As such, operations on datasets in this tool are tightly coupled with the Lucene and Anserini packages. Though it has support for a wide variety of query and relevance assessments (distributed with the package), the support for document content is sparse, since only a few collections have automaticallydownloadable indices. The remainder rely on manual instructions. Queries and qrels from ir_datasets can be used with Anserini by using the export CLI (as TSV or TREC format). The CLI can also efficiently output documents in a format it can index in parallel: $ ir_datasets doc_fifos medline/2017 # To index with Anserini, run: # IndexCollection -collection JsonCollection -input # /tmp/tmp6sope5gr -threads 23 -index DiffIR [46] is a tool that enables the visualization and qualitative comparison of search results. Using ir_datasets, it shows the textual content of the top results for queries and highlights modelspecific impactful text spans. 4 COMMUNITY CONTRIBUTIONS We welcome (and encourage) community contributions. Extending ir_datasets as a separate package is straightforward,5 and we also welcome pull requests to the main package. To maintain quality in ir_datasets, we require considerations of ease-of-use, efficiency, data integrity, and documentation. We request that issues are opened before implementation to ensure proper consideration of these aspects. ir_datasets provides tools for handling typical data formats (e.g., TREC, TSV, CSV), making the process relatively straightforward. Atypical formats likely require special processing. There are plenty of examples to help guide the contributor. 5 FUTURE APPLICATIONS We envision ir_datasets enabling a variety of useful applications. Training/evaluation in private settings. This tool could facilitate experiments and tasks that involve keeping data private. This is a realistic setting in several circumstances. For instance, a shared task involving searching through clinical notes would likely face challenges distributing this collection due to patient privacy concerns. Or a company may want to offer a shared task using a proprietary document collection or query log. In both these cases, a version of ir_datasets could be built that provides this data that is only available in a secure environment (e.g., one where networking is disabled). Participants could feel confident that their code is processing the data correctly, given that it supports the ir_datasets API; their code can switch to this dataset simply by using the dataset ID of the private dataset. Dataset exploration GUI. Performing ad-hoc data analysis using ir_datasets is an improvement over prior approaches. The user experience could be further improved through a graphical user interface that facilitate common dataset exploration tasks. For instance, this tool could graphically present the list of queries and link to the text of judged documents. 
Though this functionality is easy through the Python and command line interfaces, a graphical interface would further reduce friction and ease exploration. 6" + }, + { + "url": "http://arxiv.org/abs/2011.00696v2", + "title": "ABNIRML: Analyzing the Behavior of Neural IR Models", + "abstract": "Pretrained contextualized language models such as BERT and T5 have\nestablished a new state-of-the-art for ad-hoc search. However, it is not yet\nwell-understood why these methods are so effective, what makes some variants\nmore effective than others, and what pitfalls they may have. We present a new\ncomprehensive framework for Analyzing the Behavior of Neural IR ModeLs\n(ABNIRML), which includes new types of diagnostic probes that allow us to test\nseveral characteristics -- such as writing styles, factuality, sensitivity to\nparaphrasing and word order -- that are not addressed by previous techniques.\nTo demonstrate the value of the framework, we conduct an extensive empirical\nstudy that yields insights into the factors that contribute to the neural\nmodel's gains, and identify potential unintended biases the models exhibit.\nSome of our results confirm conventional wisdom, like that recent neural\nranking models rely less on exact term overlap with the query, and instead\nleverage richer linguistic information, evidenced by their higher sensitivity\nto word and sentence order. Other results are more surprising, such as that\nsome models (e.g., T5 and ColBERT) are biased towards factually correct (rather\nthan simply relevant) texts. Further, some characteristics vary even for the\nsame base language model, and other characteristics can appear due to random\nvariations during model training.", + "authors": "Sean MacAvaney, Sergey Feldman, Nazli Goharian, Doug Downey, Arman Cohan", + "published": "2020-11-02", + "updated": "2023-07-20", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.IR" + ], + "main_content": "Introduction Pre-trained contextualized language models such as BERT (Devlin et al., 2019) are state-of-the-art \u2217Currently at the University of Glasgow. Work done in part during an internship at the Allen Institute for AI. 1Code: https://github.com/allenai/abnriml for a wide variety of natural language processing tasks (Xia et al., 2020). In Information Retrieval (IR), these models have brought about large improvements in the task of ad-hoc retrieval\u2014 ranking documents by their relevance to a textual query (Lin et al., 2020; Nogueira and Cho, 2019; MacAvaney et al., 2019a; Dai and Callan, 2019b)\u2014where the models increasingly dominate competition leaderboards (Craswell et al., 2019; Dalton et al., 2019). Despite this success, little is understood about why pretrained language models are effective for ad-hoc ranking. Previous work has shown that traditional IR axioms, e.g. that increased term frequency should correspond to higher relevance, do not explain the behavior of recent neural models (C\u00e2mara and Hauff, 2020). Outside of IR, others have examined what characteristics contextualized language models learn in general (Liu et al., 2019a; Rogers et al., 2020; Loureiro et al., 2020), but it remains unclear if these qualities are valuable for ad-hoc ranking specifically. Thus, new approaches are necessary to characterize models. We propose a new framework aimed at Analyzing the Behavior of Neural IR ModeLs (ABNIRML2), which aims to probe the sensitivity of ranking models on specific textual properties. 
Probes consist of samples comprised of a query and two contrastive documents. We propose three strategies for building probes. The \u201cmeasure and match\u201d strategy (akin to the diagnostic datasets proposed by Rennings et al. (2019)) constructs probing samples by controlling one measurement (e.g., term frequency) and varying another (e.g., document length) using samples from an existing IR collection. Unlike Rennings et al. (2019), our framework generalizes the idea to any mea2Pronounced /ab\u2019n@rm@l/, similar to \u201cabnormal\u201d. arXiv:2011.00696v2 [cs.CL] 20 Jul 2023 \fsurable characteristic, rather than relying chiefly on prior proposed IR axioms. A second strategy, \u201ctextual manipulation,\u201d probes the effect that altering the text of a document text has on its ranking. Finally, a \u201cdataset transfer\u201d strategy constructs probes from non-IR datasets. The new probes allow us to isolate model characteristics\u2014 such as sensitivity to word order, degree of lexical simplicity, or even factuality\u2014that cannot be analyzed using other approaches. Using our new framework, we perform the first large-scale analysis of neural IR models. We compare today\u2019s leading ranking techniques, including those using BERT (Devlin et al., 2019) and T5 (Raffel et al., 2020), methods focused on efficiency like DocT5Query (Nogueira et al., 2020) and EPIC (MacAvaney et al., 2020), and dense retrieval models like ANCE (Xiong et al., 2021) and ColBERT (Khattab and Zaharia, 2020).3 Some of our results establish widely-believed, but notyet-verified conjectures about neural models. For example, we show that neural models can exploit richer linguistic signals than classical termmatching metrics like BM25: when controlling for term frequency match, the neural models detect document relevance much more accurately than the BM25 baseline. Similarly, unlike prior approaches, rankers based on BERT and T5 are heavily influenced by word order: shuffling the words in a document consistently lowers the document\u2019s score relative to the unmodified version, and neural rankers show a sensitivity to sentence order that is completely absent in classical models. Other findings from ABNIRML are more surprising. For example, we find that the T5 and ColBERT models we examine prefer answers that are factually correct, implying that they encode and utilize some real-world knowledge. Further, although this knowledge may be a result of the model\u2019s pre-training process, it is not necessarily utilized as a ranking signal, given that other models that use the same base language model do not have the same preference. Our battery of probes also uncover a variety of other findings, including that adding additional text to documents can often exhibit adverse behavior in neural models\u2014 decreasing the document\u2019s score when the added text is relevant, and increasing score when the 3Although a multitude of other models exist, it is impractical to investigate them all. We instead focus on a representative sample of the recent and successful models and well-known baselines to provide context. added text is irrelevant. In summary, we present a new framework (ABNIRML) for performing analysis of ad-hoc ranking models. We then demonstrate how the framework can provide insights into ranking model characteristics by providing the most comprehensive analysis of neural ranking models to date. 
Our software implementation of the framework is easily extensible, facilitating the replication of our results and further analyses in future work. 2 ABNIRML In order to characterize the behavior of ranking models we construct several diagnostic probes. Each probe aims to evaluate specific properties of ranking models and probe their behavior (e.g., if they are heavily influenced by term matching, discourse and coherence, conciseness/verbosity, writing styles, etc). We formulate three different approaches to construct probes (Measure and Match, Textual Manipulation, and Dataset Transfer). In ad-hoc ranking, a query (expressed in natural language) is submitted by a user to a search engine, and a ranking function provides the user with a list of natural language documents sorted by relevance to the query. More formally, let R(q, d) ∈ R be a ranking function, which maps a given query q and document d (each being a natural-language sequence of terms) to a real-valued ranking score. At query time, documents in a collection D are scored using R(·) for a given query q, and ranked by the scores (conventionally, sorted descending by score). Learning-to-rank models optimize a set of parameters for the task of relevance ranking based on training data. 2.1 Document Pair Probing We utilize a document pair probing strategy, in which probes are comprised of samples, each of which consists of a query and two documents that differ primarily in some characteristic of interest (e.g., succinctness). The ranking scores of the two documents are then compared (with respect to the query). This allows the isolation of particular model preferences. For instance, a probe could consist of summarized and full texts of news articles; models that consistently rank summaries over full texts prefer succinct text. [Figure 1: Overview of strategies for constructing probes. MMPs (Measure and Match Probes) match documents given a constant control and a differing variable; TMPs (Textual Manipulation Probes) perform automatic manipulations on document text; DTPs (Dataset Transfer Probes) repurpose non-IR datasets to probe other characteristics. Each probe in ABNIRML is comprised of samples, each of which consists of a query (q) and two documents (d1 and d2).] More formally, each document pair probe consists of a collection of samples S, where each ⟨q, d1, d2⟩ ∈ S is a 3-tuple consisting of a query (or query-like text, q), and two documents (or document-like texts, d1 and d2). The relationship between d1 and d2 (with respect to q) for each sample defines the probe. E.g., a probe testing summarization could be defined as: (1) d2 is a summary of d1, and (2) d1 is relevant to query q. Almost all of our probes are directional, where d2 has some attribute that d1 lacks, and we measure the effect of this attribute on ranking. Specifically, each sample in the probe is scored as: (+1) scoring d1 above d2 (a positive effect), (−1) scoring d2 above d1 (a negative effect), or (0) a neutral effect.
Formally, the effect eff (\u00b7) of a given sample is defined as: eff (q, d1, d2) = \uf8f1 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f3 1 R(q,d1)\u2212R(q,d2)>\u03b4 \u22121 R(q,d1)\u2212R(q,d2)<\u2212\u03b4 0 \u2212\u03b4\u2264R(q,d1)\u2212R(q,d2)\u2264\u03b4 (1) The parameter \u03b4 adjusts how large the score difference between the scores of d1 and d2 must be in order to count as positive or negative effect. This allows us to disregard small changes to the score that are unlikely to affect the final ranking. In practice, \u03b4 depends on the ranking model because each model scores on different scales. Therefore we tune \u03b4 for each model (see Section 3.3). Symmetric probes are different from directional ones in that d1 and d2 are exchangeable; for example, we experiment with one symmetric probe in which d1 and d2 are paraphrases of each other. For symmetric probes only the magnitude of score difference is meaningful, and thus eff outputs 1 if the absolute value of the difference is larger than \u03b4, and 0 otherwise. A model\u2019s performance on a particular probe is summarized by a single score s that averages the effect of all samples in the probe: s = 1 |S| P \u27e8q,d1,d2\u27e9\u2208S eff (q, d1, d2) (2) Note that this score is in the interval [\u22121, 1] for directional probes and [0, 1] for symmetric probes. For directional probes, positive scores indicate a stronger preference towards documents from group 1 (d1 documents), and negative scores indicate a preference towards documents from group 2 (d2 documents). Scores near 0 indicate no strong preference or preferences that are split roughly evenly; disentangling these two cases requires analyzing individual effect scores. There are several important differences between our setup and the \u201cdiagnostic dataset\u201d approach proposed by Rennings et al. (2019). First, by including the \u03b4 threshold, we ensure that our probes measure differences that can affect the final order in ranked lists. Second, by including the \u201cneutral effect\u201d case in our scoring function, we distinguish between cases in which d1 or d2 are preferred and cases where neither document is strongly preferred. And finally, our probes are aimed at describing model behavior, rather than evaluating models. For instance, one of our tests measures whether the model prefers succinct or elaborative text\u2014whether this preference is desirable depends on the application or even the particular user. 2.2 Document Pair Probing Strategies In this work, we explore three strategies for designing document pair probes. As discussed below, the strategies have different strengths and weaknesses. When used in concert, they allow us to characterize a wide variety of model behaviors. Figure 1 provides an overview of the strategies. 2.2.1 Measure and Match Probes (MMPs) Some surface-level characteristics of documents, such as its Term Frequency (TF) for a given query, are both easy to measure and valuable for characterizing models. Comparing the ranking scores of two documents that differ in one characteristic \fbut are otherwise similar yields evidence of how the characteristic influences model behavior. Measure and Match Probes (MMPs) follow such an approach. MMPs involve first measuring the characteristics of judged query-document pairs in an IR dataset. 
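As a minimal sketch of the two definitions above (assuming a model scoring function R(q, d) is available as a Python callable R, and that samples is a list of (q, d1, d2) tuples), the effect of a sample and the overall probe score could be computed as follows.

def effect(R, q, d1, d2, delta):
    """Effect of one probe sample, following Equation (1)."""
    diff = R(q, d1) - R(q, d2)
    if diff > delta:
        return 1    # the model prefers d1
    if diff < -delta:
        return -1   # the model prefers d2
    return 0        # the difference is too small to affect the ranking

def probe_score(R, samples, delta, symmetric=False):
    """Average effect over all samples, following Equation (2).
    For symmetric probes, only the magnitude of the effect counts."""
    effects = [effect(R, q, d1, d2, delta) for q, d1, d2 in samples]
    if symmetric:
        effects = [abs(e) for e in effects]
    return sum(effects) / len(effects)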
Then, the pairs are matched to form probe samples consisting of a control (a characteristic that approximately matches between the documents, such as document length), and a variable (which differs between documents, such as TF). Probes employed in previous work to verify existing ranking axioms (C\u00e2mara and Hauff, 2020; Rennings et al., 2019)4 are instances of MMPs. For our experiments, we design MMPs to explore the relationship between the primary IR objective (document relevance) and the classical IR ranking signal (TF, potentially controlling for document length). We are motivated to explore this relationship because TF has long been used as a core signal for ranking algorithms; a departure from monotonically increasing the score of a document as TF increases would represent a fundamental shift in the notion of relevance scoring (Fang et al., 2004). Specifically, we explore the following characteristics in MMPs: \u2022 Relevance: the human-assessed graded relevance score of a document to the given query. \u2022 Length: the document length, in total number of non-stopword tokens. \u2022 TF: the individual Porter-stemmed Term Frequencies of non-stopword terms from the query. To determine when the TF of two documents are different, we use the condition that the TF of at least one query term in d1 must be greater than the same term in d2, and that no term in d1 can have a lower TF than the corresponding term in d2. \u2022 Overlap: the proportion of non-stopword terms in the document that appear in the query. Put another way, the total TF divided by the document length. Each of these characteristics is used as both a variable (matching based on differing values) and a control (matching based on identical values). In our experiments, we examine all pairs of these characteristics, greatly expanding upon IR axioms investigated in prior work. We note that the MMPs that we explore in this work do not cover all prior IR axioms. For in4An example is TFC1 from (Fang et al., 2004), which suggests that higher TFs should be mapped to higher scores. stance, axioms SMTC1\u20133, proposed by Fang and Zhai (2006), suggest behaviors related to the occurrence of semantically-similar terms. Although MMPs can be constructed to test these, we assert that other types of probes are more suitable to testing these behaviors. We test textual fluency, formality, and simplicity (all of which are specific types of semantic similarity) while controlling for the meaning of the text using dataset transfer probes (Section 2.2.3). 2.2.2 Textual Manipulation Probes (TMPs) Not all characteristics are easily captured with MMPs. For instance, it would be difficult to probe the sensitivity to word order with MMPs; it is unlikely to find naturally-occurring document pairs that use the same words but in a different order. Nevertheless, it is valuable to understand the extent to which models are affected by characteristics like this, given that traditional bag-ofwords models are unaffected by word order and that there is evidence that word order is unimportant when fine-tuning recent neural models (Sinha et al., 2021; Alleman et al., 2021). To overcome these limitations, we propose Textual Manipulation Probes (TMPs). TMPs apply a manipulation function to scored documents from an existing IR dataset. For example, for probing word order, we can use a simple manipulation function that, given a document d1, creates a corresponding synthetic document d2 by shuffling the order of the words in each sentence. 
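As an illustration of how such a manipulation function might look (a rough sketch only; the sentence splitting here is deliberately crude and the paper's own tooling is not reproduced), the word-shuffling TMP could be implemented as:

import random

def shuffle_words_within_sentences(text, seed=0):
    """Create the synthetic document d2 by shuffling word order within
    each sentence of the original document d1."""
    rng = random.Random(seed)
    sentences = text.split('. ')  # simplistic sentence splitter, for illustration
    shuffled = []
    for sentence in sentences:
        words = sentence.split()
        rng.shuffle(words)
        shuffled.append(' '.join(words))
    return '. '.join(shuffled)

# A TMP sample pairs each judged document with its manipulated version:
# (q, d1, d2) = (query, doc_text, shuffle_words_within_sentences(doc_text))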
The degree to which a model prefers d1 is then a measure of its preference for proper word order. Prior works that use a similar approach for probing ranking methods include the collection perturbation tests of Fang et al. (2011) (which perform operations like removing documents from the collection and deleting individual terms from documents) and a diagnostic dataset proposed by Rennings et al. (2019) (which tests the effect of duplicating the document: an adaptation of a traditional ranking axiom). Although TMPs allow probing a wider variety of characteristics than MMPs, we note that they involve constructing artificial data; d2 may not resemble documents seen in practice. Despite this, their versatility make TMPs an attractive choice for a variety of characteristics. We now detail the specific TMPs we explore in our experiments. We use TMPs to verify a key difference we expect to hold between neural models and previous rankers: because neural \fmodels are pretrained on large bodies of running text, they should make better use of richer linguistic features like word order. We investigate this with TMPs that shuffle words in the document. We also probe which aspects of word order are important, through TMPs that only shuffle a small number of non-content words (prepositions) and TMPs that only shuffle the sentence order, but not the individual words within each sentence. Further, another important distinction of pretrained neural models is that they process unaltered text, without classical normalization like stopword removal or lemmatization; we introduce TMPs that study these manipulations.5 Recognizing changes such as lemmatization and word shuffling can drastically alter the text, we also include a more subtle TMP that applies typical typograhpical errors (typos) by replacing words with common misspellings.6 We also evaluate the recent, effective technique of using neural models to add content (DocT5Query terms (Nogueira et al., 2020)) to each document to aid IR, and contrast this with a complementary TMP that adds a non-relevant sentence to the document. 2.2.3 Dataset Transfer Probes (DTPs) Even with MMPs and TMPs, some characteristics may still be difficult to measure. For instance, for attributes like textual fluency (the degree to which language sounds like a native speaker wrote it), we would need to find pairs of otherwise-similar documents with measurable differences in fluency (for an MMP) or identify ways to automatically manipulate fluency (for a TMP), both of which would be difficult. To probe characteristics like these, we propose Dataset Transfer Probes (DTPs). In this setting, a dataset built for a purpose other than ranking is repurposed to probe a ranking model\u2019s behavior. For example, one could create a DTP from a dataset of human-written textual fluency pairs (e.g., from the JFLEG dataset (Napoles et al., 2017)) to sidestep challenges in both measurement and manipulation. Text pair datasets are abundant, allowing us to probe a wide variety of characteristics, like fluency, formality, and succinctness. With these probes, d1 and d2 can be eas5We use SpaCy\u2019s (Honnibal and Montani, 2017) lemmatizer, rather than e.g. a stemmer, because the outputs from a stemming function like Porter are often not found in the lexicon of models like BERT. 6We use this list of common errors in English text: https://en.wikipedia.org/wiki/Commonly_ misspelled_English_words ily defined by the source dataset. 
In some cases, external information can be used to infer a corresponding q, such as using the title of the article as a query for news article summarization tasks, a technique that has been studied before to train ranking models (MacAvaney et al., 2019b). In other cases, queries can be artificially generated, as long as the text resembles a likely query. We first use DTPs to investigate the important question of whether models exhibit confounding preferences for stylistic features of text are at least partially independent of relevance. Specifically, we first investigate paraphrases in general, and then move on to check the specific qualities of fluency, formality, simplicity, lexical bias, and succinctness. We then use DTPs to test the capacity of models to encode and utilize real-world knowledge through probes that measure a model\u2019s tendency to select factual answers. The TMPs described in the previous section probe the sensitivity of models to word order. In this case, the words remain the same, but meaning is altered. It is natural to wonder whether model behaviors would be similar if the meaning is preserved when using different words. This motivates a paraphrase DTP. We construct this probe from the Microsoft Paraphrase Corpus (MSPC).7 We select d1 and d2 from all text pairs labeled as paraphrases. Note that this is the first example of a symmetric probe, as there is no directionality in the paraphrase relation; the assignment of d1 and d2 is arbitrary. We generate q by randomly selecting a noun chunk that appears in both versions of the text, ensuring a query that is relevant to both texts. (If no such chunk exists, we discard the sample.) By selecting a noun chunk, the query remains reasonably similar to a real query. Although the paraphrase probe can tell us whether models distinguish between text with similar meaning, it cannot tell us what characteristics it favors when making such a distinction. To gain insights here, we propose several directional probes based on stylistic differences that result in similar meanings. One such characteristic is textual fluency. We propose a DTP using the JFLEG dataset (Napoles et al., 2017). This dataset contains sentences from English-language fluency tests. Each non-fluent sentence is corrected for fluency by four fluent English speakers to make 7https://www.microsoft.com/en-us/ download/details.aspx?id=52398 \fthe text sound \u2018natural\u2019 (changes include grammar and word usage changes). We treat each fluent text as a d1 paired with the non-fluent d2, and use the strategy used for paraphrases to generate q. We probe formality by building a DTP from the GYAFC dataset (Rao and Tetreault, 2018). This dataset selects sentences from Yahoo Answers and has four annotators make edits to the text that either improve the formality (for text that is informal), or reduce the formality (for text that is already formal). We treat formal text as d1 and informal text as d2. Since the text came from Yahoo Answers, we can link the text back to the original questions using the Yahoo L6 dataset.8 We treat the question (title) as q. In cases where we cannot find the original text or there are no overlapping non-stopword lemmas from q in both d1 and d2, we discard the sample. The simplicity of text indicates the ease of reading a particular text. We test the effect of lexical text simplicity using the WikiTurk dataset provided by Xu et al. (2016). In this dataset, sentences from Wikipedia were edited to make them simpler by Amazon Turk workers. 
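A hedged sketch of this shared-noun-chunk query construction (the spaCy model name is an assumption; the paper only states that a noun chunk appearing in both texts is selected as the query, and that pairs without a shared chunk are discarded):

import random
import spacy

nlp = spacy.load('en_core_web_sm')  # illustrative model choice

def make_dtp_sample(d1, d2, seed=0):
    """Build one DTP sample by selecting a noun chunk that appears in
    both texts as the query; return None to discard the pair otherwise."""
    chunks1 = {chunk.text.lower() for chunk in nlp(d1).noun_chunks}
    chunks2 = {chunk.text.lower() for chunk in nlp(d2).noun_chunks}
    shared = sorted(chunks1 & chunks2)
    if not shared:
        return None
    q = random.Random(seed).choice(shared)
    return (q, d1, d2)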
We treat the simplified text as d1, the original text as d2, and we use the query construction technique from the paraphrase probe for q. Text can also express similar ideas but with differing degrees of subjectivity or bias. We construct a neutrality DTP using the Wikipedia Neutrality Corpus (WNC) dataset (Pryzant et al., 2020). This corpus consists of sentences that were corrected by Wikipedia editors to enforce the platform\u2019s neutral point of view. We use the neutral text as d1, the biased text as d2, and we use the query construction technique from the paraphrase probe for q. An idea can also be expressed in greater or lesser detail. To probe whether models have a preference for succinctness, we construct DTPs from summarization datasets, using the assumption that a document\u2019s summary will be more succinct than its full text. We utilize two datasets to conduct this probe: XSum (Narayan et al., 2018), and CNN/DailyMail (See et al., 2017). The former uses extremely concise summaries from BBC articles, usually consisting of a single sentence. The CNN/DailyMail dataset uses slightly longer bullet point list summaries, usually consisting of around 3 sentences. For these probes, we use the title of 8https://webscope.sandbox.yahoo.com/ catalog.php?datatype=l&did=11 the article as q, the summarized text as d1, and the article body as d2. When there is no overlap between the non-stopword lemmas of q in both d1 and d2, we discard the samples. We further subsample the dataset at 10% because the datasets are already rather large. To handle the long full text in BERT and EPIC, we use the passage aggregation strategy proposed by MacAvaney et al. (2019a). Moving beyond probes that express similar ideas, we explore the extent to which models are aware of real-world knowledge using a factuality probe. This probe is motivated by the intuition that contextualized language models may be memorizing facts from the pre-training corpus when determining relevance. We construct this probe from the Natural Questions (Kwiatkowski et al., 2019) dataset. We make use of the known answer text from NQ by replacing it with a similar answer. Similar answers must be of the same entity type9 and have the same number of non-stopword tokens. We discard samples where the question text contains the answer text (e.g., this-or-that questions). We use the factual text as d1, the nonfactual version as d2, and the question text as q. Note that this probe can be considered both a DTP and a TMP. We decide to consider it to primarily be a DTP because it makes use of data specific to this external dataset (i.e., answer strings). 3 Experimental Setup 3.1 Datasets We use the MS-MARCO passage dataset (Campos et al., 2016) to train the neural ranking models. The training subset contains approximately 809k natural-language questions from a query log (with an average length of 7.5 terms) and 8.8 million candidate answer passages (with an average length of 73.1 terms). Due to its scale in number of queries, it is shallowly annotated, almost always containing fewer than 3 positive judgments per query. This dataset is frequently used for training neural ranking models. Importantly, it also has been shown to effectively transfer relevance signals to other collections (Nogueira and Lin, 2019), making it suitable for use with DTPs, which may include text from other domains. 
We build MMPs and TMPs using the TREC Deep Learning 2019 passage dataset (Craswell 9Entities extracted using SpaCy: Person (PER), Location (LOC), Geo-Political Entity (GPE), Nationality/Religion/etc. (NORP), or Organization (ORG). \fet al., 2019) and the ANTIQUE passage ranking dataset (Hashemi et al., 2020). TREC DL uses the MS-MARCO passage collection and has 43 queries with deep relevance judgments (on average, 215 per query). The judgments are graded as highly relevant (7%), relevant (19%), topical (17%), and non-relevant (56%), allowing us to make more fine-grained comparisons. We use the test subset of ANTIQUE, which contains 200 queries with 33 judgments per query. These judgments are graded as convincing (20%), possibly correct (18%), on-topic (37%), and off-topic (25%). We opt to perform our analysis in a passage ranking setting to eliminate effects of long document aggregation\u2014which is challenging for some neural models given a maximum sequence length in the underlying model\u2014given that this is an area with many model varieties that is still under active investigation (Li et al., 2020). 3.2 Models We compare a sample of several models covering a traditional lexical model (BM25), a conventional learning-to-rank approach (LightGBM), and neural models based on contextualized language models. We include two models that focus on querytime computational efficiency, and two representative models that use dense retrieval. The neural models represent a sample of the recent state-ofthe-art ranking models. For each model, we provide the MRR (minimum relevance of 2) performance on the TREC DL 2019 passage benchmark when re-ranking the provided candidate passages. BM25. We use the Terrier (Ounis et al., 2006) implementation of BM25 with default parameters. BM25 is an unsupervised model that incorporates the lexical features of term frequency (TF), inverse document frequency (IDF), and document length. (TREC DL 2019 MRR: 0.627.) WMD. As a second unsupervised model, we use the Word Mover\u2019s Distance (Kusner et al., 2015) over (non-contextualized) GloVe (Pennington et al., 2014) embeddings (glove-wiki-gigaword-100). We use the implementation from the Gensim (Rehurek and Sojka, 2011) Python package. (TREC DL 2019 MRR: 0.364.) SBERT. As an unsupervised model based on a contextualized language model, we use SBERT\u2019s (Reimers and Gurevych, 2019) pretrained Bi-encoder model, trained on Semantic Textual Similarity, Natural Language Inference, and Quora Duplicate Question Detection data in multiple languages.10 This approach has been shown by Litschko et al. (2021) to be able to effectively perform cross-lingual retrieval. (TREC DL 2019 MRR: 0.465.) LGBM (Ke et al., 2017). As a non-neural learning-to-rank baseline, we use the Light Gradient Boosting Machine model currently used by the Semantic Scholar search engine (Feldman, 2020).11 This public model was trained on clickthrough data from this search engine, meaning that it services various information needs (e.g., navigational and topical queries). Not all of the model\u2019s features are available in our setting (e.g., recency, in-links, etc.), so we only supply the textbased features like lexical overlap and scores from a light-weight language model (Heafield et al., 2013). (TREC DL 2019 MRR: 0.580.) VBERT (Devlin et al., 2019). We use a BERT model, which uses a linear ranking layer atop a BERT pretrained transformer language model (Nogueira and Cho, 2019; MacAvaney et al., 2019a; Dai and Callan, 2019b). 
(This setup goes by several names in the literature, including Vanilla BERT (VBERT), monoBERT, BERT-CAT, etc.) We fine-tune the bert-base-uncased model for this task using the official training sequence of the MS-MARCO passage ranking dataset. (TREC DL 2019 MRR: 0.809.) T5 (Raffel et al., 2020). The Text-To-Text Transformer ranking model (Nogueira and Lin, 2019) scores documents by predicting whether the concatenated query, document, and control tokens is likely to generate the term \u2018true\u2019 or \u2018false\u2019. We use the models released by the authors, which were tuned on the MS-MARCO passage ranking dataset. We test both the t5-base (T5-B) and t5-large (T5-L) models to gain insights into the effect of model size. (TREC DL 2019 MRR: 0.868 (T5-B), 0.857 (T5-L).) EPIC (MacAvaney et al., 2020). This is an efficiency-focused BERT-based model, which separately encodes query and document content into vectors that are the size of the source lexicon (where each element represents the importance of the corresponding term in the query/document). We use the bert-base-uncased model, and tune the model for ranking using the train split of 10distilbert-multilingual-nli-stsb-quora-ranking 11https://github.com/allenai/s2search \fthe MS-MARCO passage ranking dataset with the code released by the EPIC authors with default settings. (TREC DL 2019 MRR: 0.809.) DT5Q (Nogueira and Lin, 2019). The T5 variant of the Doc2Query model (DT5Q) generates additional terms to add to a document using a T5 model. The expanded document can be efficiently indexed, boosting the weight of terms likely to match queries. We use the model released by the authors, which was trained using the MS-MARCO passage training dataset. For our probes, we generate four queries to add to each document. As was done in the original paper, we use BM25 as a scoring function over the expanded documents. (TREC DL 2019 MRR: 0.692.) ANCE (Xiong et al., 2021). This is a representation-based dense retrieval model that is trained using a contrastive learning technique. It is designed for single-stage dense retrieval. We use the model weights released by the original authors, which is based on the RoBERTa (Liu et al., 2019b) base model. (TREC DL 2019 MRR: 0.852.) ColBERT (Khattab and Zaharia, 2020). This is a two-stage dense retrieval approach that uses multiple representations for each document (one per WordPiece token). It makes use of both a firststage approximate nearest neighbor search to find candidate documents and a re-ranking stage to calculate the precise ranking scores. It is based on the bert-base-uncased model. We use the model weights released by the original authors. (TREC DL 2019 MRR: 0.873.) 3.3 Choosing \u03b4 Recall that \u03b4 indicates the minimum absolute difference of scores in a document pair probe to have a positive or negative effect. Since each model scores documents on a different scale, we empirically choose a \u03b4 per model. We do this by re-ranking the official set from TREC DL 2019. Among the top 10 results, we calculate the differences between each adjacent pair of scores (i.e., {R(q, d1) \u2212R(q, d2), R(q, d2) \u2212 R(q, d3), ..., R(q, d9)\u2212R(q, d10)}, where di is the ith highest scored document for q). We set \u03b4 to the median difference. By setting the threshold this way, we can expect the differences captured by the probes to have an effect on the final ranking score at least half the time. We explore this further in Section 4.1. 
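A small sketch of this δ-selection procedure (assuming ranked_scores_per_query holds, for each TREC DL 2019 query, the re-ranking scores produced by the model under test):

import statistics

def choose_delta(ranked_scores_per_query, top_k=10):
    """Set delta to the median gap between adjacent scores among the
    top-k results of each query, pooled over all queries."""
    gaps = []
    for scores in ranked_scores_per_query:
        top = sorted(scores, reverse=True)[:top_k]
        gaps.extend(top[i] - top[i + 1] for i in range(len(top) - 1))
    return statistics.median(gaps)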
Note that choosing a constant \u03b4 over one that is assigned per-query allows for testing probes where a complete corpus is not available, as is the case for some DTPs. 3.4 Significance Testing We use a two-sided paired T-Test to determine the significance (pairs of R(q, d1) and R(q, d2)). We use a Bonferroni correction over each table to correct for multiple tests, and test for p < 0.01. 3.5 Software and Libraries We use the following software to conduct our experiments: PyTerrier (Macdonald et al., 2021), OpenNIR (MacAvaney, 2020), ir_datasets (MacAvaney et al., 2021), Transformers (Wolf et al., 2019), sentencetransformers (Reimers and Gurevych, 2019), Anserini (Yang et al., 2018), and Gensim (Rehurek and Sojka, 2011). 4 Results & Analysis We present results for MMPs in Table 1, TMPs in Table 2, and DTPs in Table 3 and highlight our key findings in the order they appear in the tables. Contextualized language models can distinguish relevance grades when TF is held constant. From Table 1, we see that SBERT, VBERT, EPIC, T5, ColBERT, and ANCE all are able to distinguish relevance when term frequency is constant with at least a score of +0.18 across both datasets. Perhaps surprisingly, this is even true for our transfer SBERT model, which is not trained on relevance ranking data. These results are in contrast with models that score lexically (BM25, LGBM, and DT5Q), which score at most +0.10. The contextualized language models also perform better at distinguishing relevance grades than the other models when length and overlap are held constant, though by a lesser margin. When controlling for model type, it appears that the model\u2019s size is related to its effectiveness in this setting: the large version of T5 (T5-L, +0.53) performs better the base model (T5-B, +0.43). Models generally have similar sensitivity to document length, TF, and overlap on TREC DL 2019. With the exception of models that use BM25 for scoring (BM25 and DT5Q), all the models we explore have similar behaviors when varying length, TF, and overlap. This suggests that although signals like TF are not required for EPIC, BERT, and T5 to rank effectively, they still remain an important signal when available. 
There are big\fVariable Control BM25 WMD SBERT LGBM DT5Q VBERT EPIC T5-B T5-L ColBERT ANCE Samples TREC DL 2019 Relevance Length +0.40 +0.27 +0.43 +0.40 +0.48 +0.58 +0.54 +0.61 +0.66 +0.61 +0.53 19676 TF \u22120.03 +0.11 +0.25 +0.04 +0.10 +0.34 +0.27 +0.43 +0.53 +0.47 +0.45 31619 Overlap +0.41 +0.15 +0.39 +0.34 +0.47 +0.55 +0.50 +0.61 +0.65 +0.60 +0.49 4762 Length Relevance \u22120.05 \u22120.10 \u22120.01 +0.04 \u22120.07 \u2217\u22120.01 \u22120.08 +0.01 +0.00 \u2217+0.00 +0.00 515401 TF \u22120.14 \u22120.08 \u2217+0.02 +0.02 \u22120.09 \u22120.09 \u22120.15 +0.01 \u2217\u22120.00 \u2217+0.03 +0.06 88582 Overlap +0.51 \u2217+0.02 +0.15 +0.26 +0.24 +0.20 +0.11 +0.19 +0.18 +0.18 +0.15 3963 TF Relevance +0.88 +0.49 +0.34 +0.50 +0.73 +0.41 +0.48 +0.38 +0.42 +0.39 +0.35 303058 Length +1.00 +0.65 +0.46 +0.59 +0.84 +0.54 +0.61 +0.51 +0.53 +0.53 +0.47 19770 Overlap +0.79 \u2217+0.02 +0.18 +0.37 +0.36 +0.26 +0.17 +0.26 +0.24 +0.25 +0.19 2294 Overlap Relevance +0.70 +0.47 +0.22 +0.20 +0.52 +0.19 +0.25 +0.17 +0.18 +0.14 +0.18 357470 Length +0.75 +0.59 +0.32 +0.35 +0.59 +0.31 +0.35 +0.28 +0.29 +0.27 +0.30 20819 TF +0.88 +0.25 \u2217\u22120.00 \u22120.03 +0.47 +0.11 +0.17 +0.04 +0.06 \u2217+0.03 +0.04 13980 ANTIQUE Relevance Length \u22120.17 \u2217\u22120.09 +0.12 \u22120.15 \u22120.09 +0.23 \u2217\u22120.01 +0.26 +0.35 +0.13 +0.24 2257 TF \u22120.07 \u2217+0.01 +0.18 \u2217+0.02 +0.04 +0.23 +0.23 +0.34 +0.46 +0.28 +0.33 5586 Overlap \u2217\u22120.01 \u2217+0.00 +0.26 \u2217+0.03 +0.12 +0.39 +0.16 +0.42 +0.47 +0.31 +0.36 1211 Length Relevance +0.04 \u2217\u22120.07 +0.13 +0.23 +0.02 \u22120.07 +0.22 +0.12 +0.17 +0.17 +0.23 36164 TF \u22120.47 \u2217\u22120.09 +0.12 +0.04 \u22120.23 \u22120.13 +0.25 +0.03 +0.19 +0.15 +0.24 8296 Overlap +0.67 \u2217+0.07 +0.17 +0.33 +0.34 \u2217+0.04 +0.35 +0.12 +0.17 +0.21 +0.28 902 TF Relevance +0.69 \u2217+0.23 +0.37 +0.57 +0.56 +0.24 +0.53 +0.38 +0.42 +0.46 +0.45 19900 Length +1.00 \u2217+0.48 +0.50 +0.68 +0.84 +0.39 +0.59 +0.36 +0.31 +0.55 +0.40 1397 Overlap +0.92 \u2217+0.06 +0.14 +0.36 +0.45 \u2217+0.08 +0.35 +0.15 +0.22 +0.25 +0.28 553 Overlap Relevance +0.42 \u2217+0.29 +0.09 +0.01 +0.35 +0.21 \u2217+0.01 +0.07 \u2217+0.03 +0.04 \u2217\u22120.01 27539 Length +0.67 \u2217+0.33 +0.22 +0.35 +0.48 +0.10 +0.25 +0.13 \u2217+0.08 +0.20 +0.18 1224 TF +0.87 \u2217+0.21 +0.07 \u22120.05 +0.44 +0.14 \u22120.13 \u22120.01 \u22120.07 \u2217\u22120.00 \u22120.08 4498 Table 1: Results of Measure and Match Probes (MMPs) on the TREC DL 2019 and ANTIQUE datasets. Positive scores indicate a preference towards a higher value of the variable. Scores marked with * are not statistically significant (see Section 3.4). ger differences between models when exploring the ANTIQUE dataset, suggesting differences in the models\u2019 capacity to generalize. We note that some of the largest differences relate to the relevance measurement, highlighting the differences in label definitions between the two datasets. Trained Contextualized language models are adversely affected by heavily-destructive preprocessing steps. From Table 2, we find that removing stopwords and punctuation, performing lemmatization, and shuffling words negatively impacts most models across both datasets. Perhaps this is expected, given that this text is dissimilar to the text the models were pre-trained on. 
However, we note that the transfer SBERT model is far less affected by these operations, suggesting that these characteristics are not intrinsic to the contextualized language models, but rather a consequence of training them for relevance ranking. To gain further insights into the importance of word order, we control for local word order by only shuffling sentence order. We see that an effect remains for the contextualized models, though it is substantially reduced. This suggests that discourse-level signals (e.g., what topics are discussed earlier in a document) have some effect on the models, or the models encode some positional bias (e.g., preferring answers at the start of documents). To understand if the word usage of particular terms is important (rather than overall coherence), we also try shuffling only the prepositions in the sentence. We find that this has an effect on some models (most notably, both T5 models and ANCE), but not other models, suggesting that some end up learning that although these terms have meaning in the text, they are often unimportant when it comes to ranking. Lexical models handle typographical errors better than trained contextualized language models. In all but one case (ANCE DL19), BM25, LGBM, and DT5Q are negatively affected by typographical errors less than the trained contextualized language models. This is a surprising result, given that contextualized language models should be able to learn common misspellings and treat them similarly to the original words (the transfer SBERT model largely ignores typos). This problem is particularly apparent for EPIC and ColBERT, which perform matching on the WordPiece level. Trained Contextualized models behave unexpectedly when additional content is introduced in documents. We find that models that rely heavily on unigram matching (e.g., BM25) \fProbe Dataset BM25 WMD SBERT LGBM DT5Q VBERT EPIC T5-B T5-L ColBERT ANCE Samples Rem. Stops/Punct DL19 \u2217+0.00 \u2217\u22120.09 \u22120.23 \u22120.20 \u22120.04 +0.18 \u22120.78 \u22120.74 \u22120.80 \u22120.68 \u22120.59 9259 ANT \u2217+0.04 \u2217\u22120.19 \u22120.38 \u22120.24 \u22120.07 \u22120.25 \u22120.78 \u22120.64 \u22120.81 \u22120.74 \u22120.70 6540 Lemmatize DL19 +0.00 \u22120.18 +0.05 \u22120.02 \u2217+0.01 \u22120.04 \u22120.25 \u22120.42 \u22120.44 \u22120.38 \u22120.31 9259 ANT +0.04 \u2217\u22120.01 \u22120.04 \u22120.09 +0.00 \u22120.22 \u22120.25 \u22120.30 \u22120.47 \u22120.25 \u22120.31 6392 Shuf. Words DL19 \u2217+0.00 \u22120.21 \u22120.06 \u22120.25 \u22120.11 \u22120.38 \u22120.76 \u22120.65 \u22120.76 \u22120.76 \u22120.40 9260 ANT \u2217+0.04 \u2217\u22120.11 \u22120.10 \u22120.25 \u22120.13 \u22120.61 \u22120.67 \u22120.65 \u22120.75 \u22120.67 \u22120.58 6545 Shuf. Sents. DL19 \u2217\u22120.00 \u2217\u22120.01 \u22120.06 \u2217\u22120.00 \u2217\u22120.02 \u22120.13 \u22120.19 \u22120.20 \u22120.14 \u22120.14 \u22120.10 7290 ANT \u2217\u22120.00 \u2217\u22120.02 \u22120.04 \u2217\u22120.00 \u2217\u22120.02 \u22120.17 \u22120.20 \u22120.22 \u22120.22 \u22120.13 \u22120.14 4211 Shuf. 
Prepositions DL19 +0.01 \u22120.21 \u22120.02 \u22120.02 \u2217+0.02 \u22120.01 \u22120.11 \u22120.28 \u22120.31 \u22120.18 \u22120.24 9239 ANT +0.05 \u2217\u22120.11 \u22120.04 \u22120.03 +0.01 \u22120.12 \u22120.16 \u22120.30 \u22120.36 \u22120.18 \u22120.29 6186 Typos DL19 \u22120.23 \u22120.17 \u2217+0.07 \u22120.15 \u22120.18 \u22120.09 \u22120.50 \u22120.37 \u22120.27 \u22120.42 \u22120.20 8982 ANT \u22120.32 \u2217\u22120.41 \u22120.09 \u22120.27 \u22120.27 \u22120.40 \u22120.45 \u22120.38 \u22120.40 \u22120.56 \u22120.36 5551 + DocT5Query DL19 +0.34 +0.45 +0.33 +0.41 +0.15 \u22120.22 \u22120.63 \u22120.54 \u22120.60 \u22120.50 \u22120.47 9260 ANT +0.34 \u2217+0.14 +0.17 +0.32 \u2217+0.03 \u22120.42 \u22120.13 \u22120.67 \u22120.68 \u2217\u22120.10 \u22120.37 6589 + Non-Rel Sent. DL19 \u22120.03 \u22120.10 +0.34 +0.20 +0.04 +0.26 +0.11 +0.33 +0.27 +0.33 +0.39 9260 ANT +0.25 \u2217+0.04 +0.38 +0.31 +0.25 +0.08 +0.47 +0.28 +0.30 +0.34 +0.41 6346 Table 2: Results of Text Manipulation Probes (TMPs) on the TREC DL 2019 and ANTIQUE datasets. Positive scores indicate a preference for the manipulated document text; negative scores prefer the original text. Scores marked with * are not statistically significant (see Section 3.4). Probe Dataset BM25 WMD SBERT LGBM DT5Q VBERT EPIC T5-B T5-L ColBERT ANCE Samples Paraphrase MSPC 0.60 \u22170.89 0.94 \u22170.00 0.76 0.82 0.65 0.91 0.88 0.85 0.90 3421 Fluency JFLEG +0.03 \u2217\u22120.07 \u2217+0.00 \u2217\u22120.00 \u2217+0.02 +0.10 +0.22 +0.14 +0.07 +0.24 +0.17 5073 (spellchecked) \u2217+0.01 \u2217+0.05 \u2217\u22120.03 \u2217\u22120.00 \u2217\u22120.01 +0.07 +0.20 +0.14 +0.13 +0.18 +0.09 5187 Formality GYAFC \u22120.03 \u2217\u22120.03 \u22120.15 \u22120.09 \u22120.07 \u22120.05 +0.16 \u2217+0.01 +0.15 \u22120.07 \u2217+0.05 6721 entertain. +0.04 \u2217+0.01 \u2217\u22120.11 \u22120.04 \u22120.01 \u2217+0.04 +0.19 \u2217+0.11 +0.23 \u2217+0.01 +0.08 2960 family \u22120.08 \u2217\u22120.05 \u22120.18 \u22120.13 \u22120.11 \u22120.12 \u2217+0.13 \u22120.08 \u2217+0.08 \u22120.14 \u2217+0.03 3761 Simplicity WikiTurk +0.13 \u2217+0.21 +0.07 \u2217\u22120.00 +0.05 \u2217\u22120.03 \u2217\u22120.01 \u22120.08 \u22120.13 +0.01 \u2217\u22120.03 17849 Neutrality WNC +0.31 \u2217+0.34 +0.11 \u2217+0.00 +0.13 +0.11 +0.07 \u22120.00 +0.03 +0.13 \u22120.00 178252 Succinctness XSum +0.66 \u2217+0.91 +0.58 +0.18 +0.66 +0.49 +0.18 \u22120.09 +0.07 +0.33 +0.47 17938 CNN +0.37 \u2217+0.74 \u2217+0.02 \u22120.43 +0.41 +0.16 \u22120.72 \u22120.58 \u22120.54 \u22120.33 \u22120.28 7154 Daily Mail \u2217\u22120.01 +0.54 \u22120.37 \u22120.80 +0.06 \u22120.26 \u22120.93 \u22120.63 \u22120.58 \u22120.71 \u22120.56 18930 Factuality NQ: PER \u2217\u22120.00 +0.16 \u22120.02 \u22120.00 \u22120.02 \u2217\u22120.02 \u22120.07 +0.10 +0.14 +0.04 +0.04 72983 NQ: GPE \u2217\u22120.00 +0.22 +0.02 +0.00 \u2217+0.00 +0.09 +0.00 +0.27 +0.30 +0.22 +0.12 33528 NQ: LOC \u2217\u22120.03 +0.21 \u2217\u22120.12 \u2217\u22120.02 \u2217\u22120.02 \u2217+0.01 \u2217+0.02 +0.28 +0.29 +0.14 \u2217+0.11 962 NQ: NORP +0.02 +0.30 +0.01 +0.01 +0.03 +0.07 +0.07 +0.25 +0.33 +0.26 +0.10 4250 NQ: ORG +0.01 +0.34 +0.01 \u2217+0.00 \u2217+0.01 +0.07 \u22120.01 +0.33 +0.38 +0.19 +0.13 13831 Table 3: Results of Dataset Transfer Probes (DTPs). The paraphrase probe is unsigned, as it is symmetric. Positive scores indicate a preference for fluent, formal, simplified, neutral (non-biased), succinct, and factual text. Scores marked with * are not statistically significant (see Section 3.4). 
and the transfer SBERT model respond positively to the addition of DocT5Query terms. Even the DocT5Query model itself sees an additional boost, suggesting that weighting the expansion terms higher in the document may further improve the effectiveness of this model. However, the contextualized models often respond negatively to these additions. We also find that adding nonrelevant sentences to the end of relevant documents often increases the ranking score of contextualized models. This is in contrast with models like BM25, in which the scores of relevant documents decrease with the addition of non-relevant information. From the variable length MMPs, we know that this increase in score is likely not due to increasing the length alone. Such characteristics may pose a risk to ranking systems based on contextualized models, in which content sources could aim to increase their ranking simply by adding non-relevant content to their documents. Paraphrasing text can drastically change ranking scores. In Table 3, we observe high scores across most models for the paraphrase probe. For BM25, this is because the document lengths differ to a substantial degree. Contextualized models\u2014which one may expect to handle semantic equivalences like these well\u2014assign substantially different scores for paraphrases up to 94% of the time. To dig into specific stylistic differences that could explain the paraphrase discrepancies, we explore fluency, formality, simplicity, and neutrality. We find that fluency and formality have a greater effect than simplicity and neutrality. Most notably, EPIC and ColBERT prefer fluent text with scores of +0.18 to +0.24, while lexical models \f(a) MMP: Dataset=DL19, V=Rel, C=Len (b) TMP: Rem. Stops/Punct Dataset=DL19 (c) DTT: Paraphrase Dataset=MSPC 0 20 40 60 80 100 Percentile 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 Score 0 20 40 60 80 100 Percentile 0.2 0.0 0.2 0.4 0.6 0.8 Score 0 20 40 60 80 100 Percentile 0.0 0.2 0.4 0.6 0.8 1.0 Score Figure 2: Plots of scores for three representative probes when varying \u03b4 to the specified percentile in TREC DL 2019. The vertical dashed line indicates the operational point of our experiments (the median value). have low or insignificant differences. Meanwhile, EPIC and T5-L prefer formal text, while ColBERT and T5-B either prefer informal text or have no significant differences. Finally, the largest preferences observed for simple and neutral text are from BM25\u2014which are likely a consequence of reduced document lengths. Model behaviors vary considerably with succinctness. First, BM25 has a strong (+0.66) preference for the summaries in XSum, a moderate preference for summaries in CNN (+0.37), and no significant preference for Daily Mail. This suggests different standards among the various datasets, e.g., XSum (BBC) must use many of the same terms from the titles in the summaries, and provide long documents (reducing the score) that may not repeat terms from the title much. WMD also appears to be heavily affected by summaries, though in two of the three probes, there is insufficient evidence to claim significance. The preference for summaries in XSum can be seen across all models except T5-B, which very slightly favors the full text. Although most contextualized models prefer the full text for CNN and Daily Mail, VBERT prefers summaries for CNN (+0.16) while it prefers full text for Daily Mail (\u22120.26). Such discrepancies warrant exploration in future work. WMD, T5, and ColBERT are biased towards factual answers. 
From our factuality probes, we see that most models have little preference for factual passages. However, WMD, both T5 variants, and ColBERT are biased towards answers that contain factually-correct information. For T5 and ColBERT, this suggests that these models both learn some real-world information (likely in pre-training), and use this information as a signal when ranking. The larger size of T5-L appears to equip it with more knowledge, particularly about people, nationalities, and organizations. Curiously, although ColBERT exploits this information, the VBERT model (which uses the same base language model) does not appear to learn to use this information. For WMD, which doesn\u2019t have nearly the modeling capacity of T5 and ColBERT, the preference for factual information must be due to the fact that the word embeddings of the question are more similar to the word embeddings from the factual phrase than to those of the non-factual phrase. Although the contextualized language models should have the capacity to learn these trends and make similar decisions, this would be subject to such trends being present and distinguishable during fine-tuning. This suggests that using WMD over contextualized word embeddings may also improve the capacity of models to select factual answers. 4.1 Effect of \u03b4 Recall that \u03b4 defines the model-specific threshold at which a difference in ranking score is considered important. To test the importance of the selection of \u03b4, we test all probes while varying this parameter. Since the suitable values depend upon the range of scores for the particular ranker, we select \u03b4 by percentile among differences in the top 10 scoring passages of TREC DL 2019. Figure 2 provides a representative sample of these plots. We find that for low percentiles (corresponding to settings where minute changes in score are considered important), the scores and rankings of systems can sometimes be unstable (e.g., see BM25 and DT5Q in (c)). This suggests that there are variations of the score distributions close to 0. However, we remind the reader that such differences are unlikely to have impactful changes in a real ranked list. We find that by the 50th percentile of \u03b4 (i.e., the value we use for our experiments), the rankings of the systems produced by \fProbes VBERT Stdev. EPIC Stdev. MMP 3.5 3.6 TMP 11.2 17.1 DTP 9.5 8.9 Table 4: Average standard deviations (square root of average variance) of 5 VBERT and EPIC models, by probe type. ABNIRML are generally stable. In most cases, the scores are stable as well, though in some cases drifting occurs (e.g., (c)). With a large \u03b4, nearly no differences are considered important. In (c), we observe that L-GBM has no sensitivity to the paraphrases present in the probe, regardless of \u03b4. These observations validate our technique for choosing \u03b4. 4.2 Effect of Model Training We observed that identical and similar base language models can differ in the behaviors they exhibit. To gain a better understanding of the origin of these differences, we probe 5 versions of the VBERT and EPIC models, each trained with different random seeds. We calculate the standard deviations of the performance over all the probes and report the average standard deviation for each probe type in Table 4. We find that among all probe types, MMPs are the most stable across random initializations and TMPs are the least stable. 
Curiously, the Stopword / punctuation removal TMP is the least stable probe across both models, with a stdev of 0.24 for VBERT and 0.27 for EPIC. In the case of VBERT, the probe score ranged from \u22120.33 to +0.31, highlighting that unexpected qualities can appear in models simply due to random variations in the training process. This is despite the fact that this probe is highly robust to the cutoff threshold on individual models (as seen in Figure 2(b)). Another probe with particularly high variance are the succinctness probe for VBERT using the CNN dataset, with a stdev of 0.23, and can either learn to prefer succinct (+0.15) or elaborative (\u22120.42) text, again due to the random initialization. These findings highlight that some biases can be introduced in the model training process randomly, rather than as a result of the pre-training process or model architecture. 5 Related Work Pretrained contextualized language models are neural networks that are initially trained on language modeling objectives and are later fine-tuned on task-specific objectives (Peters et al., 2018). Well-known models include ELMo (Peters et al., 2018), BERT (Devlin et al., 2019), and T5 (Raffel et al., 2020). These models can effectively transfer signals to the task of ad-hoc retrieval, either by using the model directly (i.e., vanilla or mono models) (Nogueira and Cho, 2019) or by using the outputs as features into a larger model (MacAvaney et al., 2019a). There has been a multitude of work in this area; we refer the readers to Lin et al. (2020) for a comprehensive survey on these techniques. We shed light on the mechanisms, strengths and weaknesses of this burgeoning body of work. Diagnostic datasets, proposed by Rennings et al. (2019), reformulate traditional ranking axioms\u2014e.g., that documents with a higher term frequency should receive a higher ranking score (Fang et al., 2004)\u2014as empirical tests for analysing ranking models. Rennings et al. studied neural ranking architectures that predate the rise of contextualized language models for ranking, and focused on just four axioms. C\u00e2mara and Hauff (2020) extended this work by adding five more previously-proposed ranking axioms (e.g., term proximity (Tao and Zhai, 2007), and word semantics (Fang and Zhai, 2006)) and evaluating on a distilled BERT model. They found that the axioms are inadequate to explain the ranking effectiveness of their model. V\u00f6lske et al. (2021) examine the extent to which these axioms, when acting in concert, explain ranking model decisions. Unlike these prior lines of work, we propose new probes that shed light onto possible sources of effectiveness, and test against current leading neural ranking architectures. Although some insights about the effectiveness of contextualized language models for ranking have been gained using existing datasets (Dai and Callan, 2019b) and indirectly through various model architectures (Nogueira et al., 2019; Dai and Callan, 2019a; MacAvaney et al., 2020, 2019a; Hofst\u00e4tter et al., 2020; Khattab and Zaharia, 2020), they only provide circumstantial evidence. For instance, several works show how contextualized embedding similarity can be effective, but this does not imply that vanilla models utilize these signals for ranking. Rather than proposing \fnew ranking models, in this work we analyze the effectiveness of existing models using controlled diagnostic probes, which allow us to gain insights into the particular behaviors and preferences of the ranking models. 
Outside of the work in IR, others have developed techniques for investigating the behavior of contextualized language models in general. Although probing techniques (Tenney et al., 2019) and attention analysis (Serrano and Smith, 2019) can be beneficial for understanding model capabilities, these techniques cannot help us characterize and quantify the behaviors of neural ranking models. CheckList (Ribeiro et al., 2020) and other challenge set techniques (McCoy et al., 2019) differ conceptually from our goals; we aim to characterize the behaviors to understand the qualities of ranking models, rather than provide additional measures of model quality. 6" + }, + { + "url": "http://arxiv.org/abs/2010.05987v1", + "title": "SLEDGE-Z: A Zero-Shot Baseline for COVID-19 Literature Search", + "abstract": "With worldwide concerns surrounding the Severe Acute Respiratory Syndrome\nCoronavirus 2 (SARS-CoV-2), there is a rapidly growing body of scientific\nliterature on the virus. Clinicians, researchers, and policy-makers need to be\nable to search these articles effectively. In this work, we present a zero-shot\nranking algorithm that adapts to COVID-related scientific literature. Our\napproach filters training data from another collection down to medical-related\nqueries, uses a neural re-ranking model pre-trained on scientific text\n(SciBERT), and filters the target document collection. This approach ranks top\namong zero-shot methods on the TREC COVID Round 1 leaderboard, and exhibits a\nP@5 of 0.80 and an nDCG@10 of 0.68 when evaluated on both Round 1 and 2\njudgments. Despite not relying on TREC-COVID data, our method outperforms\nmodels that do. As one of the first search methods to thoroughly evaluate\nCOVID-19 search, we hope that this serves as a strong baseline and helps in the\nglobal crisis.", + "authors": "Sean MacAvaney, Arman Cohan, Nazli Goharian", + "published": "2020-10-12", + "updated": "2020-10-12", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.IR" + ], + "main_content": "Introduction The emergence of the Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) prompted a worldwide research response. In the \ufb01rst 120 days of 2020, researchers published over 10,000 articles related to SARS-CoV-2 or COVID-19. Together with articles about similar viruses researched before 2020, the body of research approaches 60,000 articles. Such a large body of research results in a considerable burden for those seeking information about various facets of the virus, including researchers, clinicians, and policy-makers. To help improve COVID-19 search, we introduce SLEDGE-Z: a simple yet effective zero-shot baseline for coronavirus Scienti\ufb01c knowLEDGE \u2217This work was done while at an internship at the Allen Institute for AI. What are the initial symptoms of COVID-19? COVID-19 subset Health Filter Date Filter Standard ranking pipeline Model Training Tune for Relevance Ranking Source Collection BM25 Re-Ranker SciBERT CORD-19 Query MS-MARCO Med-MARCO Ranked Results Figure 1: Overview of SLEDGE-Z. search. SLEDGE-Z adapts the successful BERTbased (Devlin et al., 2020) re-ranking model (Vanilla BERT, MacAvaney et al. (2019)) for COVID-19 search with three simple techniques. First, we propose a training data \ufb01ltering technique to help the ranking model learn relevance signals typical in medical text. The training data we use comes entirely from another dataset (MSMARCO, Campos et al. (2016)), resulting in our model being zero-shot. 
Since MS-MARCO is a large collection of real user queries (over 800,000), it allows us to \ufb01lter aggressively and still have adequate training data. Second, we replace the general contextualized language model BERT with one pretrained on scienti\ufb01c literature (SciBERT, Beltagy et al. (2019)). This pre-training prepares the model for the type of language typically seen in scienti\ufb01c articles. Since the document collection (CORD-19, Wang et al. (2020)) contains articles about prior viruses, we \ufb01lter out articles published before 2020 to eliminate less pertinent articles. An overview of this process is shown in Figure 1. We show that each of the techniques mentioned above positively impacts the ranking effectiveness of SLEDGE-Z through an ablation analysis. Our zero-shot approach performs comparably to (or outperforms) top-scoring submissions to the TRECarXiv:2010.05987v1 [cs.CL] 12 Oct 2020 \fCOVID document ranking shared task (Roberts et al., 2020), a new testbed for evaluating of search methods for COVID-19. SLEDGE-Z tops the Round 1 leaderboard in the zero-shot setting, which is important in low-resource situations. Overall, our method establishes a strong performance for COVID-19 literature search. By releasing our models and code, we hope that it can help in the current global COVID-19 crisis.1 2 Related Work Ad-hoc document retrieval (of both scienti\ufb01c articles and general domain documents) has been long-studied (Lalmas and Tombros, 2007; Hersh and Voorhees, 2009; Lin, 2008; Medlar et al., 2016; Sorkhei et al., 2017; Huang et al., 2019; Hofst\u00a8 atter et al., 2020; Nogueira et al., 2020b). Most recent work for scienti\ufb01c literature retrieval has focused on tasks such as collaborative \ufb01ltering (Chen and Lee, 2018), citation recommendation (Nogueira et al., 2020a), and clinical decision support (Soldaini et al., 2017). Pre-trained neural language models (such as BERT (Devlin et al., 2020)) have recently shown to be effective when \ufb01ne-tuned for ad-hoc ranking (Nogueira and Cho, 2019; Dai and Callan, 2019; MacAvaney et al., 2019). These models also facilitate relevance signal transfer; Yilmaz et al. (2019) demonstrate that the relevance signals learned from BERT can transfer across collections (reducing the chance of over\ufb01tting a particular collection). Here, we use relevance signal transfer from an opendomain question answering dataset to the collection of COVID-19 scienti\ufb01c literature. Others have investigated COVID-19 document ranking. Zhang et al. (2020) chronicled their efforts to build a search engine for COVID-19 articles, using a variety of available ranking techniques, such as T5 (Raffel et al., 2019). In this work, we \ufb01nd that our approach outperforms this system in terms of ranking effectiveness. Contemporaneously with our work, Das et al. (2020) demonstrate how document clustering and summarization can be effective for COVID-19 retrieval. This paper extends our shared task submissions in Round 1 (MacAvaney et al., 2020). We note that the TREC COVID task proceeded for a total of 5 rounds, with various techniques emerging, such as passage aggregation (Li et al., 2020; Nguyen et al., 2020), and ensemble 1Code and models available at: https://github.com/ Georgetown-IR-Lab/covid-neural-ir. methods (Bendersky et al., 2020). 3 SLEDGE-Z: Zero-Shot COVID-19 Search To build a ranking model for COVID search, we modify the standard zero-shot Vanilla BERT document re-ranking pipeline (Yilmaz et al., 2019; MacAvaney et al., 2019). 
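The overall shape of such a two-stage pipeline can be sketched as follows; first_stage_search and neural_score are hypothetical stand-ins for the BM25 retrieval and the SciBERT re-ranker described below, and the candidate depth is an arbitrary illustrative default.

```python
# Sketch of a generic retrieve-then-re-rank pipeline (hypothetical functions).

def rerank(query, first_stage_search, neural_score, depth=500):
    """first_stage_search(query, k) -> list of (doc_id, doc_text) candidates
    (e.g., a BM25 ranker); neural_score(query, doc_text) -> float (e.g., a
    SciBERT cross-encoder). Returns the candidates re-ordered by the neural
    score, which is the only change made to the first-stage ranking."""
    candidates = first_stage_search(query, k=depth)
    scored = [(doc_id, neural_score(query, text)) for doc_id, text in candidates]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)
```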
We \ufb01nd that while these modi\ufb01cations are simple, they are effective for maximizing ranking performance. We note that this process neither requires COVID relevance training data nor involves a priori inspection of the queries and their characteristics. Thus, we consider our method zero-shot. To train in a zero-shot setting, we employ a large dataset of general-domain natural language question and answer paragraphs: MS-MARCO (Campos et al., 2016). However, na\u00a8 \u0131ve domain transfer is not optimal since most questions in the dataset are not medical-related, causing a domain mismatch between the training and evaluation data. To overcome this challenge, we apply a heuristic to \ufb01lter the collection to only medical-related questions. The \ufb01lter removes questions that do not contain terms appearing in the MedSyn (Yates and Goharian, 2013), a lexicon of layperson and expert terminology for various medical conditions. We manually remove several common terms from the lexicon that commonly introduce queries that are not medical-related. For example, MedSyn includes the term gas (referring to the medical concept of \ufb02atulence in North American English), commonly also refers to gasoline or natural gas. See Appendix A.1 for a complete list of excluded MedSyn terms. Note that we made these decisions without considering COVID-19 speci\ufb01cally\u2014 only a broad relation to the medical domain. MSMARCO originally consists of 809K questions. After \ufb01ltering, 79K of the original questions remain (9.7%). We refer to this subset of MS-MARCO as Med-MARCO. From a random sample of 100 queries from Med-MARCO, 78 were judged by the authors as medical-related, suggesting the \ufb01lter has reasonable precision. Examples questions from this process include causes of peritoneal cancer prognosis and what is squalene anthrax sleep apnea. We make a list of the query IDs corresponding to Med-MARCO available,2 as well as additional 2https://github.com/Georgetown-IR-Lab/ covid-neural-ir/blob/master/ med-msmarco-train.txt \fexamples of \ufb01ltered queries (see Appendix A.2). Second, we replace the general-language BERT model with a variant tuned on scienti\ufb01c literature (including medical literature). Speci\ufb01cally, we use SciBERT (Beltagy et al., 2019), which has an identical structure as BERT, but was trained on a multidomain corpus of scienti\ufb01c publications. It also uses a WordPiece lexicon based on the training data, allowing the model to better account for subwords commonly found in scienti\ufb01c text. During model training, we employ the pairwise cross-entropy loss function from Nogueira and Cho (2019). Relevant and non-relevant documents are sampled in sequence from the of\ufb01cial MS-MARCO training pair list (\ufb01ltered down to Med-MARCO queries). Third, we apply a \ufb01lter to the document collection that removes any articles published before January 1, 2020. This \ufb01lter aims to improve the retrieval system\u2019s precision by eliminating articles that may discuss other topics. The date was chosen because little was known about COVID-19 prior to 2020, and some documents do not include a full publication date (only a year), making this \ufb01lter simple to apply. In real-life search engines, date \ufb01ltering can often be applied at the discretion of the user. 4 Experimental setup We now explore the ranking effectiveness of our approach. We evaluate the performance of SLEDGEZ using Round 1 and 2. 
At the time of writing, the only training data available for the task was the Round 1 data. of the TREC-COVID Information Retrieval Benchmark (Roberts et al., 2020).3 TREC-COVID uses the CORD-19 document collection (Wang et al., 2020) (2020-05-01 version, 59,943 articles), with a set of 35 topics related to COVID-19. These topics include natural questions such as: what is the origin of COVID-19 and how does the coronavirus respond to changes in the weather. The top articles of participating systems in each round were judged by expert assessors, who rated each article as non-relevant (0), partiallyrelevant (1), or fully-relevant (2) to the topic. In total, 20,728 relevance judgments were collected 3Round 2 uses residual collection evaluation, meaning that all documents judged in Round 1 are disregarded. Although this is an important setting for building up a dataset and allows for approaches like manual relevance feedback, we feel that this setting does not mimic an actual search engine, especially in the zero-shot setting. Thus, we evaluate on the concatenation of Round 1 and 2 settings and mark the systems that use Round 1 judgments for training or tuning of their system. (avg. 592 per topic), with 74% non-relevant, 12% partially relevant, and 14% fully-relevant. These rates remained nearly constant between rounds 1 and 2. We use normalized Discounted Cumulative Gain with a cutoff of 10 (nDCG@10), Precision at 5 of partially and fully-relevant documents (P@5), and Precision at 5 of only fully relevant documents (P@5 (F)). Both nDCG@10 and P@5 are of\ufb01cial task metrics; we include the P@5 \ufb01ltered to only fully-relevance documents because it exposed some interesting trends in our analysis. We also report the percentage of the top 10 documents for each query that have relevance judgments (J@10). In an additional evaluation, we measure the performance using only judged documents to ensure that unjudged documents do not impact our \ufb01ndings. We used trec eval4 for all metrics. These measures represent a precision-focused evaluation; since reranking methods like ours focus on improving precision, we leave recall-oriented evaluations to future work. Our initial ranking is conducted using BM25 with default settings over the full document text to adhere to the zero-shot setting. Re-ranking is conducted over the abstracts only, avoiding the need to perform score aggregation (since BERT models are limited in the document length). We utilize only the natural-language question (ignoring the keyword query and extended narrative). We conduct an ablation that compares SLEDGE-Z to versions using BERT (instead of SciBERT), and the full MS-MARCO dataset (MSM) (rather than the Med-MARCO subset (MedM)). We compare with several baselines under the same evaluation settings. BM25: the initial BM25 ranking. ConvKNRM: The convolutional KNRM model (Dai et al., 2018), trained on MSMARCO data. CEDR KNRM: The KNRM model, augmented with contextualized embeddings (MacAvaney et al., 2019), trained on MS-MARCO data. We use the bert-base-uncased model for the contextualized embeddings. Seq2seq T5: The text-to-text-transformer (T5) model (Raffel et al., 2019), tuned for ranking by predicting true or false as the next term in a sequence consisting of the query and document (Nogueira et al., 2020c). 
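In the spirit of this last baseline, a rough sketch of the true/false scoring recipe with an off-the-shelf T5 checkpoint is shown below; note that the actual baseline is first fine-tuned on MS-MARCO (or Med-MARCO) pairs, so the untuned t5-base weights and the exact prompt wording here are assumptions for illustration.

```python
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

tok = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base").eval()

def t5_relevance_score(query: str, doc: str) -> float:
    """Score a query-document pair by the probability mass that the model
    places on generating 'true' (vs. 'false') as the next token."""
    text = f"Query: {query} Document: {doc} Relevant:"
    enc = tok(text, return_tensors="pt", truncation=True, max_length=512)
    # Decode a single step starting from the decoder start token and compare
    # the logits of the 'true' and 'false' target tokens.
    decoder_input = torch.tensor([[model.config.decoder_start_token_id]])
    with torch.no_grad():
        logits = model(**enc, decoder_input_ids=decoder_input).logits[0, 0]
    true_id = tok.encode("true")[0]
    false_id = tok.encode("false")[0]
    pair = torch.softmax(logits[[true_id, false_id]], dim=0)
    return pair[0].item()
```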
4https://github.com/usnistgov/trec_eval \fIncluding Unjudged Judged Only Model Training nDCG@10 P@5 P@5 (F) J@10 nDCG@10 P@5 P@5 (F) BM25 * 0.368 * 0.469 * 0.331 75% * 0.436 * 0.520 * 0.383 + BERT MSM * 0.547 * 0.617 * 0.480 83% * 0.617 * 0.703 * 0.549 + BERT MedM 0.625 * 0.697 * 0.571 92% 0.657 * 0.737 * 0.606 + SciBERT MSM 0.667 0.754 0.611 88% 0.724 * 0.789 0.646 + SciBERT (SLEDGE-Z) MedM 0.681 0.800 0.663 90% 0.719 0.846 0.697 + ConvKNRM MSM 0.536 0.617 0.491 86% 0.580 0.645 0.508 + ConvKNRM MedM 0.565 0.668 0.525 86% 0.621 0.714 0.565 + CEDR-KNRM MSM 0.514 0.617 0.468 86% 0.524 0.628 0.474 + CEDR-KNRM MedM 0.619 0.714 0.560 89% 0.649 0.742 0.582 + Seq2seq T5 MSM 0.656 0.737 0.634 90% 0.685 0.765 0.651 + Seq2seq T5 MedM 0.626 0.714 0.594 86% 0.678 0.754 0.628 Fusion1 0.519 0.640 0.457 94% 0.534 0.640 0.457 Fusion2 0.601 0.737 0.565 96% 0.605 0.737 0.565 Table 1: Ablation results and comparison of our approach and other zero-shot baselines on TREC-COVID Rounds 1 and 2. The top results are shown in bold. SciBERT with MedM (SLEDGE-Z) signi\ufb01cantly outperforms values in the top (ablation) section marked with * (p < 0.05, paired t-test, Bonferroni correction). Fusion: a reciprocal rank fusion method (Cormack et al., 2009) of BM25 over the abstract, full text, and individual paragraphs. Fusion1 uses a concatenation of the keywords and question, and Fusion2 uses the entity extraction technique from the Round 1 udel submission.5 Our work utilizes a variety of existing opensource tools: OpenNIR (MacAvaney, 2020), Anserini (Yang et al., 2017), and the HuggingFace Transformers library (Wolf et al., 2019). We utilize a held-out subset of 200 queries from the MSMARCO training set as a validation set for the sole purpose of picking the optimal training epoch. Model hyper-parameters were chosen from values in prior work and can be found in Appendix A.4, along with information about the hardware used. The Vanilla BERT and SciBERT models take approximately 3 hours to train/validate, and inference on TREC-COVID takes approximately 15 minutes on modern GPUs. The BERT model has 157M parameters, and the SciBERT model has 158M parameters. 5 Results Ranking effectiveness is presented in Table 1. We \ufb01rst compare the ablations of our approach (top section). We note that SciBERT signi\ufb01cantly (p < 0.05, paired t-test, Boneferroni correction) outperforms BM25 and BERT trained on MSM across all metrics. There is a less dramatic jump between BERT MSM and BERT MedM, demonstrating the importance of \ufb01ltering the training data 5https://github.com/castorini/anserini/ blob/master/docs/experiments-covid.md properly. This is echoed between SciBERT MSM and SciBERT MedM, though the difference is only signi\ufb01cant for P@5 when only considering the judged documents. These results demonstrate the importance of both pre-training on appropriate data and \ufb01ne-tuning using a proper subset of the larger data. While both yield improvements (that can be additive), the pre-training objective appears to be more impactful, based on the overall better scores of SciBERT. Compared to baseline systems (bottom section), we observe that SLEDGE-Z offers superior effectiveness. Speci\ufb01cally, we see that ConvKNRM, CEDR-KNRM, and Seq2seq T5 all improve upon the initial BM25 ranking. Training on MedMARCO (rather than the full MS-MARCO) also improves each of the baselines, except, curiously, Seq2seq T5. This model may bene\ufb01t from the larger amount of training data the full MS-MARCO dataset offers. 
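For reference, the reciprocal rank fusion used by the Fusion baselines can be sketched in a few lines; k=60 is the smoothing constant commonly used with this method (Cormack et al., 2009), and the toy ranked lists are illustrative only.

```python
from collections import defaultdict

def reciprocal_rank_fusion(rankings, k=60):
    """rankings: list of ranked lists of doc_ids (one list per run/field).
    Returns doc_ids sorted by the fused score sum(1 / (k + rank))."""
    scores = defaultdict(float)
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Example: fusing BM25 runs over abstract, full text, and paragraphs.
fused = reciprocal_rank_fusion([
    ["d3", "d1", "d2"],   # abstract run
    ["d1", "d3", "d4"],   # full-text run
    ["d1", "d2", "d3"],   # paragraph run
])
print(fused)  # documents retrieved consistently (d1, d3) rise to the top
```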
Finally, both fusion methods outperform the base BM25 model. However, we note that these models utilize two \ufb01elds available for each query: the keyword-based query and the full natural-language question text\u2014a luxury not available in practical search environments. (Recall that SLEDGE-Z and the other baselines in Table 1 only use the natural-language query.) We now compare our approach with the topperforming submissions to the TREC COVID shared task (many of which are not zero-shot methods). Full participating system descriptions are provided in Appendix A.3. We note that these experimental settings for these runs differ from our main experiments. For instance, mpiid5 run3 (Li \fIncluding Unjudged Judged Only Model Training nDCG@10 P@5 P@5 (F) J@10 nDCG@10 P@5 P@5 (F) SLEDGE-Z (ours) MedM 0.681 0.800 0.663 90% 0.719 0.846 0.697 covidex.t5\u2020 MSM, MedM 0.618 0.731 0.560 94% 0.643 0.731 0.560 with date \ufb01lter 0.652 0.760 0.600 92% 0.680 0.777 0.611 SparseDenseSciBert\u2020 MedM 0.672 0.760 0.646 96% 0.692 0.760 0.646 with date \ufb01lter 0.699 0.805 0.691 94% 0.724 0.811 0.691 mpiid5 run3\u2020 MSM, Rnd1 0.684 0.851 0.640 93% 0.719 0.851 0.640 with date \ufb01lter 0.679 0.834 0.657 90% 0.722 0.834 0.657 Table 2: TREC COVID Round 1 and 2 comparison between SLEDGE-Z and other top of\ufb01cial Round 2 submissions. We apply the date \ufb01lter for a more complete comparison. Note that experimental differences exist between our system and these submissions, including the use of multiple topic \ufb01elds and the utilization of Round 1 training data for training or tuning. The top result is marked in bold. System nDCG@10 P@5 P@5 (F) SLEDGE-Z (ours) 0.641 0.747 0.633 sab20.1.meta.docs 0.608 0.780 0.487 IRIT marked base 0.588 0.720 0.540 CSIROmedNIR 0.588 0.660 0.587 Table 3: TREC-COVID Round 1 leaderboard (automatic systems). SLEDGE-Z outperforms the highestscoring run in terms of nDCG@10 and P@5 (F). et al., 2020) and SparseDenseSciBERT use relevant information from Round 1 as training data, and covidex.t5 uses combined keyword query and natural-language questions. Therefore, these performance metrics are not directly comparable to our zero-shot runs. Despite this, SLEDGE-Z still achieves competitive performance compared to these models. For instance, it consistently scores comparably or higher than covidex.t5 (includes a more powerful language model, a more effective initial ranking model, and multiple topic \ufb01elds) and SparseDenseSciBert (which uses neural approaches for the initial ranking stage). Our method even performs comparably to the mpiid5.run3 model, which was trained directly on Round 1 judgments. Interestingly, we observe that our simple baseline approach of re-ranking using T5 strictly with the natural-language question against the paper title and abstract (Seq2seq T5 in Table 1) is more effective than the more involved approach employed by covidex.t5. When we apply the same date \ufb01ltering to the of\ufb01cial runs, we observe that the differences narrow. We also present SLEDGE-Z topping the Round 1 leaderboard in Table 3. We observe again that our model excels at \ufb01nding highly-relevant documents. To gain a better understanding of the impact of \ufb01ltering the document collection to only articles published on or after January 1, 2020, we \ufb01rst compare the performance of SLEDGE-Z with and without the \ufb01lter. Disregarding unjudged documents, it has an nDCG@10 of 0.668 (\u22120.051), P@5 of 0.777 (\u22120.069) and P@5 (F) of 0.589 (\u22120.108). 
All these differences are statistically signi\ufb01cant. By far the largest reduction is on fully-relevant P@5, meaning that it can be more dif\ufb01cult to \ufb01nd highly relevant documents when considering the full document collection. We observed similar trends for BM25, with and without the 2020 \ufb01lter. These trends also align with observations we made from the judgments themselves; we \ufb01nd that only 16% of judged documents from prior to 2020 were considered relevant (with only 5% fully relevant). Meanwhile, 32% of judged documents after 2020 were considered relevant (19% fully relevant). 6" + }, + { + "url": "http://arxiv.org/abs/2005.02365v3", + "title": "SLEDGE: A Simple Yet Effective Baseline for COVID-19 Scientific Knowledge Search", + "abstract": "With worldwide concerns surrounding the Severe Acute Respiratory Syndrome\nCoronavirus 2 (SARS-CoV-2), there is a rapidly growing body of literature on\nthe virus. Clinicians, researchers, and policy-makers need a way to effectively\nsearch these articles. In this work, we present a search system called SLEDGE,\nwhich utilizes SciBERT to effectively re-rank articles. We train the model on a\ngeneral-domain answer ranking dataset, and transfer the relevance signals to\nSARS-CoV-2 for evaluation. We observe SLEDGE's effectiveness as a strong\nbaseline on the TREC-COVID challenge (topping the learderboard with an nDCG@10\nof 0.6844). Insights provided by a detailed analysis provide some potential\nfuture directions to explore, including the importance of filtering by date and\nthe potential of neural methods that rely more heavily on count signals. We\nrelease the code to facilitate future work on this critical task at\nhttps://github.com/Georgetown-IR-Lab/covid-neural-ir", + "authors": "Sean MacAvaney, Arman Cohan, Nazli Goharian", + "published": "2020-05-05", + "updated": "2020-08-03", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.CL" + ], + "main_content": "Introduction The emergence of the Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) prompted a worldwide research response. In the \ufb01rst 100 days of 2020, over 5,000 research articles were published related to SARS-CoV-2 or COVID-19. Together with articles about similar viruses researched before 2020, the body of research exceeds 50,000 articles. This results in a considerable burden for those seeking information about various facets of the virus, including researchers, clinicians, and policy-makers. In the interest of establishing a strong baseline for retrieving scienti\ufb01c literature related to COVID19, we introduce SLEDGE: a simple yet effective baseline for coronavirus Scienti\ufb01c knowLEDGE 1https://github.com/Georgetown-IR-Lab/ covid-neural-ir search. Our baseline utilizes a combination of state-of-the-art techniques for neural information retrieval. Recent work in neural information retrieval shows the effectiveness of pretrained language models in document ranking. (MacAvaney et al., 2019; Nogueira and Cho, 2019; Hofst\u00a8 atter et al., 2020; Dai and Callan, 2019b; Nogueira et al., 2020b). Building upon success of these models, SLEDGE is comprised of a re-ranker based on SciBERT (Beltagy et al., 2019), a pretrained language model optimized for scienti\ufb01c text. 
Since at the time of writing there is no available training data for COVID-19 related search, we additionally use a domain transfer approach by training SLEDGE on MS-MARCO (Campos et al., 2016), a generaldomain passage ranking dataset, and apply it to COVID-19 literature search in zero-shot setting. We show that SLEDGE achieves strong results in the task of scienti\ufb01c literature search related to COVID-19. In particular, SLEDGE tops the leaderboard in Round 1 of the TREC-COVID Information Retrieval shared task (Roberts et al., 2020),2 a new test bed for evaluating effectiveness of search methods for COVID-19. We also provide an analysis into the hyperparameter tuning conducted, the effect of various query and document \ufb01elds, and possible shortcomings of the approach. Insights from the analysis highlight the importance of a date \ufb01lter for improving precision, and the possible bene\ufb01t of utilizing models that include count-based signals in future work. We hope that better natural language processing and search tools can contribute to the \ufb01ght against the current global crisis. 2https://ir.nist.gov/covidSubmit/ arXiv:2005.02365v3 [cs.IR] 3 Aug 2020 \f2 Related Work Retrieval of scienti\ufb01c literature has been longstudied (Lawrence et al., 1999; Lalmas and Tombros, 2007; Hersh and Voorhees, 2009; Lin, 2008; Medlar et al., 2016; Sorkhei et al., 2017; Huang et al., 2019). Most recent work for scienti\ufb01c literature retrieval has focused on tasks such as collaborative \ufb01ltering (Chen and Lee, 2018), citation recommendation (Nogueira et al., 2020a), and clinical decision support (Soldaini et al., 2017), rather than ad-hoc retrieval. Pre-trained neural language models (such as BERT (Devlin et al., 2019)) have recently shown to be effective when \ufb01ne-tuned for ad-hoc ranking. Nogueira and Cho (2019) demonstrate that these networks can be \ufb01ne-tuned for passage ranking tasks. Others later observed effectiveness at document ranking tasks, showing that these models can handle natural-language questions better than prior approaches (Dai and Callan, 2019b) and that they can be incorporated into prior neutral ranking techniques (MacAvaney et al., 2019). Although computationally expensive, researches have shown that this can be mitigated to an extent by employing more ef\ufb01cient modeling choices (Hofst\u00a8 atter et al., 2020; MacAvaney et al., 2020c), caching intermediate representations (Khattab and Zaharia, 2020; MacAvaney et al., 2020b; Gao et al., 2020), or by modifying the index with new terms or weights (Nogueira et al., 2019; Dai and Callan, 2019a; Nogueira, 2019). These models also facilitate effective relevance signal transfer; Yilmaz et al. (2019) demonstrate that the relevance signals learned from BERT can easily transfer across collections (reducing the chance of over\ufb01tting a particular collection). In this work, we utilize relevance signal transfer from an open-domain question answering dataset to the collection of COVID-19 scienti\ufb01c literature. In terms of biomedical-related ranking, MacAvaney et al. (2020a) observed the importance of using a domain-tuned language model (SciBERT (Beltagy et al., 2019)) when ranking in the biomedical domain (albeit working with clinical text rather than scienti\ufb01c literature). Some work already investigates document ranking and Question Answering (QA) about COVID-19. Zhang et al. 
(2020) chronicled their efforts of building and deploying a search engine for COVID-19 articles, utilizing a variety of available tools ranking techniques. In this work, we \ufb01nd that our approach outperforms this system in terms of ranking effectiveness. Tang et al. (2020) provide a QA dataset consisting of 124 COVID-19 question-answer pairs. 3 SLEDGE This section describes the details of SLEDGE, our method for searching scienti\ufb01c literature related to COVID-19. We utilize a standard two-stage reranking pipeline for retrieving and ranking COVID19 articles. The articles are curated from the CORD19 dataset (Wang et al., 2020) and provided by the task organizers. The \ufb01rst stage employs an inexpensive ranking model (namely, BM25) to generate a high-recall collection of candidate documents. The second stage re-ranks the candidate documents using an expensive but high-precision SciBERT-based (Beltagy et al., 2019) neural ranking model. 3.1 First-Stage Retrieval We \ufb01rst index the document collection using standard pre-processing methods: English stopword removal and Porter stemming. For the text, we use a concatenation of the title, abstract, and fulltext paragraphs and fulltext headings. The fulltext gives more opportunities for the \ufb01rst-stage ranker to match potentially relevant documents than the title alone would provide. When both the PDF and PubMed XML versions are available, we use the text extracted from the PubMed XML because it is generally cleaner. We then query the index for each topic with BM25. In this system, we used a \ufb01xed re-ranking threshold of 500; thus only the top 500 BM25 results are retrieved. In our experiments, we found that there was little recall gained beyond 500. 3.2 Neural Re-Ranking To best capture the domain-speci\ufb01c language related to scienti\ufb01c text we use the SciBERT (Beltagy et al., 2019) pretrained language model as the basis of a second-stage supervised re-ranker. This model is akin to the Vanilla BERT ranker from (MacAvaney et al., 2019), but utilizing the SciBERT model base (which is trained on scienti\ufb01c literature) instead. The query and document text are encoded sequentially, and relevance prediction is calculated based on the [CLS] token\u2019s representation (which was used for next sentence prediction during pre-training). Documents longer than the maximum length imposed by the positional embed\fdings are split into arbitrary equal-sized passages. We refer the reader to (MacAvaney et al., 2019) for more details about Vanilla BERT. At the time of writing there is no training data available for the COVID-19 related search and collecting such data is expensive. To mitigate this challenge, we utilize a domain transfer approach and apply the learned model to the new domain in a zero-shot setting. This approach also has the advantage of avoiding over\ufb01tting on the target dataset. Speci\ufb01cally, we train our model using the standard training sequence of the MS-MARCO passage ranking dataset (Campos et al., 2016). This dataset consists of over 800,000 query-document pairs in the general domain with a shallow labeling scheme (typically fewer than two positive relevance labels per query; non-relevance assumed from unlabeled passages). 
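As a rough sketch of the scoring step of such a SciBERT-based re-ranker (using the HuggingFace Transformers library, one of the open-source tools listed in the next section), the snippet below encodes a question together with an article title/abstract and reads a single relevance score from a classification head. The head here is freshly initialized and merely stands in for the [CLS]-based relevance prediction described above, so it would still require the MS-MARCO fine-tuning just described.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "allenai/scibert_scivocab_uncased"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
# num_labels=1 gives a single regression-style relevance output; this head is
# randomly initialized and must be fine-tuned (e.g., on Med-MARCO pairs).
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME, num_labels=1).eval()

def score(question: str, title_and_abstract: str) -> float:
    enc = tokenizer(question, title_and_abstract, truncation=True,
                    max_length=512, return_tensors="pt")
    with torch.no_grad():
        return model(**enc).logits.squeeze().item()

print(score("what is the origin of COVID-19",
            "We review evidence on the zoonotic origins of SARS-CoV-2..."))
```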
During model training, we employ the following cross-entropy loss function from Nogueira and Cho (2019): L(q, d+, d\u2212) = \u2212log(R(q, d+)) \u2212log(R(q, d\u2212)) (1) where q is the query, d+ and d\u2212are the relevant and non-relevant training documents, and R(q, d) is the relevance score. 4 Experimental setup We now explore the ranking effectiveness of our approach. We evaluate the performance of SLEDGE using the TREC-COVID Information Retrieval Challenge dataset (round 1) (Roberts et al., 2020). TREC-COVID uses the CORD-19 document collection (Wang et al., 2020) (2020-04-10 version, 51,045 articles), with a set of 30 topics related to COVID-19. These topics include natural queries such as: Coronavirus response to weather changes and Coronavirus social distancing impact. The top articles of participating systems (56 teams) were judged by expert assessors, who rated each article non-relevant (0), partially-relevant (1), or fullyrelevant (2) to the topic. In total, 8,691 relevance judgments were collected, with 74% non-relevant, 13% partially relevant, and 14% fully-relevant. Since the relevance judgments in this dataset are shallow (avg. 290 per query), we measure effectiveness of each system using normalized Discounted Cumulative Gain with a cutoff of 10 (nDCG@10), Precision at 5 of partially and fully-relevant documents (P@5), and Precision at 5 of only fully relevant documents (P@5 (Rel.)). Both nDCG@10 and P@5 are of\ufb01cial task metrics; we include the P@5 \ufb01ltered to only fully-relevance documents because it exposed some interesting trends in our analysis. Since not all submissions contributed to the judgment pool, we also report the percentage of the top 5 documents for each query that have relevance judgments (judged@5). These settings represent a high-precision evaluation; we leave it to future work to evaluate techniques for maximizing system recall, which may require special considerations (Grossman et al., 2015). Our work utilizes a variety of existing opensource tools, including OpenNIR (MacAvaney, 2020), Anserini (Yang et al., 2017), and the HuggingFace Transformers library (Wolf et al., 2019). Our experiments were conducted with a Quadro RTX 8000 GPU, and a learning rate of 2 \u00d7 10\u22125. Note on manual vs automatic runs TRECCOVID makes the distinction between manual and automatic runs. We adhere to the broad de\ufb01nition of manual runs, as speci\ufb01ed by the task guidelines: \u201cAutomatic construction is when there is no human involvement of any sort in the query construction process; manual construction is everything else... If you make any change to your retrieval system based on the content of the TREC-COVID topics (say add words to a dictionary or modify a routine after looking at retrieved results), then your runs are manual runs.\u201d3 In short, making any change to the system on the basis of observations of the query and/or results qualify as a manual run. 5 Results In this section we discuss our results in two evaluation settings. In the \ufb01rst setting we apply light hyperparmeter tuning on the pipeline which still counts as a manual run as discussed in \u00a74. In the second setting we do not perform any tuning of any sort and thus this setting is an automatic run. 5.1 Ranking with light hyperparmeter tuning Recall that the \ufb01rst stage of SLEDGE is based on an initial BM25 ranker, topics in the TREC Covid dataset include 3 different \ufb01elds: query, question and narrative, and the documents have title, abstract and full-text. 
Choices of the BM25 parameters and which \ufb01elds to include in the pipeline can affect the \ufb01nal performance. Therefore, in the \ufb01rst setting, we lightly tune these hyperparmeters using minimal human judgments on a subset of the topics. 3https://ir.nist.gov/covidSubmit/round1. html \fSystem nDCG@10 P@5 P@5 (Rel.) judged@5 Human intervention SLEDGE (ours, \u201crun1\u201d) 0.6844 0.7933 0.6533 100% Hyperparameter tuning on subset of queries BBGhelani2 0.6689 0.8200 0.5600 100% Human-in-loop active learning xj4wang run1 0.6513 0.8333 0.5933 100% Human-in-loop active learning UIUC DMG setrank ret 0.6082 0.7133 0.5333 100% unspeci\ufb01ed OHSU RUN2 * 0.5966 0.7000 * 0.5200 100% Query reformulation & hyperparameter tuning cu dbmi bm25 1 * 0.5887 0.7200 0.5667 96% Query reformulation sheikh bm25 manual * 0.5790 0.7267 * 0.5333 93% Query reformulation crowd1 * 0.5571 0.7067 * 0.4933 93% Manual relevance feedback CSIROmed RF * 0.5479 * 0.6400 * 0.5267 86% Manual relevance feedback dmis-rnd1-run3 * 0.4649 * 0.5867 * 0.4733 100% Query reformulation Table 1: Top results using any human intervention (manual runs). * indicates our system exhibits a statistically signi\ufb01cant improvement (paired t-test, p < 0.05). System nDCG@10 P@5 P@5 (Rel.) judged@5 Methodology sab20.1.meta.docs 0.6080 0.7800 0.4867 100% VSM, Multi-Index, Lnu.ltu weighting SLEDGE (ours, \u201crun2\u201d) 0.6032 0.6867 0.5667 88% BM25 + SciBERT IRIT marked base 0.5880 0.7200 0.5400 100% BM25+RM3 + BERT-base CSIROmedNIR* 0.5875 0.6600 0.5867 76% CovidBert-nli + Inv. Index base.unipd.it 0.5720 0.7267 0.5200 95% Elastic search + boolean queries udel fang run3 0.5370 0.6333 0.4267 98% F2EXP + CombSUM uogTrDPH QE 0.5338 0.6400 0.4667 100% Terrier + Query Exp. UP-rrf5rnd1 0.5316 0.6800 0.4800 100% unsupervised reciprocal rank fusion BioinfoUA-emb 0.5298 0.6333 0.4733 100% BM25 + DeepRank traind on BioAsk UIowaS Run3 0.5286 0.6467 0.4733 100% BM25 + \ufb01ltering Table 2: Top results without using any human intervention (automatic runs). No results exhibit a statistically signi\ufb01cant difference compared to our system (paired t-test, p < 0.05). \u201d*\u201d indicates that some sort of manually speci\ufb01ed \ufb01ltering was used which may contradict the de\ufb01nition of an automatic run by TREC (see note in Section 4). Speci\ufb01cally, we use shallow relevance judgments from 15 out of 30 topics assessed by non-experts.4 Unlike manual runs that require human intervention for query reformulation, active learning, or relevance feedback, we expect our system to be able to generalize to unseen queries in the domain because we use manual relevance signals only for hyperparameter tuning. By tuning the hyperparmeters of the initial retrieval method, the \ufb01elds of the topic (query, question, narrative) and document text (title, abstract, full text), and a date \ufb01lter, we found the following pipeline to be most effective based on our non-expert annotations (run tag: run1): 1. Initial retrieval using BM25 tuned for recall using a grid search (k1 = 3.9, b = 0.55), utilizing the keyword query \ufb01eld over the full text of the article. Articles from before January 1, 2020 are disregarded. 2. Re-ranking using a Vanilla SciBERT model trained on MS-MARCO. The topic\u2019s question 4Topics 1, 2, 6, 8, 9, 11, 13, 17, 18, 20, 21, 24, 27, 29, 30. 849 judgments were made in total. We found that our non-expert annotations did not align well with the of\ufb01cially released expert annotations \u00a75.3. 
\ufb01eld is scored over the article\u2019s title and abstract. We report the performance of the top system from the top 10 teams (among manual runs) for TREC-COVID in Table 1. Since the utilization of humans-in-the-loop vary considerably, we also indicate for each run the reported human intervention. We \ufb01nd that SLEDGE outperforms all the other manual runs in terms of nDCG@10 and P@5 (relevant only). Of the top 10 systems that report their technique for human intervention, ours is also the only one that relies on human judgments solely for hyperparameter tuning. This is particularly impressive because the next best systems (BBGhelani2 and xj4wang run1) involves human-in-theloop active learning to rank documents based on the manual assessor\u2019s relevance. In terms of statistical signi\ufb01cance (paired t-test, p < 0.05), our approach is on par with these active learning runs, and better than most other submissions in terms of nDCG@10 and P@5 (relevant). 5.2 Ranking without hyperparameter tuning We now evaluate our system in an environment that does not utilize human intervention, hyperparam\f(a) (b) (c) 0.00 0.05 0.10 0.15 0.20 0.25 0.30 0.35 0.40 0.45 0.50 0.55 0.60 0.65 0.70 0.75 0.80 0.85 0.90 0.95 1.00 b 0.1 0.4 0.7 1.0 1.3 1.6 1.9 2.2 2.5 2.8 3.1 3.4 3.7 4.0 4.3 4.6 4.9 5.2 5.5 5.8 k1 recall@100 0.315 0.330 0.345 0.360 0.00 0.05 0.10 0.15 0.20 0.25 0.30 0.35 0.40 0.45 0.50 0.55 0.60 0.65 0.70 0.75 0.80 0.85 0.90 0.95 1.00 b 0.1 0.4 0.7 1.0 1.3 1.6 1.9 2.2 2.5 2.8 3.1 3.4 3.7 4.0 4.3 4.6 4.9 5.2 5.5 5.8 k1 recall@100 0.345 0.360 0.375 0.390 0.405 Figure 1: Comparison of grid search heatmaps for BM25 using the topic\u2019s query over article full text with (a) our relevance judgments, (b) the full set of of\ufb01cial judgments, and (c) the set of of\ufb01cial relevance judgments \ufb01ltered to only the topics we assessed. The x-axis sweeps b \u2208[0, 1] and the y-axis sweeps k1 \u2208[0.1, 6.0], and each cell represents the recall@100. eter tuning, or relevance judgements of any sort. This represents a full domain transfer setting. Our pipeline consists of (run tag: run2): 1. Initial retrieval using untuned BM25 (default parameters, k1 = 0.9, b = 0.4), utilizing the question text over the title and abstract of a article. (No date \ufb01ltering.) 2. Re-ranking using a Vanilla SciBERT model trained on a medical-related subset of MSMARCO training data. The topic\u2019s question \ufb01eld is scored over the article\u2019s title and abstract. The purpose of leveraging the medical-related subset of MS-MARCO is to reduce the risk of domain shift. To produce this subset, we use the MedSyn lexicon (Yates and Goharian, 2013), which includes layperson terminology for various medical conditions. Only queries that contain terms from the lexicon are considered in this dataset, leaving 78,895 of the original 808,531 training queries (9.7%).5 A list of the query IDs corresponding to this \ufb01ltered set is available.6 We observe that our automatic SLEDGE run performs highly competitively among other automatic submissions to the TREC-COVID shared task. Although the highest-scoring system in terms of nDCG@10 utilizes a traditional method, we observe that it falls short of neural (e.g., SLEDGE, IRIT marked base, CSIROmedNIR) in terms of 5Several common terms were manually excluded to increase the precision of the \ufb01lter, such as gas, card, bing, died, map, and fall. 
This does not qualify as manual tuning because these decisions were made only in consideration of the MSMARCO training queries, not any TREC-COVID topics. 6https://github.com/Georgetown-IR-Lab/ covid-neural-ir/med-msmarco-train.txt P@5 for fully-relevant articles and the difference between the result are not statistically signi\ufb01cant. Furthermore, due to the 88% and 76% judged@5 of SLEDGE and CSIROmedNIR, the actual P@5 scores for these systems may very well be higher. Curiously, however, other neural approaches that are generally high-performing (e.g., those used by Zhang et al. (2020)) did not rank in the top 10 runs. We do observe that other traditional approaches, such as those that perform query expansion (e.g., udel fang run3, and uogTrDPH QE) also perform competitively in the automatic setting. 5.3 Analysis Initial retrieval parameters We now evaluate the hyperparameter tuning process conducted. We \ufb01rst test the following \ufb01rst-stage ranking functions and tune for recall@100 using our judgments: BM25 (k1 \u2208[0.1, 6.0] by 0.1, b \u2208[0, 1] by 0.05), RM3 query expansion (Jaleel et al., 2004) over default BM25 parameters (feedback terms and feedback docs \u2208[1, 20] by 1), QL Sequential Dependency Model (SDM (Metzler and Croft, 2005), term, ordered, and un-ordered weights by 0.05). Each of these models is tested using with the query or question topic \ufb01eld, and over the article full text, or just the title and abstract. We \ufb01nd that using BM25 with k1 = 3.9 and b = 0.55, the topic\u2019s query \ufb01eld, and the article\u2019s full text to yield the highest recall. We compare the heatmap of this setting using our judgments, the full set of of\ufb01cial judgments, and the set of of\ufb01cial judgments \ufb01ltered to only the topics we judged in Figure 1. Although the precise values for the optimal parameter settings differ, the shapes are similar suggesting that the hyperparameter choices generalize. \fFirst-Stage Re-Rank Filter Query Document Query Document 2020 nDCG@10 P@5 P@5 (Rel.) judged@5 Question Full text Question Title+abstract 0.7333 0.6142 0.5467 90% Query Full text Query Title+abstract 0.4190 0.5067 0.3867 70% Query Full text Question Title+abstract 0.6244 0.7333 0.5667 94% Query Full text Narrative Title+abstract 0.6133 0.5089 0.4600 82% Question Full text Question Title+abstract \u2713 0.7733 0.6774 0.6333 91% Query Full text Query Title+abstract \u2713 0.5131 0.6267 0.4933 77% Query Full text Question Title+abstract \u2713 0.6844 0.7933 0.6533 100% * Query Full text Narrative Title+abstract \u2713 0.4898 0.5867 0.4733 70% Table 3: Performance of our system using various sources for the \ufb01rst-stage query text, re-ranking query text, and date \ufb01ltering. Our of\ufb01cial submission is marked with *. Topic \ufb01elds and date \ufb01ltering Important hyperparmeters of our system include which topic \ufb01eld (question, query, or narrative) to use in which stage, and whether to perform date \ufb01ltering. We present a study of the effects of these parameters in Table 3. First, we observe that the \ufb01ltering of articles to only those published after January 1, 2020 always improves the ranking effectiveness (as compared to models that retrieved from all articles). Indeed, we \ufb01nd that only 19% of judged documents from prior to 2020 were considered relevant (with only 7% fully relevant). Meanwhile, 32% of judged documents after 2020 were considered relevant (19% fully relevant). 
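A minimal sketch of the 2020 date filter is shown below; the year-prefix parsing reflects the observation that some records carry only a publication year, and the publish_time field name is an assumption based on the CORD-19 metadata format.

```python
def passes_date_filter(publish_time: str) -> bool:
    """Keep articles published on or after January 1, 2020. The value may be a
    full date ('2020-03-17') or a bare year ('2015'), which is why comparing
    the year prefix is sufficient (assumed 'publish_time' metadata field)."""
    if not publish_time:
        return False
    try:
        return int(publish_time[:4]) >= 2020
    except ValueError:
        return False

articles = [("doc1", "2020-03-17"), ("doc2", "2015"), ("doc3", "2020")]
print([doc_id for doc_id, dt in articles if passes_date_filter(dt)])
# ['doc1', 'doc3']
```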
We note that although this \ufb01lter seems to be effective, it will ultimately limit the recall of the system. This observation underscores the value of including a user-con\ufb01gurable date \ufb01lter in COVID-19 search engines. We also observe in Table 3 that both \ufb01rst-stage ranking and re-ranking based on the question \ufb01eld may be more effective than using the query \ufb01eld for \ufb01rst-stage ranking and the question for re-ranking. Considering that the nDCG@10 already outperforms the performance of our of\ufb01cial submission, and P@5 (fully relevant only) is not far behind with only 91% of the top documents judged, we can expect that this is likely a better setting going forward. It also simpli\ufb01es the pipeline and re\ufb02ects a more realistic search environment in which the user simply enters a natural language question. However, this approach underperforms at identifying partially relevant documents, given by its much lower P@5. In an environment in which recall is important (such as systematic review), the hybrid query-question approach may be preferable. Interestingly, we \ufb01nd that the narrative \ufb01eld usually reduces ranking effectiveness compared to the other settings. This may be due to a large distance between the naturallanguage questions seen during training and the 0 1 2 our labels 0 1 2 of\ufb01cial labels 202 29 30 57 36 47 40 40 121 Figure 2: Confusion matrix between our non-expert annotations and the of\ufb01cial expert TREC labels. longer-form text seen at evaluation time. Non-expert judgements We found that our nonexpert relevance labels did not align well with the of\ufb01cial labels; there was only a 60% agreement rate among the overlapping labels. In 18% of cases, our labels rated the document as more relevant than the of\ufb01cial label; in 23% of cases ours was rated less relevant. A full confusion matrix is shown in Figure 2. Despite the low agreement rates, the use of domain transfer, and only leveraging the non-expert labels for hyperparameter tuning suggest that it would be dif\ufb01cult to over\ufb01t to the test collection. We further investigate whether the subset of queries we evaluated gained a substantial advantage. To this end, we plot the difference in the evaluation metrics between our system and an untuned BM25 ranker in Figure 3. As demonstrated by the \ufb01gure, there was no strong preference of our model towards queries that were annotated (marked with * and in blue). In fact, 9 of the 15 highest-performing queries were not in the annotated set (in terms of \u2206 nDCG@10). This suggests that our approach did not over\ufb01t to signals provided by the non-expert assessments, and that our trained ranker is generally applicable. Failure cases Although our system generally outperforms BM25 ranking, it substantially underper\f23 15 14 7 20* 30* 17* 27* 18* 10 1* 24* 9* 11* 12 28 4 29* 3 13* 6* 2* 19 21* 22 26 25 16 5 8* Query ID \u22121.0 \u22120.5 0.0 0.5 1.0 \u2206nDCG@10 23 7 15 14 17* 18* 22 20* 24* 27* 30* 10 28 12 1* 6* 21* 11* 4 26 29* 19 25 3 5 9* 2* 8* 16 13* Query ID \u22121.0 \u22120.5 0.0 0.5 1.0 \u2206P@5 23 7 14 24* 4 3 22 30* 27* 1* 20* 11* 18* 28 26 17* 13* 15 6* 16 25 9* 21* 12 10 19 2* 5 29* 8* Query ID \u22121.0 \u22120.5 0.0 0.5 1.0 \u2206P@5 (Rel.) Figure 3: Difference in ranking effectiveness between our system and an untuned BM25 model by query for nDCG@10, P@5, and P@5 (fully relevant only). Queries in blue and marked with * were annotated by nonexperts and used for hyperparameter tuning. 
forms for Query 23 (coronavirus hypertension). When observing the failure cases, we found that the BM25 model successfully exploited term repetition to identify its top documents as relevant. Meanwhile, our system ranked documents with incidental mentions of hypertension highly. This suggests that more effective utilization of approaches that include a count-based component in the ranking score (such as TK (Hofst\u00a8 atter et al., 2020) or CEDR-KNRM (MacAvaney et al., 2019)) could yield improvements. 6" + }, + { + "url": "http://arxiv.org/abs/2004.14269v2", + "title": "Training Curricula for Open Domain Answer Re-Ranking", + "abstract": "In precision-oriented tasks like answer ranking, it is more important to rank\nmany relevant answers highly than to retrieve all relevant answers. It follows\nthat a good ranking strategy would be to learn how to identify the easiest\ncorrect answers first (i.e., assign a high ranking score to answers that have\ncharacteristics that usually indicate relevance, and a low ranking score to\nthose with characteristics that do not), before incorporating more complex\nlogic to handle difficult cases (e.g., semantic matching or reasoning). In this\nwork, we apply this idea to the training of neural answer rankers using\ncurriculum learning. We propose several heuristics to estimate the difficulty\nof a given training sample. We show that the proposed heuristics can be used to\nbuild a training curriculum that down-weights difficult samples early in the\ntraining process. As the training process progresses, our approach gradually\nshifts to weighting all samples equally, regardless of difficulty. We present a\ncomprehensive evaluation of our proposed idea on three answer ranking datasets.\nResults show that our approach leads to superior performance of two leading\nneural ranking architectures, namely BERT and ConvKNRM, using both pointwise\nand pairwise losses. When applied to a BERT-based ranker, our method yields up\nto a 4% improvement in MRR and a 9% improvement in P@1 (compared to the model\ntrained without a curriculum). This results in models that can achieve\ncomparable performance to more expensive state-of-the-art techniques.", + "authors": "Sean MacAvaney, Franco Maria Nardini, Raffaele Perego, Nicola Tonellotto, Nazli Goharian, Ophir Frieder", + "published": "2020-04-29", + "updated": "2020-05-21", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "main_content": "INTRODUCTION Deep learning techniques are of recent interest to solve information retrieval tasks such as answer ranking [26]. Most of such work Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org. SIGIR \u201920, July 25\u201330, 2020, Virtual Event, China \u00a9 2020 Association for Computing Machinery. 
focuses on designing neural network architectures that are effective at predicting answer relevance to a particular question, while comparatively little attention aims to find optimal training configurations for these networks. More so, existing literature often falls short of expressing the most basic settings of the training environment, such as the choice of the loss function and training sample selection procedures, two critical components needed for successful reproduction of results. In contrast, we focus on the training process of neural rankers for information retrieval. In particular, we demonstrate that weighting training examples early in the learning process can yield significant benefits in the effectiveness of the neural ranker. We motivate our approach with the simple intuition that some answers are easier to assess the relevance of than others. For instance, consider a question about the health impacts of vegetarianism (see Figure 1). [Figure 1 shows the query \u201chealth benefits of eating vegetarian\u201d with four candidate passages (a)-(d). Caption: Example of curriculum approach from the MS-MARCO dataset (question ID 199776). In this example, we predict (a) is \u2018easy\u2019 because it is relevant and has a high BM25 score. (d) is likewise \u2018easy\u2019 because it is non-relevant and has a low score. (b) is a \u2018difficult\u2019 sample because it is relevant, yet has a low score due to the few term matches. We also predict (c) to be \u2018difficult\u2019 because it is non-relevant, yet it has a high score. Our approach begins by weighting \u2018easy\u2019 training samples high and \u2018difficult\u2019 training samples low.] A passage written explicitly about this topic (e.g., (a)) should be relatively straightforward to identify, as it uses many of the terms in the question. This likely yields a high ranking position using conventional probabilistic approaches, such as BM25. A passage written about the health benefits of veganism (a more strict version of vegetarianism) may also answer the question (b). However, it involves more complicated reasoning and inference (such as an understanding of the relationship between the two diets) and semantic understanding of the way in which the content is presented.
Similarly, consider two non-relevant answers: one that matches most of the query terms (c) and one that does not (d). We argue that the former is more difficult for the ranking model to identify as non-relevant due to the large number of matching terms, and the latter is easier due to critical missing terms (e.g., health benefits). While an ideal ranker would rank both (a) and (b) high, doing so we may add noise and complexity to the model that reduces the overall quality of ranking. Specifically, ranking (b) high may make it more difficult to identify (c) and (d) as non-relevant. Our method attempts to overcome this issue by forcing the ranker to focus primarily on the \u201ceasy\u201d training samples before gradually moving on to learning to rank all training samples via training sample weighting. We formulate this idea using the curriculum learning (CL) framework [1]. Learning through a curriculum is an idea borrowed from cognitive sciences according to which the learning process follows a multi-step training path. Initially, the learning algorithm is trained by using simple examples and smooth loss functions. Then it is progressively fine-tuned so as to deal with examples and loss functions of increasing complexity. We instantiate the CL framework in the learning to rank domain by assigning different weights to training samples through a heuristic. In early stages of training, high weights are assigned to easy training pairs while difficult samples are given low weights. As training progresses, we gradually smooth this imbalance. Eventually, all training samples are weighted equally, regardless of the estimated difficulty. To estimate the difficulty of question-answer pairs and to choose the training weight accordingly, we consider both information from an unsupervised baseline ranker (e.g., BM25) and the humanassessed relevance of the answer to the given question (see Figure 1). When an unsupervised ranker is able to identify the example effectively (i.e., it ranks a relevant document high or a non-relevant document low) the training sample is considered as \u201ceasy\u201d. On the other hand, when the unsupervised ranker fails to correctly score them, the sample is considered as \u201cdifficult\u201d. We show that our approach can be easily integrated into a neural ranking pipeline. We validate our approach using three weighting heuristics based on an unsupervised ranker using two leading neural ranking methods (BERT [9] and ConvKNRM [7]). Our code is available for reproducibility.1 Our results show significant ranking improvements when tested on three open-domain (i.e., not domainspecific) answer ranking benchmarks: TREC Deep Learning (DL), 1https://github.com/Georgetown-IR-Lab/curricula-neural-ir TREC Complex Answer Retrieval (CAR), and ANTIQUE. These datasets vary in scale (hundreds of thousands of answers to tens of millions) and source of relevance information (graded or positiveonly, human-annotated or inferred from document structure). We test using both pointwise and pairwise losses. In summary, our contributions are: \u2022 We propose a curriculum learning scheme for open-domain answer re-ranking. \u2022 We propose and evaluate three heuristics for weighting training samples while learning neural rankers, which utilize the ranking and score of the first-stage ranker. \u2022 We provide a comprehensive analysis of our proposed approaches for curriculum learning of neural rankers. 
Our results show the superiority of our approach as compared to standard weighting of training samples. \u2022 We show that our proposed curricula are effective on three answer re-ranking datasets. On TREC DL, our approach yields up to a 3.6% improvement in MRR and a 9.3% improvement in P@1 for a BERT-based ranker. For TREC CAR, the curricula yield a 4.2% and 3.7% boost to R-Precision and MAP, respectively, and achieves comparable performance to a larger version of BERT. For ANTIQUE, our approach yields a 3.4% and 6.0% improvement in terms of MRR and P@1. 2 BACKGROUND & RELATED WORK In this section, we provide background information about neural ranking approaches (Section 2.1) and prior work on curriculum learning (Section 2.2). 2.1 Neural ranking An ad-hoc neural ranking model maps a given query-document pair to a real-valued relevance score given the model parameters. For answer ranking, the question is treated as the query and answers are treated as the document. Through a set of training data, the parameters are optimized to maximize ranking performance on unseen data. Many neural architectures exist. They can be broadly classified as either representation-focused or interaction-focused [12]. Representation-focused models (also referred to as semantic matching models) learn mapping functions for the query and the document to a dense representation, and then compare these representations using a relatively simple comparison function (e.g., cosine similarity). Examples include DSSM [16], C-DSSM [36], and ARC-I [14]. These models rely on massive amounts of data (e.g., behavioral information from a query log) to learn semantic similarities. We do not focus on representation-focused models because of their general reliance on proprietary query logs, and their underperformance on standard test collections. On the other hand, interaction-focused models (also referred to as relevance matching models) explicitly model patterns of query term occurrences in the document. DRMM [12] models this concept by building a query-document similarity matrix where each cell represents the cosine similarity score between embeddings of each query term and document term. This allows the model to capture soft semantic term similarity (i.e., semantically-similar terms have a high score, and semantically-dissimilar terms have a low score). DRMM then models the term occurrences by building a histogram \fbased on the similarity score for each query term and by producing a relevance score by feeding these histograms into a multi-layer perceptron. KNRM [37] works similarly, but replaces the hard histogram buckets with soft Gaussian-kernel-based buckets. Other approaches model term adjacency, such as MatchPyramid [30], DeepRank [31], PACRR [17], and ConvKNRM [7]. Contrary to recent critiques of the actual effectiveness of neural ranking architectures [21, 39], recent work with transformer-based contextualized language models (e.g., BERT [9]) on document and answer ranking have shown clear ranking superiority over prior baselines [23, 28, 29]. These methods exploit the distributional characteristics of a language learned through pre-training a model on tasks with more data available (e.g., masked language model and a next sentence prediction). Due to the self-attention mechanism of transformers, these models can also be considerd interactionfocused. MacAvaney et al. 
[23] further demonstrated that signals from contextualized language models can be incorporated into other interaction-focused neural ranking architectures, boosting ranking performance beyond both transformer-based rankers and the non-contextualized interaction models. 2.2 Curriculum Learning Curriculum Learning (CL) can be considered a particular case of Continuation Methods, generally used when the target objective function is non-convex and its direct optimization may lead the training to converge to a poor local minimum [1, 4]. The basic idea to overcome this problem through a curriculum approach is to organize the learning process as a path where the easiest training samples are presented first and the complexity of the following ones is gradually increased. This strategy allows the learner to exploit previously seen concepts to ease the acquisition of subsequent more difficult ones. CL approaches are proved successful for training neural networks in different domains such as NLP [5, 15], language models (not used for ranking tasks) [1], image representation [3], network representation [33]. To our knowledge, the only attempt to explore how CL methods can be exploited in the document ranking domain is the one by Ferro et al. [11] where authors exploit the curriculum learning strategy in a gradient boosting algorithm that learns ranking models based on ensembles of decision trees. The results reported by Ferro et al. show that a curriculum learning strategy gives only a limited boost to the ranking performance of an ensemble of decision trees. Similar to our approach, Fidelityweighted learning [8] applies weights to training samples for learning ranking models. However, this approach focuses on estimating the quality of weak labels (namely, treating BM25 scores as labels), rather than the difficulty of training samples with higher-quality labels (e.g., human-annotated labels). Sachan and Xing [34] propose curriculum learning approaches for question answering, but in a closed-domain setting. In opendomain question answering, there are several challenges encountered, including that there is a much larger collection of answers to draw from (millions of answers) and multiple correct answers to a given question. Thus, we tackle this problem from an IR-perspective, utilizing signals from ranking models. Recently, Penha and Hauff [32] propose approaches for using curriculum learning to rank conversational responses, yielding up to a 2% improvement in ranking Table 1: Table of symbols. Symbol Definition R\u03b8 Neural ranking function with parameters \u03b8 L Loss function D Training sample difficulty function W Training sample weight function q Query (i.e., question) d Document (i.e., answer) d+ Relevant document d\u2212 Non-relevant document D Set of ranked documents s Manual relevance assessment score T Collection of training data t Training sample from T i Training iteration (epoch) m End of curriculum iteration (hyperparameter) effectiveness. The curricula proposed are specific to the domain of conversational responses and are non-trivial to apply to other domains. In contrast, we propose simple heuristics based on initial retrieval ranks and scores, and we show their effectiveness across multiple ranking models, loss functions, and answer ranking datasets in an open-domain setting. 3 METHODOLOGY We present our approach for applying curriculum learning to the training of neural rankers. 
At a high level, our approach applies a heuristic to determine the difficulty of a particular training sample. This difficulty estimation is then used for weighting training samples. In early stages of training, samples that the heuristic predicts as easy are given a higher weight, while samples predicted to be difficult are given a lower weight. Gradually, our approach eases off this crutch (controlled by a new hyper-parameter). Eventually, all training samples are weighted equally, regardless of the estimated difficulty. Our approach allows for fair comparisons to an unmodified training process because no changes are made to the selection of the training data itself; the effect is only on the weight of the sample during training. Furthermore, this allows for an easy integration of our approach into existing training pipelines; no changes to the data selection technique are required, and the heuristics rely on information readily available in most re-ranking settings. Our approach degrades into the typical training process in two ways: either (1) a heuristic can be used that gives every sample a weight of 1, or (2) the hyper-parameter that drives the degradation of the approach to equal weighting can be set to immediately use equal weights. 3.1 Notation and preliminaries A summary of the symbols used is given in Table 1. Let an ad-hoc neural ranking model be represented as R\u03b8 (q, d) \u2208R, which maps a given query-document pair (q, d) to a real-valued relevance score given the model parameters \u03b8. For simplicity, we refer to questions as queries and answers as documents. Through a set of training data points t \u2208T and a loss function L(t), the model parameters \u03b8 \fare optimized to maximize the ranking performance. The training data sample t \u2208T depends on the type of loss employed. Two common techniques employed for training neural rankers rely on pointwise or pairwise loss. For pointwise loss, training data consists of triples tpoint = \u27e8q, d,s\u27e9, where q is a query, d is a document, and s is its relevance score, e.g., the relevance score given to the querydocument pair by a human assessor. The loss for this sample often uses squared error between the predicted score and the relevance score s: Lpoint (q, d,s) = \u0000s \u2212R\u03b8 (q, d)\u00012 (1) On the other hand, pairwise loss uses two document samples for the same query (one relevant and one non-relevant), and optimizes to assign a higher score to the relevant document than the nonrelevant one. Training triples for pairwise loss are represented as tpair = \u27e8q, d+, d\u2212\u27e9, where q is the query, d+ is the relevant document, and d\u2212is the non-relevant document. One common pairwise loss function is the softmax cross-entropy loss: Lpair (q, d+, d\u2212) = exp \u0000R\u03b8 (q, d+)\u0001 exp \u0000R\u03b8 (q, d+)\u0001 + exp \u0000R\u03b8 (q, d\u2212)\u0001 (2) 3.2 Curriculum framework for answer ranking Let a difficulty function D : T 7\u2192[0, 1] define a weight D(t) for the training sample t \u2208T. Without loss of generality we now assume that a high value of D(t), i.e., a value close to 1, represents an easy sample, while a low value, i.e., a value close to 0, represents a difficult sample. 
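To make these two objectives concrete, the sketch below shows one way to implement them in PyTorch. It is an illustrative reading of Eqs. 1 and 2 rather than the authors' released code; the tensor shapes and the unreduced (per-sample) form are assumptions.

```python
# Hedged sketch of the pointwise (Eq. 1) and pairwise (Eq. 2) training losses,
# assuming a ranker R(q, d) has already produced per-sample relevance scores.
import torch

def pointwise_loss(scores: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Squared error between the predicted score and the relevance label s."""
    return (labels - scores) ** 2                         # one loss value per sample

def pairwise_loss(pos_scores: torch.Tensor, neg_scores: torch.Tensor) -> torch.Tensor:
    """Softmax cross-entropy over a (relevant, non-relevant) pair: the negative
    log of the softmax probability assigned to the relevant document."""
    pair = torch.stack([pos_scores, neg_scores], dim=1)   # shape [batch, 2]
    return -torch.log_softmax(pair, dim=1)[:, 0]          # one loss value per pair
```

The losses are deliberately left unreduced (one value per training sample) so that the curriculum weighting introduced next can scale each sample's contribution before averaging over the batch.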
Note that the heuristic D(t) necessarily depends on the type of loss function employed: for pointwise loss, it estimates the difficulty for assigning the relevance score s to \u27e8q, d\u27e9, while, for pairwise loss, it estimates the difficulty of scoring the relevant document pair \u27e8q, d+\u27e9above the non-relevant pair \u27e8q, d\u2212\u27e9. In our CL framework, during the first learning iteration, training samples are weighted according only to the difficulty function. To ease into the difficult samples, we employ a hyper-parameter m, which represents the training iteration at which to start to give every training sample equal weights.2 Between the start of training and the mth training iteration, we linearly degrade the importance of the difficulty heuristic. More formally, we define the iterationinformed training sample weight WD(t,i) given the training iteration i (0-based) as: WD(t,i) = ( D(t) + i m \u00001 \u2212D(t)\u0001 i < m 1 i \u2265m (3) We then define a new D-informed loss function by including the iteration-informed weight into the standard pointwise or pairwise loss function: LD(t,i) = WD(t,i) L(t) (4) 3.3 Difficulty heuristics In a re-ranking setting, a simple source of difficulty information can come from the initial ranking of the documents. Probability ranking models like BM25 rely on term frequency and inverse document frequency to score documents. These characteristics should generally be easy for models to learn because they can learn to identify term frequency information (either directly, as is done by 2We explore the importance of eventually converging to equal weights in Section 4.4. models like DRMM and KNRM, or implicitly, as is done by models like BERT through self-attention) and inverse document frequency, e.g., by down-weighting the importance of frequent terms. We postulate that it is inherently more difficult to perform semantic matching needed for identifying documents that have lower initial ranking scores. These scores are also easy to obtain, as they are readily available in a re-ranking setting. Thus, we use unsupervised ranking scores as the basis for our curriculum learning heuristics. Reciprocal rank heuristic. We define Drecip as a measure of difficulty from the reciprocal of the rank at which answers appear in a ranked list. We assume that an answer placed higher compared to the other retrieved answers is \u201ceasier\u201d for the ranker to place in that position. A high rank makes relevant documents easier and non-relevant documents harder. In the pointwise setting, relevant documents with a high reciprocal rank are considered \u201ceasier\u201d than relevant documents with a low reciprocal rank because the unsupervised ranker assigned a higher score. Conversely, non-relevant documents with a high rank are considered \u201charder\u201d than samples that are assigned a low rank. Given d from a set of ranked documents D for query q we have: recipq,D(d) = 1 rankq,D(d) (5) With these conditions in mind, we define Drecip for pointwise loss as: Dpoint recip (q, d,s) = ( recipq,D(d) s > 0 \u25b7relevant 1 \u2212recipq,D(d) s \u22640 \u25b7non-relevant (6) For pairwise loss, we define pairs that have a large difference between the reciprocal ranks to be very difficult (when the nonrelevant document is higher) or very easy (when the relevant document is higher). When the reciprocal ranks are similar, we define the difficulty as moderate, with a difficulty close to 0.5. 
This is accomplished by taking the difference between the scores and scaling the result within the range [0, 1]: Dpair recip(q, d+, d\u2212) = recipq,D(d+) \u2212recipq,D(d\u2212) + 1 2 (7) Normalized score heuristic. An alternative to using the ranks of documents by an unsupervised ranker is to use the scores from these rankers. We define Dnorm as a measure of difficulty that uses the ranking score information. This allows documents that receive similar (or identical) scores to be considered similarly (or identically) in terms of difficulty. In the case of identical scores, Dnorm allows to improve the reproducibility of the CL approach compared to curricula that rely on rank [22]. To account for various ranges in which ranking scores can appear, we apply min-max normalization by query to fit all scores into the [0, 1] interval, eliminating perquery score characteristics. The integration of the normalized score normq,D(d) into both pointwise and pairwise rankers are similar to that of the reciprocal rank curriculum: Dpoint norm (q, d,s) = ( normq,D(d) s > 0 \u25b7relevant 1 \u2212normq,D(d) s \u22640 \u25b7non-relevant (8) Dpair norm(q, d+, d\u2212) = normq,D(d+) \u2212normq,D(d\u2212) + 1 2 (9) \f20 40 60 80 100 rank 12 14 16 18 20 22 24 26 28 BM25 score (a) 10 15 20 25 30 BM25 score 0.0 0.2 0.4 0.6 0.8 1.0 difficulty (b) Dnorm Dkde Figure 2: (a) Example of BM25 scores exhibiting non-linear behavior; there are several answers with a much higher score than others and a long tail of lower-scored answers. (b) Comparison between normalized score (solid blue) and KDE (dashed green) heuristic values by BM25 score. The grey vertical lines indicate the values from the initial ranking (from (a)). Scores are from MS-MARCO query 1000009 retrieved using Anserini\u2019s [38] BM25 implementation. Kernel Density Estimation (KDE) heuristic. The normalized score heuristic provides weighting based on ranking score, but it fails to acknowledge an important characteristic of ranking score values: they are often non-linear. For example, it is common for a handful of scores to be comparatively very high, with a long tail of lower scored answers (e.g., with fewer query term matches, see Figure 2(a)). We hypothesize that it may be valuable to provide a greater degree of value distinction between scores in areas of high score density (e.g., in the long tail, around a score of 16 and below in Figure 2(a)) and areas with relatively low score density (e.g., around a score of 20). To this end, we construct a Gaussian Kernel Density Estimation (KDE), with the bandwidth selected using Scott\u2019s Rule [35]. We then define Dkde by using the CDF of the kernel as the basis of difficulty measure: Dpoint kde (q, d,s) = ( KDEq,D(d) s > 0 \u25b7relevant 1 \u2212KDEq,D(d) s \u22640 \u25b7non-relevant (10) Dpair kde (q, d+, d\u2212) = KDEq,D(d+) \u2212KDEq,D(d\u2212) + 1 2 (11) where KDEq,D(d) yields the CDF score of the kernel density estimation for d. An example of the difference between Dnorm and Dkde for a particular query is shown in Figure 2(b). This approach has the added benefit of allowing a non-zero difficulty for positive samples that are not retrieved in the ranking list. In summary, we propose a curriculum learning approach for answer ranking. The approach weights training samples by predicted difficulty. We propose three heuristics for estimating training sample difficulty, based on the rank or score of an unsupervised ranking model. 
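As a worked example, the sketch below shows how the three heuristics (Eqs. 5-11) and the iteration-informed weight of Eq. 3 could be computed with NumPy and SciPy. It is a simplified illustration, not the authors' implementation: 1-based ranks, per-query score arrays, and SciPy's default Scott's-rule bandwidth for the KDE are assumptions.

```python
# Hedged sketch of the Drecip, Dnorm, and Dkde difficulty heuristics and the
# curriculum weight WD(t, i); each function operates on one query's ranked list.
import numpy as np
from scipy.stats import gaussian_kde

def recip(ranks):                    # Eq. 5: reciprocal of the 1-based rank
    return 1.0 / np.asarray(ranks, dtype=float)

def norm(scores):                    # per-query min-max normalization of ranking scores
    s = np.asarray(scores, dtype=float)
    lo, hi = s.min(), s.max()
    return (s - lo) / (hi - lo) if hi > lo else np.ones_like(s)

def kde_cdf(scores):                 # CDF of a Gaussian KDE over the query's scores
    s = np.asarray(scores, dtype=float)
    kde = gaussian_kde(s)            # SciPy defaults to Scott's rule for the bandwidth
    return np.array([kde.integrate_box_1d(-np.inf, x) for x in s])

def pointwise_difficulty(value, relevant):
    # Eqs. 6, 8, 10: value is recip/norm/kde_cdf for the document; D close to 1 = easy
    return value if relevant else 1.0 - value

def pairwise_difficulty(value_pos, value_neg):
    # Eqs. 7, 9, 11: difference between the pair's values, scaled into [0, 1]
    return (value_pos - value_neg + 1.0) / 2.0

def curriculum_weight(difficulty, i, m):
    # Eq. 3: fade linearly from the heuristic toward uniform weights by iteration m
    return difficulty + (i / m) * (1.0 - difficulty) if i < m else 1.0
```

The resulting weight simply multiplies the corresponding per-sample loss, as in Eq. 4.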
4 EXPERIMENTS We conduct experiments on three large-scale answer ranking datasets \u2014 namely TREC Deep Learning (DL) [6] (Section 4.1), TREC Complex Answer Retrieval (CAR) [10] (Section 4.2), and ANTIQUE [13] (Section 4.3) \u2014 and two neural rankers (Vanilla BERT [9, 23] and ConvKNRM [7]) to answer the following research questions: RQ1 Are the proposed training curricula effective for training neural rankers for answer ranking? (Sections 4.1\u20134.3) RQ2 Under which conditions is each curriculum more effective (e.g., amount and quality of training data, type of neural ranker trained, etc.)? (Sections 4.1\u20134.3) RQ3 Is it important to shift to difficult samples, or can a ranker be successfully trained focusing only on easy samples? (Section 4.4) RQ4 Is focusing on the easy samples first more beneficial to training than focusing on the hardest samples first? (Section 4.5) Each dataset exhibits different characteristics (summarized in Table 2), as do the neural ranking architectures employed: \u2022 \u201cVanilla\u201d BERT [23]. This model uses the sentence classification mechanism from a pretrained BERT contextualized model [9] (a deep transformer-based network) to model the semantic relevance between the question and answer. This model yields exceptional ranking performance at the expense of computational cost and is the foundation for most state-of-the-art answer ranking approaches. We test Vanilla BERT using both pointwise and pairwise loss, as defined in Section 3.1. In line with [23], we initialize the model using bert-base (12-layer transformer pretrained on English text from [9]) and fine-tune using a learning rate of 2 \u00d7 10\u22125 with the Adam optimizer. \u2022 ConvKNRM [7]. This model learns the relationship between unigram and n-gram (via a convolutional neural network) similarity scores between the question and answer and combines the scores using Gaussian filters. This model yields competitive ranking performance and can be optimized for real-time ranking [18]. We use unigram to tri-gram encoding with cross-matching and 128 hidden nodes for score combination. Word vectors were initialized using 300-dimensional FastText [2] word embeddings trained on WikiNews with subword information. Based on preliminary experiments that showed that the ConvKNRM model fails to converge when trained using pointwise loss, we only test using pairwise loss. We train the model using the Adam optimizer and a learning rate of 10\u22123. Furthermore, we use the score additivity technique from [39]. We train each model using training iterations consisting of 32 batches of 16 training samples uniformly selected over the reranking pool. We employ gradient accumulation when a training batch is unable to fit on a GPU (e.g., Vanilla BERT models). After each training iteration, the validation performance is assessed. We employ early stopping after 15 consecutive epochs with no improvement to the dataset-dependent validation metric. When training is early stopped, the model is rolled back to the version of that achieved a performance improvement. This yielded up to 130 training iterations. We test our three proposed training curricula (Drecip, Dnorm, and Dkde) on each of the datasets and neural rankers. We optimize the parameter m i.e., end of curriculum learning epoch, by fine-tuning on the validation set. For each dataset, ranker, and loss combination, we test m \u2208{1, 5, 10, 20, 50, 100}. 
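The sketch below illustrates how this training regime could be wired together: per-sample losses are scaled by the curriculum weight, validation runs after every training iteration, and early stopping rolls back to the best checkpoint. The names ranker.loss, sample_batches, difficulty, and validate are placeholders for components of an existing pipeline, and the default values mirror the settings reported above; this is a sketch of the procedure, not the released code.

```python
# Hedged sketch of curriculum-weighted training with validation-based early stopping.
import copy
import torch

def train_with_curriculum(ranker, optimizer, sample_batches, difficulty, validate,
                          m=10, batches_per_iter=32, patience=15):
    best_metric, best_state, since_best, i = float("-inf"), None, 0, 0
    while since_best < patience:
        for _, batch in zip(range(batches_per_iter), sample_batches()):
            per_sample_loss = ranker.loss(batch)             # unreduced pointwise/pairwise loss
            d = difficulty(batch)                            # heuristic D(t) in [0, 1], easy ~ 1
            w = d + (i / m) * (1 - d) if i < m else torch.ones_like(d)   # Eq. 3
            loss = (w * per_sample_loss).mean()              # Eq. 4, then reduce over the batch
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        metric = validate(ranker)                            # e.g., MRR@10 on held-out queries
        if metric > best_metric:                             # track the best-performing weights
            best_metric, best_state, since_best = metric, copy.deepcopy(ranker.state_dict()), 0
        else:
            since_best += 1
        i += 1                                               # next training iteration (epoch)
    ranker.load_state_dict(best_state)                       # roll back to the best checkpoint
    return ranker
```

The end-of-curriculum iteration m is then treated like any other hyper-parameter: the model is trained once per candidate value (here m in {1, 5, 10, 20, 50, 100}) and the value with the best validation performance is kept.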
To put performance of the neural rankers in context, we include the ranking effectiveness of Anserini\u2019s [38] implementation of BM25 and SDM [25], both with default parameters, tuned on the validation set (\u2018Tuned\u2019), and tuned on the test set (representing the optimal \fTable 2: Dataset statistics. The values in parentheses indicate the average number of relevance judgments per query. Dataset # Answers Train Queries Validation Queries Test Queries Test Judgments (judg. per query) (judg. per query) (judg. per query) TREC DL [27] 8.8M 504k (1.1) 200 (0.7) 43 (215.3) Human (graded) TREC CAR [10] 30M 616k (4.8) 2.2k (5.5) 2.4k (6.7) Inferred (positive only) ANTIQUE [13] 404k 2.2k (11.3) 200 (11.0) 200 (33.0) Human (graded) settings of parameters for this model, \u2018Optimized\u2019).3 We also include relevant prior reported results and the optimal re-ranking of the results (i.e., sorting the original ranking list by relevance score, serving as an upper bound to re-ranking performance). 4.1 Web passage answer ranking We first demonstrate the effectiveness of our training curricula on the TREC Deep Learning (DL) 2019 answer passage ranking dataset, which uses the MS-MARCO collection and queries [27]. 3Models tuned using a grid search: BM25 k1 \u2208[0.1, 4.0] by 0.1 and b \u2208[0.0, 1.0] by 0.05; SDM term, ordered and unordered weights \u2208[0, 1] by 0.1. Table 3: Ranking performance on the TREC DL 2019 answer passage ranking task. Significant improvements in performance when using the training curricula (as compared to no curriculum) are indicated with \u2191(paired t-test p < 0.05). There are no statistically-significant differences among the curricula. The top result for each model are listed in bold. TREC DL 2019 Ranker Training MRR@10 P@1 ConvKNRM Pairwise 0.6159 0.4419 w/ Drecip 0.6834 0.5581 w/ Dnorm 0.6514 0.5116 w/ Dkde 0.6475 0.5116 Vanilla BERT Pointwise 0.8740 0.7907 w/ Drecip 0.8942 0.8372 w/ Dnorm 0.8895 0.8140 w/ Dkde 0.8857 0.8140 Vanilla BERT Pairwise 0.8477 0.7442 w/ Drecip 0.8624 0.7674 w/ Dnorm 0.8581 0.7907 w/ Dkde 0.8837 \u21910.8372 Baselines BM25 Default 0.7024 0.5814 Tuned 0.6653 0.5349 Optimized 0.7555 0.6744 SDM Default 0.6276 0.4884 Tuned 0.6243 0.4884 Optimized 0.6667 0.5814 1. 0.907 Top TREC Re-Ranking runs [6] 2. 0.882 3. 0.870 Optimal Re-Ranker 0.9767 0.9767 The training data for this dataset consists of over a million questions collected from the Bing query log. A human annotator was presented a question and a list of 10 candidate answer passages. The annotator was asked to produce a written answer to these questions based on the passages and to indicate the passages that were most valuable in the production of this answer. For the purposes of passage ranking, these passages are considered relevant to the corresponding question. We note that this does not necessarily mean that all correct passages are annotated as relevant, nor it means that the best passage is annotated (better answers could exist beyond the 10 shown to the annotator). To overcome this limitation, the TREC DL track manually judged the top retrieved passages for a subset of the test collection. This evaluation setting, which uses manual relevance judgments, is more suitable for evaluation than prior works that relied on incomplete relevance judgments (e.g., [28]). 
These incomplete training relevance labels also make this dataset suitable for our curriculum learning approach; answers ranked highly by an unsupervised ranker may be relevant, so down-weighting these samples during training may be beneficial. We train our models using the official MS-MARCO list of training positive and negative relevance judgments. We use a held-out set of 200 queries for validation. We re-rank the official initial test set ranking,4 and we use the official TREC DL manual relevance judgments for our evaluation and analysis. Statistics about the training, development, and test sets are given in Table 2. Since this work focuses on re-ranking, we evaluate using precisionoriented metrics, and leave recall to future work. We use mean reciprocal rank at 10 (MRR@10) as the validation metric, as it is the official task evaluation metric. Although not included as an official task metric, we also evaluate using P(recision)@1, which indicates the performance of the ranker in a realistic setting in which a single answer is given to a question. We present the ranking performance for TREC DL in Table 3. We observe that under all conditions, our proposed curricula outperform the ranker when trained without a curriculum for both MRR and P@1 metrics. Drecip outperforms the other curricula for ConvKNRM and pointwise Vanilla BERT, while Dkde outperforms the other curricula for pairwise Vanilla BERT. When the model significantly under-performs well-tuned BM25 and SDM (ConvKNRM), we observe that the curricula can improve the ranking performance to approximately the level of these baselines. When the model is already doing substantially better (Vanilla BERT), our training curricula also yield a considerable boost to ranking effectiveness. The observation that our approach can improve 4Another evaluation setting for TREC DL is \u201cfull ranking\u201d, in which systems perform initial retrieval in addition to re-ranking. Since this work focuses on improving the effectivness of re-raning models rather than initial stage retrieval, we compare with other re-ranking submissions. \f0 10 20 30 40 50 60 epoch 0.20 0.25 0.30 0.35 0.40 MRR m = 5 No Curriculum kde Figure 3: Validation performance comparison between Vanilla BERT model trained with pointwise loss without a curriculum (black x) and with the Dkde curriculum (blue circle) for TREC DL. The tuned m parameter for the Dkde curriculum used here is marked with a vertical line. While the variant without a curriculum quickly reaches optimal performance, the curriculum approach reaches a higher performance faster and offers a stronger foundation on which to continue training after the curriculum terminates. the ranking effectiveness in both these cases is encouraging, and suggests that the approach is generally beneficial. When compared to the top TREC DL re-ranking results [6], our approach performs favorably. Specifically, the top approach, namely pointwise Vanilla BERT with Drecip, ranks second among the submissions. It is only narrowly exceeded by a much more expensive and complicated approach of pretraining a new BERT model from scratch using a different training objective. Our results indicate that this can be avoided by simply doing a better job weighting the training samples. To gain a better understanding of how the curriculum benefits the training process, we compare the validation performance of the pointwise Vanilla BERT model with the Dkde training curriculum to the same model when trained without a curriculum (Figure 3). 
This reveals that when not using a curriculum, the validation performance peaks early, suggesting that it is overfitting to difficult examples. The curriculum, however, has even stronger early performance and is in a better position to incorporate difficult samples as training continues. Note that the tuned end of curriculum epoch is m = 5 for this example, showing that the curriculum does not need to be in place for long to get these benefits. Also note that the training data were presented in the exact same order in both cases, showing the importance of weighting the loss effectively. 4.2 Complex answer passage ranking We also evaluate our curriculum learning framework on the TREC Complex Answer Retrieval (CAR) dataset [10]. To compare with prior work, we use version 1.0 of the dataset. This dataset consists of topics in the form of a hierarchy of article headings (e.g., Green Sea Turtle \u00bb Ecology and behavior \u00bb Diet). A standard set of automatically-generated relevance judgments are provided by assuming paragraphs (passages) under a heading are relevant to the query corresponding to the heading. The automatic relevance assessments provide a large amount of training data, but can suffer from variable quality (e.g., some paragraphs are very difficult to match as they provide little context). This makes TREC CAR a Table 4: Ranking performance on the TREC CAR complex answer passage ranking task. Significant improvements in performance when using the training curricula (as compared to no curriculum) for each model are indicated with \u2191(paired t-test p < 0.05, no significant reductions observed). For Pointwise loss, Drecip significantly outperforms Dnorm in terms of MAP. There are no other significant differences among the training curricula. The top results in each section are indicated in bold. TREC CAR Ranker Training R-Prec MAP ConvKNRM Pairwise 0.1081 0.1412 w/ Drecip \u21910.1174 \u21910.1493 w/ Dnorm \u21910.1258 \u21910.1572 w/ Dkde \u21910.1227 \u21910.1553 Vanilla BERT Pointwise 0.2026 0.2490 w/ Drecip \u21910.2446 \u21910.2864 w/ Dnorm \u21910.2370 \u21910.2764 w/ Dkde \u21910.2370 \u21910.2795 Vanilla BERT Pairwise 0.2731 0.3207 w/ Drecip \u21910.2914 \u21910.3298 w/ Dnorm \u21910.2921 \u21910.3307 w/ Dkde \u21910.2844 0.3254 Baselines BM25 Default Settings 0.1201 0.1563 Tuned 0.1223 0.1583 Optimized 0.1231 0.1588 SDM Default Settings 0.1154 0.1463 Tuned 0.1099 0.1420 Optimized 0.1155 0.1459 BERT Large [28] 0.335 BERT Base [28] 0.310 PACRR [24] 0.146 0.176 Optimal Re-Ranker 0.6694 0.6694 good application of training curricula; it contains many positive relevance samples that are difficult to match. A set of manuallygraded relevance assessments are also provided by TREC assessors. However, due to the shallow assessment pool used (due to the large number of topics), we opt to only evaluate our approach using the automatic judgments.5 We use TREC 2017 (Y1) training data with hierarchical relevance judgments. We also compare our results to the performance reported by [28] and [24], which use BERT and the PACRR neural ranking architecture augmented with entity embeddings for classification, respectively. Following previous work [28], we train and validate our models using the top 10 results retrieved by BM25 and test on the top 1000 results. We use the official task metric of R-Prec(ision) to validate our model. We also report MAP, another official metric for the task. 
We use these metrics rather than MRR and P@1 because CAR 5The track report suggests that the automatic judgments are a reasonable proxy for manual judgments as there is a strong correlation between the automatic and manual performance among the TREC submissions [10]. \fqueries often need many relevant passages to answer the question, not just one. We present the performance of our training curricula on TREC CAR in Table 4. We observe that in all cases, the training curricula significantly improve the ranking effectiveness. When training rankers using pairwise loss, the Dnorm curriculum is most effective, and when training with pointwise loss, the Drecip curriculum is most effective. In the case of ConvKNRM, without the curriculum, the ranker under-performs the unsupervised BM25 and SDM baselines; with the curricula, it performs on-par with them. For Vanilla BERT, both when trained with pairwise and pointwise losses, the ranker outperforms the unsupervised baselines without the curricula, and improves significantly when using the curricula. When compared with the supervised baselines, i.e., BERT and PACRR, the Vanilla BERT model trained with pairwise loss and Dnorm curriculum ends up performing about as well as the large BERT baseline reported by [28] (0.3307 versus 0.335 in terms of MAP, no statistically significant difference). This is a considerable achievement because the Vanilla BERT model is half the size and about twice as fast to execute. This observation strengthens the case for using curricula when training because it can allow for similar gains as using a much larger model. The remaining gap between our trained models and the optimal re-ranker on the CAR dataset, however, indicates that there is still room for improvement in this task. In particular, a considerable challenge is ranking passages without much context highly without adding too much noise to the model. 4.3 Non-factoid question answering We also test our approach on the ANTIQUE non-factoid question answering dataset [13]. Unlike TREC DL and CAR, ANTIQUE has more thoroughly annotated training queries, with an around 11 graded relevance judgments per query in the training and validation collections (crowdsourced) (see Table 2). Furthermore, these include explicit labels for non-relevant answers, which are not present in the other two datasets. This more extensive annotation comes at the expense of scale, however, with far fewer queries to train upon. Nevertheless, ANTIQUE represents another valuable set of conditions under which to evaluate our curricula. We randomly sample from the top 100 BM25 results for additional negative samples during training. We validate and test by re-ranking the top 100 BM25 results, and MRR as the validation metric and P@1 as a secondary metric. We use these two official task metrics (at relevance level of 3 or higher, as specified in [13]) because the answers in ANTIQUE are self-contained, and these metrics emphasize correct answers that are ranked highly and first, respectively. We report the curricula performance on ANTIQUE in Table 5. Similar to TREC DL, we observe that the Drecip and Dkde curricula are the most effective. For ConvKNRM, the curricula were able to overcome what would otherwise be a model that under-performs w.r.t. the BM25 and SDM unsupervised baselines. For the pointwise and pairwise Vanilla BERT models (which are already very effective), we observe gains beyond. 
In the case of pairwise-trained Vanilla BERT, the Drecip curriculum significantly boosted ranking performance. Despite our efforts to reproduce the effectiveness of Table 5: Ranking performance on the ANTIQUE non-factoid question answering task. Significant improvements in performance when using the training curricula (as compared to no curriculum) are indicated with \u2191(paired t-test p < 0.05). There are no statistically-significant differences among the curricula. The top results in each section are indicated in bold. ANTIQUE Ranker Training MRR P@1 ConvKNRM Pairwise 0.4920 0.3650 w/ Drecip \u21910.5617 \u21910.4550 w/ Dnorm \u21910.5523 \u21910.4450 w/ Dkde \u21910.5563 \u21910.4500 Vanilla BERT Pointwise 0.6694 0.5550 w/ Drecip 0.6858 0.5850 w/ Dnorm 0.6888 0.5800 w/ Dkde 0.6953 0.6000 Vanilla BERT Pairwise 0.6999 0.5850 w/ Drecip \u21910.7335 \u21910.6450 w/ Dnorm 0.7237 0.6250 w/ Dkde 0.7244 0.6250 Baselines BM25 Default Settings 0.5464 0.4450 Tuned 0.5802 0.4550 Optimized 0.6035 0.4950 SDM Default Settings 0.5229 0.4050 Tuned 0.5377 0.4400 Optimized 0.5491 0.4700 Best prior published (BERT) [13] 0.7968 0.7092 Optimal Re-Ranker 0.9400 0.9400 BERT reported in [13], we were unable to do so using the experimental settings described in that work. These results are still below that of an optional re-ranking, suggesting that there is still considerable room for improvement when ranking these non-factoid answers. To answer RQ1 (whether the training curricula are effective), we observed that for three answer ranking datasets (TREC DL, TREC CAR, and ANTIQUE) these curricula can improve the ranking effectiveness across multiple neural rankers and loss functions. We observe that when a ranker initially underperforms standard baselines (e.g., ConvKNRM), the performance is effectively boosted to the level of those baselines. When the ranker already exceeds these baselines (e.g., Vanilla BERT), we also observe a boost to ranking effectiveness, often comparable to or approaching the state-of-theart while being considerably faster (e.g., using BERT Base instead of BERT Large) or less complicated (e.g., not requiring an expensive pre-training step). The observation that the curricula are effective in these various conditions suggests that these curricula are generally effective. To answer RQ2 (under what conditions each curriculum is effective), we observe that Drecip and Dkde are generally more effective for natural-language questions (TREC DL and ANTIQUE), while Dnorm is more effective for keyword/structured questions \f(TREC CAR). One possible alternative explanation may be that the latter is better with weak relevance labels, as TREC CAR\u2019s relevance labels are obtained through a heuristic, rather than human annotators. It does not appear as if the amount of training data has an effect, as TREC DL and ANTIQUE exhibit similar characteristics, while having drastically different amounts of training data. 4.4 End of curriculum evaluation We already observed in Figure 3 that when using a training curriculum, ranking performance not only peaks higher sooner, but also leaves the model in a better starting point for when all samples are weighted equally. However, an important question remains: Is it important to train with equal weight for all samples or can the difficulty weights be used exclusively? 
To this end, we perform a test that forgoes the curriculum convergence parameter m, directly using D(\u00b7) as the training sample weight, regardless of training iteration (i.e., m = \u221e, or equivalently W = D instead of Eq. 3). We report the performance for this experiment on each dataset for each top-performing curriculum in Table 6 (m = \u221esetting). We observe that for all models on the TREC DL and ANTIQUE datasets, this approach leads to a drop in ranking effectiveness, suggesting that it is important to eventually perform equal sample weighting. Intuitively, this is important because if easy samples are always weighted higher than difficult samples, the model will be hindered in learning the more complicated function to rank difficult samples. Curiously, for TREC CAR, this setting sometimes leads to improved ranking effectiveness (though not a statistically significant improvement). One possible explanation is that in situations where weak labels are used (rather than human-judged labels from top retrieved results), it may be better to always apply the weighting, as some inferred positive labels may be too distant from what the model will typically encounter at inference time. To answer RQ3 (whether shifting to difficult samples is important), we find that it is indeed beneficial to use our proposed weighting technique given in Eq. 3, rather than always applying the difficulty weighting when using manually-assessed relevance labels. 4.5 Anti-curriculum: Hardest samples first To test whether our intuitions that \u201cdifficult\u201d samples are harmful during early phases of training, we conduct a study using an anti-curriculum, i.e., we train our models by weighting the more difficult samples higher than the easier samples. This was applied by swapping out the difficulty function D with b D(\u00b7) = 1 \u2212D(\u00b7). This has the effect of assigning high weights to samples that previously had low weights and vice versa. All usage of the difficulty function remains unchanged (e.g., the integration of the difficulty function into the weight function). Table 6 ( b D setting) presents a ranking performance comparison when using the anti-curriculum. We observe that the anticurriculum always reduces ranking effectiveness, sometimes significantly. In some cases, this can be rather severe; on TREC DL for Vanilla BERT (pairwise), the MRR is reduced by 0.0523 and P@1 is reduced by 0.1163, resulting in a model that underperforms one without any weighting at all. To answer RQ4, these results suggest that there is benefit to weighting the easiest samples higher first, rather than the more difficult samples. Table 6: Ranker performance when the curriculum always uses difficulty scores, and never assigns equal weight to all samples (i.e., m = \u221e), and when employing the anticurriculum ( b D). Significant reductions in performance are indicated with \u2193(paired t-test, p < 0.05). 
TREC DL Ranker Curriculum m MRR@10 P@1 ConvKNRM Dpair recip 20 0.6834 0.5581 Dpair recip \u221e 0.6744 0.5581 b Dpair recip 20 \u21930.5414 \u21930.3721 Vanilla BERT Dpoint recip 10 0.8942 0.8372 Dpoint recip \u221e 0.8205 0.7209 b Dpoint recip 10 0.8527 0.7442 Vanilla BERT Dpair kde 20 0.8837 0.8372 Dpair kde \u221e \u21930.7752 \u21930.6279 b Dpair kde 20 0.8314 0.7209 TREC CAR Ranker Curriculum m R-Prec MAP ConvKNRM Dpair norm 50 0.1258 0.1572 Dpair norm \u221e 0.1250 0.1579 b Dpair norm 50 \u21930.1030 \u21930.1324 Vanilla BERT Dpoint recip 20 0.2446 0.2864 Dpoint recip \u221e 0.2475 0.2894 b Dpoint recip 20 \u21930.2258 \u21930.2709 Vanilla BERT Dpair norm 10 0.2921 0.3307 Dpair norm \u221e \u21930.2669 \u21930.3103 b Dpair norm 10 0.2837 0.3276 ANTIQUE Ranker Curriculum m MRR P@1 ConvKNRM Dpair recip 100 0.5617 0.4550 Dpair recip \u221e 0.5368 0.4100 b Dpair recip 100 0.5366 0.4200 Vanilla BERT Dpoint kde 10 0.6953 0.6000 Dpoint kde \u221e \u21930.6139 \u21930.4750 b Dpoint kde 10 0.6677 0.5500 Vanilla BERT Dpair recip 5 0.7335 0.6450 Dpair recip \u221e 0.7158 0.6150 b Dpair recip 5 0.7193 0.6200 \f5" + }, + { + "url": "http://arxiv.org/abs/2004.14255v2", + "title": "Efficient Document Re-Ranking for Transformers by Precomputing Term Representations", + "abstract": "Deep pretrained transformer networks are effective at various ranking tasks,\nsuch as question answering and ad-hoc document ranking. However, their\ncomputational expenses deem them cost-prohibitive in practice. Our proposed\napproach, called PreTTR (Precomputing Transformer Term Representations),\nconsiderably reduces the query-time latency of deep transformer networks (up to\na 42x speedup on web document ranking) making these networks more practical to\nuse in a real-time ranking scenario. Specifically, we precompute part of the\ndocument term representations at indexing time (without a query), and merge\nthem with the query representation at query time to compute the final ranking\nscore. Due to the large size of the token representations, we also propose an\neffective approach to reduce the storage requirement by training a compression\nlayer to match attention scores. Our compression technique reduces the storage\nrequired up to 95% and it can be applied without a substantial degradation in\nranking performance.", + "authors": "Sean MacAvaney, Franco Maria Nardini, Raffaele Perego, Nicola Tonellotto, Nazli Goharian, Ophir Frieder", + "published": "2020-04-29", + "updated": "2020-05-26", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "main_content": "INTRODUCTION Pretrained deep transformer networks, e.g., BERT [8], have recently been transformative for many tasks, exceeding the effectiveness of prior art in many natural language processing and information retrieval tasks [4, 27, 31, 32, 47, 48]. However, these models are huge in size, thus expensive to run. For instance, in about one year, the largest pretrained transformer model grew from about 110 million parameters (GPT [34]) to over 8.3 billion (MegatronLM [39]), which, when applied to IR tasks like ad-hoc retrieval, have substantial impact on the query processing performance, to Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. 
Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org. SIGIR \u201920, July 25\u201330, 2020, Virtual Event, China \u00a9 2020 Association for Computing Machinery. ACM ISBN 978-1-4503-8016-4/20/07...$15.00 https://doi.org/10.1145/3397271.3401093 document only combined query only 1. Train time: \ufb01ne-tune masked transformer model for ranking train data 2. Index time: compute term representations document collection document term representations 3. Query time: load term representations and compute \ufb01nal score query ranking score layer 1 layer l layer n transformer combined query only document only document only combined query only document term representations Figure 1: High-level overview of PreTTR. At query time, document representations (which were computed at index time) are loaded, which reduces the computational burden. the point of being impractical [27]. We move these neural ranking models towards practicality. Runtime efficiency is a central tenant in information retrieval, though as neural approaches have gained prominence, their running time has been largely ignored in favor of gains in ranking performance [16]. Recently, the natural language processing community has begun to consider and measure running time [37], albeit mostly for reasons of environmental friendliness and inclusiveness. Chiefly, model distillation approaches [22, 36, 40] are prominent, which involve training a smaller model off of the predictions of a larger model. This smaller model can then be further fine-tuned for a specific task. While this approach can exceed the performance of a smaller model when only trained on the specific task data, it inherently limits the performance of the smaller model to that of the larger model. Nevertheless, distillation is a method complementary to ours; our approach can work with a distilled transformer network. Others have explored quantization approaches to reduce model sizes, by limiting the number of bits used to represent network\u2019s parameters to 16, 8, or fewer bits. Quantization was mainly arXiv:2004.14255v2 [cs.IR] 26 May 2020 \fexplored to make the neural networks suitable for embedded systems [11, 38]. We employ a basic quantization technique to reduce the storage requirements of the term representations. We propose a method for improving the efficiency of transformerbased neural ranking models. We exploit a primary characteristic of ad-hoc ranking: an initial indexing phase can be employed to pre-process documents in the collection to improve query-time performance. Specifically, we observe that much of the term interaction at query time happens locally within either the query or document, and only the last few layers of a deep transformer network are required to produce effective ranking scores once these representations are built. Thus, documents can be processed at index time through part of the network without knowledge of the query. The output of this partial network computation is a sequence of contextualized term representations. These representations can then be stored and used at query time to finish the processing in conjunction with the query. This approach can be trained end-to-end by masking the attention across the query and document during training time (i.e., disallowing the document from attending to the query and vice versa.) 
We call this approach PreTTR (Precomputing Transformer Term Representations). A high-level overview of PreTTR is shown in Figure 1. At train time, a transformer network is fine-tuned for ad-hoc document ranking. This transformer network masks attention scores in the first l layers, disallowing interactions between the query and the document. At index time, each document in the collection is processed through the first l layers, and the resulting term representations are stored. At query time, the query is processed through the first l layers, and then combined with the document term representations to finish the ranking score calculation. Since term representations of each layer can be large (e.g., 768 float values per document term in the base version of BERT), we also propose a compression approach. This approach involves training an encoding layer between two transformer layers that produces representations that can replicate the attention patterns exhibited by the original model. We experimentally show that all these processes result in a much faster network at query time, while having only a minimal impact on the ranking performance and a reasonable change in index size. The settings of PreTTR (amount of precomputation, degree of compression) can be adjusted depending on the needs of the application. These are all critical findings that are required to allow transformer networks to be used in practical search environments. Specifically, the lower computation overhead reduces query-time latency of using transformer networks for ranking, all while still yielding the substantial improvements to ranking accuracy that transformer-based rankers offer. In summary, the contributions of the paper are the following: \u2022 A new method for improving the efficiency of transformerbased neural ranking models (PreTTR). The approach exploits the inverted index to store a precomputed term representation of documents used to improve query-time performance; \u2022 A novel technique for compressing the precomputed term representations to reduce the storage burden introduced by PreTTR. This is accomplished by training a compression function between transformer layers to minimize the difference between the attention scores with and without compression; \u2022 A comprehensive experimental evaluation of PreTTR on multiple pre-trained transformer networks on two public datasets, namely, TREC WebTrack 2012 and TREC Robust 2004. Our PreTTR accelerates the document re-ranking stage by up to 42\u00d7 on TREC WebTrack 2012, while maintaining comparable P@20 performance. Moreover, our results show that our compression technique can reduce the storage required by PreTTR by up to 97.5% without a substantial degradation in the ranking performance; \u2022 For reproducibility, our code is integrated into OpenNIR [26], with instructions and trained models available at: https://github.com/Georgetown-IR-Lab/prettr-neural-ir. 2 RELATED WORK We present an overview of neural ranking techniques, pretrained transformers for ranking, and efforts to optimize the efficiency of such networks. 2.1 Neural Ranking As neural approaches have gained prominence in other disciplines, many have investigated how deep neural networks can be applied to document ranking [10, 17, 19, 44]. 
These approaches typically act as a final-stage ranking function, via a telescoping (also referred to as cascading, or multi-stage) technique [29, 43]; that is, initial ranking is conducted with less expensive approaches (e.g., BM25), with the final ranking score calculated by the more expensive machinelearned functions. This technique is employed in commercial web search engines [35]. Neural ranking approaches can broadly be categorized into two categories: representation-focused and interactionfocused models. Representation-focused models, such as DSSM [17], aim to build a dense \u201csemantic\u201d representation of the query and the document, which can be compared to predict relevance. This is akin to traditional vector space models, with the catch that the vectors are learned functions from training data. Interaction models, on the other hand, learn patterns indicative of relevance. For instance, PACRR [19] learns soft n-gram matches in the text, and KNRM [44] learns matching kernels based on word similarity scores between the query and the document. 2.2 Pretrained Transformers for Ranking Since the rise of pretrained transformer networks (e.g., BERT [8]), several have demonstrated their effectiveness on ranking tasks. Nogueira and Cho [31] demonstrated that BERT was effective at passage re-ranking (namely on the MS-MARCO and TREC CAR datasets) by fine-tuning the model to classify the query and passage pair as relevant or non-relevant. Yang et al. [47] used BERT in an end-to-end question-answering pipeline. In this setting, they predict the spans of text that answer the question (same setting as demonstrated on SQuAD in [8]). MacAvaney et al. [27] extended that BERT is effective at document ranking, both in the \u201cvanilla\u201d setting (learning a ranking score from the model directly) and when using the term representations from BERT with existing neural ranking architectures (CEDR). Dai and Callan [4] found that the additional context given by natural language queries (e.g., topic descriptions) can improve document ranking performance, when compared with keyword-based queries. Yang et al. [48] showed that \fBERT scores aggregated by sentence can be effective for ranking. Doc2Query [32] employs a transformer network at index time to add terms to documents for passage retrieval. The authors also demonstrate that a BERT-based re-ranker can be employed atop this index to further improve ranking performance. 2.3 Neural Network Efficiency Pretrained transformer networks are usually characterized by a very large numbers of parameters and very long inference times, making them unusable in production-ready IR systems such as web search engines. Several approaches were proposed to reduce the model size and the inference computation time in transformer networks [12]. Most of them focus on the compression of the neural network to reduce their complexity and, consequently, to reduce their inference time. Neural network pruning consists of removing weights and activation functions in a neural network to reduce the memory needed to store the network parameters. The objective of pruning is to convert the weight matrix of a dense neural network to a sparse structure, which can be stored and processed more efficiently. Pruning techniques work both at learning time and as a post-learning step. In the first category, Pan et al. propose regularization techniques focused at removing redundant neurons at training time [33]. Alternatively, in the second category, Han et al. 
propose to remove the smallest weights in terms of magnitude and their associated edges to shrink the size of the network [13]. Conversely, our proposed approach does not change the dense structure of a neural network to a sparser representation, but it aims to precompute the term representation of some layers, thus completely removing the document-only portion of a transformer neural network (see Figure 1). Another research line focuses on improving the efficiency of a network is weight quantization. The techniques in this area aim at reducing the number of bits necessary to represent the model weights: from the 32 bits necessary to represent a float to only a few bits [18]. The state of the art network quantization techniques [1, 45] aims at quantizing the network weights using just 2-3 bits per parameter. These approaches proved effective on convolutional and recurrent neural networks. Quantization strategies could be used in our proposed approach. However, to reduce the size of the term representations, we opt to instead focus on approaches to reduce the dimensionality of the term representations, and leave quantization of the stored embeddings to future work. A third research line employed to speed-up neural networks is knowledge distillation [15]. It aims to transform the knowledge embedded in a large network (called teacher) into a smaller network (called student). The student network is trained to reproduce the results of the teacher networks using a simpler network structure, with less parameters than those used in the teacher network. Several strategies have been proposed to distill knowledge in pretrained transformer networks such as BERT [22, 36, 40]. Our PreTTR method is orthogonal to knowledge distillation of transformer network. In fact, our approach can be applied directly to any kind of transformer, including those produced by knowledge distillation. Table 1: Table of symbols. Symbol(s) Definition q Query d Document R(q, d) Neural ranking architecture T (s) Transformer network s a sequence of input tokens E Embedding layer Li Transformer encoding layer si Transformer token representations after layer i ai Attention weights used in layer i c Classification representation d Dimension of the classification representation m Length of sequence s h Number of attention heads per layer n Number of layers in T Wcombine Vanilla BERT weight combination l Layer number the transformer is executed for precomputing document term vectors e Compressed size r Compressed representation after layer l W /bcomp Compression parameters W /bdecomp De-compression parameters \u02c6 sl De-compressed representation after layer l 2.4 Neural Ranking Efficiency Scalability and computational efficiency are central challenges in information retrieval. While the efficiency of learning to rank solutions for document re-ranking have been extensively studied [6, 24, 41], computational efficiency concerns have largely be ignored by prior work in neural ranking, prompting some to call for more attention to this matter [16]. That being said, some efforts do exist. For instance, Zamani et al. [50] investigate learning sparse query and document representations which allow for indexing. Ji et al. [21] demonstrate that Locality-Sensitive Hashing (LSH) and other tricks can be employed to improve the performance of interaction-focused methods such as DRMM [10], KNRM [44], and ConvKNRM [5]. 
This approach does not work for transformer models, however, because further processing of the term embeddings is required (rather than only computing similarity scores between the query and document). Within the realm of transformer-based models for ad-hoc ranking, to our knowledge only [27] and [32] acknowledge that retrieval speed is substantially impacted by using a deep transformer network. As a result Hofst\u00e4tter and Hanbury [16] call for more attention to be paid to run time. MacAvaney et al. find that limiting the depth of the transformer network can reduce the re-ranking time while yielding comparable ranking performance [27]. Nogueira et al. find that their approach is faster than a transformer-based re-ranker, but it comes at a great cost to ranking performance: a trade-off that they state can be worthwhile in some situations [32]. In contrast with both these approaches, we employ part of the transformer network at index time, and the remainder at query-time (for re-ranking). We find that this can yield performance on par with the full network, while significantly reducing the query time latency. 3 MOTIVATION Let a generic transformer network T : s 7\u2192c map a sequence s of m tokens (e.g., query and document terms) to a d-dimensional \f[CLS] [tax] [evade] [world][news] [for] [tax] [fraud][today] [SEP] \u2026 [SEP] query document embed. layer 1 layer l comp. layer l+1 \u21a6\u21a4 layer n \u2026 \u2026 \u2026 \u2026 Wcombine ranking score tokens \u2026 \u2026 \u21a6\u21a4 \u21a6\u21a4 \u21a6\u21a4 \u21a6\u21a4 \u21a6\u21a4 \u21a6\u21a4 \u21a4\u21a6 \u21a4\u21a6 \u21a4\u21a6 \u21a4\u21a6 \u21a4\u21a6 \u21a4\u21a6 \u21a4\u21a6 \u2026 decomp. \u21a6\u21a4 \u21a6\u21a4 \u21a6\u21a4 \u21a6\u21a4 \u21a4\u21a6 \u21a4\u21a6 \u21a4\u21a6 \u21a4\u21a6 Storage Figure 2: Overview of PreTTR. Compressed term representations for document layers 1 to l are computed and stored at index time (green segments) while term representations for query layers 1 to l (orange segments) and joint querydocument representations for layers l + 1 to n (blue segments) are computed at query time to produce the final ranking score. Compression and decompression can optionally be applied between layers l and l + 1 to reduce the storage needed for the document term representations. output representation c \u2208Rd. As depicted in Figure 2, the transformer network is composed by an initial embedding layer E and by n layers L1, . . . , Ln. The embedding layer E maps each of the m input tokens into the initial d-dimensional token representations matrix s0 \u2208Rm\u00d7d. Each layer Li takes the token representations matrix si\u22121 \u2208Rm\u00d7d from the previous layer Li\u22121 and produces a new representations matrix si \u2208Rm\u00d7d. The specific representation used and operations performed in E and Li depend on the specific transformer architecture (e.g., BERT uses token, segment, and position embeddings for the embedding layer E and self-attention, a feed-forward layer, and batch normalization in each layer Li). However, the primary and common component of each layer Li is the self-attention mechanism and associated procedure. When the transformer network is trained, every layer produces a selfattention tensor ai \u2208Rh\u00d7m\u00d7m, where h is the number of attention heads per layer, i.e., the number of attention \u201crepresentation subspaces\u201d per layer. A general description of this process is given by Vaswani et al. 
[42], while different transformer architectures may have tweaks to this general structure or pre-training procedure. We assume a special output classification token, e.g., [CLS] in BERT, is included as a token in c, and that the final representation of this token is used as the final output of the transformer network, i.e., c = T(s). Without loss of generality, here we only concern ourselves with the [CLS] output classification token, i.e., we ignore other token representation outputs; this is the special token representation that models such as BERT use to generate ranking scores. We illustrate how neural transformer networks are used in a ranking scenario. We follow the Vanilla BERT model proposed by MacAvaney et al. [27] and generalize it. Let a ranking function R(q, d) \u2208R map a query q and a document d to a real-valued ranking score. Neural rankers based on transformer networks such as Vanilla BERT compute the ranking score by feeding the querydocument pair into the transformer. Given a query q and a document d, their tokens are concatenated into a suitable transformer input, e.g., s = [CLS]; q; [SEP]; d; [SEP], where \u201c;\u201d represents the concatenation operator.1 The output of the transformer network corresponding to this input is then linearly combined using a tuned weight matrixWcombine \u2208Rd\u00d71 to compute the final ranking score as follows: R(q, d) = T \u0000[CLS]; q; [SEP]; d; [SEP]\u0001Wcombine. (1) The processing time of state-of-the-art neural rankers based on transformer networks is very high, e.g., approximately 50 documents ranked per second on a modern GPU, making such rankers impractical for most ad-hoc retrieval tasks. To gain an understanding of where are the most expensive components of a transformer network such as the Vanilla BERT model, we measure the run-times of the main steps of the model. We find that most of the processing is performed in the computations involving the transformer\u2019s layers. In particular, about 50% of the total time is spent performing attention-related tasks. Moreover, the feed-forward step of the transformer (consisting of intermediate and output in diagram) accounts for about 48% of the total time, and is largely due to the large intermediate hidden representation size for each token. This breakdown motivates the investigation of possible solutions to reduce the processing time of transformer networks, in particular in reducing the time spent in traversing the transformer\u2019s layers. 4 PROPOSED SOLUTION We discuss how our PreTTR approach improve the efficiency of processing queries using a transformer network by reducing the computational impact of the network\u2019s layers. 4.1 PreTTR: Precomputing Transformer Term Representations We improve the query time performance of transformer models by precomputing document term representations partially through the transformer network (up to transformer layer l). We then use these representations at query time to complete the execution of the network when the query is known. This is accomplished at model training time by applying an attention mask to layers L1, L2, . . . , Ll , in which terms from the query are not permitted to attend to terms from the document and vice versa. In layers Ll+1, . . . , Ln, this attention mask is removed, permitting any token to attend to any other token. Once trained, the model is used at both index and query time. At index time, documents are encoded (including the trailing [SEP] token)2 by the transformer model through layers L1, L2, . . . 
, Ll without a query present (Figure 2, green segments). The token representations generated at index time at layer Ll are then stored to be reused at query time (Figure 2, document storage between layers Ll and Ll+1). To answer a query, candidate documents are selected, e.g., the top documents 1We use the BERT convention of [CLS] and [SEP] to represent the classification and separation tokens, respectively. 2There is evidence that the separator token performs an important function for pretrained transformer models, by acting as a no-op for the self-attention mechanism [2]. \fretrieved by a first-stage simple ranking model [41], and precomputed term representations are loaded. The query terms (including the leading [CLS] and training [SEP] tokens) are encoded up to layer Ll without a document present (Figure 2, orange segments). Then, the representations from the query and the document are joined, and the remainder of the transformer network is executed over the entire sequence to produce a ranking score (Figure 2, blue segments). Since (1) the length of a query is typically much shorter than the length of a document, (2) the query representations can be re-used for each document being ranked, (3) each transformer layer takes about the same amount of time to execute, and (4) the time needed to perform term embedding is comparatively low, PreTTR decreases by about n\u2212l n the cost of traversing the transformer network layers. With a sufficiently large value of l, this results in considerable time savings. Note that this reduction can be at most equal to 1 n because, when l = n, no information about the document ever contributes to the ranking score, resulting in identical scores for every document. Moreover, we show experimentally that this can be further improved by limiting the computation of the final layer to only the [CLS] representation. 4.2 Token Representation Compression Although PreTTR can reduce the run-time cost of traversing the first l layers of the transformer network at query time, the solution proposed might be costly in terms of storage requirements because the representation size d is quite large (e.g., 1024, 768 or 512 float values per token). To address this issue, we propose a new token compression technique that involves pre-training a simple encoderdecoder network. This network is able to considerably reduce the token representation size. We opt for this approach because it can fit seamlessly into the transformer network, while reducing the number of dimensions needed to represent each token. The compressor is added as an additional component of the transformer network between layers Ll and Ll+1. We compress the input by using a simple feed-forward and normalization procedure, identical to the one used within a BERT layer to transform the output (but with a smaller internal representation rather than a larger one). We optimize the weights for the compression network in two stages: (1) an initial pre-training stage on unlabeled data, and (2) a fine-tuning stage when optimizing for relevance. For a compressed size of e values, a two-step procedure is used. First, the compressed representations r \u2208Rm\u00d7e are built using r = gelu(slWcomp + bcomp), where gelu(\u00b7) is a Gaussian Error Linear Unit [14], and Wcomp \u2208Rd\u00d7e and bcomp \u2208Re are the new learned weight parameters. These compressed representations r can be stored in place of sl. 
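To make the two phases concrete, the following is a minimal PyTorch-style sketch of the index-time/query-time split described above, with the compression step just introduced applied before storage (the matching de-compression, described next, is passed in as a callable). The function names, the list of per-layer callables, and the single un-batched query-document pair are illustrative assumptions rather than the exact implementation.

import torch
import torch.nn.functional as F

def encode_prefix(token_emb, layers, l):
    # Run already-embedded tokens through layers L1..Ll in isolation,
    # mirroring the attention mask that blocks query-document interaction.
    s = token_emb
    for layer in layers[:l]:
        s = layer(s)
    return s

def index_document(doc_emb, layers, l, W_comp, b_comp):
    # Index time: encode the document alone up to layer l, then compress and store.
    s_l = encode_prefix(doc_emb, layers, l)
    r = F.gelu(s_l @ W_comp + b_comp)   # r = gelu(s_l W_comp + b_comp)
    return r.half()                     # e values per stored document token

def score(q_l, stored_r, decompress, layers, l, W_combine):
    # Query time: q_l = encode_prefix(query_emb, layers, l) is computed once per
    # query and reused across all candidate documents; the stored representations
    # are de-compressed, joined with the query, and layers l+1..n finish the pass.
    s = torch.cat([q_l, decompress(stored_r.float())], dim=0)
    for layer in layers[l:]:
        s = layer(s)
    return s[0] @ W_combine             # [CLS] representation -> ranking score

Omitting compression corresponds to storing s_l directly and passing an identity function for decompress.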
Second, the compressed representations r are then expanded back out to \u02c6 sl \u2208Rm\u00d7d via a second linear transformation involving the learned weight parameters Wdecomp, bdecomp, and batch normalization. The decompressed representations \u02c6 sl are then used in place of the original representation sl for the remaining layers of the transformer. In preliminary experiments, we found the compression and decompression parameters to be difficult to learn jointly with the ranker itself. Thus, we instead propose a pre-training approach to provide an effective initialization of these parameters. We want the transformer network with the compression mechanism to behave similarly to that of the network without such compression: we do not necessarily care about the exact representations themselves. Thus, we use an attention-based loss function. More specifically, we optimize our compression/decompression network to reduce the mean squared error of the attention scores in the last n \u2212l layers of the compressed transformer network and the original transformer network. Thus, the loss function we use to train our compression and decompression network is: L(al+1, . . . , an, \u02c6 al+1, . . . , \u02c6 an) = 1 n \u2212l n \u00d5 i=l+1 MSE(ai, \u02c6 ai), (2) where ai represents the attention scores at layer i from the unmodified transformer network, \u02c6 ai represents the attention scores at layer i from the transformer network with the compression unit, and MSE(\u00b7) is the mean squared error function. With this loss function, the weights can be pre-trained on a massive amount of unlabeled text. We use this procedure as an initial pre-training step; we further fine-tune the weights when optimizing the entire ranking network for relevance. 5 EXPERIMENTAL SETUP We detail the setup employed in our experiments: the datasets, namely TREC WebTrack 2012 and TREC Robust 2004, and the transformer networks we use, i.e., Vanilla BERT and some of its variants. Then, we discuss the training procedure adopted in training the transformer networks and our proposed compression/decompression technique. Details about the evaluation metrics and the baselines used conclude the section. 5.1 Datasets We test PreTTR on two datasets, namely TREC WebTrack 2012 and TREC Robust 2004. Table 2 summarizes some salient statistics about the two datasets. Table 2: Datasets characteristics. WebTrack 2012 Robust 2004 Domain Web Newswire Document collection ClueWeb09-B TREC Disks 4 & 5 # Queries 50 249 # Documents 50M 528k Tokens / query 2.0 2.7 Judgments / query 321 1.2k The TREC WebTrack 2012 dataset consists of web queries and relevance judgments from the ClueWeb09-B document collection. We use relevance judgments from 2012 for test and the ones from 2011 for validation. The relevance judgments available from the remaining years of the TREC WebTrack, i.e., 2009, 2010, 2013, and 2014 are used for training. Note that, while the TREC WebTrack 2009\u201312 have been evaluated on the ClueWeb09-B document collection, the TREC WebTrack 2013\u201314 have been evaluated on the ClueWeb12 [19] document collection.3 We generate the training samples by using the corresponding document collection. This is 3https://lemurproject.org/clueweb09/ and https://lemurproject.org/clueweb12/. \fthe setup used by several other works on TREC WebTrack 2012, e.g., [19, 27]. TREC Robust 2004 consists of 249 news queries. 
For these experiments, we use a standard k-fold evaluation (k = 5) where each iteration uses three folds for training, one for validation, and a final held-out fold for testing. We perform this evaluation by using the five folds provided by Huston and Croft [20]. 5.2 Transformer Networks We use the Vanilla transformer model from [27]. This model yields comparable performance to other leading formulations, while being simpler, e.g., no paragraph segmentation required, as is needed by FirstP/MaxP/SumP [4], or alternative training datasets and sentence segmentation, as required by the system of Yang et al. [48]. Vanilla BERT encodes as much of the document as possible (adhering to the transformer maximum input length constraint), and averages the classification embeddings when multiple document segments are required. We employ the same optimal hyper-parameters for the model presented in [27]. For our primary experiments, we use the pretrained bert-base-uncased [8]. We do not test with the large variants of BERT because the larger model exhibits only marginal gains for ranking tasks, while being considerably more expensive to run [31]. To show the generality of our approach we present tests conducted also for other pretrained transformers in Section 6.5: a version of BERT that was more effectively pre-trained, i.e., RoBERTa [25] (roberta-base) and a smaller (distilled) version of BERT, i.e., DistilBERT [36] (distilbert-base-uncased). 5.3 Training We train all transformer models using pairwise softmax loss [7] and the Adam optimizer [23] with a learning rate of 2\u00d710\u22125. We employ a batch size of 16 pairs of relevant and non-relevant documents with gradient accumulation. Training pairs are selected randomly from the top-ranked documents in the training set, where documents that are labeled as relevant are treated as positive, and other topranked documents are considered negative. Every 32 batches, the model is validated, and the model yielding the highest performance on the validation set is selected for final evaluation. For training the document term compressor/decompressor (as described in Section 4.2), we use the Wikipedia text from the TREC Complex Answer Retrieval (CAR) dataset [9] (version 2.0 release). This dataset was chosen because it overlaps with the data on which BERT was originally trained on, i.e., Wikipedia, and was used both for evaluation of passage ranking approaches [30] and as a weak supervision dataset for training neural models [28]. We sample text pairs using combinations of headings and paragraphs. Half the pairs use the heading associated with the paragraph, and the other half use a random heading from a different article, akin to the next sentence classification used in BERT pre-training. The compression and decompression parameters (Wcomp, bcomp,Wdecomp, and bdecomp) are trained to minimize the difference in attention scores, as formulated in Eq. (2). We found that the compressor training process converged by 2M samples. 5.4 Evaluation Since the transformer network is employed as a final-stage reranker, we evaluate the performance of our approach on each dataset using two precision-oriented metrics. Our primary metric for both datasets is P@20 (also used for model validation). Following the evaluation convention from prior work [27], we use ERR@20 for TREC WebTrack 2012 and nDCG@20 for TREC Robust 2004 as secondary metrics. We also evaluate the query-time latency of the models. 
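For reference, a minimal sketch of the precision metric and of a simple latency-measurement harness of the kind described here is given below; the function names, the warm-up size, and the rerank_fn callable are illustrative assumptions, not the exact evaluation code.

import time

def precision_at_k(ranked_doc_ids, relevant_ids, k=20):
    # P@20: fraction of the top-k re-ranked documents judged relevant for the query.
    return sum(1 for d in ranked_doc_ids[:k] if d in relevant_ids) / k

def mean_query_latency(rerank_fn, queries, warmup=100):
    # Average per-query re-ranking time, after warm-up queries so that the model
    # and term representations are already resident in memory.
    for q in queries[:warmup]:
        rerank_fn(q)
    start = time.perf_counter()
    for q in queries[warmup:]:
        rerank_fn(q)
    return (time.perf_counter() - start) / max(1, len(queries) - warmup)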
We conduct these experiments using commodity hardware: one GeForce GTX 1080 Ti GPU. To control for factors such as disk latency, we assume the model and term representations are already loaded in the main memory. In other words, we focus on the impact of the model computation itself. However, the time spent moving the data to and from the GPU memory is included in the time. 5.5 Baselines The focus of this work is to reduce the query-time latency of using Vanilla transformer models, which are among the state-of-the-art neural ranking approaches. Thus, our primary baseline is the unmodified Vanilla transformer network. To put the results in context, we also include the BM25 results tuned on the same training data. We tune BM25 using grid search with Anserini\u2019s implementation [46], over k1 in the range of 0.1\u20134.0 (by 0.1) and b in the range of 0.1\u20131.0 (by 0.1). We also report results for CEDR-KNRM [27], which outperform the Vanilla transformer approaches. However, it come with its own query-time challenges. Specifically, since it uses the term representations from every layer of the transformer, this would require considerably more storage. To keep our focus on the typical approach, i.e., using the [CLS] representation for ranking, we leave it to future work to investigate ways in which to optimize the CEDR model.4 We also report results for Birch [49], which exploits transfer learning from the TREC Microblog dataset. To keep the focus of this work on the effect of pre-computation, we opt to evaluate in the single-domain setting. 6 RESULTS AND DISCUSSION We report the results of a comprehensive experimental evaluation of the proposed PreTTR approach. In particular, we aim at investigating the following research questions: RQ1 What is the impact of PreTTR on the effectiveness of the Vanilla BERT transformer network in ad-hoc ranking? (Section 6.1) RQ2 What is the impact of the token representation compression on the effectiveness of PreTTR? (Section 6.2) RQ3 What is the impact of the proposed PreTTR approach on the efficiency of Vanilla BERT when deployed as a second stage re-ranker? (Section 6.3) RQ4 What is the impact of PreTTR when applied to first n \u22121 layers of a transformer network? (Section 6.4) RQ5 What is the impact of PreTTR when applied to different transformer networks such as RoBERTA and DistilBERT? (Section 6.5) 4We note that techniques such as LSH hashing can reduce the storage requirements for CEDR, as it uses the representations to compute query-document similarity matrices, as demonstrated by [21]. \f6.1 Precomputing Transformer Term Representations To answer RQ1 we first evaluate the effect of the precomputation of term representations. Table 3 provides a summary of the ranking performance of PreTTR-based Vanilla BERT at layer l. At lower values of l, the ranking effectiveness remains relatively stable, despite some minor fluctuations. We note that these fluctuations are not statistically significant when compared with the base model (paired t-test, 99% confidence interval) and remain considerably higher than the tuned BM25 model. We also tested using a two onesided equivalence (TOST) and found similar trends (i.e., typically the the significant differences did not exhibit significant equivalence.) In the case of TREC WebTrack 2012, the model achieves comparable P@20 performance w.r.t. the base model with only a single transformer layer (12), while the first 11 layers are precomputed. 
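For the statistical comparisons referenced throughout this section, a minimal SciPy sketch of the paired t-test and of a simple TOST-style equivalence check is shown below; the equivalence margin and alpha shown here are illustrative placeholders rather than the exact values used in our analysis.

from scipy import stats

def paired_tests(base_scores, variant_scores, alpha=0.01, margin=0.02):
    # base_scores / variant_scores: per-query metric values (e.g., P@20), paired by query.
    # Paired t-test for a difference, plus a TOST equivalence check: both one-sided
    # tests must reject that the mean difference lies outside +/- margin.
    diffs = [v - b for b, v in zip(base_scores, variant_scores)]
    _, p_diff = stats.ttest_rel(variant_scores, base_scores)
    _, p_lower = stats.ttest_1samp(diffs, -margin, alternative='greater')
    _, p_upper = stats.ttest_1samp(diffs, margin, alternative='less')
    return {'significantly_different': p_diff < alpha,
            'significantly_equivalent': max(p_lower, p_upper) < alpha}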
Interestingly, the ERR@20 suffers more than P@20 as more layers are precomputed. This suggests that the model is able to identify generally-relevant documents very effectively with only a few transformer layers, but more are required to be able to identify the subtleties that contribute to greater or lesser degrees of relevance. Although it would ideally be best to have comparable ERR@20 performance in addition to P@20, the substantial improvements that this approach offers in terms of query-time latency (see Section 6.3) may make the trade-off worth it, depending on the needs of the application. On the TREC Robust 2004 newswire collection, precomputing the first 10 layers yields comparable P@20 performance w.r.t. the base model. Interestingly, although l = 11 yields a relatively effective model for WebTrack, Robust performance significantly suffers in this setting, falling well below the BM25 baseline. We also observe a significant drop in nDCG@20 performance at l = 8, while P@20 performance remains stable until l = 11. This is similar to the behavior observed on WebTrack: as more layers are precomputed, the model has a more difficult time distinguishing graded relevance. We observe that the highest-performing models (metric in bold) are not always the base model. However, we note that these scores do not exhibit statistically significant differences when compared to the base model. In summary, we answer RQ1 by showing that Vanilla BERT can be successfully trained by limiting the interaction between query terms and document terms, and that this can have only a minimal impact on ranking effectiveness, particularly in terms in the precision of top-ranked documents. This is an important result because it shows that document term representations can be built independently of the query at index time. 6.2 Term Representation Compression To answer RQ2, we run the Vanilla BERT model with varying sizes e of the compressed embedding representations over the combination layers l that give the most benefit to query latency time (i.e., l = 7, 8, 9, 10, 11). Layers l \u22646 are not considered because they provide less computational benefit (taking about one second or more per 100 documents, see Section 6.3). See Table 4 for a summary of the results on TREC WebTrack 2012 and Robust 2004. We find that the representations can usually be compressed down to at least e = 256 (67% of the original dimension of 768) without substantial Table 3: Breakdown of ranking performance when using a PreTTR-based Vanilla BERT ranking, joining the encodings at layer l. Statistically significant differences with the base model are indicated by \u2193(paired t-test by query, p < 0.01). WebTrack 2012 Robust 2004 Ranker P@20 ERR@20 P@20 nDCG@20 Base 0.3460 0.2767 0.3784 0.4357 l = 1 0.3270 0.2831 0.3851 0.4401 l = 2 0.3170 0.2497 0.3821 0.4374 l = 3 0.3440 0.2268 0.3859 0.4386 l = 4 0.3280 0.2399 0.3701 0.4212 l = 5 0.3180 0.2170 0.3731 0.4214 l = 6 0.3270 0.2563 0.3663 0.4156 l = 7 0.3180 0.2255 0.3656 0.4139 l = 8 0.3140 0.2344 0.3636 \u21930.4123 l = 9 0.3130 0.2297 0.3644 \u21930.4106 l = 10 0.3360 0.2295 0.3579 \u21930.4039 l = 11 0.3380 \u21930.1940 \u21930.2534 \u21930.2590 Tuned BM25 0.2370 0.1418 0.3123 0.4140 Vanilla BERT [27] 0.4042 0.4541 CEDR-KNRM [27] 0.4667 0.5381 Birch [49] 0.4669 0.5325 loss in ranking effectiveness. In Robust, we observe a sharp drop in performance at e = 128 (83% dimension compression) at layers 7\u201310. 
There is no clear pattern for which compression size is most effective for WebTrack 2012. Note that these differences are generally not statistically significant. This table shows that, to a point, there is a trade-off between the size of the stored representations and the effectiveness of the ranker. Without any intervention, approximately 112TB of storage would be required to store the full term vectors for ClueWeb09-B (the document collection for TREC WebTrack 2012). For web collections, this can be substantially reduced by eliminating undesirable pages, such as spam. Using recommended settings for the spam filtering approach proposed by Cormack et al. [3] for ClueWeb09-B, the size can be reduced to about 34TB. Using our compression/decompression approach, the storage needed can be further reduced, depending on the trade-off of storage, query-time latency, and storage requirements. If using a dimension e = 128 for the compressed representation (with no statistically significant differences in effectiveness on WebTrack), the size is further reduced to 5.7TB, which yields a 95% of space reduction. We also observed that there is little performance impact by using 16-bit floating point representations, which further reduces the space to about 2.8TB. Although this is still a tall order, it is only about 2.5% of the original size, and in the realm of reasonable possibilities. We leave it to future work to investigate further compression techniques, such as kernel density estimation-based quantization [38]. Since the size scales with the number of documents, the storage requirements are far less for smaller document collections such as newswire. Document representations for the TREC Disks 4 & 5 (the document collection for the Robust 2004) can be stored in about \fTable 4: Ranking performance at various compression sizes. Statistically significant increases and decreases in ranking performance (compared to the model without compression) are indicated with \u2191and \u2193, respectively (paired t-test by query, p < 0.01). We mark columns with * to indicate cases in which the uncompressed model (none) significantly underperforms the Base model performance (from Table 3). TREC WebTrack 2012 P@20 ERR@20 Compression l = 7 l = 8 l = 9 l = 10 l = 11 l = 7 l = 8 l = 9 l = 10 * l = 11 (none) 0.3180 0.3140 0.3130 0.3360 0.3380 0.2255 0.2344 0.2297 0.2295 0.1940 e = 384 (50%) 0.3430 0.3260 0.2980 0.3360 0.3090 0.2086 0.2338 0.1685 0.2233 0.2231 e = 256 (67%) 0.3380 0.3120 \u21910.3440 0.3260 0.3250 \u21910.2716 0.2034 \u21910.2918 0.1909 0.2189 e = 128 (83%) 0.3100 0.3210 0.3320 0.3220 0.3370 0.2114 0.2234 0.2519 0.2239 0.2130 TREC Robust 2004 P@20 nDCG@20 Compression l = 7 l = 8 l = 9 l = 10 * l = 11 l = 7 * l = 8 * l = 9 * l = 10 * l = 11 (none) 0.3656 0.3636 0.3644 0.3579 0.2534 0.4139 0.4123 0.4106 0.4039 0.2590 e = 384 (50%) 0.3587 \u21930.3369 \u21930.3435 0.3522 0.2687 0.4098 \u21930.3720 \u21930.3812 0.3895 \u21910.2807 e = 256 (67%) \u21930.2950 0.3623 \u21930.2695 0.3535 0.2635 \u21930.3130 0.4074 \u21930.2753 0.3983 0.2694 e = 128 (83%) \u21930.2461 \u21930.2530 \u21930.2499 \u21930.2607 0.2655 \u21930.2454 \u21930.2568 \u21930.2533 \u21930.2608 0.2713 Table 5: Vanilla BERT query-time latency measurements for re-ranking the top 100 documents on TREC WebTrack 2012 and TREC Robust 2004. 
The latency is broken down into time to compute query representations up through layer l, the time to decompress document term representations, and the time to combine the query and document representations from layer l + 1 to layer n. The l = 11 setting yields a 42\u00d7 speedup for TREC WebTrack, while not significantly reducing the ranking performance. TREC WebTrack 2012 Robust04 Ranker Total Speedup Query Decom. Combine Total Base 1.941s (1.0\u00d7) 2.437s l = 1 1.768s (1.1\u00d7) 2ms 10ms 1.756s 2.222s l = 2 1.598s (1.2\u00d7) 3ms 10ms 1.585s 2.008s l = 3 1.423s (1.4\u00d7) 5ms 10ms 1.409s 1.792s l = 4 1.253s (1.5\u00d7) 6ms 10ms 1.238s 1.575s l = 5 1.080s (1.8\u00d7) 7ms 10ms 1.063s 1.356s l = 6 0.906s (2.1\u00d7) 9ms 10ms 0.887s 1.138s l = 7 0.735s (2.6\u00d7) 10ms 10ms 0.715s 0.922s l = 8 0.562s (3.5\u00d7) 11ms 10ms 0.541s 0.704s l = 9 0.391s (5.0\u00d7) 12ms 10ms 0.368s 0.479s l = 10 0.218s (8.9\u00d7) 14ms 10ms 0.194s 0.266s l = 11 0.046s (42.2\u00d7) 15ms 10ms 0.021s 0.053s 195GB, without any filtering and using the more effective e = 256 for the dimension of the compressed representation. In summary, regarding RQ2, we show that, through our compression technique, one can reduce the storage requirements of PreTTR. With a well-trained compression and decompression weights, this can have minimal impact on ranking effectiveness. 6.3 Re-ranking Efficiency The reduction of the re-ranking latency achieved by our proposed PreTTR is considerable. To answer RQ3, in Table 5 we report an analysis of the re-ranking latency of PreTTR-based Vanilla BERT when precomputing the token representations at a specific layer l and a comparison against the base model, i.e., Vanilla BERT. Without our approach, re-ranking the top 100 results for a query using Vanilla BERT takes around 2 seconds. Instead, when using PreTTR-based Vanilla BERT at layer l = 11, which yields comparable P@20 performance to the base model on the TREC WebTrack 2012 collection, the re-ranking process takes 46 milliseconds for 100 documents, i.e., we achieve a 42.0\u00d7 speedup. One reason this performance is achievable is because the final layer of the transformer network does not need to compute the representations for each token; only the representations for the [CLS] token are needed, since it is the only token used to compute the final ranking score. Thus, the calculation of a full self-attention matrix is not required. Since the [CLS] representation is built in conjunction with the query, it alone can contain a summary of the query terms. Furthermore, since the query representation in the first l layers is independent of the document, these representations are re-used among all the documents that are re-ranked. Of the time spent during re-ranking for l = 11, 32% of the time is spent building the query term representation, 21% of the time is spent decompressing the document term representations, and the remainder of the time is spent combining the query and document representations. Moreover, when using PreTTR-based Vanilla BERT at layer l = 10, the transformer network needs to perform a round of computations on all the term representations. Nevertheless, in this case, our PreTTR approach leads to a substantial speedup of 8.9\u00d7 w.r.t. Vanilla BERT. We also observe that the time to decompress the term representations (with e = 256) remains a constant overhead, as expected. 
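The rough saving discussed in Section 4.1 (the layer-traversal cost drops to about (n - l)/n of the original, i.e., a speedup of roughly n/(n - l) on that component) can be sanity-checked with simple arithmetic, as in the sketch below with n = 12 as in BERT-base; the measured end-to-end speedups in Table 5 deviate from this crude estimate because query-side encoding, decompression, and the [CLS]-only final layer are not modeled.

def layer_traversal_speedup(n, l):
    # With roughly equal per-layer cost, precomputing the document side of layers
    # 1..l leaves about (n - l)/n of the layer-traversal cost at query time.
    return n / (n - l)

for l in (6, 10, 11):
    print(l, round(layer_traversal_speedup(12, l), 1))   # 2.0x, 6.0x, 12.0x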
We observe a similar trend when timing the performance of Robust 2004, though we would recommend using l \u226410 for this dataset, as l = 11 performs poorly in terms of ranking effectiveness. Nonetheless, at l = 10, Robust achieves a 9.2\u00d7 speedup, as compared to the full model. In summary, regarding RQ3, we show that the PreTTR approach can save a considerable amount of time at query-time, as compared to the full Vanilla BERT model. These time savings can make it practical to run transformer-based rankers in a real-time query environment. \f6.4 Single Layer Ranking (l = 11) We answer RQ4 by highlighting a first interesting difference between the WebTrack and the Robust ranking performance: the effectiveness at l = 11 (Table 3). For WebTrack, the performance is comparable in terms of P@20, but suffers in terms of ERR@20. For Robust, the performance suffers drastically. We attribute this to differences in the dataset characteristics. First, let us consider what happens in the l = 11 case. Since it is the final layer and only the representation of the [CLS] token is used for ranking, the only attention comparisons that matter are between the [CLS] token and every other token (not a full comparison between every pair of tokens, as is done in other layers). Thus, a representation of the entire query must be stored in the [CLS] representation from layer 11 to provide an effective comparison with the remainder of the document, which will have no contribution from the query. Furthermore, document token representations will need to have their context be fully captured in a way that is effective for the matching of the [CLS] representation. Interestingly, this setting blurs the line between representation-focused and interaction-focused neural models. Now we will consider the characteristics of each dataset. From Table 2, we find that the queries in the TREC WebTrack 2012 are typically shorter (mean: 2.0, median: 2, stdev: 0.8) than those from Robust (mean: 2.7, median: 3, stdev: 0.7). This results in queries that are more qualified, and may be more difficult to successfully represent in a single vector. To answer RQ4, we observe that the ranking effectiveness when combining with only a single transformer layer can vary depending on dataset characteristics. We find that in web collections (an environment where query-time latency is very important), it may be practical to use PreTTR in this way while maintaining high precision of the top-ranked documents. 6.5 PreTTR for Other Transformers Numerous pre-trained transformer architectures exist. We now answer RQ5 by showing that PreTTR is not only effective on BERT, but its ability of reducing ranking latency by preserving quality holds also on other transformer variants. We investigate both the popular RoBERTa [25] model and the DistilBERT [36] model. These represent a model that uses a more effective pre-training process, and a smaller network size (via model distillation), respectively. Results for this experiment are shown in Table 6. We first observe that the unmodified RoBERTa model performs comparably with the BERT model, while the DistilBERT model performs slightly worse. This suggests that model distillation alone may not be a suitable solution to address the poor query-time ranking latency of transformer networks. With each value of l, we observe similar behavior to BERT: P@20 remains relatively stable, while ERR@20 tends to degrade. Interestingly, at l = 2 DistilBERT\u2019s ERR@20 performance peaks at 0.2771. 
However, this difference is not statistically significant, and thus we cannot assume it is not due to noise. We tested the query-time latency of RoBERTa and DistilBERT in the same manner as described in Section 6.3. With 12 layers and a similar neural architecture, RoBERTa exhibited similar speedups as BERT, with up to a 56.3\u00d7 speedup at l = 11 (0.041s per 100 documents, down from 1.89s). With only 6 layers, the base DistilBERT Table 6: WebTrack 2012 using two other Vanilla transformer architectures: RoBERTa and DistilBERT. Note that DistilBERT only has 6 layers; thus we only evaluate l \u2208[1, 5] for this model. There are no statistically significant differences between the Base Model and any of the PreTTR variants (paired t-test, p < 0.01). RoBERTA [25] DistilBERT [36] Ranker P@20 ERR@20 P@20 ERR@20 Base 0.3370 0.2609 0.3110 0.2293 l = 1 0.3380 0.2796 0.3220 0.1989 l = 2 0.3370 0.2207 0.3340 0.2771 l = 3 0.3530 0.2669 0.3070 0.1946 l = 4 0.3620 0.2647 0.3350 0.2281 l = 5 0.2950 0.1707 0.3350 0.2074 l = 6 0.3000 0.1928 l = 7 0.3350 0.2130 l = 8 0.3220 0.2460 l = 9 0.3180 0.2256 l = 10 0.3140 0.1603 l = 11 0.3210 0.2241 model was faster (0.937s), and was able to achieve a speedup of 24.1\u00d7 with l = 5 (0.035s). In summary, we show that the PreTTR approach can be successfully generalized to other transformer networks (RQ5). We observed similar trends to those we observed with BERT in two transformer variants, both in terms of ranking effectiveness and efficiency. 7" + }, + { + "url": "http://arxiv.org/abs/2004.14245v2", + "title": "Expansion via Prediction of Importance with Contextualization", + "abstract": "The identification of relevance with little textual context is a primary\nchallenge in passage retrieval. We address this problem with a\nrepresentation-based ranking approach that: (1) explicitly models the\nimportance of each term using a contextualized language model; (2) performs\npassage expansion by propagating the importance to similar terms; and (3)\ngrounds the representations in the lexicon, making them interpretable. Passage\nrepresentations can be pre-computed at index time to reduce query-time latency.\nWe call our approach EPIC (Expansion via Prediction of Importance with\nContextualization). We show that EPIC significantly outperforms prior\nimportance-modeling and document expansion approaches. We also observe that the\nperformance is additive with the current leading first-stage retrieval methods,\nfurther narrowing the gap between inexpensive and cost-prohibitive passage\nranking approaches. Specifically, EPIC achieves a MRR@10 of 0.304 on the\nMS-MARCO passage ranking dataset with 78ms average query latency on commodity\nhardware. We also find that the latency is further reduced to 68ms by pruning\ndocument representations, with virtually no difference in effectiveness.", + "authors": "Sean MacAvaney, Franco Maria Nardini, Raffaele Perego, Nicola Tonellotto, Nazli Goharian, Ophir Frieder", + "published": "2020-04-29", + "updated": "2020-05-20", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "main_content": "INTRODUCTION Passage retrieval is fundamentally burdened by short passages. While document retrieval systems can rely on signals such as term frequency to estimate the importance of a given term in a document, passages usually do not have this benefit. Consequently, traditional retrieval approaches often perform poorly at passage retrieval. 
Supervised deep learning approaches\u2014in particular, those that make use of pretrained contextualized language models\u2014have successfully overcome this limitation by making use of general language Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org. SIGIR \u201920, July 25\u201330, 2020, Virtual Event, China \u00a9 2020 Association for Computing Machinery. ACM ISBN 978-1-4503-8016-4/20/07...$15.00 https://doi.org/10.1145/3397271.3401262 | | Query Passage ( , ) (a) Query Importance (b) Passage Importance & Expansion (c) Relevance Score Figure 1: Overview of EPIC. characteristics [1, 6]. However, these approaches have a substantial computational burden, which can make them impractical [7, 14]. We propose a new approach for passage retrieval that performs modeling of term importance (i.e., salience) and expansion over a contextualized language model to build query and document representations. We call this approach EPIC (Expansion via Prediction of Importance with Contextualization). At query time, EPIC can be employed as an inexpensive re-ranking method because document representations can be pre-computed at index time. EPIC improves upon the prior state of the art on the MS-MARCO passage ranking dataset by substantially narrowing the effectiveness gap between practical approaches with subsecond retrieval times and those that are considerably more expensive, e.g., those using BERT as a reranker. Furthermore, the proposed representations are interpretable because the dimensions of the representation directly correspond to the terms in the lexicon. An overview is shown in Fig. 1. Neural re-ranking approaches can generally be characterized as either representation-based or interaction-based [5]. Representationbased models, like ours, build representations of a query and passage independently and then compare these representations to calculate a relevance score. These are beneficial because one can compute document representations at index time to reduce the query-time cost. Interaction-based models combine signals from the query and the document at query time to compute the relevance score [13]. The Duet model [12] aims to achieve low query-time latency by combining signals from both a representation-based and an interaction-based model. However, this approach substantially under-performs the latest pure interaction-based approaches such as the one in [13]. TK [8] attempts to bridge this performance gap by using a smaller transformer network, but still utilizes an interaction-based approach which itself adds considerable computational overhead. Finally, other interesting proposals have investigated alternative approaches for offloading computational cost to index time. Doc2query [15] and docTTTTTquery [14] add important context to otherwise short documents by using a sequenceto-sequence model to predict additional terms to add to the document. 
DeepCT-Index [2] models an importance score for each term arXiv:2004.14245v2 [cs.IR] 20 May 2020 \fin the document and replaces the term frequency values in the inverted index with these scores. Unlike these approaches, EPIC models query/document term importance and performs document expansion. We found that it can be employed as an inexpensive yet effective re-ranking model; the impact on query time latency can be as low as an additional 5ms per query (for a total of 68ms). In summary, the novel contributions presented are: We propose a new representation-based ranking model that is grounded in the lexicon. We show that this model can improve ranking effectiveness for passage ranking, with a minimal impact on query-time latency. We show that the model yields interpretable representations of both the query and the document. We show that latency and storage requirements of our approach can be reduced by pruning the document representations. For reproducibility, our code is integrated into OpenNIR [10], with instructions and trained models available at: https://github.com/Georgetown-IR-Lab/epic-neural-ir. 2 MODEL Overview and notation. Our model follows the representationfocused neural ranking paradigm. That is, we train a model to generate a query and document1 representation in a given fixedlength vector space, and produce a ranking score by computing a similarity score between the two representations. Assume that queries and documents are composed by sequences of terms taken from a vocabulary V . Any sequence of terms, either a query or a document, is firstly represented as a sequence of vectors using a contextualized language model like BERT [3]. More formally, let f : V n \u2192Rn\u00d7e denote such a function associating an input sequence s of n terms t1, . . . ,tn to their n embeddings f1(s), . . . , fn(s), where fi(s) \u2208Re and e is the size of the embedding. So, a n-term query q is represented with the n embeddings f1(q), . . . , fn(q), and a m-term document d is represented with m embeddings f1(d), . . . , fm(d). Given the embeddings for queries and documents, we now illustrate the process for constructing query representations, document representations, the final querydocument similarity score. Query representation. A query q is represented as a sparse vector \u03d5q \u2208R|V | (Fig. 1 (a)). The elements of \u03d5q that correspond to to terms not in the query are set to 0. For each term ti appearing in the t1, . . . ,tn terms of the query q, the corresponding element \u03d5q(ti) is equal to the importance wq(ti) of the term w.r.t. the query wq(ti) = ln \u0010 1 + softplus\u0000\u03b8\u22a4 1 fi(q)\u0001\u0011 , (1) where \u03b81 \u2208Re is a vector of learned parameters. The softplus(\u00b7) function is defined as softplus(x) = ln(1 + ex). The use of softplus ensures that no terms have a negative importance score, while imposing no upper bound. The logarithm prevents individual terms from dominating. When a term appears more than once in a query, the corresponding value of \u03d5q sums up all contributions. The elements of the query representation encode the importance of the terms w.r.t. the query. This approach allows the query representation model to learn to assign higher weights to the query terms that 1For ease of notation, we refer to passages as documents. are most important to match given the textual context. 
Note that the number of elements in the representation is equal to the number of query terms; thus the query processing time is proportional to the number of query terms [17]. Document representation. A document d is represented as a dense vector \u03d5d \u2208R|V | (Fig. 1 (b)). Firstly, to perform document expansion, each e-dimensional term embedding fj(d) is projected into a |V |-dimensional vector space, i.e., \u03c8j : fj(d) 7\u2192\u03982fj(d), where \u03982 \u2208R|V |\u00d7e is a matrix of learned parameters. Note that\u03c8j \u2208 R|V |, and let \u03c8j(\u03c4) denote the entry of this vector corresponding to term \u03c4 \u2208V . Secondly, the importance wd(tj) of the terms w.r.t. the document is computed as in Eq (1): wd(tj) = ln \u0010 1 + softplus\u0000\u03b8\u22a4 3 fj(d)\u0001\u0011 , (2) where \u03b83 \u2208Re is a vector of learned parameters. Thirdly, we compute a factor representing the overall quality c(d) of the document c(d) = sigmoid(\u03b8\u22a4 4 d[CLS]), (3) where \u03b84 \u2208Re is a vector of learned parameters, and d[CLS] \u2208Re is the embedding produced by the contextualized language model\u2019s classification mechanism. We find that this factor helps give poorquality documents lower values overall. The sigmoid(\u00b7) function is defined as: sigmoid(x) = 1 1+e\u2212x . Finally, for each term \u03c4 appearing in the vocabulary, the corresponding element of the document representation \u03d5d(\u03c4) is defined as: \u03d5d(\u03c4) = c(d) max tj \u2208d \u0000wd(tj)\u03c8j(\u03c4)\u0001. (4) This step takes the maximum score for each term in the vocabulary generated by any term in the document. Since they do not rely on the query, these representations can be computed at index time. Similarity measure. We use the dot product to compute the similarity between the query and document vectors (Fig. 1 (c)), i.e., sim(q,d) = \u03d5\u22a4 q \u03d5d = \u00d5 \u03c4 \u2208V \u03d5q(\u03c4)\u03d5d(\u03c4). (5) 3 EXPERIMENTAL EVALUATION We conduct experiments using the MS-MARCO passage ranking dataset (full ranking setting).2 This dataset consists of approximately 1 million natural-language questions gathered from a query log (average length: 7.5 terms, stddev: 3.1), and 8.8 million candidate answer passages (avg length: 73.1, stddev: 28.4). The dataset is shallowly-annotated. Annotators were asked to write a naturallanguage answer to the given question using a set of candidate answers from a commercial search engine. The annotators were asked to indicate which (if any) of the passages contributed to their answers, which are then treated as relevant to the question. This results in 0.7 judgments per query on average (1.1 judgments per query of the 62% that have an answer). Thus, this dataset has a lot of variation in queries, making it suitable for training neural ranking methods. Although this dataset is limited by the method of construction, the performance on these shallow judgments correlate well with those conducted on a deeply-judged subset [1]. 2https://microsoft.github.io/msmarco/ \fTable 1: Effectiveness and efficiency of our approach compared to a variety of baselines. The values in italics represent a good trade-off between effectiveness and query latency. The value marked with * was reported by [14]. 
MS-Marco Dev Latency Method MRR@10 ms/query Single-Stage Ranking BM25 (from Anserini [18]) 0.198 21 doc2query [15] 0.218 48 DeepCT-Index [2] 0.243 15 docTTTTTquery [14] 0.277 63 Representation-based Re-Ranking EPIC + BM25 (ours) 0.273 106 pruned r = 2000 0.273 104 pruned r = 1000 0.272 48 EPIC + docTTTTTquery (ours) 0.304 78 pruned r = 2000 0.304 77 pruned r = 1000 0.303 68 Other Re-Ranking Duet (v2, ensemble) [12] 0.252 440 BM25 + TK (1 layer) [8] 0.303 445 BM25 + TK (3 layers) [8] 0.314 640 BM25 + BERT (large) [13] 0.365 3,500* Training. We train our model using the official MS-MARCO sequence of training triples (query, relevant passage, presumed nonrelevant passage) using cross-entropy loss. We use BERT-base [3] as the contextualized language model, as it was shown to be an effective foundation for various ranking techniques [2, 11, 13]. We set the dimensionality |V | of our representations to the size of the BERT-base word-piece vocabulary (d=30,522). The embedding size is instead e = 768. \u03982 is initialized to the pre-trained masked language model prediction matrix; all other added parameters are randomly initialized. Errors are back-propagated through the entire BERT model with a learning rate of 2 \u00d7 10\u22125 with the Adam optimizer [9]. We train in batches of 16 triples using gradient accumulation, and we evaluate the model on a validation set of 200 random queries from the development set every 512 triples. The optimal training iteration and re-ranking cutoff threshold is selected using this validation set. We roll back to the top-performing model after 20 consecutive iterations (training iteration 42) without improvement to Mean Reciprocal Rank at 10 (MRR@10). Baselines and Evaluation. We test our approach by re-ranking the results from several first-stage rankers. We report the performance using MRR@10, the official evaluation metric, on the MSMARCO passage ranking Dev set. We measure significance using a paired t-test at p < 0.01. We compare the performance of our approach with the following baselines: BM25 retrieval from a Porter-stemmed Anserini [18] index using default settings.3 DeepCT-Index [2], a model which predicts document term importance scores, and replaces the term frequency values with these importance scores for first-stage retrieval. 3We observe that the default settings outperform the BM25 results reported elsewhere and on the official leaderboard (e.g., [15]). 0.1 0.5 1.0 1.5 2.0 2.5 3.0 3.5 4.0 \u03c6d values 100 10\u22121 10\u22122 10\u22123 10\u22124 10\u22125 Fraction of values Figure 2: Frequencies of values (log scale) appearing in the document representations. In this figure, values are rounded up to the nearest decimal value. doc2query [15] and docTTTTTquery [14], document expansion approaches which predict additional terms to add to the document via a sequence-to-sequence transformer model. These terms are then indexed and used for retrieval using BM25. The docTTTTTquery model uses a pre-trained T5 model [16]. Duet [12], a hybrid representationand interaction-focused model. We include the top Duet variant on the MS-MARCO leaderboard (version 2, ensemble) to compare with another model that utilizes query and document representations. TK [8], a contextualized interaction-based model, focused on minimizing query time. We report results from [8] with the optimal re-ranking threshold and measure end-to-end latency on our hardware. BERT Large [13], an expensive contextualized language modelbased re-ranker. 
This approach differs from ours in that it models the query and passage jointly at query time, and uses the model\u2019s classification mechanism for ranking. We also measure query latency over the entire retrieval and reranking process. The experiments were conducted on commodity hardware equipped with an AMD Ryzen 3.9GHz processor, 64GiB DDR4 memory, a GeForce GTX 1080ti GPU, and a SSD drive. We report the latency of each method as the average execution time (in milliseconds) of 1000 queries from the Dev set after an initial 1000 queries is used to warm up the cache. First-stage retrieval is conducted with Anserini [18]. Ranking effectiveness. We report the effectiveness of our approach in terms of MMR@10 in Table 1. When re-ranking BM25 results, our approach substantially outperforms doc2query and DeepCT-Index. Moreover, it performs comparably to docTTTTTquery (0.273 compared to 0.277, no statistically significant difference). More importantly, we observe that the improvements of our approach and docTTTTTquery are additive as we achieve a MRR@10 of 0.304 when used in combination. This is a statistically significant improvement, and substantially narrows the gap between approaches with low query-time latency and those that trade off latency of effectiveness (e.g., BERT Large). To test whether EPIC is effective on other passage ranking tasks as well, we test on the TREC CAR passage ranking benchmark [4]. When trained and evaluated on the 2017 dataset (automatic judgments) with BM25, the MRR increases from 0.235 to 0.353. This also outperforms the DeepCT performance reported by [2] of 0.332. Effect of document representation pruning. For document vectors, we observe that the vast majority of values are very low (approximately 74% have a value of 0.1 or below, see Fig. 2). This suggests that many of the values can be pruned with little impact \f(a) how far does aaa tow , in california (b) coastal processes are located on what vertebrae (c) cost of endless pools swim spa Figure 3: Relative importance scores of sample queries. Darker colors correspond to higher weightsin the query representation. on the overall performance. This is desirable because pruning can substantially reduce the storage required for the document representations. To test this, we apply our method keeping only the top r values for each document. We show the effectiveness and efficiency of r = 2000 (reduces vocabulary by 93.4%) and r = 1000 (96.7%) in Table 1. We observe that the vectors can be pruned to r = 1000 with virtually no difference in ranking effectiveness (differences not statistically significant). We also tested with lower values of r, but found that the effectiveness drops off considerably by r = 100 (0.241 and 0.285 for BM25 and docTTTTTquery, respectively). Ranking efficiency. We find that EPIC can be implemented with a minimal impact on query-time latency. On average, the computation of the query representation takes 18ms on GPU and 51ms on CPU. Since this initial stage retrieval does not use our query representation, it is computed in parallel with the initial retrieval, which reduces the impact on latency. The similarity measure consistently takes approximately 1ms per query (both on CPU and GPU), with the remainder of the time spent retrieving document representations from disk. Interestingly, we observe that the latency of EPIC BM25 is higher than EPIC docTTTTTquery. This is because when re-ranking docTTTTTquery results, a lower reranking cutoff threshold is needed than for BM25. 
This further underscores the importance of using an effective first-stage ranker. When using pruning at r = 1000, the computational overhead can be substantially reduced. Specifically, we find that EPIC only adds a 5ms overhead per query to docTTTTTquery, while yielding a significant improvement in effectiveness. With pruning at r = 1000, EPIC BM25 performs comparably with docTTTTTquery with a 1.3\u00d7 speedup. Cost of pre-computing. We find that document vectors can be pre-computed for the MS-MARCO collection in approximately 14 hours on a single commodity GPU (GeForce GTX 1080ti). This is considerably less expensive than docTTTTTquery, which takes approximately 40 hours on a Google TPU (v3). When stored as half-precision (16-bit) floating point values, the vector for each document uses approximately 60KiB, regardless of the document length. This results in a total storage burden of approximately 500GiB for the entire collection. Pruning the collection to r = 1000 (which has minimal impact on ranking effectiveness) reduces the storage burden of each document to 3.9KiB (using 16-bit integer indices) and total storage to 34 GiB. Interpretability of representations. A benefit of our approach is that the dimensions of the representation correspond to terms in the lexicon, allowing the representations to be easily inspected. In Fig. 3, we present the relative scores for sample queries from MS-MARCO. We observe that the model is generally able to pick up on the terms that match intuitions of term importance. For instance, (a) gives highest scores to california, aaa (American Automobile Document terms: endless pools and swim spa ##s are available in a number of different price brackets . for the brand called ae ##nd ##less pools ##a prices start at $ 23 , 900 for their most basic pool for other brands of endless pool ( a . k . a swim spa ##s ) prices can be as low as $ 900 ##0 things that change the price prices change drastically depending upon location , time of year , and property type . Top expansion terms: pay paid cost paying much what fee costs thing spending docTTTTquery: pool endless how cost much price doe swim build spa ... Figure 4: Relative representation values of terms that appear in a sample document. Alongside terms that appeared in the passage, the top \u2018expansion\u2019 terms are also shown. For reference, the most frequent terms produces by docTTTTTquery are also given, weighted by term frequency. Association), and tow. These three terms are good candidates for a keyword-based query with the same query intent. This approach does not necessarily just remove stop words; in (b) what is assigned a relatively high score. We provide an example of document vector importance scores in Fig. 4. Because the document vector is dense, the figure only shows the terms that appear directly in the document and other top-scoring terms. Notice that terms related to price, endless, pool(s), and cost are assigned the highest scores. In this case, the expansion of the term cost was critical for properly scoring this document, as a relevant query is cost of endless pools/spas. Although the terms that docTTTTTquery generate for this document are similar, the continuous values generated by EPIC paid off in a higher MRR@10 score for the query \u201ccost of endless pools/swim spa\u201d (a relevant question for this passage). 
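To summarize the scoring pipeline of Section 2 in one place, the following is a minimal PyTorch-style sketch of Eqs. (1)-(5); the function signature and tensor names are illustrative, and theta1-theta4 and Theta2 stand for the learned parameters defined earlier.

import torch
import torch.nn.functional as F

def epic_score(q_emb, q_ids, d_emb, d_cls, theta1, Theta2, theta3, theta4, V):
    # q_emb: [n, e] contextualized query term embeddings; q_ids: [n] vocabulary ids (long)
    # d_emb: [m, e] document term embeddings; d_cls: [e] document [CLS] embedding
    w_q = torch.log1p(F.softplus(q_emb @ theta1))            # Eq. (1): query term importance
    phi_q = torch.zeros(V).scatter_add_(0, q_ids, w_q)       # sparse |V|-dim query vector
    w_d = torch.log1p(F.softplus(d_emb @ theta3))            # Eq. (2): document term importance
    psi = d_emb @ Theta2.T                                    # [m, V]: expansion over the vocabulary
    c = torch.sigmoid(d_cls @ theta4)                         # Eq. (3): document quality factor
    phi_d = c * (w_d.unsqueeze(1) * psi).max(dim=0).values    # Eq. (4): max over document terms
    return (phi_q * phi_d).sum()                              # Eq. (5): dot-product similarity

In practice the document vector is pre-computed (and optionally pruned to its top r entries) at index time, so only the query-side computation and the dot product are performed per query.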
4" + }, + { + "url": "http://arxiv.org/abs/1905.05818v1", + "title": "Ontology-Aware Clinical Abstractive Summarization", + "abstract": "Automatically generating accurate summaries from clinical reports could save\na clinician's time, improve summary coverage, and reduce errors. We propose a\nsequence-to-sequence abstractive summarization model augmented with\ndomain-specific ontological information to enhance content selection and\nsummary generation. We apply our method to a dataset of radiology reports and\nshow that it significantly outperforms the current state-of-the-art on this\ntask in terms of rouge scores. Extensive human evaluation conducted by a\nradiologist further indicates that this approach yields summaries that are less\nlikely to omit important details, without sacrificing readability or accuracy.", + "authors": "Sean MacAvaney, Sajad Sotudeh, Arman Cohan, Nazli Goharian, Ish Talati, Ross W. Filice", + "published": "2019-05-14", + "updated": "2019-05-14", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.IR" + ], + "main_content": "INTRODUCTION Clinical note summaries are critical to the clinical process. After writing a detailed note about a clinical encounter, practitioners often write a short summary called an impression (example shown in Figure 1). This summary is important because it is often the primary document of the encounter considered when reviewing a patient\u2019s clinical history. The summary allows for a quick view of the most important information from the report. Automated summarization of clinical notes could save clinicians\u2019 time, and has the potential to capture important aspects of the note that the author might not have considered [7]. If high-quality summaries are generated frequently, the practitioner may only need to review the summary and occasionally make minor edits. \u2217Both authors contributed equally to this research. Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org. SIGIR \u201919, July 21\u201325, 2019, Paris, France \u00a9 2019 Association for Computing Machinery. ACM ISBN 978-1-4503-6172-9/19/07...$15.00 https://doi.org/10.1145/3331184.3331319 FINDINGS: LIVER: Liver is echogenic with slightly coarsened echotexture and mildly nodular contour. No focal lesion. Right hepatic lobe measures 14 cm in length. BILE DUCTS: No biliary ductal dilatation. Common bile duct measures 0.06 cm. GALLBLADDER: Partially visualized gallbladder shows multiple gallstones without pericholecystic fluid or wall thickening. Proximal TIPS: 108 cm/sec, previously 82 cm/sec; Mid TIPS: 123 cm/sec, previously 118 cm/sec; Distal TIPS: 85 cm/sec, previously 86 cm/sec; PORTAL VENOUS SYSTEM: [...] IMPRESSION: (Summary) 1. Stable examination. Patent TIPS 2. Limited evaluation of gallbladder shows cholelithiasis. 3. Cirrhotic liver morphology without biliary ductal dilatation. Figure 1: Abbreviated example of radiology note and its summary. Recently, neural abstractive summarization models have shown successful results [1, 11, 13, 14]. 
While promising in general domains, existing abstractive models can suffer from deficiencies in content accuracy and completeness [18], which is a critical issue in the medical domain. For instance, when summarizing a clinical note, it is crucial to include all the main diagnoses in the summary accurately. To overcome this challenge, we propose an extension to the pointer-generator model [14] that incorporates domain-specific knowledge for more accurate content selection. Specifically, we link entities in the clinical text with a domain-specific medical ontology (e.g., RadLex1 or UMLS2), and encode them into a separate context vector, which is then used to aid the generation process. We train and evaluate our proposed model on a large collection of real-world radiology findings and impressions from a large urban hospital, MedStar Georgetown University Hospital. Results using the rouge evaluation metric indicate statistically significant improvements over existing state-of-the-art summarization models. Further extensive human evaluation by a radiology expert demonstrates that our method produces more complete summaries than the top-performing baseline, while not sacrificing readability or accuracy. In summary, our contributions are: 1) An approach for incorporating domain-specific information into an abstractive summarization model, allowing for domain-informed decoding; and 2) Extensive automatic and human evaluation on a large collection of radiology notes, demonstrating the effectiveness of our model and providing insights into the qualities of our approach. 1.1 Related Work Recent trends on abstractive summarization are based on sequenceto-sequence (seq2seq) neural networks with the incorporation of 1RadLex version 3.10, http://www.radlex.org/Files/radlex3.10.xlsx 2https://www.nlm.nih.gov/research/umls/ arXiv:1905.05818v1 [cs.CL] 14 May 2019 \fattention [13], copying mechanism [14], reinforcement learning objective [8, 12], and tracking coverage [14]. While successful, a few recent studies have shown that neural abstractive summarization models can have high readability, but fall short in generating accurate and complete content [6, 18]. Content accuracy is especially crucial in medical domain. In contrast with prior work, we focus on improving summary completeness using a medical ontology. Gigioli et al. [8] used a reinforced loss for abstractive summarization in the medical domain, although their focus was headline generation from medical literature abstracts. Here, we focus on summarization of clinical notes where content accuracy and completeness are more critical. The most relevant work to ours is by Zhang et al. [19] where an additional section from the radiology report (background) is used to improve summarization. Extensive automated and human evaluation and analyses demonstrate the benefits of our proposed model in comparison with existing work. 2 MODEL Pointer-generator network (PG). Standard neural approaches for abstractive summarization follow the seq2seq framework where an encoder network reads the input and a separate decoder network (often augmented with an attention mechanism) learns to generate the summary [17]. Bidirectional LSTMs (BiLSTMs) [9] are often used as the encoder and decoder. A more recent successful summarization model\u2014called Pointer-generator network\u2014allows the decoder to also directly copy text from the input in addition to generation [14]. 
Given a report x = {x1, x2, ..., xn}, the encoded input sequence h = BiLSTM(x), and the current decoding state st = BiLSTM(x\u2032)[t], where x\u2032 is the input to the decoder (i.e., the gold standard summary token at training time or the previously generated token at inference time), the model computes attention weights over the input terms as a = softmax(h\u22a4 W1 s\u22a4). The attention scores are employed to compute a context vector c, a weighted sum over the input, c = \u03a3i ai hi, that is used along with the output of the decoder BiLSTM to either generate the next term from a known vocabulary or copy the token from the input sequence with the highest attention value. We refer the reader to See et al. [14] for additional details on the pointer-generator architecture. Ontology-aware pointer-generator (Ontology PG). In this work, we propose an extension of the pointer-generator network that allows us to leverage domain-specific knowledge encoded in an ontology to improve clinical summarization. We introduce a new encoded sequence u = {u1, ..., un\u2032} that is the result of linking an ontology U to the input texts. In other words, u = FU(x), where FU is a mapping function, e.g., a simple mapping function that only outputs a word sequence if it appears in the ontology and otherwise skips it. We then use a second BiLSTM to encode these additional ontology terms in the same way the original input is encoded: hu = BiLSTM(u). We then calculate an additional context vector c\u2032 that includes the domain-ontology information: a\u2032 = softmax(hu\u22a4 W2 s\u22a4); c\u2032 = \u03a3i a\u2032i ui (1). The second context vector acts as additional global information to aid the decoding process, and is akin to how Zhang et al. [19] include background information from the report. We modify the decoder BiLSTM to include the ontology-aware context vector in the decoding process. Recall that an LSTM network controls the flow of its previous state and the current input using several gates (input gate i, forget gate f, and output gate o), where each of these gates is a vector calculated from an additive combination of the previous LSTM state and the current input. For example, for the forget gate we have ft = tanh(Wf [st\u22121; x\u2032t] + b), where st\u22121 is the previous decoder state, x\u2032t is the decoder input, and ";" denotes concatenation (for more details on LSTMs refer to [9]). The ontology-aware context vector c\u2032 is passed as additional input to this function for all the LSTM gates; e.g., for the forget gate we will have ft = tanh(Wf [st\u22121; x\u2032t; c\u2032] + b). This intuitively guides the information flow in the decoder using the ontology information. 3 EXPERIMENTAL SETUP We train and evaluate our model on a dataset of 41,066 real-world radiology reports from MedStar Georgetown University Hospital, covering a variety of imaging modalities (e.g., x-rays, CT scans). The dataset is randomly split into 80-10-10 train-dev-test splits. Each report describes clinical findings about a specific diagnostic case and includes an impression summary (as shown in Figure 1). The findings sections are 136.6 tokens on average and the impression sections are 37.1 tokens on average. Performing cross-institutional evaluation is challenging and beyond the scope of this work due to the varying nature of reports between institutions.
For instance, the public Indiana University radiology dataset [4] consists only of chest x-rays, and has much shorter reports (average length of findings: 40.0 tokens; average length of impressions: 10.5 tokens). Thus, in this work, we focus on summarization within a single institution. Ontologies. We employ two ontologies in this work. UMLS is a general medical ontology maintained by the US National Library of Medicine and includes various procedures, conditions, symptoms, body parts, etc. We use QuickUMLS [15] (a fuzzy UMLS concept matcher) with a Jaccard similarity threshold of 0.7 and a window size of 3 to extract UMLS concepts from the radiology findings. We also evaluate using an ontology focused on radiology, RadLex, which is a widely-used ontology of radiological terms maintained by the Radiological Society of North America. It consists of 68,534 radiological concepts organized according to a hierarchical structure. We use exact n-gram matching to find important radiological entities, only considering RadLex concepts at a depth of 8 or greater.3 In pilot studies, we found that the entities between depths 8 and 20 tend to represent concrete entities (e.g., \u2018thoracolumbar spine region\u2019) rather than abstract categories (e.g., \u2018anatomical entity\u2019). Comparison. We compare our model to well-established extractive baselines as well as the state-of-the-art abstractive summarization models. LSA [16]: An extractive vector-space summarization model based on Singular Value Decomposition (SVD). LexRank [5]: An extractive method which employs graph-based centrality ranking of the sentence.4 Pointer-Generator (PG) [14]: An abstractive seq2seq attention summarization model that incorporates a copy mechanism to directly copy text from input where appropriate. 3The maximum tree depth is 20. 4For LSA and LexRank, we use the Sumy implementation (https://pypi.python.org/ pypi/sumy) with the top 3 sentences. \fthere is no fracture within either hip* or the visualized bony pelvis* . there is mild narrowing of the right hip* joint* with marginal osteophytes . limited evaluation of the left hip* is unremarkable . RadLex PG PG no dense airspace* consolidation* . no pleural* effusion* or pneumothorax . cardiac silhouette is normal . mildly prominent pulmonary vascularity . RadLex PG PG Figure 2: Average attention weight comparison between our approach (RadLex PG) and the baseline (PG). Color differences show to which term each model attends more while generating summary. RadLex concepts of depth 8 or lower are marked with *. Our approach attends to more RadLex terms throughout the document, allowing for more complete summaries. Table 1: rouge results on MedStar Georgetown University Hospital\u2019s development and test sets. Both the UMLS and RadLex ontology PG models are statistically better than the other models (paired t-test, p < 0.05). Development Test Model RG-1 RG-2 RG-L RG-1 RG-2 RG-L LexRank [5] 27.60 13.85 25.79 28.02 14.26 26.24 LSA [16] 28.04 14.68 26.15 28.16 14.71 26.27 PG [14] 36.60 21.73 35.40 37.17 22.36 35.45 Back. PG [19] 36.58 21.86 35.39 36.95 22.37 35.68 UMLS PG (ours) 37.41 22.23 36.10 37.98 23.14 36.67 RadLex PG (ours) 37.64 22.45 36.33 38.42 23.29 37.02 Background-Aware Pointer-Generator (Back. PG) [19]: An extension of PG, which is specifically designed to improve radiology note summarization by encoding the Background section of the report to aid the decoding process.5 Parameters and training. 
We use 100-dimensional GloVe embeddings pre-trained over a large corpus of 4.5 million radiology reports [19], a 2-layer BiLSTM encoder with a hidden size of 100, and a 1-layer LSTM decoder with the hidden size of 200. At inference time, we use beam search with beam size of 5. We use a dropout of 0.5 in all models, and train to optimize negative loglikelihood loss using the Adam optimizer [10] and a learning rate of 0.001. 4 RESULTS AND ANALYSIS 4.1 Experimental results Table 1 presents rouge evaluation results of our model compared with the baselines (as compared to human-written impressions). The extractive summarization methods (LexRank and LSA) perform particularly poorly. This may be due to the fact that these approaches are limited to simply selecting sentences from the text, and that the most central sentences may not be the most important for building an effective impression summary. Interestingly, the Back. PG approach (which uses the background section of the report to guide the decoding process) is ineffective on our dataset. This may be due to differences in conventions across institutions, such as what information is included in a report\u2019s background and what is considered important to include in its impression. We observe that our Ontology-Aware models (UMLS PG and RadLex PG) significantly outperform all other approaches (paired t-test, p < 0.05) on both the development and test sets. The RadLex 5Using the author\u2019s code at github.com/yuhaozhang/summarize-radiology-findings model slightly outperforms the UMLS model, suggesting that the radiology-specific ontology is beneficial (though the difference between UMLS and RadLex is not statistically significant). We also experimented incorporating both ontologies in the model simultaneously, but it resulted in slightly lower performance (1.26% lower than the best model on rouge-1). To verify that including ontological concepts in the decoder helps the model identify and focus on more radiology terms, we examined the attention weights. In Figure 2, we show attention plots for two reports, comparing the attention of our approach and PG. The plots show that our approach results in attention weights being shared across radiological terms throughout the findings, potentially helping the model to capture a more complete summary. 4.2 Expert human evaluation While our approach surpasses state-of-the-art results on our dataset in terms of rouge scores, we recognize the limitations of the rouge framework for evaluating summarization [2, 3]. To gain better insights into how and why our methodology performs better, we also conduct expert human evaluation. We had a domain expert (radiologist) who is familiar with the process of writing radiological findings and impressions evaluate 100 reports. Each report consists of the radiology findings, one manually-written impression, one impression generated using PG, and one impression generated using our ontology PG method (with RadLex). In each sample, the order of the Impressions are shuffled to avoid bias between samples. Samples were randomly chosen from the test set, one from each of 100 evenly-spaced bins sorted by our system\u2019s rouge-1 score. The radiologist was asked to score each impression in terms of the following on a scale of 1 (worst) to 5 (best): Readability. Impression is understandable (5) or gibberish (1). Accuracy. Impression is fully accurate (5), or contains critical errors (1). Completeness. 
Impression contains all important information (5), or is missing important points (1). We present our manual evaluation results using histograms and arrow plots in Figure 3 (Figure 3: Histograms and arrow plots depicting differences between impressions of 100 manually-scored radiology reports; panels (a)-(c) compare RadLex PG with PG and panels (d)-(f) compare RadLex PG with the manually-written impressions for readability, accuracy, and completeness. Although challenges remain to reach human parity for all metrics, our approach makes strong gains to address the problem of report completeness (c, f), as compared to the next leading summarization approach (PG)). The histograms indicate the score distributions of each approach, and the arrows indicate how the scores changed. The starting point of an arrow indicates the score of an impression we compare to (either the human-written impression, or the summary generated by PG). The head of an arrow indicates the score of our approach. The numbers next to each arrow indicate how many reports made the transition. The figure shows that our approach improves completeness considerably, while maintaining readability and accuracy. The major improvement in completeness is between the scores of 3 and 4, where there is a net gain of 10 reports. Completeness is particularly important because it is where existing summarization models, such as PG, are currently lacking, as compared to human performance. Despite the remaining gap between human and generated completeness, our approach yields considerable gains toward human-level completeness. Our model is nearly as accurate as human-written summaries, only making critical errors (scores of 1 or 2) in 5% of the cases evaluated, as compared to 8% of cases for PG. No critical errors were found in the human-written summaries, although the human-written summaries go through a manual review process to ensure accuracy. The expert annotator furthermore conducted a blind qualitative analysis to gain a better understanding of when our model does better and how it can be further improved. In line with the completeness score improvements, the annotator noted that in many cases our approach is able to identify pertinent points associated with RadLex terms that were missed by the PG model. In some cases, such as when the author picked only one main point, our approach was able to pick up important items that the author missed. Interestingly, it also was able to include specific measurement details better than the PG network, even though these measurements do not appear in RadLex. Although readability is generally strong, our approach sometimes generates repetitive sentences and syntactical errors more often than humans do. These could be addressed in future work with additional post-processing heuristics, such as removing repetitive n-grams as done in [12]. In terms of accuracy, our approach sometimes mixes up the "left" and "right" sides. This often occurs with findings that mention both sides of a specific body part. Multi-level attention (e.g., [1]) could address this by forcing the model to focus on important segments of the text.
There were also some cases where our model under-performed in terms of accuracy and completeness due to synonymy that is not captured by RadLex. For instance, in one case our model did identify torsion, likely due to the fact that in the findings section it was referred to as twisting (a term that does not appear in RadLex). 5" + }, + { + "url": "http://arxiv.org/abs/1904.07094v3", + "title": "CEDR: Contextualized Embeddings for Document Ranking", + "abstract": "Although considerable attention has been given to neural ranking\narchitectures recently, far less attention has been paid to the term\nrepresentations that are used as input to these models. In this work, we\ninvestigate how two pretrained contextualized language models (ELMo and BERT)\ncan be utilized for ad-hoc document ranking. Through experiments on TREC\nbenchmarks, we find that several existing neural ranking architectures can\nbenefit from the additional context provided by contextualized language models.\nFurthermore, we propose a joint approach that incorporates BERT's\nclassification vector into existing neural models and show that it outperforms\nstate-of-the-art ad-hoc ranking baselines. We call this joint approach CEDR\n(Contextualized Embeddings for Document Ranking). We also address practical\nchallenges in using these models for ranking, including the maximum input\nlength imposed by BERT and runtime performance impacts of contextualized\nlanguage models.", + "authors": "Sean MacAvaney, Andrew Yates, Arman Cohan, Nazli Goharian", + "published": "2019-04-15", + "updated": "2019-08-19", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.CL" + ], + "main_content": "INTRODUCTION Recently, there has been much work designing ranking architectures to effectively score query-document pairs, with encouraging results [5, 6, 20]. Meanwhile, pretrained contextualized language models (such as ELMo [16] and BERT [4]) have shown great promise on various natural language processing tasks [4, 16]. These models work by pre-training LSTM-based or transformer-based [19] language models on a large corpus, and then by performing minimal task fine-tuning (akin to ImageNet [3, 23]). Prior work has suggested that contextual information can be valuable when ranking. ConvKNRM [1], a recent neural ranking model, uses a convolutional neural network atop word representations, allowing the model to learn representations aware of context in local proximity. In a similar vein, McDonald et al. [12] proposes Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org. SIGIR \u201919, July 21\u201325, 2019, Paris, France \u00a9 2019 Association for Computing Machinery. ACM ISBN 978-1-4503-6172-9/19/07...$15.00 https://doi.org/10.1145/3331184.3331317 an approach that learns a recurrent neural network for term representations, thus being able to capture context from the entire text [12]. These approaches are inherently limited by the variability found in the training data. 
Since obtaining massive amounts of highquality relevance information can be difficult [24], we hypothesize that pretrained contextualized term representations will improve ad-hoc document ranking performance. We propose incorporating contextualized language models into existing neural ranking architectures by using multiple similarity matrices \u2013 one for each layer of the language model. We find that, at the expense of computation costs, this improves ranking performance considerably, achieving state-of-the-art performance on the Robust 2004 and WebTrack 2012\u20132014 datasets. We also show that combining each model with BERT\u2019s classification mechanism can further improve ranking performance. We call this approach CEDR (Contextualzed Embeddings for Document Ranking). Finally, we show that the computation costs of contextualized language models can be dampened by only partially computing the contextualized language model representations. Although others have successfully used BERT for passage ranking [14] and question answering [22], these approaches only make use of BERT\u2019s sentence classification mechanism. In contrast, we use BERT\u2019s term representations, and show that they can be effectively combined with existing neural ranking architectures. In summary, our contributions are as follows: We are the first to demonstrate that contextualized word representations can be successfully incorporated into existing neural architectures (PACRR [6], KNRM [20], and DRMM [5]), allowing them to leverage contextual information to improve ad-hoc document ranking. We present a new joint model that combines BERT\u2019s classification vector with existing neural ranking architectures (using BERT\u2019s token vectors) to get the benefits from both approaches. We demonstrate an approach for addressing the performance impact of computing contextualized language models by only partially computing the language model representations. Our code is available for replication and future work.1 2 METHODOLOGY 2.1 Notation In ad-hoc ranking, documents are ranked for a given query according to a relevance estimate. Let Q be a query consisting of query terms {q1,q2, ...,q|Q |}, and let D be a document consisting of terms {d1,d2, ...,d|D |}. Let ranker(Q, D) \u2208R be a function that returns 1https://github.com/Georgetown-IR-Lab/cedr arXiv:1904.07094v3 [cs.IR] 19 Aug 2019 \fa real-valued relevance estimate for the document to the query. Neural relevance ranking architectures generally use a similarity matrix as input S \u2208R|Q |\u00d7|D |, where each cell represents a similarity score between the query and document: Si,j = sim(qi,dj). These similarity values are usually the cosine similarity score between the word vectors of each term in the query and document. 2.2 Contextualized similarity tensors Pretrained contextual language representations (such as those from ELMo [16] and BERT [4]) are context sensitive; in contrast to more conventional pretrained word vectors (e.g., GloVe [15]) that generate a single word representation for each word in the vocabulary, these models generate a representation of each word based on its context in the sentence. For example, the contextualized representation of word bank would be different in bank deposit and river bank, while a pretrained word embedding model would always result in the same representation for this term. 
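For contrast with the contextualized case, here is a minimal sketch of the (static-embedding) similarity matrix S defined in Section 2.1; the three-dimensional toy vectors and vocabulary are invented purely for illustration and show why a static model cannot separate the two senses of "bank".

```python
import numpy as np

def similarity_matrix(query_vecs, doc_vecs):
    """S[i, j] = cosine(q_i, d_j) for query term i and document term j."""
    q = query_vecs / np.linalg.norm(query_vecs, axis=1, keepdims=True)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    return q @ d.T                      # shape: (|Q|, |D|)

# Toy static embeddings: every occurrence of "bank" gets the same vector,
# so "bank deposit" and "river bank" look identical to the ranker's input.
emb = {"bank":    np.array([0.9, 0.1, 0.3]),
       "deposit": np.array([0.8, 0.2, 0.1]),
       "river":   np.array([0.1, 0.9, 0.2])}

query = ["bank", "deposit"]
doc = ["river", "bank"]
S = similarity_matrix(np.stack([emb[t] for t in query]),
                      np.stack([emb[t] for t in doc]))
print(S.round(2))   # the "bank"-"bank" cell is 1.0 regardless of context
```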
Given that these representations capture contextual information in the language, we investigate how these models can also benefit general neural ranking models. Although contextualized language models vary in particular architectures, they typically consist of multiple stacked layers of representations (e.g., recurrent or transformer outputs). The intuition is that the deeper the layer, the more context is incorporated. To allow neural ranking models to learn which levels are most important, we choose to incorporate the output of all layers into the model, resulting in a three-dimensional similarity representation. Thus, we expand the similarity representation (conditioned on the query and document context) to SQ,D \u2208RL\u00d7|Q |\u00d7|D | where L is the number of layers in the model, akin to the channel in image processing. Let contextQ,D(t,l) \u2208RD be the contextualized representation for token t in layer l, given the context of Q and D. Given these definitions, let the contextualized representation be: SQ,D[l,q,d] = cos(contextQ,D(q,l),contextQ,D(d,l)) (1) for each query term q \u2208Q, document term d \u2208D, and layer l \u2208[1..L]. Note that when q and d are identical, they will likely not receive a similarity score of 1, as their representation depends on the surrounding context of the query and document. The layer dimension can be easily incorporated into existing neural models. For instance, soft n-gram based models, like PACRR, can perform convolutions with multiple input channels, and counting-based methods (like KNRM and DRMM) can count each channel individually. 2.3 Joint BERT approach Unlike ELMo, the BERT model encodes multiple text segments simultaneously, allowing it to make judgments about text pairs. It accomplishes this by encoding two meta-tokens ([SEP] and [CLS]) and using text segment embeddings (Segment A and Segment B). The [SEP] token separates the tokens of each segment, and the [CLS] token is used for making judgments about the text pairs. During training, [CLS] is used for predicting whether two sentences are sequential \u2013 that is, whether Segment A immediately precedes Segment B in the original text. The representation of this token can be fine-tuned for other tasks involving multiple text segments, including natural language entailment and question answering [22]. We explore incorporating the [CLS] token\u2019s representation into existing neural ranking models as a joint approach. This allows neural rankers to benefit from deep semantic information from BERT in addition to individual contextualized token matches. Incorporating the [CLS] token into existing ranking models is straightforward. First, the given ranking model produces relevance scores (e.g., n-gram or kernel scores) for each query term based on the similarity matrices. Then, for models using dense combination (e.g., PACRR, KNRM), we propose concatenating the [CLS] vector to the model\u2019s signals. For models that sum query term scores (e.g., DRMM), we include the [CLS] vector in the dense calculation of each term score (i.e., during combination of bin scores). We hypothesize that this approach will work because the BERT classification mechanism and existing rankers have different strengths. The BERT classification benefits from deep semantic understanding based on next-sentence prediction, whereas ranking architectures traditionally assume query term repetition indicates higher relevance. In reality, both are likely important for relevance ranking. 
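The layered similarity tensor and joint [CLS] signal described above can be sketched with an off-the-shelf BERT. This is an illustrative sketch using the Hugging Face transformers package rather than the authors' implementation; the simplifications (keeping special tokens on the query side, no fine-tuning) are ours.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased", output_hidden_states=True)

def cedr_representations(query: str, doc: str):
    # Encode query (segment A) and document (segment B) jointly, as described above.
    enc = tok(query, doc, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        out = bert(**enc)
    layers = torch.stack(out.hidden_states, dim=0).squeeze(1)   # (L+1, seq_len, dim)

    seg = enc["token_type_ids"].squeeze(0)                      # 0 = query side, 1 = document side
    q_mask, d_mask = seg == 0, seg == 1                         # query side also keeps [CLS]/[SEP] here, for simplicity
    q_vecs = layers[:, q_mask, :]                               # (L+1, |Q|, dim)
    d_vecs = layers[:, d_mask, :]                               # (L+1, |D|, dim)

    # One similarity matrix per layer: S[l, i, j] = cos(q_i at layer l, d_j at layer l).
    q_norm = torch.nn.functional.normalize(q_vecs, dim=-1)
    d_norm = torch.nn.functional.normalize(d_vecs, dim=-1)
    sim_tensor = torch.einsum("lqd,lkd->lqk", q_norm, d_norm)   # (L+1, |Q|, |D|)

    cls_vec = out.last_hidden_state[:, 0, :]                    # [CLS] vector, concatenated to the ranker's signals
    return sim_tensor, cls_vec

sim, cls = cedr_representations("curbing population growth",
                                "the city tried to curb population growth last year")
print(sim.shape, cls.shape)
```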
3 EXPERIMENT 3.1 Experimental setup Datasets. We evaluate our approaches using two datasets: Trec Robust 2004 and WebTrack 2012\u201314. For Robust, we use the five folds from [7] with three folds used for training, one fold for testing, and the previous fold for validation. For WebTrack, we test on 2012\u201314, training each year individually on all remaining years (including 2009\u201310), and validating on 2011. (For instance, when testing on WebTrack 2014, we train on 2009\u201310 and 2012\u201313, and validate on 2011.) Robust uses Trec discs 4 and 52, WebTrack 2009\u2013 12 use ClueWeb09b3, and WebTrack 2013\u201314 uses ClueWeb124 as document collections. We evaluate the results using the nDCG@20 / P@20 metrics for Robust04 and nDCG@20 / ERR@20 for WebTrack. Models. Rather than building new models, in this work we use existing model architectures to test the effectiveness of various input representations. We evaluate our methods on three neural relevance matching methods: PACRR [6], KNRM [20], and DRMM [5]. Relevance matching models have generally shown to be more effective than semantic matching models, while not requiring massive amounts of behavioral data (e.g., query logs). For PACRR, we increase kmax = 30 to allow for more term matches and better back-propagation to the language model. Contextualized language models. We use the pretrained ELMo (Original, 5.5B) and BERT (BERT-Base, Uncased) language models in our experiments. For ELMo, the query and document are encoded separately. Since BERT enables encoding multiple texts at the same time using Segment A and Segment B embeddings, we encode the query (Segment A) and document (Segment B) simultaneously. Because the pretrained BERT model is limited to 512 tokens, longer documents are split such that document segments are split as evenly as possible, while not exceeding the limit when combined with the query and control tokens. (Note that the query is always included in full.) BERT allows for simple classification fine-tuning, so we also experiment with a variant that is first fine-tuned on the 2520k documents; https://trec.nist.gov/data_disks.html 350M web pages, https://lemurproject.org/clueweb09/ 4733M web pages, https://lemurproject.org/clueweb12/ \fsame data using the Vanilla BERT classifier (see baseline below), and further fine-tuned when training the ranker itself. Training and optimization. We train all models using pairwise hinge loss [2]. Positive and negative training documents are selected from the query relevance judgments (positive documents limited to only those meeting the re-ranking cutoff threshold k using BM25, others considered negative). We train each model for 100 epochs, each with 32 batches of 16 training pairs. Gradient accumulation is employed when the batch size of 16 is too large to fit on a single GPU. We re-rank to top k BM25 results for validation, and use P@20 on Robust and nDCG@20 on WebTrack to select the best-performing model. We different re-ranking functions and thresholds at test time for each dataset: BM25 with k = 150 for Robust04, and QL with k = 100 for WebTrack. The re-ranking setting is a better evaluation setting than ranking all qrels, as demonstrated by major search engines using a pipeline approach [18]. All models are trained using Adam [8] with a learning rate of 0.001 while BERT layers are trained at a rate of 2e-5.5 Following prior work [6], documents are truncated to 800 tokens. Baselines. 
We compare contextualized language model performance to the following strong baselines: BM25 and SDM [13], as implemented by Anserini [21]. Finetuning is conducted on the test set, representing the maximum performance of the model when using static parameters over each dataset.6 We do not report SDM performance on WebTrack due to its high cost of retrieval on the large ClueWeb collections. Vanilla BERT ranker. We fine-tune a pretrained BERT model (BERT-Base, Uncased) with a linear combination layer stacked atop the classifier [CLS] token. This network is optimized the same way our models are, using pairwise cross-entropy loss and the Adam optimizer. We use the approach described above to handle documents longer than the capacity of the network, and average the [CLS] vectors from each split. TREC-best: We also compare to the top-performing topic TREC run for each track in terms of nDCG@20. We use uogTrA44xu for WT12 ([10], a learning-to-rank based run), clustmrfaf for WT13 ([17], clustering-based), UDInfoWebAX for WT14 ([11], entity expansion), and pircRB04t3 for Robust04 ([9], query expansion using Google search results).7 ConvKNRM [1], our implementation with the same training pipeline as the evaluation models. Each evaluation model when using GloVe [15] vectors.8 Footnotes: 5 Pilot experiments showed that a learning rate of 2e-5 was more effective on this task than the other recommended values of 5e-5 and 3e-5 by [4]. 6 k1 in 0.1\u20134 (by 0.1), b in 0.1\u20131 (by 0.1), SDM weights in 0\u20131 (by 0.05). 7 We acknowledge limitations of the TREC experimentation environment. 8 glove.42B.300d, https://nlp.stanford.edu/projects/glove/ Figure 1: Example similarity matrix excerpts from GloVe, ELMo, and BERT for relevant and non-relevant document for Robust query 435 ("curbing population growth"). Lighter values have higher similarity. Figure 2: (a) Processing rates by document length for GloVe, ELMo, and BERT using PACRR. (b) Processing rate and dev performance of PACRR when using a subset of BERT layers. 3.2 Results & analysis Table 1 shows the ranking performance using our approach. We first note that the Vanilla BERT method significantly outperforms the tuned BM25 [V] and ConvKNRM [C] baselines on its own. This is encouraging, and shows the ranking power of the Vanilla BERT model. When using contextualized language term representations without tuning, PACRR and DRMM performance is comparable to that of GloVe [G], while KNRM sees a modest boost. This might be due to KNRM's ability to train its matching kernels, tuning to specific similarity ranges produced by the models. (In contrast, DRMM uses fixed buckets, and PACRR uses maximum convolutional filter strength, both of which are less adaptable to new similarity score ranges.) When fine-tuning BERT, all three models see a significant boost in performance compared to the GloVe-trained version. PACRR and KNRM see comparable or higher performance than the Vanilla BERT model. This indicates that fine-tuning contextualized language models for ranking is important.
This boost is further enhanced when using the CEDR (joint) approach, with the CEDR models always outperforming Vanilla BERT [V], and nearly always significantly outperforming the non-CEDR versions [N]. This suggests that term counting methods (such as KNRM and DRMM) are complementary to BERT\u2019s classification mechanism. Similar trends for both Robust04 and WebTrack 2012\u201314 indicate that our approach is generally applicable to ad-hoc document retrieval tasks. To gain a better understanding of how the contextual language model helps enhance the input representation, we plot example similarity matrices based on GloVe word embeddings, ELMo representations (layer 2), and fine-tuned BERT representations (layer 5). In these examples, two senses of the word curb (restrain, and edge of street) are encountered. The first is relevant to the query (it\u2019s discussing attempts to restrain population growth). The second is not (it discusses street construction). Both the ELMo and BERT models give a higher similarity score to the correct sense of the term for the query. This ability to distinguish different senses of terms is a strength of contextualized language models, and likely can explain some of the performance gains of the non-joint models. Although the contextualized language models yield ranking performance improvements, they come with a considerable cost at inference time\u2014a practical issue ignored in previous ranking work [14, 21]. To demonstrate this, in Figure 2(a) we plot the processing rate of GloVe, ELMo, and BERT.9 Note that the processing 9Running time measured on single GeForce GTX 1080 Ti GPU, data in memory. \fTable 1: Ranking performance on Robust04 and WebTrack 2012\u201314. Significant improvements to [B]M25, [C]onvKNRM, [V]anilla BERT, the model trained with [G]lOve embeddings, and the corresponding [N]on-CEDR system are indicated in brackets (paired t-test, p < 0.05). Robust04 WebTrack 2012\u201314 Ranker Input Representation P@20 nDCG@20 nDCG@20 ERR@20 BM25 n/a 0.3123 0.4140 0.1970 0.1472 SDM [13] n/a 0.3749 0.4353 TREC-Best n/a 0.4386 0.5030 0.2855 0.2530 ConvKNRM GloVe 0.3349 0.3806 [B] 0.2547 [B] 0.1833 Vanilla BERT BERT (fine-tuned) [BC] 0.4042 [BC] 0.4541 [BC] 0.2895 [BC] 0.2218 PACRR GloVe 0.3535 [C] 0.4043 0.2101 0.1608 PACRR ELMo [C] 0.3554 [C] 0.4101 [BG] 0.2324 [BG] 0.1885 PACRR BERT [C] 0.3650 [C] 0.4200 0.2225 0.1817 PACRR BERT (fine-tuned) [BCVG] 0.4492 [BCVG] 0.5135 [BCG] 0.3080 [BCG] 0.2334 CEDR-PACRR BERT (fine-tuned) [BCVG] 0.4559 [BCVG] 0.5150 [BCVGN] 0.3373 [BCVGN] 0.2656 KNRM GloVe 0.3408 0.3871 [B] 0.2448 0.1755 KNRM ELMo [C] 0.3517 [CG] 0.4089 0.2227 0.1689 KNRM BERT [BCG] 0.3817 [CG] 0.4318 [B] 0.2525 [B] 0.1944 KNRM BERT (fine-tuned) [BCG] 0.4221 [BCVG] 0.4858 [BCVG] 0.3287 [BCVG] 0.2557 CEDR-KNRM BERT (fine-tuned) [BCVGN] 0.4667 [BCVGN] 0.5381 [BCVG] 0.3469 [BCVG] 0.2772 DRMM GloVe 0.2892 0.3040 0.2215 0.1603 DRMM ELMo 0.2867 0.3137 [B] 0.2271 0.1762 DRMM BERT 0.2878 0.3194 [BG] 0.2459 [BG] 0.1977 DRMM BERT (fine-tuned) [CG] 0.3641 [CG] 0.4135 [BG] 0.2598 [B] 0.1856 CEDR-DRMM BERT (fine-tuned) [BCVGN] 0.4587 [BCVGN] 0.5259 [BCVGN] 0.3497 [BCVGN] 0.2621 rate when using static GloVe vectors is orders of magnitude faster than when using the contextualized representations, with BERT outperforming ELMo because it uses the more efficient Transformer instead of an RNN. In an attempt to improve the running time of these systems, we propose limiting the number of layers processed by the model. 
The reasoning behind this approach is that the lower the layer, the more abstract the matching becomes, perhaps becoming less useful for ranking. We show the runtime and ranking performance of PACRR when only processing only up to a given layer in Figure 2(b). It shows that most of the performance benefits can be achieved by only running BERT through layer 5; the performance is comparable to running the full BERT model, while running more than twice as fast. While we acknowledge that our research code is not completely optimized, we argue that this approach is generally applicable because the processing of these layers are sequential, query-dependent, and dominate the processing time of the entire model. This approach is a simple time-saving measure. 4" + }, + { + "url": "http://arxiv.org/abs/1806.07916v1", + "title": "RSDD-Time: Temporal Annotation of Self-Reported Mental Health Diagnoses", + "abstract": "Self-reported diagnosis statements have been widely employed in studying\nlanguage related to mental health in social media. However, existing research\nhas largely ignored the temporality of mental health diagnoses. In this work,\nwe introduce RSDD-Time: a new dataset of 598 manually annotated self-reported\ndepression diagnosis posts from Reddit that include temporal information about\nthe diagnosis. Annotations include whether a mental health condition is present\nand how recently the diagnosis happened. Furthermore, we include exact temporal\nspans that relate to the date of diagnosis. This information is valuable for\nvarious computational methods to examine mental health through social media\nbecause one's mental health state is not static. We also test several baseline\nclassification and extraction approaches, which suggest that extracting\ntemporal information from self-reported diagnosis statements is challenging.", + "authors": "Sean MacAvaney, Bart Desmet, Arman Cohan, Luca Soldaini, Andrew Yates, Ayah Zirikly, Nazli Goharian", + "published": "2018-06-20", + "updated": "2018-06-20", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "main_content": "Introduction Researchers have long sought to identify early warning signs of mental health conditions to allow for more effective treatment (Feightner and Worrall, 1990). Recently, social media data has been utilized as a lens to study mental health (Coppersmith et al., 2017). Data from social media users who are identi\ufb01ed as having various mental health conditions can be analyzed to study common language patterns that indicate the condition; language use could give subtle indications of a person\u2019s wellbeing, allowing the identi\ufb01cation of at-risk users. Once identi\ufb01ed, users could be provided with relevant resources and support. While social media offers a huge amount of data, acquiring manually-labeled data relevant to mental health conditions is both expensive and not scalable. However, a large amount of labeled data is crucial for classi\ufb01cation and largescale analysis. To alleviate this problem, NLP researchers in mental health have used unsupervised heuristics to automatically label data based on self-reported diagnosis statements such as \u201cI have been diagnosed with depression\u201d (De Choudhury et al., 2013; Coppersmith et al., 2014a, 2015; Yates et al., 2017). A binary status of a user\u2019s mental health conditions does not tell a complete story, however. 
People\u2019s mental condition changes over time (Wilkinson and Pickett, 2010), so the assumption that language characteristics found in a person\u2019s social media posts historically re\ufb02ects their current state is invalid. For example, the social media language of an adult diagnosed with depression in early adolescence might no longer re\ufb02ect any depression. Although the extraction of temporal information has been well-studied in the clinical domain (Lin et al., 2016; Bethard et al., 2017; Dligach et al., 2017), temporal information extraction has remained largely unexplored in the mental health domain. Given the speci\ufb01c language related to self-reported diagnoses posts and the volatility of mental conditions in time, the time of diagnosis provides critical signals on examining mental health through language. To address this shortcoming of available datasets, we introduce RSDD-Time: a dataset of temporally annotated self-reported diagnosis statements, based on the Reddit Self-Reported Depression Diagnosis (RSDD) dataset (Yates et al., 2017). RSDD-Time includes 598 diagnosis statements that are manually annotated to include pertinent temporal information. In particular, we identify if the conditions are current, meaning that the condition is apparently present according the the arXiv:1806.07916v1 [cs.CL] 20 Jun 2018 \fself-reported diagnosis post. Next, we identify how recently a particular diagnosis has occurred. We refer to these as condition state and diagnosis recency, respectively. Furthermore, we identify the time expressions that relate to the diagnosis, if provided. In summary, our contributions are: (i) We explain the necessity of temporal considerations when working with self-reported diagnoses. (ii) We release a dataset of annotations for 598 selfreported depression diagnoses. (iii) We provide and analyze baseline classi\ufb01cation and extraction results. Related work Public social media has become a lens through which mental health can be studied as it provides a public narration of user activities and behaviors (Conway and O\u2019Connor, 2016). Understanding and identifying mental health conditions in social media (e.g., Twitter and Reddit) has been widely studied (De Choudhury et al., 2013; Coppersmith et al., 2014b; De Choudhury and De, 2014; Mitchell et al., 2015; Gkotsis et al., 2016; Yates et al., 2017). To obtain ground truth knowledge for mental health conditions, researchers have used crowdsourced surveys and heuristics such as self-disclosure of a diagnosis (De Choudhury et al., 2013; Tsugawa et al., 2015). The latter approach uses high-precision patterns such as \u201cI was diagnosed with depression.\u201d Only statements claiming an actual diagnosis are considered because people sometimes use phrases such as \u201cI am depressed\u201d casually. In these works, individuals self-reporting a depression diagnoses are presumed to be depressed. Although the automated approaches have yielded far more users with depression than user surveys (tens of thousands, rather than hundreds), there is no indication of whether or not the diagnosis was recent, or if the conditions are still present. In this work, we address this by presenting manual annotations of nearly 600 self-reported diagnosis posts. This dataset is valuable because it allows researchers to train and test systems that automatically determine diagnosis recency and condition state information. 
2 Data For the study of temporal aspects of self-reported diagnoses, we develop an annotation scheme1 and 1Available at http://ir.cs.georgetown.edu/ resources/ apply it to a set of 598 diagnosis posts randomly sampled from the Reddit Self-Reported Depression Diagnosis (RSDD) dataset (Yates et al., 2017). In the annotation environment, the diagnosis match is presented with a context of 300 characters on either side. A window of 150 characters on either side was too narrow, and having the whole post as context made annotation too slow, and rarely provided additional information. Annotation scheme Two kinds of text spans are annotated: diagnoses (e.g., \u201cI was diagnosed\u201d) and time expressions that are relevant to the diagnosis (e.g., \u201ctwo years ago\u201d). On diagnosis spans, the following attributes are marked: \u2022 Diagnosis recency determines when the diagnosis occurred (not the onset of the condition). Six categorical labels are used: very recently (up to 2 months ago), more than 2 months but up to 1 year ago, more than 1 year but up to 3 years ago, more than 3 years ago, unspeci\ufb01ed (when there is no indication), and unspeci\ufb01ed but not recent (when the context indicates that the diagnosis happened in the past, yet there is insuf\ufb01cient information to assign it to the \ufb01rst four labels). \u2022 For condition state, the annotator assesses the context for indications of whether the diagnosed condition is still current or past. The latter includes cases where it is reported to be fully under control through medication. We use a \ufb01vepoint scale (current, probably current, unknown, probably past and past). This can be mapped to a three-point scale for coarse-grained prediction (i.e. moving probable categories to the center or the extremes). \u2022 When a diagnosis is presented as uncertain or incorrect, we mark it as diagnosis in doubt. This can be because the diagnosis is put into question by the poster (e.g., \u201cI was diagnosed with depression before they changed it to ADHD\u201d), or it was later revised. \u2022 Occasionally, incorrect diagnosis matches are found in RSDD. These are marked as false positive. This includes diagnoses for conditions other than depression or self-diagnosis that occur in block quotes from other posts. False positive posts are not included in the analyses below. Time expressions indicating the time of diagnosis are marked similarly to the TIMEX3 speci\ufb01cation (Pustejovsky et al., 2005), with the additional \fSpan Attribute % \u03ba diagnosis false positive 0.97 0.43 diagnosis in doubt 0.97 0.22 condition state 0.52 0.41 diagnosis recency 0.66 0.64 time explicit 0.91 0.81 inferable from age 0.93 0.82 Table 1: Inter-annotator agreement by average pairwise agreement (%) and weighted Cohen\u2019s kappa (\u03ba). support for ages, years in school, and references to other temporal anchors. Because of these additions, we also annotate prepositions pertaining to the temporal expression when present (e.g., \u2018at 14\u2019, \u2018in 2004\u2019). Each span also has an indication of how their associated diagnosis can be assigned to one of the diagnosis recency labels. Explicit time expressions allow immediate assignment given the post date (e.g., yesterday, last August, in 2006). If the recency can be inferred assuming a poster\u2019s age at post time is known, it is inferable from age (e.g., at 17, in high school). A poster\u2019s age could be established using mentions by the author, or estimated with automatic age prediction. 
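The annotation scheme above can be pictured as a small record per diagnosis span. The sketch below is a hypothetical encoding for illustration only; its field and label names are ours rather than the released annotation format.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Tuple


class DiagnosisRecency(Enum):
    VERY_RECENT = "<=2 months"
    TWO_MONTHS_TO_1_YEAR = ">2m-1y"
    ONE_TO_THREE_YEARS = ">1y-3y"
    MORE_THAN_THREE_YEARS = ">3y"
    UNSPECIFIED = "unspecified"
    UNSPECIFIED_PAST = "unspecified but not recent"


class ConditionState(Enum):
    CURRENT = 2
    PROBABLY_CURRENT = 1
    UNKNOWN = 0
    PROBABLY_PAST = -1
    PAST = -2


@dataclass
class DiagnosisAnnotation:
    post_id: str
    diagnosis_span: Tuple[int, int]          # character offsets of the matched diagnosis statement
    recency: DiagnosisRecency
    state: ConditionState
    in_doubt: bool = False
    false_positive: bool = False
    time_expressions: List[Tuple[int, int]] = field(default_factory=list)  # offsets of related TIMEX-like spans


ann = DiagnosisAnnotation(
    post_id="example-post",
    diagnosis_span=(120, 152),
    recency=DiagnosisRecency.UNSPECIFIED_PAST,
    state=ConditionState.PROBABLY_PAST,
    time_expressions=[(160, 172)],   # e.g., "two years ago"
)
```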
Inter-annotator agreement After an initial annotation round with 4 annotators that allowed for the scheme and guidelines to be improved, the entire dataset was annotated by 6 total annotators with each post being at least double annotated; disagreements were resolved by a third annotator where necessary. We report pairwise interannotator agreement in Table 1. Cohen\u2019s kappa is linearly weighted for ordinal categories (condition state and diagnosis recency). Agreement on false positives and doubtful diagnoses is low. For future analyses that focus on detecting potential misdiagnoses, further study would be required to improve agreement, but it is tangential to the focus on temporal analysis in this study. Estimating the state of a condition is inherently ambiguous, but agreement is moderate at 0.41 weighted kappa. The \ufb01ve-point scale can be backed off to a three-point scale, e.g. by collapsing the three middle categories into don\u2019t know. Pairwise percent agreement then improves from 0.52 to 0.68. The recency of a diagnosis can be established with substantial agreement (\u03ba = 0.64). Time expression attributes can be annotated with almost perfect agreement. Attribute Count false positive 25 out of 598 diagnosis in doubt 16 out of remaining 573 condition state current (254), prob. current (64), unknown (225), prob. past (29), past (26) diagnosis recency unspec. (232), unspec. but past (176), recent (27), >2m-1y (37), >1y-3y (29), >3y (97) time expression explicit (144), inferable from age (101), non-inferable (47), n/a (306) Table 2: Attribute counts in the RSDD-Time dataset. current probably current don't know probably past past 0 50 100 150 200 250 unspecified unspecified-past <2m 2m-1y 1y-3y >3y Figure 1: Incidence and interaction of condition state (columns) and diagnosis recency (colors). Availability The annotation data and annotation guidelines are available at https://github. com/Georgetown-IR-Lab/RSDD-Time. The raw post text is available from the RSDD dataset via a data usage agreement (details available at http://ir.cs.georgetown.edu/ resources/rsdd.html). 3 Corpus analysis Counts for each attribute are presented in Table 2. Figure 1 shows the incidence and interaction between condition state and diagnosis recency in our dataset. About half the cases have a condition state that is current, but interestingly, there are also many cases (55) where the diagnosis relates (at least probably) to the past. There is also a large number of cases (225) where it is not clear from the post whether the condition is current or not. This further shows that many self-reported diagnosis statements may not be current, which could make a dataset noisy, depending on the objective. For diagnosis recency, we observe that the majority of diagnosis times are either unspeci\ufb01ed or happened in the unspeci\ufb01ed past. For 245 cases, however, the diagnosis recency can be inferred from the post, usually because there is an explicit \ftime expression (59% of cases), or by inferencing from age (41%). Next, we investigate the interaction between condition state and diagnosis recency. We particularly observe that the majority of past conditions (rightmost two columns) are also associated with a diagnosis recency of more than 3 years ago or of an unspeci\ufb01ed past. On the other hand, many current conditions (leftmost column) have an unspeci\ufb01ed diagnosis time. 
This is expected because individuals who specifically indicate that their condition is not current also tend to specify when they were first diagnosed, whereas individuals with current conditions may not mention their time of diagnosis. 4 Experiments To gain a better understanding of the data and provide baselines for future work to automatically perform this annotation, we explore methods for attribute classification for diagnosis recency and condition state, and rule-based diagnosis time extraction. We split the data into a training dataset (399 posts) and a testing dataset (199 posts). We make this train/test split available for future work in the data release. For our experiments, we then disregard posts that are labeled as false positive (yielding 385 posts for training and 188 for testing), and we only consider text in the context window with which the annotator was presented. 4.1 Diagnosis recency and condition state classification We train several models to classify diagnosis recency and condition state. In each we use basic bag-of-character-ngrams features. Character ngrams of length 2-5 (inclusive) are considered, and weighted using tf-idf. For labels, we use the combined classes described in Section 2. To account for class imbalance, samples are weighted by the inverse frequency of their category in the training set. We compare three models: logistic regression, a linear-kernel Support Vector Machine (SVM), and Gradient-Boosted ensemble Trees (GBT) (Chen and Guestrin, 2016). The logistic regression and SVM models are \u21132 normalized, and the GBT models are trained with a maximum tree depth of 3 to avoid overfitting. We present results in Table 3 (Table 3: Macro-averaged classification results for diagnosis recency and condition state using tf-idf vectorized features for various baseline models; P/R/F1 for Diagnosis Recency and Condition State respectively: Logistic Reg. 0.47/0.35/0.37 and 0.45/0.45/0.44; Linear SVM 0.23/0.23/0.21 and 0.68/0.40/0.40; GBT 0.56/0.42/0.46 and 0.35/0.38/0.36). The GBT method performs best for diagnosis recency classification, and logistic regression performs best for condition state classification. This difference could be due to the differing degrees of label skew. The condition state data is more skewed, with current and don't know accounting for almost 80% of the labels. 4.2 Time expression classification To automatically extract time expressions, we use the rule-based SUTime library (Chang and Manning, 2012). Because diagnoses often include an age or year in school rather than an absolute time, we added rules specifically to capture these time expressions. The rules were manually generated by examining the training data, and will be released alongside the annotations. RSDD-Time temporal expression annotations are only concerned with time expressions that relate to the diagnosis, whereas SUTime extracts all temporal expressions in a given text. We use a simple heuristic to resolve this issue: simply choose the time expression closest to the post's diagnosis by character distance (sketched below). In the case of a tie, the heuristic arbitrarily selects the leftmost expression. This heuristic will improve precision by eliminating many unnecessary temporal expressions, but has the potential to reduce recall by eliminating some correct expressions that are not the closest to the diagnosis. Results for temporal extraction are given in Table 4. Notice that custom age rules greatly improve the recall of the system.
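The closest-expression heuristic referred to above can be implemented over plain character spans, so the candidate time expressions could come from SUTime or any other extractor; the span representation here is an assumption made for illustration.

```python
from typing import List, Optional, Tuple

Span = Tuple[int, int]   # (start, end) character offsets within the post


def char_distance(a: Span, b: Span) -> int:
    """Gap in characters between two spans (0 if they overlap)."""
    if a[1] <= b[0]:
        return b[0] - a[1]
    if b[1] <= a[0]:
        return a[0] - b[1]
    return 0


def closest_time_expression(diagnosis: Span, candidates: List[Span]) -> Optional[Span]:
    """Pick the candidate closest to the diagnosis span; ties go to the leftmost one."""
    if not candidates:
        return None
    return min(candidates, key=lambda s: (char_distance(diagnosis, s), s[0]))


# Example: diagnosis at chars 100-130; "in 2006" at 40-47 and "last week" at 150-159.
print(closest_time_expression((100, 130), [(40, 47), (150, 159)]))   # -> (150, 159)
```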
The experiment also shows that the closest heuristic improves precision at the expense of recall (both with and without the age rules). Overall, the best results in terms of F1 score are achieved using both the closest heuristic and the age rules. A more sophisticated algorithm could be developed to increase the candidate expression set (to improve recall), and better predict which temporal expressions likely correspond to the diagnosis (to improve precision). \fP R F1 SUTime 0.17 0.59 0.26 + age rules 0.20 0.81 0.32 + closest heuristic 0.33 0.51 0.40 + closest heuristic + age rules 0.44 0.69 0.53 Table 4: Results using SUTime, with additional rules for predicting age expressions and when limiting the candidate expression set using the closest heuristic. 5" + }, + { + "url": "http://arxiv.org/abs/1805.00791v1", + "title": "Characterizing Question Facets for Complex Answer Retrieval", + "abstract": "Complex answer retrieval (CAR) is the process of retrieving answers to\nquestions that have multifaceted or nuanced answers. In this work, we present\ntwo novel approaches for CAR based on the observation that question facets can\nvary in utility: from structural (facets that can apply to many similar topics,\nsuch as 'History') to topical (facets that are specific to the question's\ntopic, such as the 'Westward expansion' of the United States). We first explore\na way to incorporate facet utility into ranking models during query term score\ncombination. We then explore a general approach to reform the structure of\nranking models to aid in learning of facet utility in the query-document term\nmatching phase. When we use our techniques with a leading neural ranker on the\nTREC CAR dataset, our methods rank first in the 2017 TREC CAR benchmark, and\nyield up to 26% higher performance than the next best method.", + "authors": "Sean MacAvaney, Andrew Yates, Arman Cohan, Luca Soldaini, Kai Hui, Nazli Goharian, Ophir Frieder", + "published": "2018-05-02", + "updated": "2018-05-02", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "main_content": "INTRODUCTION As people become more comfortable using question answering systems, it is inevitable that they will begin to expect the systems to answer questions with complex answers. For instance, even the seemingly simple question \u201cIs cheese healthy?\u201d cannot be answered with a simple \u2018yes\u2019 or \u2018no\u2019. To fully answer the question, positive and negative qualities should be discussed, along with the strength of evidence, and conditions under which the qualities apply\u2014a complex answer. Complex Answer Retrieval (CAR) frames this problem as an information retrieval (IR) task [2]. Given a query that consists of a topic (e.g., \u2018cheese\u2019), and facets of the topic (e.g., \u2018health effects\u2019), a CAR system should be able to retrieve information from a variety of sources to throughly answer the corresponding question. CAR has similarities with existing, yet distinct, areas of research in IR. Although CAR involves passage retrieval, it is distinguishable from passage retrieval because CAR compiles multiple passages together to form complete answers. It is also different than factoid question answering (questions with a simple answer, e.g. 
\u201cWho wrote Hamlet?\u201d), and complex question answering (questions that Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org. SIGIR \u201918, July 8\u201312, 2018, Ann Arbor, MI, USA \u00a9 2018 Association for Computing Machinery. ACM ISBN 978-1-4503-5657-2/18/07...$15.00 https://doi.org/10.1145/3209978.3210135 themselves require reasoning, e.g. \u201cWhich female characters are in the same room as Homer in Act III Scene I?\u201d). We observe that question facets can be structural or topical. Structural facets refer to general categories of information that could apply to other entities of the same type, such as the \u2018History\u2019 or \u2018Economy\u2019 of a country. Topical facets refer to categories of information that are specific to the entity mentioned in the question, such as the \u2018Westward expansion\u2019 or \u2018Banking crisis\u2019 of the United States. (Although either facet could be asked about other topics, they are much more specific to details of the topic than structural headings.) We call this distinction facet utility, and explain it in detail in Section 2, along with additional background and related work. We then present and evaluate two novel approaches to CAR based on this observation and the hypothesis that it will affect how terms are matched. The first approach integrates predictors of a facet\u2019s utility into the score combination component of an answer ranker. The second approach is a technique to help any model learn to make the distinction itself by treating different facets independently. To predict facet utility, we use the heading structure of CAR queries (described in Section 2) and corpus statistics. We show how our approaches can be integrated with recent neural ranking models, and evaluate on the TREC CAR dataset. Our approaches yield favorable results compared to other known methods, achieving the top results overall and up to a 26% gain over the next best method. 2 BACKGROUND AND RELATED WORK The first major work done with CAR frames the task in terms of Wikipedia content generation [1]. CAR fits naturally with this domain because CAR query topics and facets often correspond well with article titles and headings, respectively. Furthermore, Wikipedia itself provides an extensive source of sample queries (paths in the heading hierarchy from the title), partial answers (i.e., paragraphs), and automatic relevance judgments (paragraphs can be assumed relevant to the headings they are under). For simplicity, we use Wikipedia-focused terminology in the remainder of this work. A heading refers to any component of a query, and corresponds to a question topic or facet. The title is the first query component (topic), the main heading is the last component, and intermediate heading are any headings between the two (if any). The main and intermediate headings represent the facet of interest to the topic. Example queries using this terminology are given in Table 1. A central challenge of CAR is resolving facet utility. 
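To illustrate this terminology, a small helper (hypothetical, not taken from the TREC CAR tooling) that splits a heading path into title, intermediate, and main components could look like the following.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class CarQuery:
    title: str               # the question topic, e.g. "Green sea turtle"
    intermediate: List[str]  # zero or more intermediate headings
    main: str                # the facet of interest, e.g. "Life cycle"

def parse_car_query(raw: str, sep: str = " » ") -> CarQuery:
    """Split a heading path such as
    'Green sea turtle » Ecology and behavior » Life cycle'
    into its title / intermediate / main components."""
    parts = [p.strip() for p in raw.split(sep)]
    if len(parts) < 2:
        raise ValueError("a CAR query needs at least a title and a main heading")
    return CarQuery(title=parts[0], intermediate=parts[1:-1], main=parts[-1])

print(parse_car_query("Green sea turtle » Ecology and behavior » Life cycle"))
```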
Due to the structure of CAR queries as a list of headings, we generalize the concept to heading utility\u2014the idea that headings (i.e., question topics and facets) can serve a variety of functions in an article. arXiv:1805.00791v1 [cs.IR] 2 May 2018 \fTable 1: Example CAR queries from Wikipedia by heading position. Some queries have no intermediate headings. Title Intermediate Heading(s) Main Heading Cheese \u00bb (none) \u00bb Nutrition and health Green sea turtle \u00bb Ecology and behavior \u00bb Life cycle History of the United States \u00bb 20th Century \u00bb Imperialism Disturbance (ecology) \u00bb (none) \u00bb Cyclic disturbance Medical tourism \u00bb Destinations \u00bb Europe \u00bb Finland We distinguish between structural and topical headings. We define structural headings as headings that serve a structural purpose for an article\u2014general question facets that could be asked about many similar topics. In contrast, topical headings describe details that are specific to the particular topic. For instance, \u201ccooking and eating\u201d is a structural heading for Cheese (one would expect it to be found in other food-related articles), whereas \u201ccheeseboard\u201d is a topical heading because it relates specifically to the topic of the article. Because the terminology in structural headings is necessarily more generic (they accommodate many topics), we predict that terms found in these headings are less likely to appear verbatim in relevant paragraphs than terms in topical headings. Thus, modeling this behavior should improve performance on CAR because it will be able to learn which terms are less important. Previous work does not model facet utility, treating all headings equally by concatenating their components. Nanni et al. [8] presents a survey of prominent general domain ranking and query expansion approaches for CAR. They test one deep neural model (Duet [7]), and find that it outperforms the other approaches, including BM25, cosine similarity with TF-IDF and word embeddings, and a learning-to-rank approach. The recent 2017 TREC track focused on CAR [2]. This track yielded both manual relevance judgments for evaluation of CAR systems, and a variety of new CAR approaches (seven teams participated). One prominent approach used a sequential dependence model [5]. They modified the approach for CAR by limiting ordered ngrams to those found within a single heading, and unordered ngrams to only inter-heading pairs. Another approach uses a Siamese attention network [6], including topic features extracted from DBPedia. While this approach does distinguish the title from other headings, it only uses it for query expansion and related entity extraction. Another submission applied a reinforcement learning-based query reformulation approach to CAR [9]. 3 METHOD Since previous work shows that neural-based rankers have potential for CAR, we focus on an approach that can be adapted for various neural rankers. Many leading interaction-focused neural rankers share a similar two-phase architecture, as shown in Figure 1a. Phase 1 performs matching of query terms to document terms, and phase 2 combines the matching results to produce a final relevance score. For instance, DRMM [3] uses a feed-forward histogram matching network, and a term gating combination network to predict relevance. MatchPyramid [10] uses hierarchal convolution for matching, followed by a dense layer for aggregation. 
Similarly, PACRR [4] uses a max-pooled convolution phase for matching, and a recurrent or dense combination phase. Finally, DeepRank [11] generates Table 2: Example contextual vectors for the query \u201cgreen sea turtle \u00bb ecology and behavior \u00bb life cycle\u201d. green sea turtle ecology and behavior life cycle position_title 1 1 1 0 0 0 0 0 position_inter 0 0 0 1 1 1 0 0 position_main 0 0 0 0 0 0 1 1 heading_frequency 0 0 0 3 3 3 3 3 query contexts and uses a convolutional layer to generate local relevance representations as a matching phase, and uses a term gating mechanism for combination. We present two approaches to model facet utility by modifying this generalized neural ranking structure. The first approach applies contextual vectors in the combination phase (Figure 1b), and the second approach splits the input into independent matching phases (Figure 1c). Contextual vectors. In the combination phase, signals across query terms are combined to produce a relevance score, so it is natural to include information here to provide additional context about each query term when combining the results. For instance, PACRR includes the inverse document frequency (IDF) in its combination layer, allowing the model to learn how to weight results based on this statistic [4]. We use this phase to inform the model about heading utility based on predictions about the distinction between structural and topical headings. We call these contextual vectors, since they provide context in the CAR domain. The intuition is that by providing the model with estimators of heading utility, the model will learn which terms to weight higher. Here we explore two types of contextual vectors: heading position (HP) and heading frequency (HF). When distinguishing between structural and topical headings, it is important to consider the position itself in the query. For instance, since the title is the question topic, it is necessarily topical. Furthermore, it is reasonable to suspect that intermediate headings will often be structural because they assist in the organization of an article. Main headings may either be structural or topical, depending on the question itself. Thus, for heading position contextual vectors, we use a simple indicator to distinguish whether a term is from the title, an intermediate, or the main heading. An example is given in Figure 2. Another approach to modeling structural and topical headings using contextual vectors is to examine the prevalence of a given heading. This is based on the intuition that structural headings should appear in many similar documents, whereas the usage of topical headings should be less widespread. For instance, the structural heading \u201cNutrition and health\u201d in the article Cheese also appears in articles entitled Beef, Raisin, Miso, and others, whereas the topical \u201cCheeseboard\u201d heading only also appears as the title of a disambiguation page. We model this behavior using heading usage frequency: f rq(h) = \u00cd a\u2208C I(h\u2208a) |C | . That is, the probability that a given article a in corpus C contains heading h, given the indicator function I. Heading usage frequencies very close to 0 include titles and other content-specific headings like Cheeseboard. Due to the wide variety of Wikipedia articles, most probabilities are very low. Therefore, we stratify the scores by percentile, grouping similarly-common headings together. Based on pilot studies, we found the (1) 60th, (2) 90th, and (3) 99th percentiles to be effective breakpoints. 
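As a rough sketch of how these contextual vectors could be computed, the snippet below derives heading usage frequencies, buckets them by the 60th/90th/99th percentile breakpoints, and builds the one-hot heading-position indicators of Table 2; all function names are ours and the percentile computation is deliberately crude.

```python
from collections import Counter
from typing import Dict, List

def heading_frequencies(corpus_headings: List[List[str]]) -> Dict[str, float]:
    """frq(h): fraction of articles whose heading set contains h
    (complete, case-insensitive matches)."""
    n_articles = len(corpus_headings)
    counts = Counter()
    for article in corpus_headings:
        counts.update({h.lower() for h in article})  # count each heading once per article
    return {h: c / n_articles for h, c in counts.items()}

def frequency_bucket(heading: str, frq: Dict[str, float],
                     breakpoints=(0.60, 0.90, 0.99)) -> int:
    """Coarse bucket (0-3) for where this heading's frequency falls relative to
    the 60th/90th/99th percentiles of all headings; unseen headings get 0."""
    score = frq.get(heading.lower())
    if score is None:
        return 0
    ranked = sorted(frq.values())                 # recomputed here for brevity
    pct = ranked.index(score) / len(ranked)       # crude percentile rank
    return sum(pct >= b for b in breakpoints)     # 0, 1, 2, or 3

def position_vectors(title_len: int, inter_len: int, main_len: int):
    """One-hot heading-position indicators per query term
    (title / intermediate / main), as in Table 2."""
    return ([[1, 0, 0]] * title_len +
            [[0, 1, 0]] * inter_len +
            [[0, 0, 1]] * main_len)
```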
We use \fq0 q2 q3 q4 q5 q6 q7 q8 q9 d0 ... d1 d2 d3 dn document main inter. title q0 q2 q3 q4 q5 q6 q7 q8 q9 d0 ... d1 d2 d3 dn document query main inter. title q0 q2 q3 q4 q5 q6 q7 q8 q9 matching (e.g. CNN) d0 ... d1 d2 d3 dn document query main inter. title combination (e.g. dense) rel. score + contextual vectors (a) general neural ranker (b) contextual vectors (c) heading independence matching (e.g. CNN) combination (e.g. dense) combination (e.g. dense) matching (e.g. CNN) matching (e.g. CNN) matching (e.g. CNN) rel. score rel. score query Figure 1: (a) General interaction-focused ranking architecture, with matching and combination phases (unmodified). (b) Modified architecture, including contextual vectors for combination. (c) Modified architecture, splitting for heading independence. complete, case insensitive heading matches. Unknown headings are assumed to be infrequent, and belong to the 0th percentile. An example of this vector is given in Table 2. Heading independence. Since contextual vectors are applied in the combination phase, they have no effect on the criteria constituting a strong signal from the matching phase. However, we hypothesize that facet utility can also be important when matching. For instance, a structural heading like \u201cHistory\u201d might have a lower matching threshold, allowing matches of similar words terms such as \u201cearly\u201d or \u201cwas\u201d (both of which have a lower word2vec cosine similarity score to \u201chistory\u201d than functionally-related word pairs, such as \u201ccheese\u201d and \u201cchocolate\u201d). Thus, we propose a method called heading independence. With this approach, we modify the structure of a generic neural IR model by splitting the matching stage into three independent parts: one for the title, one for intermediate headings, and one for the main heading. Each sub-matching phase operates independently as it otherwise would for the combined query. Then, the results are combined using the same combination logic of the original model (e.g., a dense or recurrent layer). This allows the model to learn separated logic for different heading components. The reasoning behind the split by query component is the same as the reasoning behind using heading position vectors: the title is topical, whereas intermediate headings are likely structural, and the main heading could be either. With separate matching logic for each, the model should be able to more easily distinguish between the types. An added benefit of this approach is that it improves heading alignment in the combination phase. When headings are simply concatenated (even with a symbol to indicate a change in headings), the alignment of each query component will vary among queries. Since the output of each matching stage is fixed in size, the locations of each query component will be consistent among queries. We suspect that this is particularly useful when using dense combination. 4 EXPERIMENTAL SETUP Dataset. TREC CAR provides several sets of queries based on a recent dump of Wikipedia [1, 2]. Queries in each set are generated from the heading structure of an article, where each query represents a path from the article title down to the main heading. Each query also includes automatic relevance judgments based on the assumption that paragraphs under a given heading are relevant to the query with that main heading. Half of the dump belongs to the train set, which is split into 5 folds. 
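The automatic judgments rest on the assumption that paragraphs are relevant to the heading path above them; a toy illustration of that pairing (not the official trec-car tooling, and the nested-dictionary article format is our own simplification) is shown below.

```python
from typing import Dict, Iterator, List, Tuple, Union

Section = Dict[str, Union[List[str], "Section"]]  # heading -> paragraphs or sub-sections

def heading_path_queries(title: str, body: Section,
                         path: Tuple[str, ...] = ()) -> Iterator[Tuple[str, str]]:
    """Yield (query, relevant_paragraph) pairs, where each query is the path
    from the article title down to the heading above the paragraph."""
    for heading, content in body.items():
        new_path = path + (heading,)
        query = " » ".join((title,) + new_path)
        if isinstance(content, list):            # leaf: a list of paragraphs
            for paragraph in content:
                yield query, paragraph
        else:                                    # nested sub-sections
            yield from heading_path_queries(title, content, new_path)

article = {"Ecology and behavior": {"Life cycle": ["Females return to the beach..."]}}
print(list(heading_path_queries("Green sea turtle", article)))
# [("Green sea turtle » Ecology and behavior » Life cycle",
#   "Females return to the beach...")]
```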
We use folds 1 and 2 in this work, consisting of 873, 746 queries and 2.2M automatic relevance judgments (more data than this was not required for our models to converge). The test200 set contains 1,860 queries and 4.7k automatic relevance judgments. The benchmarkY1test set contains 2,125 queries and 5.8k automatic relevance judgments. It also includes 30k manual relevance judgments, ranging from Trash (-2) to Must be mentioned (3). The paragraphcorpus is a collection of 30M paragraphs from the Wikipedia dump with no article or heading structure provided, functioning as a source of answers for retrieval. Model integration. We evaluate our contextual vector and heading independence approaches using the Position-Aware Convolutional Recurrent Relevance neural IR architecture (PACRR) [4], which is a strong neural retrieval model with a structure that naturally lends itself to incorporating contextual vectors and heading independence signals. We refer the reader to Hui et al. [4] for full details about the model, but we give a short description here to provide details about how our approach is integrated. PACRR first processes square convolutional filters over a q \u00d7d query-document similarity matrix, where each cell represents similarity scores between the corresponding query and document term. The filters are max-pooled for each cell, and the scores are k-max pooled over each query term (k = 2). Then a dense layer combines the scores (along with term IDF scores) to yield a final relevance score for the query-document pair. For runs that include contextual vectors, we append them to each term (alongside IDF) during combination. For heading independence, we use separate convolution and pooling layers, followed by a dense layer for each heading component. We also explore using the heading frequency contextual vector when using heading independence (included after the pooling layer), and before the independent dense layer. Training and evaluation. We train the models on samples from train.fold1 and train.fold2. Positive training examples come from the automatic relevance judgments, whereas negative training examples are selected from the top non-relevant BM25 results for the given query. Each model is trained for 80 iterations, and the top training iteration is selected using the R-Prec on test200. Evaluation is conducted with automatic and manual judgments on benchmarkY1test. The results are based on an initial ranking of the top 100 BM25 results for each query. We report Mean Average Precision (MAP), R-Precision (R-Prec), Mean Reciprocal Rank (MRR), and normalized Discounted Cumulative Gain (nDCG) of each variation (all four official TREC CAR metrics). \fTable 3: Performance results on benchmarkY1test. The top value is in bold. Records marked with * are based on official TREC runs, and had top results included in the manual assessment pool. Significant results compared to the unmodified PACRR model are marked with \u25b2and \u25bc(paired t-test, 95% confidence). The abbreviations for our methods are as follows: HP is the heading position contextual vector; HF is the heading frequency contextual vector; HI is heading independence. 
Automatic Manual Approach MAP R-Prec MRR nDCG MAP R-Prec MRR nDCG PACRR (no modification) 0.164 0.131 0.247 0.254 0.208 0.219 0.445 0.403 PACRR + HP* \u25b20.170 0.135 \u25b20.258 \u25b20.260 0.209 0.218 0.452 0.406 PACRR + HP + HF* \u25b20.170 0.134 \u25b20.255 \u25b20.259 \u25b20.211 0.221 0.453 \u25b20.408 PACRR + HI \u25b20.171 0.139 \u25b20.256 \u25b20.260 0.205 0.213 0.442 0.403 PACRR + HI + HF \u25b20.176 \u25b20.146 \u25b20.263 \u25b20.265 0.204 0.214 0.440 0.401 Sequential dependence model* [5] \u25bc0.150 \u25bc0.116 \u25bc0.226 \u25bc0.238 \u25bc0.172 \u25bc0.186 \u25bc0.393 \u25bc0.350 Siamese attention network* [6] \u25bc0.121 \u25bc0.096 \u25bc0.185 \u25bc0.175 \u25bc0.137 \u25bc0.171 \u25bc0.345 \u25bc0.274 BM25 baseline* \u25bc0.122 \u25bc0.097 \u25bc0.183 \u25bc0.196 \u25bc0.138 \u25bc0.158 \u25bc0.317 \u25bc0.296 5 RESULTS We present system performance in Table 3. Our methods are compared to the unmodified PACRR model, two other top submissions to TREC CAR 2017 (sequential dependency model [5] and the Siamese attention network [6]), and a BM25 baseline (which produces the initial result set that our methods re-rank). Our method outperforms the other TREC submissions and the BM25 baseline by all metrics for both manual and automatic relevance judgments (paired t-test, 95% confidence). The method that uses heading independence (HI) and the heading frequency vector (HF) yields up to a 26% improvement over the next best approach (SDM). Our approach also consistently outperforms the unmodified version of PACRR when evaluating using automatic relevance judgments, performing up to 11% better than the unmodified version of PACRR. Our approach occasionally does better than unmodified PACRR when evaluating with manual relevance judgments. Specifically, our approach that uses the heading position (HP) and heading frequency (HF) contextual vectors does the best overall. We acknowledge that this method (and the version with only heading position) were included as official TREC runs, yielding an advantage in the manual comparison. This work is based on the distinction between structural and topical headings, and the differences in how they interact with terms in relevant documents. While there is no absolute distinction between the two, we presented various approaches to approximate the distinction. By plotting the term occurrence rate (that is, the probability that any term occurs in a relevant paragraph) for title, intermediate, and main headings, we see clear differences in the distribution (Figure 2). Particularly, the plot shows that main headings are much more likely to appear in relevant documents than title and intermediate headings. Furthermore, the distributions of intermediate and title headings are roughly opposite each other, with titles (topical) more likely to occur than intermediate headings (structural). 6" + }, + { + "url": "http://arxiv.org/abs/1804.05972v1", + "title": "A Deeper Look into Dependency-Based Word Embeddings", + "abstract": "We investigate the effect of various dependency-based word embeddings on\ndistinguishing between functional and domain similarity, word similarity\nrankings, and two downstream tasks in English. Variations include word\nembeddings trained using context windows from Stanford and Universal\ndependencies at several levels of enhancement (ranging from unlabeled, to\nEnhanced++ dependencies). Results are compared to basic linear contexts and\nevaluated on several datasets. 
We found that embeddings trained with Universal\nand Stanford dependency contexts excel at different tasks, and that enhanced\ndependencies often improve performance.", + "authors": "Sean MacAvaney, Amir Zeldes", + "published": "2018-04-16", + "updated": "2018-04-16", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "main_content": "Introduction For many natural language processing applications, it is important to understand wordlevel semantics. Recently, word embeddings trained with neural networks have gained popularity (Mikolov et al., 2013; Pennington et al., 2014), and have been successfully used for various tasks, such as machine translation (Zou et al., 2013) and information retrieval (Hui et al., 2017). Word embeddings are usually trained using linear bag-of-words contexts, i.e. tokens positioned around a word are used to learn a dense representation of that word. Levy and Goldberg (2014) challenged the use of linear contexts, proposing instead to use contexts based on dependency parses. (This is akin to prior work that found that dependency contexts are useful for vector models (Pado and Lapata, 2007; Baroni and Lenci, 2010).) They found that embeddings trained this way are better at capturing semantic similarity, rather than relatedness. For instance, embeddings trained using linear contexts place Hogwarts (the \ufb01ctional setting of the Harry Potter series) near Dumbledore (a character from the series), whereas embeddings trained with dependency contexts place Hogwarts near Sunnydale (\ufb01ctional setting of the series Buffy the Vampire Slayer). The former is relatedness, whereas the latter is similarity. Work since Levy and Goldberg (2014) examined the use of dependency contexts and sentence feature representations for sentence classi\ufb01cation (Komninos and Manandhar, 2016). Li et al. (2017) \ufb01lled in research gaps relating to model type (e.g., CBOW, Skip-Gram, GloVe) and dependency labeling. Interestingly, Abnar et al. (2018) recently found that dependency-based word embeddings excel at predicting brain activation patterns. The best model to date for distinguishing between similarity and relatedness combines word embeddings, WordNet, and dictionaries (Recski et al., 2016). One limitation of existing work is that it has only explored one dependency scheme: the English-tailored Stanford Dependencies (De Marneffe and Manning, 2008b). We provide further analysis using the cross-lingual Universal Dependencies (Nivre et al., 2016). Although we do not compare cross-lingual embeddings in our study, we will address one important question for English: are Universal Dependencies, which are less tailored to English, actually better or worse than the English-speci\ufb01c labels and graphs? Furthermore, we investigate approaches to simplifying and extending dependencies, including Enhanced dependencies and Enhanced++ dependencies (Schuster and Manning, 2016), as well as two levels of relation simpli\ufb01cation. We hypothesize that the cross-lingual generalizations from universal dependencies and the additional context from enhanced dependencies should improve the performance of word embeddings at distinguishing between functional and domain \fEnhanced + + Enhanced Basic Simpli\ufb01ed Stanford Universal + + CBOW k = 2 k = 5 Linear Contexts Dependency Contexts Unlabeled Skip-Gram Subword + Figure 1: Visual relationship between types of embedding contexts. 
Each layer of enhancement adds more information to the dependency context (e.g., simpli\ufb01ed adds dependency labels to the unlabeled context). We investigate CBOW using both a context window of k = 2 and k = 5, and we use the SkipGram model both with and without subword information. similarity. We also investigate how these differences impact word embedding performance at word similarity rankings and two downstream tasks: question-type classi\ufb01cation and named entity recognition. 2 Method In this work, we explore the effect of two dependency annotation schemes on the resulting embeddings. Each scheme is evaluated in \ufb01ve levels of enhancement. These embeddings are compared to embeddings trained with linear contexts using the continuous bag of words (CBOW) with a context window of k = 2 and k = 5, and Skip-Gram contexts with and without subword information. These con\ufb01gurations are summarized in Figure 1. Two dependency annotation schemes for English are Stanford dependencies (De Marneffe and Manning, 2008b) and Universal dependencies (Nivre et al., 2016). Stanford dependencies are tailored to English text, including dependencies that are not necessarily relevant cross-lingually (e.g. a label prt for particles like up in pick up). Universal dependencies are more generalized and designed to work crosslingually. Many structures are similar between the two schemes, but important differences exist. For instance, in Stanford dependencies, prepositions head their phrase and depend on the modi\ufb01ed Simp. Basic Stanford dependencies mod poss, prt, predet, det, amod, tmod, npadvmod, possessive, advmod, quantmod, preconj, mark, vmod, nn, num, prep, appos, mwe, mod, number, neg, advcl, rcmod arg agent, iobj, dobj, acomp, pcomp, pobj, ccomp, arg, subj, csubj, obj, xcomp, nsubj aux aux, cop sdep xsubj, sdep Universal dependencies core iobj, dobj, ccomp, csubj, obj, xcomp, nsubj ncore discourse, cop, advmod, dislocated, vocative, aux, advcl, mark, obl, expl nom case, nmod, acl, neg, appos, det, amod, nummod coord cc, conj special goeswith, reparandum, orphan loose parataxis, list mwe compound, mwe, \ufb02at other punct, dep, root Table 1: Simpli\ufb01ed Stanford and Universal dependency labels. For simpli\ufb01ed dependencies, basic labels are collapsed into the simpli\ufb01ed label shown in this table. (Relations not found in this table were left as is.) word (in is the head of in Kansas), whereas in universal dependencies, prepositions depend on the prepositional object (Kansas dominates in). Intuitively, these differences should have a moderate effect on the resulting embeddings because different words will be in a given word\u2019s context. We also investigate \ufb01ve levels of enhancement for each dependency scheme. Basic dependencies are the core dependency structure provided by the scheme. Simpli\ufb01ed dependencies are more coarse basic dependencies, collapsing similar labels into rough classes. The categories are based off of the Stanford Typed Dependencies Manual (De Marneffe and Manning, 2008a) and the Universal Dependency Typology (De Marneffe et al., 2014), and are listed in Table 1. Note that the two dependency schemes organize the relations in different ways, and thus the two types of simpli\ufb01ed dependencies represent slightly different structures. The unlabeled dependency context removes all labels, and just captures syntactically adjacent tokens. 
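To make the context types concrete, a rough sketch that emits (word, context) pairs in the word2vecf input style for the unlabeled, basic, and simplified settings follows; spaCy is used here purely for illustration (the embeddings in this paper were built from Stanford CoreNLP parses), and the collapsing map is abbreviated.

```python
import spacy
from typing import Iterator, Tuple

nlp = spacy.load("en_core_web_sm")  # illustrative parser only, not the one used in the paper

SIMPLIFY = {"nsubj": "core", "obj": "core", "iobj": "core",   # abbreviated collapsing map
            "amod": "nom", "det": "nom", "case": "nom",
            "advmod": "ncore", "aux": "ncore"}

def dependency_contexts(text: str, mode: str = "basic") -> Iterator[Tuple[str, str]]:
    """Yield (word, context) pairs for word2vecf-style training.
    mode: 'unlabeled'  -> bare syntactic neighbours,
          'basic'      -> neighbours tagged with the dependency label,
          'simplified' -> neighbours tagged with a collapsed label."""
    for tok in nlp(text):
        if tok.dep_ == "ROOT":
            continue
        head, label = tok.head, tok.dep_
        if mode == "simplified":
            label = SIMPLIFY.get(label, label)
        if mode == "unlabeled":
            yield tok.text, head.text
            yield head.text, tok.text
        else:
            # following Levy and Goldberg (2014): the head sees the child through
            # the label, and the child sees the head through the inverse label
            yield head.text, f"{tok.text}/{label}"
            yield tok.text, f"{head.text}/{label}-1"

for word, context in dependency_contexts("We investigate dependency-based word embeddings"):
    print(word, context)
```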
Enhanced and Enhanced++ dependencies (Schuster and Manning, 2016) address some practical dependency distance issues by extending basic dependency edges. Enhanced dependencies augment modi\ufb01ers and conjuncts with their parents\u2019 labels, propagate governors and depen\fdents for indirectly governed arguments, and add subjects to controlled verbs. Enhanced++ dependencies allow for the deletion of edges to better capture English phenomena, including partitives and light noun constructions, multi-word prepositions, conjoined prepositions, and relative pronouns. 3 Experimental Setup We use the Stanford CoreNLP parser1 to parse basic, Enhanced, and Enhanced++ dependencies. We use the Stanford english SD model to parse Stanford dependencies (trained on the Penn Treebank) and english UD model to parse Universal dependencies (trained on the Universal Dependencies Corpus for English). We acknowledge that differences in both the size of the training data (Penn Treebank is larger than the Universal Dependency Corpus for English), and the accuracy of the parse can have an effect on our overall performance. We used our own converter to generate simple dependencies based on the rules shown in Table 1. We use the modi\ufb01ed word2vecf software2 Levy and Goldberg (2014) that works with arbitrary embedding contexts to train dependencybased word embeddings. As baselines, we train the following linear-context embeddings using the original word2vec software:3 CBOW with k = 2, CBOW with k = 5, and Skip-Gram. We also train enriched Skip-Gram embeddings including subword information (Bojanowski et al., 2016) using fastText.4 For all embeddings, we use a cleaned recent dump of English Wikipedia (November 2017, 4.3B tokens) as training data. We evaluate each on the following tasks: Similarity over Relatedness Akin to the quantitative analysis done by Levy and Goldberg (2014), we test to see how well each approach ranks similar items above related items. Given pairs of similar and related words, we rank each word pair by the cosine similarity of the corresponding word embeddings, and report the area-under-curve (AUC) of the resulting precision-recall curve. We use the labeled WordSim-353 (Agirre et al., 2009; Finkelstein et al., 2001) and the Chiarello 1stanfordnlp.github.io/CoreNLP/ 2bitbucket.org/yoavgo/word2vecf 3code.google.com/p/word2vec/ 4github.com/facebookresearch/fastText dataset (Chiarello et al., 1990) as a source of similar and related word pairs. For WordSim-353, we only consider pairs with similarity/relatedness scores of at least 5/10, yielding 90 similar pairs and 147 related pairs. For Chiarello, we disregard pairs that are marked as both similar and related, yielding 48 similar pairs and 48 related pairs. Ranked Similarity This evaluation uses a list of word pairs that are ranked by degree of functional similarity. For each word pair, we calculate the cosine similarity, and compare the ranking to that of the human-annotated list using the Spearman correlation. We use SimLex-999 (Hill et al., 2016) as a ranking of functional similarity. Since this dataset distinguishes between nouns, adjectives, and verbs, we report individual correlations in addition to the overall correlation. Question-type Classi\ufb01cation (QC) We use an existing QC implementation5 that uses a bidirectional LSTM. We train the model with 20 epochs, and report the average accuracy over 10 runs for each set of embeddings. We train and evaluate using the TREC QC dataset (Li and Roth, 2002). 
We modi\ufb01ed the approach to use \ufb01xed (non-trainable) embeddings, allowing us to compare the impact of each embedding type. Named Entity Recognition (NER) We use the Dernoncourt et al. (2017) NER implementation6 that uses a bidirectional LSTM. Training consists of a maximum of 100 epochs, with early stopping after 10 consecutive epochs with no improvement to validation performance. We evaluate NER using the F1 score on the CoNLL NER dataset (Tjong Kim Sang and De Meulder, 2003). Like the QC task, we use a non-trainable embedding layer. 4 Results 4.1 Similarity over Relatedness The results for the WordSim-353 (WS353) and Chiarello datasets are given in Table 2a. For the WS353 evaluation, notice that the Enhanced dependencies for both Universal and Stanford dependencies outperform the others in each scheme. Even the poorest-performing level of enhancement (unlabeled), however, yields a considerable gain over the linear contexts. Both Skip-Gram variants 5github.com/zhegan27/sentence_classification github.com/Franck-Dernoncourt/NeuroNER \f(a) Sim/rel (AUC) (b) Ranked sim (Spearman) (c) Downstream Embeddings WS353 Chiarello Overall Noun Adj. Verb QC (Acc) NER (F1) Universal embeddings Unlabeled 0.786 0.711 0.370 0.408 0.484 0.252 0.915 0.877 Simpli\ufb01ed 0.805 0.774 0.394 0.420 0.475 0.309 0.913 0.870 Basic 0.801 0.761 0.391 0.421 0.451 0.331 0.920 0.876 Enhanced 0.823 0.792 0.398 0.416 0.473 0.350 0.915 0.875 Enhanced++ 0.820 0.791 0.396 0.416 0.461 0.348 0.917 0.882 Stanford embeddings Unlabeled 0.790 0.741 0.382 0.414 0.507 0.256 0.911 0.870 Simpli\ufb01ed 0.793 0.748 0.393 0.416 0.501 0.297 0.923 0.873 Basic 0.808 0.769 0.402 0.422 0.494 0.341 0.910 0.865 Enhanced 0.817 0.755 0.399 0.420 0.482 0.338 0.911 0.871 Enhanced++ 0.810 0.764 0.398 0.417 0.496 0.346 0.918 0.878 Baselines (linear contexts) CBOW, k=2 0.696 0.537 0.311 0.355 0.338 0.252 0.913 0.885 CBOW, k=5 0.701 0.524 0.309 0.353 0.358 0.258 0.899 0.893 Skip-Gram 0.617 0.543 0.264 0.304 0.368 0.135 0.898 0.881 SG + Subword 0.615 0.456 0.324 0.358 0.451 0.166 0.897 0.887 Table 2: Results of various dependency-based word embeddings, and baseline linear contexts at (a) similarity over relatedness, (b) ranked similarity, and (c) downstream tasks of question classi\ufb01cation and named entity recognition. yield the worst performance, indicating that they capture relatedness better than similarity. For the Chiarello evaluation, the linear contexts perform even worse, while the Enhanced Universal embeddings again outperform the other approaches. These results reinforce the Levy and Goldberg (2014) \ufb01ndings that dependency-based word embeddings do a better job at distinguishing similarity rather than relatedness because it holds for multiple dependency schemes and levels of enhancement. The Enhanced universal embeddings outperformed the other settings for both datasets. For Chiarello, the margin between the two is statistically signi\ufb01cant, whereas for WS353 it is not. This might be due to the fact that the the Chiarello dataset consists of manually-selected pairs that exhibit similarity or relatedness, whereas the settings for WS353 allow for some marginally related or similar terms through (e.g., size is related to prominence, and monk is similar to oracle). 4.2 Ranked Similarity Spearman correlation results for ranked similarity on the SimLex-999 dataset are reported in Table 2b. Overall results indicate the performance on the entire collection. 
In this environment, basic Stanford embeddings outperform all other embeddings explored. This is an interesting result because it shows that the additional dependency labels added for Enhanced embeddings (e.g. for conjunction) do not improve the ranking performance. This trend does not hold for Universal embeddings, with the enhanced versions outperforming the basic embeddings. All dependency-based word embeddings significantly outperform the baseline methods (10 folds, paired t-test, p < 0.05). Furthermore, the unlabeled Universal embeddings performed significantly worse than the simpli\ufb01ed Universal, and the simpli\ufb01ed, basic, and Enhanced Stanford dependencies, indicating that dependency labels are important for ranking. Table 2b also includes results for word pairs by part of speech individually. As the majority category, Noun-Noun scores (n = 666) mimic the behavior of the overall scores, with basic Stanford embeddings outperforming other approaches. Interestingly, Adjective-Adjective pairs (n = 111) performed best with unlabeled Stanford dependencies. Since unlabeled also performs best among universal embeddings, this indicates that dependency labels are not useful for adjective similarity, possibly because adjectives have compara\fEmbeddings QC (Acc) NER (F1) Universal embeddings Unbound 0.921 (+0.007) 0.887 (+0.000) Simpli\ufb01ed 0.929 (+0.016) 0.883 (+0.013) Basic 0.920 (+0.000) 0.891 (+0.015) Enhanced 0.923 (+0.008) 0.886 (+0.010) Enhanced++ 0.927 (+0.010) 0.890 (+0.008) Stanford embeddings Unbound 0.926 (+0.015) 0.879 (+0.009) Simpli\ufb01ed 0.933 (+0.010) 0.877 (+0.004) Basic 0.927 (+0.017) 0.885 (+0.020) Enhanced 0.923 (+0.013) 0.885 (+0.014) Enhanced++ 0.929 (+0.011) 0.884 (+0.006) Baselines (linear contexts) CBOW, k=2 0.921 (+0.008) 0.892 (+0.007) CBOW, k=5 0.925 (+0.026) 0.892 (+0.001) Skip-Gram 0.914 (+0.016) 0.887 (+0.006) SG + Subword 0.919 (+0.022) 0.896 (+0.009) Table 3: Performance results when embeddings are further trained for the particular task. The number in parentheses gives the performance improvement compared to when embeddings are not trainable (Table 2c). tively few ambiguous functions. Verb-Verb pairs (n = 222) performed best with Enhanced universal embeddings. This indicates that the augmentation of governors, dependents, and subjects of controlled verbs is particularly useful given the universal dependency scheme, and less so for the English-speci\ufb01c Stanford dependency scheme. Both Stanford and universal unlabeled dependencies performed signi\ufb01cantly worse compared to all basic, Enhanced, and Enhanced++ dependencies (5 folds, paired t-test, p < 0.05). This indicates that dependency labels are particularly important for verb similarity. 4.3 Downstream Tasks We present results for question-type classi\ufb01cation and named entity recognition in Table 2c. Neither task appears to greatly bene\ufb01t from embeddings that favor similarity over relatedness or that can rank based on functional similarity effectively without the enhanced sentence feature representations explored by Komninos and Manandhar (2016). We compare the results using to the performance of models with embedding training enabled in Table 3. As expected, this improves the results because the training captures task-speci\ufb01c information in the embeddings. Generally, the worst-performing embeddings gained the most (e.g., CBOW k = 5 for QC, and basic Stanford for NER). 
However, the simpli\ufb01ed Stanford embeddings and the embeddings with subword information still outperform the other approaches for QC and NER, respectively. This indicates that the initial state of the embeddings is still important to an extent, and cannot be learned fully for a given task. 5" + }, + { + "url": "http://arxiv.org/abs/1804.05408v1", + "title": "GU IRLAB at SemEval-2018 Task 7: Tree-LSTMs for Scientific Relation Classification", + "abstract": "SemEval 2018 Task 7 focuses on relation ex- traction and classification in\nscientific literature. In this work, we present our tree-based LSTM network for\nthis shared task. Our approach placed 9th (of 28) for subtask 1.1 (relation\nclassification), and 5th (of 20) for subtask 1.2 (relation classification with\nnoisy entities). We also provide an ablation study of features included as\ninput to the network.", + "authors": "Sean MacAvaney, Luca Soldaini, Arman Cohan, Nazli Goharian", + "published": "2018-04-15", + "updated": "2018-04-15", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "main_content": "Introduction Information Extraction (IE) has applications in a variety of domains, including in scienti\ufb01c literature. Extracted entities and relations from scienti\ufb01c articles could be used for a variety of tasks, including abstractive summarization, identi\ufb01cation of articles that make similar or contrastive claims, and \ufb01ltering based on article topics. While ontological resources can be leveraged for entity extraction (G\u00b4 abor et al., 2016), relation extraction and classi\ufb01cation still remains a challenging task. Relations are particularly valuable because (unlike simple entity occurrences) relations between entities capture lexical semantics. SemEval 2018 Task 7 (Semantic Relation Extraction and Classi\ufb01cation in Scienti\ufb01c Papers) encourages research in relation extraction in scienti\ufb01c literature by providing common training and evaluation datasets (G\u00b4 abor et al., 2018). In this work, we describe our approach using a tree-structured recursive neural network, and provide an analysis of its performance. There has been considerable previous work with scienti\ufb01c literature due to its availability and interest to the research community. A previous shared task (SemEval 2017 Task 10) investigated the extraction of both keyphrases (entities) and relations in scienti\ufb01c literature (Augenstein et al., 2017). However, the relation set for this shared task was limited to just synonym and hypernym relationships. The top three approaches used for relationonly extraction included convolutional neural networks (Lee et al., 2017a), bi-directional recurrent neural networks with Long Short-Term Memory (LSTM, Hochreiter and Schmidhuber, 1997) cells (Ammar et al., 2017), and conditional random \ufb01elds (Lee et al., 2017b). There are several challenges related to scienti\ufb01c relation extraction. One is the extraction of the entities themselves. Luan et al. (2017) produce the best published results on the 2017 ScienceIE shared task for entity extraction using a semisupervised approach with a bidirectional LSTM and a CRF tagger. Zheng et al. (2014) provide an unsupervised technique for entity linking scienti\ufb01c entities in the biomedical domain to an ontology. Contribution. Our approach employs a treebased LSTM network using a variety of syntactic features to perform relation label classi\ufb01cation. 
We rank 9th (of 28) when manual entities are used for training, and 5th (of 20) when noisy entities are used for training. Furthermore, we provide an ablation analysis of the features used by our model. Code for our model and experiments is available.1 2 Methodology Syntactic information between entities plays an important role in relation extraction and classi\ufb01cation (Mintz et al., 2009; MacAvaney et al., 2017). Similarly, sequential neural models, such as LSTM, have shown promising results on scienti\ufb01c literature (Ammar et al., 2017). Therefore, in our approach, we leverage both syntactic structures and neural sequential models by employing a tree-based long-short term memory cell (tree-LSTM). Tree-LSTMs, originally introduced by Tai et al. (2015), have been successfully used to 1https://github.com/Georgetown-IR-Lab/ semeval2018-task7 arXiv:1804.05408v1 [cs.CL] 15 Apr 2018 \ft1,j=H(e1) softmax y t4,j=H(e2) t3,j t2,j dense LSTM LSTM LSTM LSTM Figure 1: Our tree LSTM network. capture relation information in other domains (Xu et al., 2015; Miwa and Bansal, 2016). On a high level, tree-LSTMs operate very similarly to sequential models; however, rather than processing tokens sequentially, they follow syntactic dependencies; once the model reaches the root of the tree, the output is used to compute a prediction, usually through a dense layer. We use the childsum variant of tree-LSTM (Tai et al., 2015). Formally, let Sj = {t1,j, . . . , tn,j} be a sentence of length n, e1 = {ti, . . . , tk} and e2 = {tp, . . . , tq} two entities whose relationship we intend to classify; let H(e1), H(e2) be the root of the syntactic subtree spanning over entities e1 and e2. Finally, let T(e1, e2) be the syntactic sub-tree spanning from H(e1) to H(e2). For the \ufb01rst example in Table 1, e1 = {\u2018Oral\u2019, \u2018communication\u2019} , e2 = {\u2018indices\u2019}, H(e1) = {\u2018communication\u2019}, T(e1, e2) = {\u2018communication\u2019, \u2018offer\u2019, \u2018indices\u2019}. The proposed model uses word embeddings of terms in T(e1, e2) as inputs; the output of the tree-LSTM cell on the root of the syntactic tree is used to predict one of the six relation types (y) using a softmax layer. A diagram of our tree LSTM network is shown in Figure 1. In order to overcome the limitation imposed by the small amount of training data available for this task, we modify the general architecture proposed in (Miwa and Bansal, 2016) in two crucial ways. First, rather than using the representation of entities as input, we only consider the syntactic head of each entity. This approach improves the generalizability of the model, as it prevents over\ufb01tting on very speci\ufb01c entities in the corpus. For example, by reducing \u2018Bag-of-words methods\u2019 to \u2018methods\u2019 and \u2018segment order-sensitive models\u2019 to \u2018models\u2019, the model is able to recognize the COMRelation (abbr.) Example USAGE (U) Oral communication may offer additional indices... MODEL-FEATURE (M-F) We look at the intelligibility of MT output... PART WHOLE (P-W) As the operational semantics of natural language applications improve... COMPARE (C) Bag-of-words methods are shown to be equivalent to segment order-sensitive methods in terms of... RESULT (R) We \ufb01nd that interpolation methods improve the performance... TOPIC (T) A formal analysis for a large class of words called alternative markers... Table 1: Example relations for each type. 
Entities are underlined, and all relations are from the \ufb01rst entity to the second entity (non-reversed). PARE relation between these two entities (see Table 1). Second, we experimented with augmenting each term representation with the following features: \u2022 Dependency labels (DEP): we append to each term embedding the label representing the dependency between the term and its parent. \u2022 PoS tags (POS): the part-of-speech tag for each term is append to its embedding. \u2022 Entity length (ENTLEN): we concatenate the number of tokens in e1 and e2 to embeddings representation of heads H(e1) to H(e2). For terms that are not entity heads, the entity length feature is replaced by \u20180\u2019. \u2022 Height: the height of each term in the syntactic subtree connecting two entities. 3 Experimental Setup SemEval 2018 Task 7 focuses on relation extraction, assuming a gold set of entities. This allows participants to focus on speci\ufb01c issues related to relation extraction with a rich set of semantic relations. These include relations for USAGE, MODEL-FEATURE, PART WHOLE, COMPARE, RESULT, and TOPIC. Examples of each type of relation are given in Table 1. The shared task evaluates three separate subtasks (1.1, 1.2, and 2). We tuned and submitted \fDataset U M-F P-W C R T Subtask 1.1 Train 409 289 215 86 57 15 Valid. 74 37 19 9 15 3 Test 175 66 70 21 20 3 Subtask 1.2 Train 363 124 162 29 94 207 Valid. 107 51 34 12 29 36 Test 123 75 56 3 29 69 Table 2: Frequency of relation labels in train, validation, and test sets. See Table 1 for relation label abbreviations. Subtask 1.1 uses manual entity labels, and subtask 1.2 uses automatic entity labels (which may be noisy). our system for subtasks 1.1 and 1.2. In both of these subtasks, participants are given scienti\ufb01c abstracts with entities and candidate relation pairs, and are asked to determine the relation label of each pair. For subtask 1.1, both the entities and relations are manually annotated. For subtask 1.2, the entities are automatically generated using the procedure described in G\u00b4 abor et al. (2016). This procedure introduces noise, but represents a more realistic evaluation environment than subtask 1.1. In both cases, relations and gold labels are produced by human annotators. All abstracts are from the ACL Anthology Reference Corpus (Bird et al., 2008). We randomly select 50 texts from the training datasets for validation of our system. We provide a summary of the datasets for training, validation, and testing in Table 2. Notice how the proportions of each relation label vary considerably among the datasets. We experiment with two sets of word embeddings: Wiki News and arXiv. The Wiki News embeddings bene\ufb01t from the large amount of general language, and the arXiv embeddings capture specialized domain language. The Wiki News embeddings are pretrained using fastText with a dimension of 300 (Mikolov et al., 2018). The arXiv embeddings are trained on a corpus of text from the cs section of arXiv.org2 using a window of 8 (to capture adequate term context) and a dimension of 100 (Cohan et al., 2018). 
A third variation of the embeddings simply concatenates the Wiki News and arXiv embeddings, yielding a dimension of 400; for words that appear in only one of 2https://github.com/acohan/ long-summarization System F1 Rank Subtask 1.1 (28 teams) Our submission 60.9 9 Median team 45.5 Mean team 37.1 Subtask 1.2 (20 teams) Our submission 78.9 5 Median team 70.3 Mean team 54.0 Table 3: Performance result comparison to other task participants for subtasks 1.1 and 1.2. U M-F P-W C R T Predicted label U M-F P-W C R T True label 143 15 9 6 1 1 13 38 10 0 3 1 15 8 44 3 0 0 1 1 2 14 3 0 4 2 0 1 13 0 2 0 0 0 0 1 Figure 2: Confusion matrix for subtask 1.1. the two embedding sources, the available embeddings are concatenated with a vector of appropriate size sampled from N(0, 10\u22128). For our of\ufb01cial SemEval submission, we train our model using the concatenated embeddings and one-hot encoded dependency label features. We use a hidden layer of 200 nodes, a 0.2 dropout rate, and a training batch size of 16. Syntactic trees were extracted using SpaCy3, and the neural model was implemented using MxNet4. The of\ufb01cial evaluation metric is the macroaveraged F1 score of all relation labels. For additional analysis, we use the macro precision and recall, and the F1 score for each relation label. 4 Results and Discussion In Table 3, we provide our of\ufb01cial SemEval results in the context of other task participants. In both subtasks, we ranked above both the median and mean team scores, treating the top-ranking approach for each team as the team\u2019s score. For Subtask 1.1, we ranked 9 out of 28, and for Subtask 1.2, we ranked 5 out of 20. This indicates 3https://spacy.io/ 4https://mxnet.incubator.apache.org/ \fOverall F1 by label Features P R F1 U M-F P-W C R T Subtask 1.1 (no features) 56.9 64.1 59.5 81.4 51.5 59.9 57.8 61.9 44.4 DEP 53.5 54.1 53.6 79.1 55.5 58.2 63.8 64.9 0.0 DEP + POS 60.1 59.1 59.5 79.9 57.1 58.5 68.3 60.0 33.3 DEP + POS + EntLen 59.4 64.1 60.9 80.0 59.0 56.9 58.3 61.1 50.0 DEP + POS + EntLen + Height 52.1 53.3 52.4 79.2 57.4 62.2 56.0 59.5 0.0 Subtask 1.2 (no features) 74.2 78.9 75.4 80.0 65.6 72.6 57.1 80.0 97.1 DEP 76.4 78.5 76.4 79.2 67.2 73.0 66.7 79.4 93.1 DEP + POS 75.5 80.3 77.3 82.0 73.9 73.6 57.1 80.0 97.1 DEP + POS + EntLen 78.2 79.7 78.0 81.9 69.3 70.5 66.7 82.5 97.1 DEP + POS + EntLen + Height 73.0 78.7 74.8 79.5 70.7 70.3 57.1 74.3 97.1 Table 4: Feature ablation results for subtasks 1.1 and 1.2. DEP are dependency labels, POS are part of speech labels, EntLen is is the length of the input entities, and Height is the height of the entities in the dependency tree. In both subtasks 1.1 and 1.2, the combination of dependency labels, parts of speech, and entity lengths yield the best performance in terms of overall F1 score. Embeddings P R F1 Subtask 1.1 Wiki News 59.2 57.3 57.6 arXiv 58.5 55.1 56.4 Wiki News + arXiv 59.4 64.1 60.9 Subtask 1.2 Wiki News 73.1 76.2 72.7 arXiv 65.4 67.4 65.9 Wiki News + arXiv 78.2 79.7 78.0 Table 5: Performance comparison for subtasks 1.1 and 1.2 when using Wiki News and arXiv embeddings. The concatenated embeddings outperform the individual methods. that our approach is generally more tolerant to the noisy entities given in Subtask 1.2 than most other approaches. Figure 2 is a confusion matrix for the of\ufb01cial submission for subtask 1.1. The three most frequent labels in the training data (USAGE, MODEL-FEATURE, and PART WHOLE) are also the most frequently confused relation labels. 
This behavior can be partially attributed to the class imbalance. In Table 4, we examine the effects of various feature combinations on the model. Speci\ufb01cally, we check the macro averaged precision, recall, and F1 scores for both subtask 1.1 and 1.2 with various sets of features on the test set. Of the combinations we investigated, including the dependency labels, part of speech tags, and the token length of entities yielded the best results in terms of overall F1 score for both subtasks. The results by individual relation label are more mixed, with the overall best combination simply yielding better performance on average, not on each label individually. Interestingly, the entity height feature reduces performance, perhaps indicating that it is easy to over\ufb01t the model using this feature. Table 5 examines the effect of the choice of word embeddings on performance. In both subtasks, concatenating the Wiki News and arXiv embeddings yields better performance than using a single type of embedding. This suggests that the two types of embeddings are useful in different cases; perhaps Wiki News better captures the general language linking the entities, whereas the arXiv embeddings capture the specialized language of the entities themselves. 5" + }, + { + "url": "http://arxiv.org/abs/1707.00189v3", + "title": "Content-Based Weak Supervision for Ad-Hoc Re-Ranking", + "abstract": "One challenge with neural ranking is the need for a large amount of\nmanually-labeled relevance judgments for training. In contrast with prior work,\nwe examine the use of weak supervision sources for training that yield pseudo\nquery-document pairs that already exhibit relevance (e.g., newswire\nheadline-content pairs and encyclopedic heading-paragraph pairs). We also\npropose filtering techniques to eliminate training samples that are too far out\nof domain using two techniques: a heuristic-based approach and novel supervised\nfilter that re-purposes a neural ranker. Using several leading neural ranking\narchitectures and multiple weak supervision datasets, we show that these\nsources of training pairs are effective on their own (outperforming prior weak\nsupervision techniques), and that filtering can further improve performance.", + "authors": "Sean MacAvaney, Andrew Yates, Kai Hui, Ophir Frieder", + "published": "2017-07-01", + "updated": "2019-07-05", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.CL" + ], + "main_content": "INTRODUCTION A lack of manual training data is a perennial problem in information retrieval [18]. To enable training supervised rankers for new domains, we propose a weak supervision approach based on pairs of text to train neural ranking models and a \ufb01ltering technique to adapt the dataset to a given domain. Our approach eliminates the need for a query log or large amounts of manually-labeled in-domain relevance judgments to train neural rankers, and exhibits stronger and more varied positive relevance signals than prior weak supervision work (which relies on BM25 for these signals). Others have experimented with weak supervision for neural ranking (see Section 2.2). Our weak supervision approach di\ufb00ers from these approaches in a crucial way: we train neural rankers \u2217Work conducted while the author was at the Max Planck Institute for Informatics. 
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for pro\ufb01t or commercial advantage and that copies bear this notice and the full citation on the \ufb01rst page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior speci\ufb01c permission and/or a fee. Request permissions from permissions@acm.org. SIGIR \u201919, July 21\u201325, 2019, Paris, France \u00a9 2019 Association for Computing Machinery. ACM ISBN 978-1-4503-6172-9/19/07...$15.00 https://doi.org/10.1145/3331184.3331316 using datasets of text pairs that exhibit relevance, rather than using a heuristic to \ufb01nd pseudo-relevant documents for queries. For instance, the text pair from a newswire dataset consisting of an article\u2019s headline and its content exhibits an inherent sense of relevance because a headline often provides a concise representation of an article\u2019s content. To overcome possible domain di\ufb00erences between the training data and the target domain, we propose an approach to \ufb01lter the training data using a small set of queries (templates) from the target domain. We evaluate two \ufb01lters: an unsupervised heuristic and using the neural ranker itself as a discriminator. We evaluate our approaches by training several leading neural ranking architectures on two sources of weak supervision text pairs. We show that our methods can signi\ufb01cantly outperform various neural rankers when trained using a query log source (as proposed by [5]), the ranker when trained on a limited amount of manually-labeled in-domain data (as one would encounter in a new domain), and well-tuned conventional baselines. In summary, we (1) address existing shortcomings of weak supervision to train neural rankers by using training sources from text pairs, (2) address limitations related to domain di\ufb00erences when training rankers on these sources using novel \ufb01ltering techniques, and (3) demonstrate the e\ufb00ectiveness of our methods for ad-hoc retrieval when limited in-domain training data is available. Our code is public for validation and further comparisons.1 2 BACKGROUND AND RELATED WORK 2.1 Neural IR models Ad-hoc retrieval systems rank documents according to their relevance to a given query. A neural IR model (nir) aims to measure the interaction between a query-document pair (q, d) with a realvalue relevance score rel = nir(q,d). The model nir is trained to minimize pairwise loss between training triples consisting of a query q, relevant document d+, and non-relevant document d\u2212. Neural retrieval models can be categorized as semantic matching models(which create dense query/document representations) or as relevance matching models (which compare query and document terms directly, often through a query-document similarity matrix). We focus on relevance matching models because they generally show better performance than semantic matching models. We test our approach on three leading neural rankers: KNRM [16] uses Gaussian kernels applied to each individual similarity score and log-summed across the document dimension. A \ufb01nal dense learning-to-rank phase combines these features into a relevance score. 
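A compact sketch of the kernel pooling just described, using NumPy only; the kernel centres, the toy similarity matrix, and the reduction of the final learning-to-rank layer to a single dot product are illustrative choices rather than the exact KNRM configuration.

```python
import numpy as np

def knrm_features(sim: np.ndarray, mus: np.ndarray, sigma: float = 0.1) -> np.ndarray:
    """Kernel-pooled features for one query-document pair.
    sim: |q| x |d| matrix of query-term / document-term cosine similarities.
    Returns one feature per kernel: kernel scores summed over the document,
    log-transformed, then summed over query terms."""
    # Gaussian kernels applied to every similarity score: shape |q| x |d| x K
    k = np.exp(-(sim[..., None] - mus) ** 2 / (2 * sigma ** 2))
    per_query_term = np.log(k.sum(axis=1) + 1e-10)  # sum over document terms, then log
    return per_query_term.sum(axis=0)               # sum over query terms -> K features

mus = np.linspace(-0.9, 1.0, 11)         # kernel centres; the last (~1.0) captures exact matches
sim = np.random.uniform(-1, 1, (4, 60))  # toy 4-term query against a 60-term document
w = np.random.randn(len(mus))            # stand-in for the learned ranking layer
print(knrm_features(sim, mus) @ w)       # scalar relevance score
```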
1https://github.com/Georgetown-IR-Lab/neuir-weak-supervision \fConv-KNRM [4] is a variant of KNRM which applies convolution \ufb01lters of lengths 1\u20133 over word embeddings before building cross-matched (matching all kernel lengths with one another) similarity matrices. The rest of the ranking process is identical to KNRM. PACRR [8] uses square convolutional kernels over the similarity matrix to capture soft n-gram matches. k\u2212max pooling is applied to retain only the strongest signals for each query term, and signals are combined with a dense layer. 2.2 Weak supervision In IR, weak supervision uses pseudo-relevant information to train a ranking model in place of human judgments. Early work on weak supervision for IR focused on training learning-to-rank models [2], using web anchor text [1] and microblog hashtags [3] for weak supervision. More recently, Dehghani et al. [5] proposed a weak supervision approach that makes use of the AOL query log and BM25 results as a source of training data. Aside from limitations surrounding the availability of query logs, their approach su\ufb00ers from limitations of BM25 itself: it assumes that documents ranked higher by BM25 are more relevant to the query than documents ranked lower. Others have suggested using a similar approach, but using news headlines [9], also assuming relevance from BM25 rankings. Still others have employed a Generative Adversarial Network to build training samples [15], but this limits the generated data to the types of relevance found in the training samples, making it a complementary approach. In contrast, our approach uses freelyavailable text pairs that exhibit both a high quality and large size. 3 METHOD 3.1 Rankingand content-based sources Recall that pairwise training consists of a set of training triples, each consisting of a queryq, relevant documentd+, and non-relevant document d\u2212. We describe two sources of weak supervision training data that replace human-generated relevance judgments: rankingbased and content-based training sources. Ranking-based training sources, \ufb01rst proposed by [5], are de\ufb01ned by a collection of texts T, a collection of documents D, and an unsupervised ranking function R(q,d) (e.g., BM25). Training triples are generated as follows. Each text is treated as a query q \u2208T. All documents in D are ranked using R(\u00b7), giving Dq. Relevant documents are sampled using a cuto\ufb00c+, and non-relevant documents are sampled using cuto\ufb00c\u2212, such that d+ \u2208Dq[0 : c+] and d\u2212\u2208Dq[c+ : c\u2212]. This source is referred to as ranking-based because the unsupervised ranker is the source of relevance.2 Content-based training sources are de\ufb01ned as a collection of text pairs P = {(a1,b1), (a2,b2), ..., (a|P |,b|P |)} and an unsupervised ranking function R(q,d) (e.g., BM25). The text pairs should be semantically related pairs of text, where the \ufb01rst element is similar to a query, and the second element is similar to a document in the target domain. For instance, they could be heading-content pairs of 2Our formulation of ranking-based sources is slightly di\ufb00erent than what was proposed by Dehghani et al. [5]: we use cuto\ufb00thresholds for positive and negative training samples, whereas they suggest using random pairs. Pilot studies we conducted showed that the threshold technique usually performs better. news articles (the headline describes the content of the article content). 
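To make the content-based construction concrete, the sketch below assembles (query, relevant, non-relevant) triples from heading-content pairs such as headline/article pairs, using an unsupervised ranker (e.g., BM25) to pick challenging negatives; the exact cutoffs and filtering conditions are spelled out in the next paragraph. The function name, the `score_fn` hook, `n_neg`, and the seed are our own illustrative choices and not part of the original method description.

```python
import random

def content_based_triples(pairs, score_fn, c_neg=100, n_neg=1, seed=13):
    """Build (query, positive, negative) training triples from text pairs.

    pairs:    list of (heading, body) tuples, e.g. headline/article pairs.
    score_fn: unsupervised ranker such as BM25; score_fn(query, docs) -> list of scores.
    A pair is kept only if its own body ranks inside the top c_neg for its heading;
    negatives are drawn from the other top-ranked bodies.
    """
    rng = random.Random(seed)
    bodies = [b for _, b in pairs]
    triples = []
    for q, d_pos in pairs:
        scores = score_fn(q, bodies)
        ranked = sorted(range(len(bodies)), key=lambda i: -scores[i])[:c_neg]
        top_bodies = [bodies[i] for i in ranked]
        if d_pos not in top_bodies:            # discard overtly non-relevant positives
            continue
        candidates = [d for d in top_bodies if d is not d_pos]
        for d_neg in rng.sample(candidates, min(n_neg, len(candidates))):
            triples.append((q, d_pos, d_neg))
    return triples
```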
For a given text pair, a query and relevant document are selected (q,d+) \u2208P. The non-relevant document is selected from the collection of documents in B = {b1,b2, ...,b|P |}. We employ R(\u00b7) to select challenging negative samples from Bq. A negative cuto\ufb00c\u2212 is employed, yielding negative document d\u2212\u2208Bq[0 : c\u2212] \u2212{d+}. We discard positive samples where d+ is not within this range to eliminate overtly non-relevant documents. This approach can yield documents relevant to q, but we assert that d+ is more relevant. Althoughranking-based and content-based training sources bear some similarities, important di\ufb00erences remain. Content-based sources use text pairs as a source of positive relevance, whereas rankingbased sources use the unsupervised ranking. Furthermore, contentbased sources use documents from the pair\u2019s domain, not the target domain. We hypothesize that the enhanced notion of relevance that content-based sources gain from text pairs will improve ranking performance across domains, and show this in Section 4. 3.2 Filter framework We propose a \ufb01ltering framework to overcome domain mismatch that can exist between data found in a weak supervision training source and data found in the target dataset. The framework consists of a \ufb01lter function FD(q,d) that determines the suitability of a given weak supervision query-document pair (q,d) to the domain D. All relevant training pairs (q,d+) \u2208S for a weak supervision source S are ranked using FD(q,d+) and the cmax maximum pairs are chosen: SD = maxcmax (q,d+)\u2208S FD(q,d+). To tune FD(\u00b7) to domain D, a set of template pairs from the target domain are employed. The set of pairsTD is assumed to be relevant in the given domain.3 We assert that these \ufb01lters are easy to design and can have broad coverage of ranking architectures. We present two implementations of the \ufb01lter framework: the kmax \ufb01lter, and the Discriminator \ufb01lter. k-Maximum Similarity (kmax) \ufb01lter. This heuristic-based \ufb01lter consists of two components: a representation function rep(q,d) and a distance function dist(r1,r2). The representation function captures some matching signal between query q and document d as a vector. Since many neural ranking models consider similarity scores between terms in the query and document to perform soft term matching [4, 7, 8, 16], this \ufb01lter selects the k maximum cosine similarity scores between the word vectors of each query term and all terms in the document: maxk dj \u2208d sim(qi,dj) : \u2200qi \u2208q. Since neural models can capture local patterns (e.g., n-grams), we use an aligned mean square error. The aligned MSE iterates over possible con\ufb01gurations of elements in the representation by shifting the position to \ufb01nd the alignment that yields the smallest distance. In other words, it represents the minimum mean squared error given all rotated con\ufb01gurations of the query. Based on the shift operation and given two interaction representation matricesr1 and r2, the aligned distkmax (r1,r2) is de\ufb01ned as the minimum distance when shifting r1 for s \u2208[1, |r1|). More formally: distkmax(r1,r2) = min|r1 | s=1 MSE\u0000shif(r1,s),r2 \u0001. 3Templates do not require human judgments. We use sample queries and an unsupervised ranker to generate TD . Manual judgments can be used when available. 
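The two ingredients of the kmax filter, the k-maximum-similarity representation and the aligned MSE distance, might be implemented roughly as follows. This is a sketch under stated assumptions: it takes pre-computed word vectors, assumes the two representations being compared have the same shape (e.g., queries padded or truncated to a fixed length), and uses k=2 as in the 2max filter evaluated later.

```python
import numpy as np

def rep_kmax(q_vecs, d_vecs, k=2):
    """k maximum cosine similarities between each query term and all document terms."""
    qn = q_vecs / np.linalg.norm(q_vecs, axis=1, keepdims=True)
    dn = d_vecs / np.linalg.norm(d_vecs, axis=1, keepdims=True)
    sims = qn @ dn.T                          # |q| x |d| similarity matrix
    topk = np.sort(sims, axis=1)[:, -k:]      # k best matches per query term
    return topk[:, ::-1]                      # |q| x k, descending

def dist_kmax(r1, r2):
    """Aligned MSE: minimum MSE over cyclic shifts of r1 along the query dimension."""
    return min(np.mean((np.roll(r1, s, axis=0) - r2) ** 2) for s in range(len(r1)))
```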
\fUsing these two functions, the \ufb01lter is simply de\ufb01ned as the minimum distance between the representations of it and any template pair from the target domain: FD(q,d) = min (q\u2032,d\u2032)\u2208TD dist(rep(q,d),rep(q\u2032,d\u2032)) (1) Discriminator \ufb01lter. A second approach to interaction \ufb01ltering is to use the ranking architecture R itself. Rather than training R to distinguish di\ufb00erent degrees of relevance, here we use R to train a model to distinguish between samples found in the weak supervision source andTD. This technique employs the same pairwise loss approach used for relevance training and is akin to the discriminator found in generative adversarial networks. Pairs are sampled uniformly from both templates and the weak supervision source. Once RD is trained, all weak supervision training samples are ranked with this model acting as FD(\u00b7) = RD(\u00b7). The intuition behind this approach is that the model should learn characteristics that distinguish in-domain pairs from out-ofdomain pairs, but it will have di\ufb03culty distinguishing between cases where the two are similar. One advantage of this approach is that it allows for training an interaction \ufb01lter for any arbitrary ranking architecture, although it requires a su\ufb03ciently largeTD to avoid over\ufb01tting. 4 EVALUATION 4.1 Experimental setup Training sources. We use the following four sources of training data to verify the e\ufb00ectiveness of our methods: Query Log (AOL, ranking-based, 100k queries). This source uses the AOL query log [12] as the basis for a ranking-based source, following the approach of [5].4 We retrieve ClueWeb09 documents for each query using the Indri5 query likelihood (QL) model. We \ufb01xc+ = 1 andc\u2212= 10 due to the expense of sampling documents from ClueWeb. Newswire (NYT, content-based, 1.8m pairs). We use the New York Times corpus [13] as a content-based source, using headlines as pseudo queries and the corresponding content as pseudo relevant documents. We use BM25 to select the negative articles, retaining top c\u2212= 100 articles for individual headlines. Wikipedia (Wiki, content-based, 1.1m pairs). Wikipedia article heading hierarchies and their corresponding paragraphs have been employed as a training set for the Trec Complex Answer Retrieval (CAR) task [10, 11]. We use these pairs as a content-based source, assuming that the hierarchy of headings is a relevant query for the paragraphs under the given heading. Heading-paragraph pairs from train fold 1 of the Trec CAR dataset [6] (v1.5) are used. We generate negative heading-paragraph pairs for each heading using BM25 (c\u2212= 100). Manual relevance judgments(WT10). We compare the rankingbased and content-based sources with a data source that consists of relevance judgments generated by human assessors. In 4 Distinct non-navigational queries from the AOL query log from March 1, 2006 to May 31, 2006 are selected. We randomly sample 100k of queries with length of at least 4. While Dehghani et al. [5] used a larger number of queries to train their model, the state-of-the-art relevance matching models we evaluate do not learn term embeddings (as [5] does) and thus converge with fewer than 100k training samples. 5https://www.lemurproject.org/indri/ particular, manual judgments from 2010 Trec Web Track adhoc task (WT10) are employed, which includes 25k manual relevance judgments (5.2k relevant) for 50 queries (topics + descriptions, in line with [7, 8]). 
This setting represents a new target domain, with limited (yet still substantial) manually-labeled data. Training neural IR models. We test our method using several state-of-the-art neural IR models (introduced in Section 2.1): PACRR [8], Conv-KNRM [4], and KNRM [16].6 We use the model architectures and hyper-parameters (e.g., kernel sizes) from the best-performing con\ufb01gurations presented in the original papers for all models. All models are trained using pairwise loss for 200 iterations with 512 training samples each iteration. We use Web Track 2011 (WT11) manual relevance judgments as validation data to select the best iteration via nDCG@20. This acts as a way of \ufb01netuning the model to the particular domain, and is the only place that manual relevance judgments are used during the weak supervision training process. At test time, we re-rank the top 100 Indri QL results for each query. Interaction \ufb01lters. We use the 2-maximum and discriminator \ufb01lters for each ranking architecture to evaluate the e\ufb00ectiveness of the interaction \ufb01lters. We use queries from the target domain (Trec Web Track 2009\u201314) to generate the template pair set for the target domain TD. To generate pairs for TD, the top 20 results from query likelihood (QL) for individual queries on ClueWeb09 and ClueWeb127 are used to construct query-document pairs. Note that this approach makes no use of manual relevance judgments because only query-document pairs from the QL search results are used (without regard for relevance). We do not use query-document pairs from the target year to avoid any latent query signals from the test set. The supervised discriminator \ufb01lter is validated using a held-out set of 1000 pairs. To prevent over\ufb01tting the training data, we reduce the convolutional \ufb01lter sizes of PACRR and ConvKNRM to 4 and 32, respectively. We tune cmax with the validation dataset (WT11) for each model (100k to 900k, 100k intervals). Baselines and benchmarks. As baselines, we use the AOL ranking-based source as a weakly supervised baseline [5], WT10 as a manual relevance judgment baseline, and BM25 as an unsupervised baseline. The two supervised baselines are trained using the same conditions as our approach, and the BM25 baselines is tuned on each testing set with Anserini [17], representing the best-case performance of BM25.8 We measure the performance of the models using the Trec Web Track 2012\u20132014 (WT12\u201314) queries (topics + descriptions) and manual relevance judgments. These cover two target collections: ClueWeb09 and ClueWeb12. Akin to [5], the trained models are used to re-rank the top 100 results from a querylikelihood model (QL, Indri [14] version). Following the Trec Web Track, we use nDCG@20 and ERR@20 for evaluation. 4.2 Results In Table 1, we present the performance of the rankers when trained using content-based sources without \ufb01ltering. In terms of absolute 6By using these stat-of-the-art architectures, we are using stronger baselines than those used in [5, 9]. 7https://lemurproject.org/clueweb09.php, https://lemurproject.org/clueweb12.php 8Grid search: b \u2208[0.05, 1] (0.05 interval), and k1 \u2208[0.2, 4] (0.2 interval) \fTable 1: Ranking performance when trained using contentbased sources (NYT and Wiki). Signi\ufb01cant di\ufb00erences compared to the baselines ([B]M25, [W]T10, [A]OL) are indicated with \u2191and \u2193(paired t-test, p < 0.05). 
nDCG@20
Model | Training | WT12 | WT13 | WT14
BM25 (tuned w/ [17]) | - | 0.1087 | 0.2176 | 0.2646
PACRR | WT10 | B↑ 0.1628 | 0.2513 | 0.2676
PACRR | AOL | 0.1910 | 0.2608 | 0.2802
PACRR | NYT | W↑B↑ 0.2135 | A↑W↑B↑ 0.2919 | W↑ 0.3016
PACRR | Wiki | W↑B↑ 0.1955 | A↑B↑ 0.2881 | W↑ 0.3002
Conv-KNRM | WT10 | B↑ 0.1580 | 0.2398 | B↑ 0.3197
Conv-KNRM | AOL | 0.1498 | 0.2155 | 0.2889
Conv-KNRM | NYT | A↑B↑ 0.1792 | A↑W↑B↑ 0.2904 | B↑ 0.3215
Conv-KNRM | Wiki | 0.1536 | A↑ 0.2680 | B↑ 0.3206
KNRM | WT10 | B↑ 0.1764 | 0.2671 | 0.2961
KNRM | AOL | B↑ 0.1782 | 0.2648 | 0.2998
KNRM | NYT | W↓ 0.1455 | A↓ 0.2340 | 0.2865
KNRM | Wiki | A↓W↓ 0.1417 | 0.2409 | 0.2959
score, we observe that the two n-gram models (PACRR and Conv-KNRM) always perform better when trained on content-based sources than when trained on the limited sample of in-domain data. When trained on NYT, PACRR performs significantly better. KNRM performs worse when trained using the content-based sources, sometimes significantly. These results suggest that these content-based training sources contain relevance signals where n-grams are useful, and it is valuable for these models to see a wide variety of n-gram relevance signals when training. The n-gram models also often perform significantly better than the ranking-based AOL query log baseline. This makes sense because BM25's rankings do not consider term position, and thus cannot capture this important indicator of relevance. This provides further evidence that content-based sources do a better job providing samples that include various notions of relevance than ranking-based sources. When comparing the performance of the content-based training sources, we observe that the NYT source usually performs better than Wiki. We suspect that this is due to the web domain being more similar to the newswire domain than the complex answer retrieval domain. For instance, the document lengths of news articles are more similar to web documents, and precise term matches are less common in the complex answer retrieval domain [10]. We present filtering performance on NYT and Wiki for each ranking architecture in Table 2. In terms of absolute score, the filters almost always improve the content-based data sources, and in many cases this difference is statistically significant. The one exception is for Conv-KNRM on NYT. One possible explanation is that the filters caused the training data to become too homogeneous, reducing the ranker's ability to generalize. We suspect that Conv-KNRM is particularly susceptible to this problem because of language-dependent convolutional filters; the other two models rely only on term similarity scores. We note that Wiki tends to do better with the 2max filter, with significant improvements seen for Conv-KNRM and KNRM. In these models, the discriminator filter may be learning surface characteristics of the dataset, rather than more valuable notions of relevance.
We also note that c_max is an important (yet easy) hyper-parameter to tune, as the optimal value varies considerably between systems and datasets.
Table 2: Ranking performance using filtered NYT and Wiki. Significant improvements and reductions compared to the unfiltered dataset are marked with ↑ and ↓ (paired t-test, p < 0.05).
WebTrack 2012–14
Model | Training | c_max | nDCG@20 | ERR@20
PACRR | NYT | - | 0.2690 | 0.2136
PACRR | NYT w/ 2max | 200k | 0.2716 | 0.2195
PACRR | NYT w/ discriminator | 500k | ↑0.2875 | 0.2273
PACRR | Wiki | - | 0.2613 | 0.2038
PACRR | Wiki w/ 2max | 700k | 0.2568 | 0.2074
PACRR | Wiki w/ discriminator | 800k | 0.2680 | 0.2151
Conv-KNRM | NYT | - | 0.2637 | 0.2031
Conv-KNRM | NYT w/ 2max | 100k | ↓0.2338 | 0.2153
Conv-KNRM | NYT w/ discriminator | 800k | 0.2697 | 0.1937
Conv-KNRM | Wiki | - | 0.2474 | 0.1614
Conv-KNRM | Wiki w/ 2max | 400k | 0.2609 | ↑0.1828
Conv-KNRM | Wiki w/ discriminator | 700k | 0.2572 | 0.1753
KNRM | NYT | - | 0.2220 | 0.1536
KNRM | NYT w/ 2max | 100k | 0.2235 | ↑0.1828
KNRM | NYT w/ discriminator | 300k | 0.2274 | ↑0.1671
KNRM | Wiki | - | 0.2262 | 0.1635
KNRM | Wiki w/ 2max | 600k | ↑0.2389 | ↑0.1916
KNRM | Wiki w/ discriminator | 700k | 0.2366 | 0.1740
5" + } + ], + "David D. Lewis": [ + { + "url": "http://arxiv.org/abs/2108.12746v1", + "title": "Certifying One-Phase Technology-Assisted Reviews", + "abstract": "Technology-assisted review (TAR) workflows based on iterative active learning\nare widely used in document review applications. Most stopping rules for\none-phase TAR workflows lack valid statistical guarantees, which has\ndiscouraged their use in some legal contexts. Drawing on the theory of quantile\nestimation, we provide the first broadly applicable and statistically valid\nsample-based stopping rules for one-phase TAR. We further show theoretically\nand empirically that overshooting a recall target, which has been treated as\ninnocuous or desirable in past evaluations of stopping rules, is a major source\nof excess cost in one-phase TAR workflows. Counterintuitively, incurring a\nlarger sampling cost to reduce excess recall leads to lower total cost in\nalmost all scenarios.", + "authors": "David D. Lewis, Eugene Yang, Ophir Frieder", + "published": "2021-08-29", + "updated": "2021-08-29", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.LG" + ], + "main_content": "INTRODUCTION Technology-assisted review (TAR) is the use of technological means to accelerate manual document review workflows. A prominent application is document review in legal cases, known as electronic discovery or eDiscovery [3], a multi-billion dollar industry.1 Another application area is systematic reviews of scientific literature [52], which have played a revolutionary role in empirical medicine [21] and other fields [15]. More generally, TAR is applicable to a range of high recall retrieval tasks [1, 9, 11, 12, 30, 47]. The TREC-COVID project was an emergency deployment of a TAR process early in the Covid-19 pandemic [36].
1 Global $12.9 Billion eDiscovery Market Forecast to 2025, https://prn.to/3upSeBC
Two categories of TAR workflows can be distinguished.
Two-phase TAR workflows (sometimes called culling workflows) are focused on iterative training of a text classifier by active learning [44] which is then used to select a subset of a collection for review [8, 32, 56]. A distinction is drawn between the training phase (Phase 1) and the review phase (Phase 2). While review of documents is done in both phases, most review effort occurs in Phase 2, after training is over. Two-phase reviews are preferred when per-document costs vary among review personnel [56]. In contrast, one-phase workflows do not distinguish between training and review, and are preferable when review costs are constant. Iterative training of models, using those models to prioritize documents for review, reviewing of the prioritized documents, and the feeding back of reviewed documents for training continues during the entire review. This is the structure of a classical relevance feedback workflow in information retrieval [37, 41] and, indeed, relevance feedback is widely used in one-phase TAR reviews [8]. Since TAR is used when it would be too expensive to review all documents [35], a stopping rule is necessary to decide when the review ends. However, one wants confidence, and ideally a certification by statistical guarantee, that a certain proportion of relevant documents have been found by the stopping point, i.e., that a recall target has been achieved [28]. A stopping rule can thus fail in one of two ways: failing to hit its recall target or incurring unacceptably high costs in doing so. Unfortunately, no statistically valid, generally applicable stopping rule for one-phase TAR has been available (Section 6). The lack of such certification rules has limited the adoption of one-phase TAR workflows. For instance, the US Department of Justice Antitrust Division's model agreement for use of supervised learning includes only two-phase culling workflows.2
2 https://www.justice.gov/file/1096096/download
In response to this need, we reconsider TAR stopping rules from the perspective of statistical quality control in manufacturing [16]. Our contributions are:
• A taxonomy of TAR stopping rules by application contexts in which they can be used
• A theoretical framework for understanding stopping a TAR review as a problem in quantile estimation
• The first two general purpose certification rules for one-phase TAR: the Quantile Point Estimate Threshold (QPET) rule and the Quantile Binomial Confidence Bound (QBCB) rule. Both can be used with any sample size and recall target. The latter also provides a confidence interval on recall at any specified confidence level.
• A theoretical and empirical demonstration that, for many TAR tasks, the counterintuitive key to reducing total TAR review cost is to incur more cost for sampling and reduce excess recall
We begin by proposing a taxonomy of TAR stopping rules and zeroing in on those with broad applicability (Section 2). We identify sequential bias as the key challenge to certification rules and apply the theory of quantile estimation to evade this bias (Section 3). This leads first to the QPET rule (Section 3) and then the QBCB rule (Section 4), whose properties we analyze.
We also examine previously proposed certification rules and find that only one narrowly applicable rule, Cormack and Grossman\u2019s Target rule [6], is statistically valid and indeed is a special case of the QBCB rule (Section 6.3). Finally, we demonstrate theoretically and empirically that minimizing sample size, as suggested by Cormack and Grossman, is almost always suboptimal from a total cost standpoint (Sections 5 and 8). 2 A TAXONOMY OF TAR STOPPING RULES Many TAR stopping rules that have been proposed would be unusable in most operational TAR contexts. In this section, we propose a taxonomy of stopping rules that clarifies their range of applicability. TAR evaluation conferences [17, 23\u201325, 38] have emphasized interventional stopping rules, i.e., rules that alter the method used to select documents for review. These rules include SCAL [8], Autostop Conservative [31], Autostop Optimistic [31], and a recent rule by Callaghan and M\u00fcller-Hansen Callaghan and M\u00fcller-Hansen [5]. By modifying the document selection process, these methods gather information that enables more accurate stopping (if not always valid statistical guarantees). While powerful, an interventional rule requires that all documents selected be chosen by a particular novel active learning algorithm. Most document reviews rely on commercial TAR software whose document selection algorithms cannot be modified by the user. Further, review managers often prefer (and may be legally required) to select documents not just by active learning, but also by Boolean text or metadata searches. Documents from other sources (related projects, direct attorney knowledge, or legal actions) may also need to be reviewed at arbitrary times. In contrast, we call a stopping rule a standoff rule if it can be applied to any TAR review, regardless of how documents are selected or in what order. Some rules allow arbitrary review combined with interventional portions: we call these hybrid rules. Standoff and hybrid rules usually require drawing a random sample for estimation purposes. Some of these rules assume that all review team decisions are correct (self-evaluation rules), while others assume only the decisions on the sample are correct (gold standard rules). A cross-cutting distinction for all rules is how strong a guarantee of quality they provide. Heuristic rules make a stopping decision based on general patterns observed for review processes, such as declining precision with increasing manual search effort or diminishing impact from new training data [6, 7, 43, 52, 55]. Heuristic stopping rules for one-phase TAR reviews are closely related to stopping rules for active learning in two-phase TAR reviews [32] and in generalization tasks [26, 44, 49]. Certification rules, on the other hand, use a random sample to provide a formal statistical guarantee that the stopping point has certain properties and/or provide a formal statistical estimate of effectiveness at the stopping point. If correctly designed, they give a degree of confidence that heuristic rules cannot. However, with one narrow exception, previously proposed certification rules fail to meet their purported statistical guarantees (Section 6). The consequences for such failures can be severe: parties in legal cases have been sanctioned for failing to meet stated targets on information retrieval effectiveness measures.3. 
In Sections 3 and 4 we provide the first standoff gold standard certification rules for one-phase TAR workflows that can be used with any sample size, recall target, and confidence level. 3 A POINT ESTIMATION RULE Certification rules condition stopping on some statistical guarantee of effectiveness of the TAR process. We consider here the usual collection-level binary contingency table measures, where the four outcomes TP (true positives), FP (false positives), FN (false negatives), and TN (true negatives) sum to the number of documents in the collection. For a one-phase TAR workflow, a positive prediction or detection corresponds to the document having been reviewed before the workflow is stopped. Recall, \ud835\udc47\ud835\udc43/(\ud835\udc47\ud835\udc43+ \ud835\udc39\ud835\udc41), is the most common measure on which the TAR processes are evaluated [28]. Other measures of interest in TAR are precision = \ud835\udc47\ud835\udc43/(\ud835\udc47\ud835\udc43+ \ud835\udc39\ud835\udc43) and elusion = \ud835\udc39\ud835\udc41/(\ud835\udc39\ud835\udc41+\ud835\udc47\ud835\udc41). Elusion (which one desires to be low) can be thought of as precision in the unreviewed documents and has mostly seen use in the law [40]. 3.1 Estimates and Estimators Effectiveness must be estimated. Estimates are produced by estimators, i.e., functions that define a random variable in terms of a random sample from a population [27]. An estimate is the value taken on by that random variable for a particular random sample. A point estimate is an estimate which is a scalar value. A common point estimator is the plug-in estimator, which replaces population values by the random variables for the corresponding sample values [14]. The plug-in estimator for recall, based on a simple random sample annotated for both category and detection status, is \ud835\udc4b/\ud835\udc4c where \ud835\udc4bis a random variable for the number of positive detected examples in the sample, and \ud835\udc4cis a random variable for the total number of positive examples in that sample. In other words, recall on the labeled random sample is used as the point estimate of recall in the population. The plug-in estimator of recall assumes that both class labels and detection statuses are known. If we are using the estimate in a stopping rule, however, we must stop to have an estimate, but must have an estimate to stop. The usual resolution of this dilemma in TAR is to compute, after each batch of documents is reviewed, what the estimated effectiveness would be if the TAR process were stopped at that point. The TAR process is stopped the first time one of these trial estimates exceeds the recall goal. We refer to this rule, widely used in practice, as the Point Estimate Threshold (PET) stopping rule. 3In Re: Domestic Airline Travel Antitrust Litigation, 1:15-mc-01404 (D.D.C. Sept. 13, 2018) \fCertifying One-Phase Technology-Assisted Reviews CIKM \u201921, November 1\u20135, 2021, Virtual Event, QLD, Australia 3.2 The PET Rule is Invalid Unfortunately, the PET rule is statistically biased: the expected value of effectiveness at the stopping point typically falls short of the claimed effectiveness level. We demonstrate this with an example that, while simple, exhibits the core phenomena at play. Consider a large collection, A, with an even number \ud835\udc41of documents, all of which are relevant. If we ran the TAR process until all documents were found, each document would be assigned a rank corresponding to the order in which it was found. 
Call that number the A-rank of the document. Suppose our recall goal is 0.5. Since all documents are relevant in our example, a TAR process achieves recall of 0.5 or more if it stops at or after A-rank N/2. Now draw a simple random sample of even size n from A and have it coded, prior to starting the TAR process, as a gold standard. On coding those documents we will find all are relevant, and so we have a simple random sample, D, of size r = n from the relevant documents in A. At first we do not know the A-rank of any document in sample D. However, as reviewers examine documents, they periodically find one of the n sample documents, at which point we know its A-rank. When the n/2'th document from the sample is found, the plug-in estimate of recall on the sample will be 0.5, and the PET rule would stop the TAR process. Since D is a random sample, the value of the n/2'th lowest A-rank in D, our stopping point, is a random variable D_(n/2), the n/2'th order statistic in sample D [2]. It has the following probability mass function:

f_{(n/2):n}(a) = \binom{a-1}{n/2-1} \binom{1}{1} \binom{N-a}{n/2} \Big/ \binom{N}{n},     (1)

corresponding to n draws without replacement from three bins (less than, equal to, and greater than a) [2, Chapter 3]. The expected value of D_(n/2) is (n/2)(N+1)/(n+1) [2, Chapter 3], and thus the expected recall of the PET rule in this case is (1/2)(n/(n+1))((N+1)/N). This is less than 0.5 for any n < N.
3.3 Quantile Point Estimation
The PET rule makes multiple tests on estimates and stops when the first one succeeds, thus biasing the estimate at the stopping point. This phenomenon is the focus of sequential analysis [13, 46, 51, 53], which is central to statistical quality control in manufacturing [16]. A key insight from sequential analysis is that conditioning stopping of a process on a random variable makes the stopping point itself a random variable. It is that latter random variable we need to have the desired statistical properties. Suppose we view the PET rule more abstractly, as a rule that stops a TAR process when we have found j items from a sample D of positive documents from A. The A-rank of the jth item will be the jth lowest A-rank in our sample. That value, d_j, is the realization for our sample of the random variable D_j, where D_1, ..., D_r are the order statistics for the sample [2]. For such a rule to be valid, given a recall goal t and positive sample size r, one strategy would be to choose j such that the worst case expected value of recall for any data set, averaged over the realizations d_j of D_j for that data set, is at least t. Computing this worst case expected value is nontrivial. Fortunately an alternative perspective is possible when recall is the measure of interest.
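The bias in the all-relevant example above is easy to check numerically. The short sketch below uses an illustrative collection size N and sample size n (our own made-up values), evaluates the closed-form expectation from Eq. (1)'s distribution, and confirms it by simulation: expected recall at the PET stopping point falls short of the 0.5 goal.

```python
import random

N, n = 100_000, 20        # illustrative collection size (all relevant) and sample size
j = n // 2                # PET stops once half of the sample documents have been found

# Closed form from above: E[D_(j)] = j * (N + 1) / (n + 1)
print("expected recall at PET stop:", j * (N + 1) / (n + 1) / N)      # ~0.476 < 0.5

# Monte Carlo check of the same quantity
random.seed(0)
trials = 20_000
mean_rank = sum(sorted(random.sample(range(1, N + 1), n))[j - 1]
                for _ in range(trials)) / trials
print("simulated recall at PET stop:", mean_rank / N)
```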
Figure 1: A quantile point estimation example. A TAR process, if carried to the end, assigns an A-rank to each document in a collection (A), and thus to each document in the positive subpopulation (B). In this example, A-rank 19072 is the 0.80 quantile in B, meaning that a TAR process stopping at that rank has 0.80 recall. D is a simple random sample of size 10 from B. Hyndman & Fan's Q7 point estimator of the 0.80-quantile is thus based on the order statistics D_8 and D_9. Here the realizations are 18307 and 22151, leading to 19076 (slightly higher than the true value) being the point estimate of the 0.8-quantile. (Diagram by Tony Dunnigan.)
A t-quantile is a population parameter such that a fraction t of the population is at or below that value. Formally, for 0 < t < 1, we define b_t to be the t-quantile of finite population B if b_t = inf{x : P[X ≤ x] ≥ t} for X drawn uniformly from the population [10, Chapter 7]. Let B be just the relevant documents within collection A, but in sorted order by their A-ranks. Let A-rank b_t be the t-quantile for B. Then the recall of a TAR process stopping at b_t is the smallest t′ such that t′ ≥ t and t′ is achievable for some stopping point. The quantile perspective links recall at a stopping point to a single population parameter. We then require an estimator that maps from order statistics D_j in the positive sample D to population quantiles within the positive subpopulation B, i.e., a quantile point estimator. Hyndman & Fan [22] review the properties of nine quantile point estimators, of which their Q7 is the default in the R statistical package.4
4 https://www.rdocumentation.org/packages/stats/versions/3.6.2/topics/quantile
Q7 is defined by letting h = (r−1)t + 1, j = ⌊h⌋, and using D_j + (h−j)(D_{j+1} − D_j) as the estimator for the t-quantile. Figure 1 diagrams the logic of quantile estimation using Q7. Using Q7 as our quantile point estimator, we define the Quantile Point Estimate Threshold (QPET) stopping rule as follows. Given a sample size r and recall goal t, we compute h = (r−1)t + 1 and j = ⌊h⌋. We need to stop at a point where we can apply our estimator D_j + (h−j)(D_{j+1} − D_j). If h is an integer, then j = h and we only need the value of D_j. We therefore stop at d_j, i.e., after finding the j'th positive sample document. If h is not an integer, then we need the values of both D_j and D_{j+1}, so we stop at d_{j+1}, i.e., after finding the (j+1)'th positive sample document. In either scenario, d_j + (h−j)(d_{j+1} − d_j) is our point estimate of the t-quantile at the stopping point (with the second term 0 if h is an integer).
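A minimal sketch of the QPET computation is below, assuming the positive sample's A-ranks are supplied in ascending order. The eight smallest sample values are invented for illustration; only D_8 = 18307 and D_9 = 22151 are taken from the Figure 1 caption. The comparison with numpy reflects the fact that numpy's default quantile interpolation is the same Q7 estimator.

```python
import math
import numpy as np

def qpet_stop(pos_sample_a_ranks, t):
    """Return (number of positive sample docs to find before stopping,
    Q7 point estimate of the t-quantile). Input must be sorted ascending."""
    r = len(pos_sample_a_ranks)
    h = (r - 1) * t + 1
    j = math.floor(h)                      # 1-based order statistic index
    d_j = pos_sample_a_ranks[j - 1]
    if h == j:                             # only D_j is needed: stop at d_j
        return j, float(d_j)
    d_j1 = pos_sample_a_ranks[j]           # D_{j+1} also needed: stop at d_{j+1}
    return j + 1, d_j + (h - j) * (d_j1 - d_j)

d = [310, 902, 1450, 2210, 4078, 5121, 8833, 18307, 22151, 25040]
print(qpet_stop(d, 0.80))                  # (9, ~19075.8), i.e. the 19076 of Figure 1
print(np.quantile(d, 0.80))                # same estimate: numpy's default is Q7
```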
If a point estimate of recall at the stopping point is required, we can use \ud835\udc61as that estimate. This point estimate of recall is conservative in the sense that recall at the \ud835\udc61-quantile is always at least \ud835\udc61, but can be higher. When does the QPET rule stop in comparison with the PET rule? Assume a nontrivial recall goal 0 < \ud835\udc61< 1 and positive sample size \ud835\udc5f. The PET rule stops after reviewing \ud835\udc51\ud835\udc57total examples, where \ud835\udc57is lowest value such that \ud835\udc57/\ud835\udc5f\u2265\ud835\udc61. If \ud835\udc5f\ud835\udc61is an integer, this will be when \ud835\udc57= \ud835\udc5f\ud835\udc61. In this case, the QPET rule has \ud835\udc57\u2032 = \u230a\u210e\u230b= \u230a(\ud835\udc5f\u22121)\ud835\udc61+ 1\u230b= \u230a\ud835\udc5f\ud835\udc61+ 1 \u2212\ud835\udc61\u230b= \ud835\udc5f\ud835\udc61+ \u230a1 \u2212\ud835\udc61\u230b= \ud835\udc5f\ud835\udc61= \ud835\udc57. Since \ud835\udc61= \ud835\udc57/\ud835\udc5f, we can also write \u210e= (\ud835\udc5f\u22121)\ud835\udc61+ 1 = \ud835\udc57(\ud835\udc5f\u22121)/\ud835\udc5f+ 1. Ignoring the trivial case \ud835\udc5f= 2, (\ud835\udc5f\u22121)/\ud835\udc5fis never an integer, and thus \u210eis never an integer. So both \ud835\udc37\ud835\udc57\u2032 and \ud835\udc37\ud835\udc57\u2032+1 are needed, and QPET stops at \ud835\udc51\ud835\udc57+1. If \ud835\udc5f\ud835\udc61is not an integer, then the PET rule stops at \ud835\udc51\ud835\udc57where \ud835\udc57= \u2308\ud835\udc5f\ud835\udc61\u2309= \u230a\ud835\udc5f\ud835\udc61\u230b+ 1. Suppose first that (\ud835\udc5f\u22121)\ud835\udc61is an integer. Then for the QPET rule \ud835\udc57\u2032 = \u230a(\ud835\udc5f\u22121)\ud835\udc61+ 1\u230b= (\ud835\udc5f\u22121)\ud835\udc61+ 1 = \ud835\udc5f\ud835\udc61+ (1 \u2212\ud835\udc61). Since 0 < (1 \u2212\ud835\udc61) < 1, this is either \u2308\ud835\udc5f\ud835\udc61\u2309or \u2308\ud835\udc5f\ud835\udc61\u2309+ 1. Conversely, if (\ud835\udc5f\u22121)\ud835\udc61 is not an integer, \ud835\udc57\u2032 = \u230a(\ud835\udc5f\u22121)\ud835\udc61+ 1\u230b= \u230a\ud835\udc5f\ud835\udc61+ (1 \u2212\ud835\udc61)\u230b, which is either \u230a\ud835\udc5f\ud835\udc61\u230bor \u230a\ud835\udc5f\ud835\udc61\u230b+ 1. In all cases then, the QPET rule requires finding at most one more positive sample document than the PET rule, and sometimes, no additional documents. 4 A CONFIDENCE INTERVAL RULE The QPET rule outputs a point estimate on the \ud835\udc61-quantile (i.e., the number of documents to review), and with a point estimate of recall equal to \ud835\udc61. However, we surely should feel less good about these estimates if they are based, say, on a sample of 10 positive examples than on a sample of 1000 positive examples. A confidence interval is an estimate that consists of a pair of scalar values with an associated confidence level, conventionally expressed as 1 \u2212\ud835\udefc[20, Chapter 2]. A confidence interval estimator specifies a closed interval [\ud835\udc4b,\ud835\udc4c], where \ud835\udc4band \ud835\udc4care random variables defined in terms of the sample. We say that such an estimator produces a 1 \u2212\ud835\udefcconfidence interval for a population value \ud835\udc67when the probability is at least 1 \u2212\ud835\udefcthat, over many draws of a random sample of the specified type, the sample-based realizations \ud835\udc65of \ud835\udc4b and \ud835\udc66of \ud835\udc4care such that \ud835\udc65\u2264\ud835\udc67\u2264\ud835\udc66. 
Confidence interval estimators are used in two ways in TAR. The first is to use a power analysis on the estimator as a guide to sample size [42]. A review manager will draw a random sample large enough to guarantee that any confidence interval estimate produced from it will have some property, such as a maximum margin of error of 0.05. The second use of confidence intervals is in reporting, i.e., in making a statistical claim based on a labeled random sample about the effectiveness achieved by a TAR process. If the only use of the random sample is in reporting, this is unproblematic. But if (as is common) the same sample is used to decide when to stop the review, the reported confidence interval estimate will have sequential bias [53]. 4.1 Quantile Confidence Intervals As with point estimates, the quantile perspective can rescue confidence intervals on recall from sequential bias. A quantile confidence interval is a confidence interval on a quantile [54, Chapter 4]. To avoid distributional assumptions about the TAR process, we can use a nonparametric quantile confidence interval estimator [10, Chapter 7]. This takes the form of a pair of order statistics, [\ud835\udc37\ud835\udc56, \ud835\udc37\ud835\udc57]. The estimator determines the values \ud835\udc56and \ud835\udc57based on the quantile level \ud835\udc61, sample size \ud835\udc5f, and confidence level 1 \u2212\ud835\udefc. It provides the guarantee that, with at least 1 \u2212\ud835\udefcprobability over draws of the random sample, the \ud835\udc61-quantile \ud835\udc4f\ud835\udc61falls within the sample-based realization [\ud835\udc51\ud835\udc56,\ud835\udc51\ud835\udc57]. If an estimator of this form is available, we can define the stopping point of a TAR review to be the value of \ud835\udc51\ud835\udc57for our positive random sample, and have 1 \u2212\ud835\udefcconfidence that the \ud835\udc61-quantile in B falls within [\ud835\udc51\ud835\udc56,\ud835\udc51\ud835\udc57]. By definition of the \ud835\udc61-quantile, we thus have 1 \u2212\ud835\udefcconfidence that stopping at \ud835\udc51\ud835\udc57gives a recall of at least \ud835\udc61. For many uses of confidence intervals we want estimators that make the interval narrow (the realization\ud835\udc51\ud835\udc57\u2212\ud835\udc51\ud835\udc56is likely to be small) and/or symmetric (i.e., \ud835\udc51\ud835\udc57\u2212\ud835\udc4f\ud835\udc61and \ud835\udc4f\ud835\udc61\u2212\ud835\udc51\ud835\udc56are likely to be similar, where \ud835\udc4f\ud835\udc61is some point estimate). For a stopping rule, however, the most important criterion is that \ud835\udc51\ud835\udc57is likely to be small, since this reduces the number of documents the TAR process must review before stopping. We can minimize the likely value of \ud835\udc51\ud835\udc57by using a nonparametric one-sided upper confidence interval (UCI) on a quantile [20, Chapter 5]. Such an interval has the form [\ud835\udc370, \ud835\udc37\ud835\udc57], where \ud835\udc370 is the 0th order statistic, i.e., the lowest logically possible value. For us this is \ud835\udc370 = 1 (the lowest A-rank); so the interval is [1, \ud835\udc37\ud835\udc57]. We refer to the pair as an 1-s UCI, and the upper end of the interval \ud835\udc37\ud835\udc57as a 1-s UCB (one-sided upper confidence bound). The estimator must choose \ud835\udc57such that the realization \ud835\udc51\ud835\udc57will be, with 1 \u2212\ud835\udefcprobability, a \ud835\udc61-quantile or higher. 
This is equivalent to requiring a probability 1 − α or higher that fewer than j elements of positive random sample D have A-rank less than the t-quantile. Suppose there are R positives in B, and that our sample of positives is of size r. Then our estimator should choose the smallest j such that:

\sum_{k=0}^{j-1} \binom{\lceil tR \rceil - 1}{k} \binom{R - \lceil tR \rceil + 1}{r-k} \Big/ \binom{R}{r} \;\ge\; 1 - \alpha     (2)

In a TAR setting we do not know R. However, if R is large relative to r, the binomial distribution is a good approximation to the above hypergeometric distribution [48, Chapter 3]. In this condition, we want the smallest j such that

\sum_{k=0}^{j-1} \binom{r}{k} t^k (1-t)^{r-k} \;\ge\; 1 - \alpha.     (3)

In fact, we can use the binomial approximation safely even when we are not confident that r is small relative to R. For values of t greater than 0.5, the fact that the binomial has larger variance than the hypergeometric means that the j chosen using the binomial approximation will never be less than the one chosen using the hypergeometric. Values of recall less than 0.5 are rarely of interest, but if needed we could find a similarly conservative value of j for such a t by running the summation from r downwards instead of 0 upwards. Based on the above analysis, we define the Quantile Binomial Confidence Bound (QBCB) stopping rule. Given a sample size r, recall target t, and confidence level 1 − α, it specifies stopping a one-phase TAR process when the jth positive sample document is found. Here j is the smallest integer such that [1, D_j] contains the t-quantile from the unknown population of positive examples with (1 − α) confidence, based on the binomial approximation.
4.2 The QBCB Rule and Recall Estimation
We observed that the recall goal t can be used as a conservative point estimate of recall at the QPET stopping point. By the same logic, t is a conservative point estimate of recall at the QBCB stopping point. If we prefer an interval estimate, we can use a 1 − α one-sided lower confidence interval (1-s LCI) (or one-sided lower confidence bound, 1-s LCB) estimator [20, Chapter 2]. This defines a pair [L, 1.0] where, with probability at least 1 − α over random samples of size r, the realization [ℓ, 1.0] contains a desired population value. Given the definition of t-quantile, we know that [t, 1.0] is a 1 − α 1-s LCI on recall at the QBCB stopping point. This interval estimate on recall may seem unsatisfying: it is identical regardless of sample size. However, this simply reflects the task we have set for the QBCB rule: stop as soon as one has confidence that a recall goal has been met.
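The search for j in Eq. (3) is a few lines with scipy; the function name and defaults below are ours, not from the rule's definition.

```python
from scipy.stats import binom

def qbcb_j(r, t, alpha=0.05):
    """Smallest j such that sum_{k<j} C(r,k) t^k (1-t)^(r-k) >= 1 - alpha (Eq. 3)."""
    for j in range(1, r + 1):
        if binom.cdf(j - 1, r, t) >= 1 - alpha:
            return j
    return None  # sample too small: no order statistic certifies the goal

print(qbcb_j(50, 0.80))   # 45, matching the r=50 row of Table 1
print(qbcb_j(13, 0.80))   # None: below r=14 the 0.80/95% goal cannot be certified
```

The same search reproduces the j values reported in Table 1.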
Larger sample sizes translate to earlier stopping, not a tighter 1-s LCI. 4.3 What to Expect from the QBCB Rule We can also compute more conventional estimates of recall at the QBCB stopping point. As long as those estimates depend only on \ud835\udc57(which is fixed as soon as we choose \ud835\udc5fand \ud835\udc61) and not on \ud835\udc51\ud835\udc57(the actual A-rank at which we stop), these estimates are not affected by sequential bias. These estimates give insight into the behavior of the QBCB rule. Table 1 shows the QBCB values of \ud835\udc57for recall goal 0.8, confidence level 95% (1 \u2212\ud835\udefc= 0.95), and selected sample sizes \ud835\udc5ffrom 14 to 457. (The choice of the sample sizes is discussed in Section 5.) Sample sizes 8 to 13 are also included. However, with these sample sizes, the only 95% 1-s UCI based on order statistics that includes the 0.8-quantile is the trivial interval [\ud835\udc370, \ud835\udc37\ud835\udc5f+1] = [1, \ud835\udc41] (using the convention that the \ud835\udc5f+ 1\u2019th order statistic for a sample of size \ud835\udc5fis the maximum population value). So for \ud835\udc5f< 14, the QBCB value of \ud835\udc57is \ud835\udc5f+ 1, and the rule does not provide meaningful stopping behavior. For these sizes we instead show \ud835\udc57\u2217= \ud835\udc57\u22121 = \ud835\udc5f, the largest non-trivial stopping point. We also show both the QBCB \ud835\udc57and \ud835\udc57\u2217= \ud835\udc57\u22121 for the case r=21, discussed in Section 5. Rows with QBCB \ud835\udc57\u2217= \ud835\udc57\u22121 values are indicated by \"*\". We show the value of three estimates of recall based solely on \ud835\udc57 or \ud835\udc57\u2217. The first is a 95% 1-s LCI, but for recall rather than for the \ud835\udc61-quantile. In particular, we use the Clopper-Pearson exact interval [4]. Second, we show the plug-in estimate \ud835\udc57/\ud835\udc5fdiscussed earlier for the PET rule. Finally, we show a 95% 1-s UCI on recall, again computed using the Clopper-Pearson method. For rows with the QBCB \ud835\udc57value, the lower end of the 95% 1-s LCI is always at or above 0.80, but fluctuates and is closer to 0.80 when the sample size is larger. This reflects the fact that the ClopperPearson LCI is based on the same binomial approximation used in the QBCB rule. The only difference is that the QBCB computation solves for integer \ud835\udc57based on fixed real \ud835\udc61, while the LCI computation solves for real \ud835\udc61based on fixed integer \ud835\udc57. The QBCB requirement that \ud835\udc57be an integer means that the \ud835\udc61at the lower end of the LCI is typically slightly more than 0.8, with the difference decreasing as \ud835\udc5f increases and more \ud835\udc57values are available. The plug-in point estimates (which are simply \ud835\udc57/\ud835\udc5for \ud835\udc57\u2217/\ud835\udc5fdepending on the row) for small sample sizes are much higher than 0.8. We can think of these as the estimated recall at which the naive PET rule would need to stop to achieve the same confidence bound as the QBCB rule, and reflects how uncertain recall estimates from small samples are. The last column shows a 95% 1-s UCI on recall at the QBCB stopping point. This estimate shows that, as sample sizes increase, we slowly become more confident that the QBCB stopping point will not have very high recall. Section 5 discusses why, counterintuitively, we should want such confidence. 
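The three conventional estimates in Table 1 can be recomputed from r and j alone. A sketch using the Clopper-Pearson (exact) one-sided bounds via the beta distribution follows; the function name is ours, and the printed values should approximately match the r = 50, j = 45 row.

```python
from scipy.stats import beta

def recall_estimates(r, j, alpha=0.05):
    """Clopper-Pearson one-sided bounds and plug-in estimate for j positives found out of r."""
    lcb = beta.ppf(alpha, j, r - j + 1) if j > 0 else 0.0        # 95% 1-s lower bound
    ucb = beta.ppf(1 - alpha, j + 1, r - j) if j < r else 1.0    # 95% 1-s upper bound
    return lcb, j / r, ucb

print(recall_estimates(50, 45))   # roughly (0.801, 0.900, 0.960), cf. the r=50 row
```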
5 SAMPLE SIZE AND RECALL
Past evaluations of stopping rules have often treated overshooting a recall goal as a lucky outcome [7]. By definition, however, a certification rule that stops with a recall higher than its goal has incurred extra costs. A TAR process that incurs high costs, particularly unpredictably high costs, while overshooting stakeholder requirements is not a success. Further, in some contexts exceeding a recall goal may be a negative outcome even if costs are ignored. A litigant that would like to produce 0% of responsive documents to an adversary, but has a legal obligation to produce 80% of responsive documents, is not happier if their legal service provider delivers 90% of responsive documents to the adversary. Recall is an expensive measure on which to overshoot a goal. As a TAR method pushes for high recall, relevant documents tend to be spaced increasingly farther apart. This is a basis of the common heuristic rule that stops review when batch precision drops below some minimum value. Larger intervals between relevant documents mean that each percentage point of recall achieved beyond the goal value comes at increasing marginal cost. Thus part of the benefit of using a larger random sample in a certification rule is lower recall. Indeed, jointly choosing an order statistic and a sample size so that both a UCB and an LCB are bounded is an old technique from statistical quality control [18]. For Table 1 we chose the sample sizes r ≥ 30 to be the smallest sizes for which the 95% 1-s UCB on recall is less than or equal to each of the values 0.99 to 0.86, decreasing by increments of 0.01. For instance, 158 is the smallest sample size such that the 95% 1-s UCB on recall is 0.90 or lower. For sample sizes of 14 and above we always have 95% confidence that we achieve the specified minimum recall, 0.80. What we get for larger sample sizes is a lower expected recall (point estimate) and, as shown by the 1-s UCI column, confidence that we will not stop with very high (and expensive) recall.
Table 1: Stopping points within a positive sample for the QBCB rule with a 0.80 recall goal and 95% confidence, plus three conventional estimates of recall at the stopping point.
r | j | 95% 1-s LCI | Plug-in | 95% 1-s UCI
8 | 8* | [0.688, 1.000] | 1.000 | [0.000, 1.000]
9 | 9* | [0.717, 1.000] | 1.000 | [0.000, 1.000]
10 | 10* | [0.741, 1.000] | 1.000 | [0.000, 1.000]
11 | 11* | [0.762, 1.000] | 1.000 | [0.000, 1.000]
12 | 12* | [0.779, 1.000] | 1.000 | [0.000, 1.000]
13 | 13* | [0.794, 1.000] | 1.000 | [0.000, 1.000]
14 | 14 | [0.807, 1.000] | 1.000 | [0.000, 1.000]
21 | 20* | [0.793, 1.000] | 0.952 | [0.000, 0.998]
21 | 21 | [0.867, 1.000] | 1.000 | [0.000, 1.000]
22 | 21 | [0.802, 1.000] | 0.955 | [0.000, 0.998]
29 | 28 | [0.847, 1.000] | 0.965 | [0.000, 0.998]
30 | 28 | [0.805, 1.000] | 0.933 | [0.000, 0.988]
31 | 29 | [0.811, 1.000] | 0.936 | [0.000, 0.988]
37 | 34 | [0.804, 1.000] | 0.912 | [0.000, 0.978]
44 | 40 | [0.804, 1.000] | 0.909 | [0.000, 0.968]
50 | 45 | [0.801, 1.000] | 0.900 | [0.000, 0.960]
63 | 56 | [0.801, 1.000] | 0.889 | [0.000, 0.947]
76 | 67 | [0.803, 1.000] | 0.882 | [0.000, 0.937]
88 | 77 | [0.802, 1.000] | 0.875 | [0.000, 0.928]
106 | 92 | [0.801, 1.000] | 0.868 | [0.000, 0.918]
129 | 111 | [0.800, 1.000] | 0.861 | [0.000, 0.908]
158 | 135 | [0.800, 1.000] | 0.854 | [0.000, 0.898]
198 | 168 | [0.800, 1.000] | 0.849 | [0.000, 0.889]
255 | 215 | [0.800, 1.000] | 0.843 | [0.000, 0.879]
332 | 278 | [0.800, 1.000] | 0.837 | [0.000, 0.870]
457 | 380 | [0.800, 1.000] | 0.832 | [0.000, 0.860]
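One way the tabulated sample sizes for r ≥ 30 might be reproduced is sketched below, assuming (as the UCI column values of 0.988 down to 0.860 suggest) that each size is the smallest whose QBCB stopping point has a 95% one-sided upper confidence bound on recall at or below a cap. The function names, search range, and starting point are ours.

```python
from scipy.stats import beta, binom

def qbcb_j(r, t=0.80, alpha=0.05):
    """Smallest j with binom.cdf(j - 1, r, t) >= 1 - alpha; None if no j <= r works."""
    for j in range(1, r + 1):
        if binom.cdf(j - 1, r, t) >= 1 - alpha:
            return j
    return None

def smallest_r_for_ucb_cap(cap, t=0.80, alpha=0.05, r_max=1000):
    """Smallest positive-sample size whose QBCB stopping point also has a
    95% one-sided upper confidence bound on recall at or below `cap`."""
    for r in range(2, r_max + 1):
        j = qbcb_j(r, t, alpha)
        if j is None:
            continue
        ucb = beta.ppf(1 - alpha, j + 1, r - j) if j < r else 1.0
        if ucb <= cap:
            return r, j, ucb
    return None

print(smallest_r_for_ucb_cap(0.90))   # expected: (158, 135, ~0.898), cf. Table 1
```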
For small sample sizes, an additional consideration arises. Consider the first sample size for which we are able to leave \ud835\udc5f\u2212\ud835\udc57sample examples undetected and still hit the desired LCB on recall. As the examples (21,20), (21,21), and (22, 21) show, \ud835\udc5f= 22 is the lowest sample size for which \ud835\udc5f\u2212\ud835\udc57= 1, i.e., we can leave one example undetected and still meet our criterion. For sample sizes from \ud835\udc5f= 23 through \ud835\udc5f= 29 the LCB, point estimate, and UCB of recall all increase steadily with increasing sample size, with the largest values at \ud835\udc5f= 29. This is a lose-lose situation: increasing sample size in this range both increases sampling costs and increases TAR costs (since we expect to stop at a higher recall). The pattern is not broken until \ud835\udc5f= 30, the lowest sample size for which we can leave two examples undetected, at which point the pattern starts again. This pattern results from the fact that a sample of size \ud835\udc5fonly provides \ud835\udc5f+ 1 possible stopping points if stopping is at an order statistic. Some combinations of sample size, confidence level, and population parameter (recall goal) inevitably poorly match the available choices. This problem decreases for larger sample sizes, since more order statistics are available. As in other estimation situations with small sample sizes, careful choice of sample size can reduce costs substantially [42, Chapter 2]. This phenomenon is also relevant to empirical studies of certification rules: poor choices of sample size will introduce unneeded variation in the relationship between sample size and achieved recall (and thus cost). In our tests in Section 8 we use the optimal sample sizes from Table 1. For the most part, however, larger samples reduce excess recall. How large a sample is appropriate depends on how much overshooting the recall goal costs. This depends on numerous details, including the difficulty of the classification problem, size of the collection, type of classifier, active learning approach, and batch size. In Section 8, we examine some typical situations. 6 PROPOSED CERTIFICATION RULES We previously discussed the PET rule and our proposed QPET and QBCB rules. In this section, we examine other certification stopping rules in common TAR practice or proposed in the scientific literature. 6.1 Repeated PET Rules Practitioners often carry out a one-phase TAR workflow until a heuristic rule suggests that they have found most relevant documents. A common hybrid stopping approach is to first do this, then draw a random sample from the unreviewed documents, and make some statistical test on this sample. If the test succeeds, review stops. If the test fails, the sample is recycled as training data, and the review is restarted until the heuristic again indicates stopping and sampling. This can be thought of as a repeated PET (RPET) rule: we repeatedly test against some threshold value until succeeding. One statistical test used is accept on zero [19, 33, 39], i.e., recycle unless no relevant documents are in the sample. More generally one can estimate elusion from the sample, and recycle unless elusion is low enough. A variant on this uses the elusion estimate to compute an ad hoc estimate of recall [50], and recycles unless estimated recall is high enough. 
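A deliberately stylized calculation makes the multiple-testing risk in such repeated tests concrete. The toy below holds the true elusion fixed at 5% across rounds and draws an independent accept-on-zero sample of 59 documents each round (59 being the classic zero-relevant sample size for a nominal 95%/5% guarantee). The pass rate of the repeated procedure is far above the nominal 5%. All numbers, and the assumption of independent samples with unchanging elusion, are ours, purely for illustration; they do not model any specific workflow discussed here.

```python
import random

def accept_on_zero(p_elusion, n, rng):
    """One accept-on-zero test: passes if a sample of n unreviewed docs has no relevant doc."""
    return all(rng.random() >= p_elusion for _ in range(n))

rng = random.Random(7)
p, n, rounds, trials = 0.05, 59, 10, 20_000

single = sum(accept_on_zero(p, n, rng) for _ in range(trials)) / trials
repeated = sum(any(accept_on_zero(p, n, rng) for _ in range(rounds))
               for _ in range(trials)) / trials

print(f"false-pass rate, one test: {single:.3f}")                    # ~0.05, the nominal level
print(f"false-pass rate, {rounds} repeated tests: {repeated:.3f}")   # ~0.39
```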
Regardless of the particular variant, all RPET approaches suffer from sequential bias induced by multiple testing: the process is more likely to stop when sampling fluctuation gives an over-optimistic estimate of effectiveness. Dimm [13] provides a detailed analysis of how accept on zero fails when used in an RPET rule.

6.2 The Countdown Rule Shemilt et al. discuss systematic review projects in which several stopping criteria were considered [45]. One is based on what they call the BIR (Baseline Inclusion Rate): simply the plug-in estimate p-hat = r/n of the proportion of relevant documents in the collection. They convert this to an estimate Nr/n of the number of relevant documents in the collection. They propose stopping the TAR process when the number of relevant documents found equals this value, or the budgeted time runs out. This is equivalent to using R_a n/(Nr) as an estimator for recall, where R_a is the number of relevant documents found so far, and stopping when estimated recall hits a recall target t, which for Shemilt et al. was 1.0. This stopping rule is known in e-discovery as the "countdown method" or "indirect method" (see https://www.courtlistener.com/docket/4259682/304/kleen-products-llc-vinternational-paper/). The method is seriously flawed. First, the countdown estimator can produce recall estimates greater than 1.0. Second, in those cases where the point estimate of the number of relevant documents in the population is an overestimate, the TAR process may reach the end of the collection without stopping. Finally, the countdown method does not take sampling variation into account, and so provides no statistical characterization of the actual recall achieved.

6.3 The Target Rule The Target rule [7] uses a simple random sample of 10 positive examples (the target set) and stops when the one-phase TAR process has found all of them. It would be viewed in our framework as implicitly computing a 1-s UCI [1, D_10] based on a positive sample of size 10, and stopping when the realization of D_10 is reached. Cormack and Grossman analyze the Target rule and conclude it achieves a recall of 0.70 with 95% confidence. However, their analysis uses the binomial approximation in an unnecessarily conservative way, by treating 0.3R/R = 0.3 as small. In fact, Table 1 shows that a target set of only 9 positive documents is sufficient to achieve a recall goal of 0.70 with 95% confidence, while their suggested target set of 10 positive documents achieves a recall goal slightly over 0.74. The Target rule satisfies (actually exceeds) its claimed statistical guarantee, but does not allow any flexibility in recall goal or confidence level. Further, as shown in Section 8, using the minimum possible positive sample size usually increases total review cost. Requiring that every positive sample document be found also means a single coding error would have large consequences.
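A small back-of-the-envelope check of the Target-rule numbers above, again assuming an exact one-sided bound rather than the binomial approximation:

```python
# If all m target-set positives must be found (j = r = m), the exact 95% one-sided
# lower bound on recall reduces to 0.05 ** (1 / m).
for m in (9, 10):
    print(m, round(0.05 ** (1 / m), 3))
# m=9  -> ~0.717, already enough for a 0.70 recall goal
# m=10 -> ~0.741, i.e., a guarantee slightly over 0.74 rather than 0.70
```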
7 EXPERIMENT: METHODS The correctness of the QPET and QBCB stopping rules is completely determined by the theory of quantile statistics, regardless of sample size. Our goal in empirical work here is not, therefore, to verify the correctness of the rules, but simply to provide a demonstration of how sample size and cost interact in perhaps counterintuitive ways. We worked with a random 20% subset of the RCV1-v2 [29] text categorization collection defined in a prior TAR study [56]. An advantage of RCV1-v2 over collections used in past TAR evaluations is the ability to explore a range of category difficulties and prevalences simultaneously. That study defined three levels of category prevalence and three levels of classification difficulty. For our demonstration, we selected the category closest to the median difficulty and prevalence in each of their nine bins, and the seed document with difficulty closest to the median for each category. Based on that seed document, iterative relevance feedback with a batch size of 200 was carried out until the collection was exhausted (805 iterations). Supervised learning used the logistic regression implementation in scikit-learn with L2 regularization and 1.0 as the penalty strength. The resulting batches were concatenated in order. When applying the QBCB rule we considered stopping points only at the end of each batch, so order within batches had no effect. For each category and each positive sample size value, we then generated 100 simple random samples constrained to have exactly that number of positive examples. We applied the QBCB rule with 95% confidence and a recall goal of 0.80 to those samples, found the stopping iteration, and computed actual recall and cost at that point. Sample sizes used were all those from Table 1 that allow the confidence level and recall goal to be met. We separated the review cost at a stopping point into four components for analysis purposes: the positive and negative documents in the random sample, and the positive and negative documents encountered during relevance feedback prior to the stopping point. We assume that, to avoid bias, the random sample is reviewed by different personnel than those who conduct the main review. Thus a document that appears both in the sample and during relevance feedback incurs its review cost twice. We discuss costs further in the next section.
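The simulation loop just described can be sketched as follows. This is a simplified approximation, not the authors' pipeline, and the handling of the degenerate first iteration (a single labeled seed document) is our own assumption.

```python
# Simplified sketch of the one-phase relevance-feedback simulation: starting from
# a seed document, train scikit-learn logistic regression (L2 penalty, C=1.0) on
# everything reviewed so far and review the 200 top-scoring unreviewed documents
# per iteration until the collection is exhausted. X is a document-feature
# matrix, y the 0/1 relevance labels.
import numpy as np
from sklearn.linear_model import LogisticRegression

def one_phase_relevance_feedback(X, y, seed_idx, batch_size=200):
    n_docs = X.shape[0]
    reviewed = [seed_idx]
    batch_order = []
    rng = np.random.default_rng(0)
    while len(reviewed) < n_docs:
        labels = y[reviewed]
        unreviewed = np.setdiff1d(np.arange(n_docs), reviewed)
        if labels.min() == labels.max():
            # Only one class reviewed so far (e.g., just the positive seed):
            # fall back to a random batch rather than fitting a classifier.
            scores = rng.random(n_docs)
        else:
            clf = LogisticRegression(penalty="l2", C=1.0, max_iter=1000)
            clf.fit(X[reviewed], labels)
            scores = clf.decision_function(X)
        batch = unreviewed[np.argsort(-scores[unreviewed])][:batch_size]
        batch_order.append(batch)
        reviewed.extend(batch.tolist())
    # Concatenating the batches in order yields the review ordering to which the
    # QBCB rule is applied, with stopping allowed only at batch boundaries.
    return batch_order
```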
8 EXPERIMENT: RESULTS AND ANALYSIS Figure 2 displays a boxplot of recall values at the stopping point for category E12 (from the common-hard bin) using 100 replications of each positive sample size and the QBCB rule. We see the usual decrease in variance with increased sample size. Only a few outliers are below a recall of 0.80 at any sample size. Measures of central tendency and of high recall consistently decrease.

Figure 2: Relationship between positive sample sizes (x-axis) and collection recall at the stopping point (y-axis) for the QBCB rule on category E12. 100 replications of each sample size are displayed using boxplot conventions: the box ranges from the 25% (q1) to 75% (q3) quartiles of recall over the 100 replications, the green and purple lines are the median and mean recall respectively, and whiskers extend to the lesser of the most extreme value observed or a distance of 1.5(q3 - q1) from the edges of the box. Outliers are presented as dots above and below the whiskers.

We would expect that reducing the occurrence of very high recall values would also reduce the occurrence of very high costs. Figure 3 explores this in detail. It is again a boxplot for 100 replications, but this time for all 9 of our exemplar TAR workflows and displaying total cost rather than recall.

Figure 3: Relationship between positive sample sizes (x-axis) and total review cost (including sampling) at the QBCB stopping point (y-axis). 100 replications of each sample size are used. Boxplot conventions are as in Figure 2.

Category ALG (Algeria) is a category where Cormack and Grossman's approach of using the minimum possible sample size leads to minimum cost. The TAR workflow reaches a recall of 1.0 after only 32 batches, so overshooting recall can never cost the review of more than 32 x 200 = 6,400 documents. Conversely, the prevalence of positive examples is very low (0.002437), so random positive examples are very expensive (each costing on average 410.34 negative examples). For most categories, however, investing in random samples large enough to get more than the minimum number of positives brings down the maximum cost over 100 replications substantially. For I22100 (medium frequency and medium difficulty), the maximum cost over 100 replications is a factor of 14 greater for a sample of 14 positives than for an optimal sample of 30 positives. The graphs also emphasize the importance of a power analysis in choosing small sample sizes. For most categories and cost statistics, 21 positives is actually worse than 14, while 22 is better. For categories E12 (Common-Hard) and I300003 (Common-Medium), using a larger than minimum sample size brings down not just the worst-case cost, but even the median cost. It is worth noting that while these categories are in our "Common" bin, their prevalences are 3% and 1% respectively, which is typical or even low for e-discovery projects, depending on collection strategies. Sample-based stopping will be even more practical for a project in, say, the 10% prevalence range.

8.1 Cost Dynamics Our focus in this study has been on one-phase TAR reviews. Was anything lost by not considering two-phase review? Figure 4 uses cost dynamics graphs [56] to provide a perspective on this question. For a single TAR run (i.e., one seed) on category E12 we plot the total cost at stopping points from 0 to 200 iterations for four sample sizes. In addition to the four costs accounted for in Figure 3, for iterations where stopping would give recall less than 0.80 we add the cost of an optimal second-phase review to reach 0.80 recall. That is, for each iteration we rank the unreviewed documents and assume that a top-down review through that ranking is carried out until 0.80 recall is reached. This is the minimum cost that a two-phase review reaching 0.80 recall would incur.

Figure 4: An example cost dynamics analysis. For category E12 (common-hard bin) we show the components of total cost for each of the first 200 iterations of the TAR process. For iterations where a recall of 0.80 has not been reached, we add a penalty equal to the cost of an optimal Phase 2 continuation reaching 0.80 recall. For each of four sample sizes we choose the worst stopping point across 100 replications. Its cost is shown as a horizontal dashed line, and the stopping point as a vertical dashed line. For sample size 14, worst-case stopping is at iteration 759 with a cost of 152,324.

The graphs immediately show that a one-phase review is optimal for this example: the minimum cost is at a point where no second-phase cost is incurred. This is typical for the setting of this paper, where the costs of all forms of review (sampling, phase 1, and phase 2 if present) are equal. One-phase review is typically not optimal when costs are unequal [56].
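The cost computation behind this analysis can be sketched as follows. The decomposition follows the description above, but the function and variable names are ours and the details are a simplification.

```python
# Hedged sketch of the Figure 4 cost accounting: total cost at a candidate
# stopping iteration = the random-sample review (positives and negatives,
# reviewed separately and therefore double-counted when a document also appears
# in the main review) + all documents reviewed by relevance feedback so far;
# if the recall goal has not yet been reached, add the cost of a top-down
# second-phase review through the ranking of unreviewed documents until it is.
import math

def phase2_cost(unreviewed_labels_in_rank_order, relevant_still_needed):
    found = cost = 0
    for label in unreviewed_labels_in_rank_order:   # 1 = relevant, 0 = not
        if found >= relevant_still_needed:
            break
        cost += 1
        found += label
    return cost

def total_cost(sample_pos, sample_neg, phase1_labels,
               unreviewed_labels_in_rank_order, total_relevant, recall_goal=0.80):
    found = sum(phase1_labels)
    needed = max(0, math.ceil(recall_goal * total_relevant) - found)
    return (sample_pos + sample_neg + len(phase1_labels)
            + phase2_cost(unreviewed_labels_in_rank_order, needed))
```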
The graphs also provide an interesting perspective on the role of sample size in minimizing cost. A horizontal dashed line shows the worst case total cost for QBCB over 100 replications for each sample size, while the vertical line shows the corresponding stopping point. As the sample size is increased, the stopping point comes closer to the minimum of the cost landscape, but the entire landscape is raised. The sample size that minimizes the worst case cost over our 100 replications, sample size 129 in this case, strikes a balance between the two effects.

9 FUTURE WORK The QBCB rule makes use only of the positive documents in a random sample. Exploiting both positive and negative documents using a hypergeometric distribution should modestly reduce sample size, if the unknown number of relevant documents can be addressed. The bounding technique proposed in Callaghan and Müller-Hansen [5] is one possible approach, as is treating the positive subpopulation size as a nuisance parameter [27, Chapter 6]. Other approaches to reducing sample size that could be applied are stratified sampling [48, Chapter 11] and multi-stage or sequential sampling [48, Chapter 13]. Dimm [13] has presented promising results on using multi-stage sampling to reduce costs in making a binary acceptance decision for a complete TAR production, and this approach can likely be adapted to stopping rules. Desirable extensions of QBCB would be to two-sided confidence intervals, to two-phase workflows [56], to multiple assessors who may disagree, to effectiveness measures other than recall, and to rolling collections (where the TAR workflow must be started before all documents have arrived). Techniques from survey research for repeated sampling may be applicable to the last [34]. Finally, the QPET and QBCB rules are based on viewing a one-phase TAR process as incrementally exposing a ranking of a collection. The rules may also be applied to actual rankings of collections produced by, for instance, search engines and text classifiers. In this scenario, QPET and QBCB become rules for avoiding sequential bias in choosing a sample-based cutoff that hits an estimated recall target.

10 SUMMARY The philosophy of statistical quality control is to accurately characterize and control a process [16]. We have shown in this study that previously proposed certification rules for one-phase TAR reviews are statistically invalid, inflexible, expensive, or all three. Drawing on the statistical theory of quantile estimation, we derive a new rule, the QBCB rule, that avoids sequential bias and allows controlling the risk of excessive costs. The rule applies to any one-phase TAR workflow, and can immediately be put into practice in real-world TAR environments. By using this rule, valid statistical guarantees of recall can be produced for the first time, while mitigating the risks of extreme cost.

ACKNOWLEDGMENTS We thank Lilith Bat-Leah and William Webber for their thoughtful feedback on drafts of this paper, and Tony Dunnigan for the Figure 1 diagram. All errors are the responsibility of the authors."
+ } + ], + "Kyle Lo": [ + { + "url": "http://arxiv.org/abs/2303.14334v2", + "title": "The Semantic Reader Project: Augmenting Scholarly Documents through AI-Powered Interactive Reading Interfaces", + "abstract": "Scholarly publications are key to the transfer of knowledge from scholars to\nothers. However, research papers are information-dense, and as the volume of\nthe scientific literature grows, the need for new technology to support the\nreading process grows. In contrast to the process of finding papers, which has\nbeen transformed by Internet technology, the experience of reading research\npapers has changed little in decades. The PDF format for sharing research\npapers is widely used due to its portability, but it has significant downsides\nincluding: static content, poor accessibility for low-vision readers, and\ndifficulty reading on mobile devices. This paper explores the question \"Can\nrecent advances in AI and HCI power intelligent, interactive, and accessible\nreading interfaces -- even for legacy PDFs?\" We describe the Semantic Reader\nProject, a collaborative effort across multiple institutions to explore\nautomatic creation of dynamic reading interfaces for research papers. Through\nthis project, we've developed ten research prototype interfaces and conducted\nusability studies with more than 300 participants and real-world users showing\nimproved reading experiences for scholars. We've also released a production\nreading interface for research papers that will incorporate the best features\nas they mature. We structure this paper around challenges scholars and the\npublic face when reading research papers -- Discovery, Efficiency,\nComprehension, Synthesis, and Accessibility -- and present an overview of our\nprogress and remaining open challenges.", + "authors": "Kyle Lo, Joseph Chee Chang, Andrew Head, Jonathan Bragg, Amy X. Zhang, Cassidy Trier, Chloe Anastasiades, Tal August, Russell Authur, Danielle Bragg, Erin Bransom, Isabel Cachola, Stefan Candra, Yoganand Chandrasekhar, Yen-Sung Chen, Evie Yu-Yen Cheng, Yvonne Chou, Doug Downey, Rob Evans, Raymond Fok, Fangzhou Hu, Regan Huff, Dongyeop Kang, Tae Soo Kim, Rodney Kinney, Aniket Kittur, Hyeonsu Kang, Egor Klevak, Bailey Kuehl, Michael Langan, Matt Latzke, Jaron Lochner, Kelsey MacMillan, Eric Marsh, Tyler Murray, Aakanksha Naik, Ngoc-Uyen Nguyen, Srishti Palani, Soya Park, Caroline Paulic, Napol Rachatasumrit, Smita Rao, Paul Sayre, Zejiang Shen, Pao Siangliulue, Luca Soldaini, Huy Tran, Madeleine van Zuylen, Lucy Lu Wang, Christopher Wilhelm, Caroline Wu, Jiangjiang Yang, Angele Zamarron, Marti A. Hearst, Daniel S. Weld", + "published": "2023-03-25", + "updated": "2023-04-23", + "primary_cat": "cs.HC", + "cats": [ + "cs.HC", + "cs.AI", + "cs.CL" + ], + "main_content": "INTRODUCTION The exponential growth of scientific publication [7, 8] and increasing interdisciplinary nature of scientific progress [71, 94] makes \u2217[....... Click... to....... open.... in..... the ............ Semantic.......... Reader] [ ............. Download..... the .......... version....... with ......... alt-text] it increasingly hard for scholars to keep up with the latest developments. Academic search engines, such as Google Scholar and Semantic Scholar help scholars discover research papers. Automated summarization for research papers [11] helps scholars triage between research papers. But when it comes to actually reading research papers, the process, based on a static PDF format, has remained largely unchanged for many decades. 
This is a problem because digesting technical research papers is difficult [2, 5]. In contrast, interactive and personalized documents have seen significant adoption in domains outside of academic research. For example, news websites such as the New York Times often present interactive articles with explorable visualizations that allow readers to understand complex data in a personalized way. E-readers, such as the Kindle, provide in-situ context to help readers better comprehend complex documents, showing inline term definitions and tracking occurrence of characters in a long novel. While prior work has focused on authoring support tools [21, 22, 54] that can reduce effort in creating interactive scientific documents [34, 38], they have not seen widespread adoption due to a lack of incentive structure [27]. Furthermore, millions of research papers are locked in the rigid and static PDF format, whose low-level syntax makes it extremely difficult for systems to access semantic content, augment interactivity, or even provide basic reading functionality for assistive tools like screen readers [6]. Fortunately, recent work on layout-aware document parsing [39, 86, 104] and large language models [4, 10, 83] show promise for accessing the content of PDF documents, and building systems that can better understand their semantics. This raises an exciting challenge: Can we create intelligent, interactive, and accessible reading interfaces for research papers, even atop existing PDFs? arXiv:2303.14334v2 [cs.HC] 23 Apr 2023 \fLo and Chang, et al. Figure 1: The Semantic Reader Project consists of research, product, and open science resources. The Semantic Reader product1 is a free interactive interface for research papers. It supports standard reading features (e.g., (A) table of contents), integration with Semantic Scholar (e.g., (B) save to library), useful augmentations atop the existing PDF (e.g., (C) in-situ Paper Cards when clicking inline citations), and integration with third-party features (e.g. (D) Hypothes.is5 for user highlights). We continues to integrate research features into this product as they mature (e.g., (E) Scim automated highlights \u00a73.1). To explore this question, we present the Semantic Reader Project, a broad collaborative effort across multiple non-profit, industry, and academic institutions to create interactive, intelligent reading interfaces for research papers. This project consists of three pillars: research, product, and open science resources. On the research front, the Semantic Reader Project combines AI and HCI research to design novel, AI-powered interactive reading interfaces that address a variety of user challenges faced by today\u2019s scholars. We developed research prototypes and conducted usability studies that clarify their benefits. On the product front, we are developing the Semantic Reader (Figure 1),1 a freely available reading interface that integrates features from research prototypes as they mature.2 Finally, we are developing and releasing open science resources that drive both the research and the product. These resources together open-source software,3,4 AI models [11, 20, 42, 86], and open datasets [50, 61] to support continued work in this area. In this paper, we focus on summarizing our efforts under the research pillar of the Semantic Reader Project. 
We structure our discussion around five broad challenges faced by readers of research papers: \u2022 Discovery: Following paper citations is one of the main strategies that scholars employ to discover additional relevant papers, but keeping track of the large numbers of citations can be overwhelming. In \u00a72, we explore ways to visually augment research papers to help readers prioritize their paper exploration during literature reviews. 1Semantic Reader: https://www.semanticscholar.org/product/semantic-reader 2Available for over 369K papers as of February 2023. 3For UI development: https://github.com/allenai/pdf-component-library 4For processing PDFs: https://github.com/allenai/papermage 5Hypothes.is: https://web.hypothes.ishttps://web.hypothes.is \u2022 Efficiency: The exponential growth of publication makes it difficult for scholars to keep up-to-date with the literature\u2014 scholars need to skim and read many papers while making sure they capture enough details in each. In \u00a73, we explore how support for non-linear reading can help readers consume research papers more efficiently. \u2022 Comprehension: Research papers can be dense and contain terms that are unfamiliar either because the author newly introduces them or assumes readers have prerequisite domain knowledge. In \u00a74, we explore how providing in-situ definitions and summaries can benefit readers especially when reading outside of their domains. \u2022 Synthesis: The sensemaking [84] process of synthesizing knowledge scattered across multiple papers is effortful but important. It allows scholars to make connections between prior work and identify opportunities for future research. In \u00a75, we explore how to help readers collect information from and make sense of many papers to gain better understanding of broad research topics. \u2022 Accessibility: Static PDFs are an ill-suited format for many reading interfaces. For example, PDFs are notoriously incompatible with screen readers, and represent a significant barrier for blind and low vision readers [6]. Furthermore, an increasing number of scholars access content on mobile devices, on which PDFs of papers are difficult to read. In \u00a76, we explore methods for converting legacy papers to more accessible representations. Specifically, we present ten research prototypes developed in the Semantic Reader Project\u2014CiteSee [15], CiteRead [82], Scim [28], Ocean [75], ScholarPhi [33], Paper Plain [1], Papeo [92], Threddy [43], \fThe Semantic Reader Project Figure 2: CiteSee [15] highlights citations to familiar papers (e.g., recently read or saved in their libraries) as well as unfamiliar papers to help readers avoid overlooking important citations when conducting literature reviews. Clicking on Expand surfaces additional context, such as citing sentences from recently read papers. Relatedly [73], and SciA11y [91, 97]\u2014and explain how they address these reading challenges. We conclude by discussing ongoing research opportunities in both AI and HCI for developing the future of scholarly reading interfaces. We provide pointers to our production reading interface and associated open resources to invite the broader research community to join our effort. 2 UNLOCKING CITATIONS FOR DISCOVERY Scholars use many methods to discover relevant research papers to read, including search engines, word of mouth, and browsing familiar venues. 
However, once they find one research paper, it\u2019s especially common for scholars to use its references and citations to further expand their knowledge of a research area. This behavior, sometimes referred to as forward/backward chaining or footnote chasing, is ubiquitous and has been observed across many scholarly disciplines [74]. Supporting this, one popular feature in the Semantic Reader1 is in-situ Paper Cards that pop up when readers click on an inline citation, dramatically reducing the interaction cost caused by jumping back-and-forth between inline citations and their corresponding references at the end of a research paper (Figure 1). Despite this affordance, during literature reviews, readers may still be overwhelmed trying to make sense of the tens to hundreds of inline citations in each paper [15, 24, 77]. Conversely, when reading a given paper, a reader cannot see relevant follow-on research papers that cited the current paper. Here we discuss how interactive reading interfaces can help scholars more effectively explore citations to important relevant work in both directions with two systems called CiteSee [15] and CiteRead [82]. 2.1 Augmenting Citations with CiteSee While most prior work on supporting research paper discovery has focused on developing bespoke interfaces of recommender systems or visualizations based on paper contents [78, 89], the citation graph [16, 30, 32, 40, 63, 80, 103], or a combination of the two [20, 96], research paper discovery via inline citations in a reading interface is important but under-explored. One study estimates that reading and exploring inline citations accounts for around one in five research paper discoveries during active research [49]. However, while all inline citations are relevant to the current research paper, it is likely that some are more relevant to the current reader than others. For example, a reader reading papers about aspect extraction of online product reviews to learn more about natural language processing techniques would be less interested in citations to research papers around e-commerce and marketing. In addition, citations to the same research papers often have different surface forms across papers (i.e., reference numbers), making it all the more difficult for readers to keep track of all the inline citations they should explore or have already explored during literature reviews. To address this, CiteSee provides a personalized research paper reading experience by automatically identifying and resolving inline citations in PDFs to research paper entities in our academic graph [50], and visually augmenting inline citations based on their connections to the current reader. First, CiteSee leverages a reader\u2019s reading behavior and history as a way to capture their short-term and fluid interests during literature reviews. Using this signal, CiteSee scores and highlights inline citations to help the reader triage them and discover prior work that are likely relevant to their literature review topics (Figure 2). Second, CiteSee leverages research papers saved in the reader\u2019s Semantic Scholar paper library and the reader\u2019s publication record [50] to understand their longer-term research interests. Using this signal, CiteSee changes the colors of the inline citations to familiar papers so that the reader can both better contextualize the current paper and keep track of citations to papers they have already explored. 
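The paper does not specify CiteSee's scoring model in this excerpt. Purely as an illustration, a toy familiarity score over the personal signals described above might look like the following; the signals chosen, the weights, and the weighted-sum form are all assumptions made for the sake of the example.

```python
# Illustration only, not CiteSee's actual model: combine the reader's short-term
# reading history, saved library, and own publications into a single number
# that could drive visual highlighting of an inline citation.
from dataclasses import dataclass, field

@dataclass
class ReaderContext:
    recently_read: set = field(default_factory=set)  # short-term literature-review interest
    library: set = field(default_factory=set)         # longer-term research interest
    authored: set = field(default_factory=set)        # the reader's own papers

def citation_familiarity(cited_paper_id: str, ctx: ReaderContext) -> float:
    score = 0.0
    if cited_paper_id in ctx.recently_read:
        score += 2.0
    if cited_paper_id in ctx.library:
        score += 1.0
    if cited_paper_id in ctx.authored:
        score += 0.5
    return score
```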
In addition, CiteSee also helps readers better make sense of the cited papers by showing how they connect to a reader\u2019s previous activities; for example, showing which library folders they were saved under or the citing sentences from a familiar research paper (Figure 2). Based on lab and field studies, CiteSee showed promise that providing visual augmentation and personalized context around inline citations in an interactive reading environment can allow readers to more effectively discover relevant prior work and keep track of their exploration during real-world literature review tasks. 2.2 Exploring Future Work with CiteRead While augmenting inline citations helps readers to triage them, many relevant research papers are not cited in a research paper in the first place, for example, because they were published afterwards. CiteRead is a novel reading interface that helps readers discover how follow-on work has built on or engaged with the research paper [82]. Much like social document annotation systems [109], CiteRead annotates text in the paper with margin notes containing relevant commentary from citing papers [70], thereby helping the reader to become aware of the citing paper and its connection. In order to produce these annotations automatically, CiteRead first filters citing research papers for ones that are most relevant to the reader using a trained model atop a number of features representing citational discourse and textual similarity, i.e. from scientific paper embeddings [20]. CiteRead then localizes citing papers to particular spans of text in the paper being read, and extracts relevant information from the citing paper. Figure 3 shows a research paper annotated with this information from citing papers. Localization is \fLo and Chang, et al. Figure 3: CiteRead [82] finds subsequently published citing research papers, extracts the citation context, and localizes it to relevant parts of the current research paper as margin notes. This allows readers to become aware of important follow on work and explore them in-situ. a technical challenge because while inline citations reference cited papers, they do not typically reference specific locations in the cited paper; CiteRead determines location by looking for overlapping spans of text (e.g., a number in common in the citing paper and the cited paper) or localizes to the relevant section when this overlap is unavailable. With CiteRead, a reader can directly examine follow-on work while keeping the citation contexts of both the current paper and the citing paper. In a lab study, CiteRead helped readers better understand a research paper and its follow-on work compared to providing readers with a separate interface for faceted browsing of follow-on work. 3 NAVIGATION AND EFFICIENT READING An important part of reading a paper is knowing what and where to read. Scholars often read papers non-linearly; they might return to a previously-read passage to recall some information, or jump forward to a different section of the paper (or to another paper) to satisfy an information need before jumping back. While jumping can help scholars orient their reading to sections of interest, it can also be a distraction by causing readers to constantly switch contexts. Non-linear navigation can be especially burdensome when the reader is interested in a particular type of information (e.g., skimming a paper for the main results), but doesn\u2019t know precisely where to find it within the paper. 
In this section we discuss two systems, Scim [28] and Ocean [75], which demonstrate different approaches to helping readers navigate efficiently through a paper toward high-value, relevant information. 3.1 Guided Reading with Scim Scholarly reading can be considered a type of sensemaking represented as a continuous interplay between two processes: information foraging in which readers identify relevant paper content, and comprehension in which readers attempt to integrate the new information into their working model of the paper and with relevant prior knowledge [79, 84]. Distinguishing between relevant and irrelevant content could help facilitate efficient reading. Paper Figure 4: The Scim [28] interface guides reader attention using color highlights corresponding to discourse facets. A sidebar allows users to toggle facets on/off. Clicking a colorcoded snippet scrolls the reader to the relevant passage. abstracts offer one such separation, in essence an author-crafted determination of relevant content. However, static paper abstracts can leave readers to desire additional details that then require them to skim the paper itself. Scim [28] addresses this problem via an augmented reading interface designed to guide readers\u2019 attention using automaticallycreated in-situ faceted highlights (Figure 4). Though prior work has explored highlighting as a visual cue for guiding reader attention [18, 102, 105], the efficacy for reading of scholarly text is less well-understood. Scim investigated the following design goals for intelligent highlights in scholarly reading: highlights should be (1) evenly-distributed throughout a paper, (2) have just the right density (too few highlights will present the guise of an inept tool, and too many will slow a reader down), and (3) highlight several key categories of information in the paper. Because readers often skim for common types of information, Scim uses a pretrained language model [99] to classify salient sentences within papers into one of four information facets: research objectives, novel aspects of the research, methodology, and results, coupled with heuristics that ensure an even distribution of highlights. Usability studies of Scim have shown these highlights can reduce the time it takes readers to find specific information within a paper. Readers found Scim particularly useful when skimming text-dense papers, or for papers that fell outside their area of expertise. Moreover, readers learned to use both Scim\u2019s inline highlights and a sidebar summary of highlights to augment their existing reading strategies. 3.2 Low-Vision Navigation Support and Reader-Sourced Hyperlinks with Ocean The task of navigating between sections and retrieving content can be particularly challenging for blind and low-vision readers due to limitations in auditory information access or small viewports under high magnification [90]. Even when related content is linked, a small viewport can make navigation difficult and necessitate scrolling [75]. Most existing tools such as for auditory skimming [45] do not address such challenges associated with lowvision and magnification. \fThe Semantic Reader Project Ocean [75] minimizes scrolling requirements for low-vision readers by providing bi-directional, viewport-preserving hyperlinks that enable navigating to and from associated content without disrupting the viewport. Based on reported findings from interviews with low-vision readers, Ocean also allows for easily revisiting portions of the paper with tabbed reading. 
Since papers do not always provide hyperlinks and automated link creation is imperfect, Ocean includes an authoring interface that allows readers to create and share paper links during reading. An exploratory field deployment study with mixed-ability groups of low-vision and sighted readers revealed that readers found value in creating and consuming these links, and that reader-created links can increase trust. 4 IN-SITU EXPLANATIONS FOR BETTER COMPREHENSION Could an augmented reading application help readers understand a paper by reducing the cognitive load associated with reading a paper? In this section, we discuss several ways in which interactive reading aids can help a reader understand a paper with less work through three systems: ScholarPhi [33], PaperPlain [1] and Papeo [92]. In particular, papers can be augmented with definitions of terms and symbols, provide plain-language summaries of paper passages, and connect readers with alternative forms of expression (for instance, video clips of research talks) that offer more approachable explanations of the paper\u2019s content. 4.1 Defining Terms and Symbols with ScholarPhi Understanding a paper requires understanding its vocabulary. However, this is by no means an easy task\u2014a typical paper may contain dozens of acronyms, symbols, and invented terms. And often, these terms appear without accompanying definitions [69]. How can we design interactive aids that present definitions of terms when and where readers most need them? ScholarPhi [33] takes as its basis the term gloss\u2014an extension to a reading interface that shows a reader an explanation of a phrase when they click it. Glosses appeared in early research interfaces for reading hypertext [107] and Figure 5: ScholarPhi [33] shows definitions of terms and symbols in pop-up tooltips. When a reader selects a formula, all known definitions of symbols are shown simultaneously. To let readers select nested symbols (e.g., \u201c\u210e\u201d in \u201c\ud835\udc49(\ud835\udc57) \u210e \u201d), ScholarPhi supports \u201cdrill-down\u201d subsymbol selection. Figure 6: Paper Plain [1] provides in-situ plain language summaries of passages called \u201cgists\u201d to help readers who are overwhelmed by complex textual passages. Readers access gists by clicking a flag next to a section header. These gists are generated by large language models. have since become part of widely-used reading interfaces including Wikipedia and Kindle. That said, familiar gloss designs do not work well for scientific papers, where glosses run the risk of distracting readers, terms have multiple meanings, and phrases (specifically math symbols) are difficult to unambiguously select. The ScholarPhi design addresses these challenges. First, it aims to reduce distraction by showing definitions with high economy: glosses show multiple definitions and and in-context usages within a compact tooltip. Second, it provides position-sensitive definitions, revealing definitions that appears most recently prior to the selected usages of terms. Terms and definitions are automatically identified using a pretrained language model [42]. Finally, it provides easier access to definitions of mathematical symbols. Readers can access definitions of both a symbol and the subsymbols it is made of through a multi-click, \u201cdrill-down\u201d selection mechanism. 
Furthermore, when a reader selects a formula, they can see definitions for all symbols at once, automatically placed adjacent to the symbols in the formula\u2019s margins (see Figure 5). In a usability study, the above interactions reduced the time it took readers to find answers to questions involving the understanding of terminology. All readers reported they would use the definition tooltips and formula diagrams often or always if available in their PDF reader tools. 4.2 Simplifying Complex Passages with Paper Plain Helping a reader understand individual terms and phrases only addresses part of the problem. Papers often contain passages so dense and complex that individual definitions are not enough to help someone read the passages, especially if they are a novice or non-expert in a field [9]. Can we make complex texts more approachable by incorporating plain language summaries in the margins of the text? With Paper Plain [1], when a reader encounters a section they find difficult to read, they can access a plain language summary of that section by clicking a button adjacent to the section \fLo and Chang, et al. header (see Figure 6). These summaries are generated by prompting a large language model with section text [10]. Furthermore, Paper Plain helps guide readers using these summaries as an \u201cindex\u201d into the text. A sidebar containing questions a reader may have about the text (e.g., What did the paper find? or What were the limitations?) provides links into answering passages identified using a question-answering system [106] alongside their associated plain language summaries. These features were designed to help readers understand the \u201cgist\u201d of passages that contain unfamiliar vocabulary, providing support beyond that of individual term definitions. Drawing inspiration from prior interactive reading affordances for term definitions [41], in-situ question answering [17, 108], and guiding reading [26], Paper Plain seeks to bring these features together into a holistic system capable of supporting reading of a paper by a non-expert readership. In a usability study, readers made more frequent use of passage summaries than definition tooltips when both were available, suggesting the potential value of plain language summaries as allowing readers to bypass definitions of individual terms when acquiring a broad understanding of a paper. 4.3 Fusing Papers and Videos with Papeo Sometimes, the best explanation of an idea is non-textual. Videos can enhance understanding [65] while also requiring less mental load [66], and various tools have been designed to facilitate searching and browsing for explanations in informational videos such as lectures [47, 52, 57, 76] and tutorials [46, 48, 93]. Similarly, for research papers, an algorithm might be better explained through an animation, a user interface might be better showcased through an screen recording, compared to the proses of a paper [37]. Instead of consuming the two formats independently, could interactive reading interfaces offer readers access to these alternative, more powerful descriptive forms as they read? For this, Papeo [92] was developed as a tool that supplements papers with more engaging, concise, dynamic presentations of information by linking excerpts of talk videos to corresponding paper passages. 
To grant authors more control over how their work is presented, we developed an AIsupported authoring interface for linking paper passages and videos efficiently: candidate passages are linked to excerpts of videos as suggestions using a pretrained language model [100], and an author interactively confirms or refines them. Unlike text-skimming with Scim (\u00a73.1) and Paper Plain (\u00a74.2), video-skimming in Papeo combines multiple modalities to explain complex information. For example, instead of reading a long text description of an interactive system, readers could see the system\u2019s behavior in a screen recording video with the author\u2019s commentary, and switch to corresponding passages to see implementation details or design motivations if desired. Our early-stage evaluations of Papeo suggest that readers can use these interactions to fluidly transition between watching video and reading text, using video to quickly understand, and then selectively descending into the text when they desire a detailed understanding of the paper. 5 SCAFFOLDING SYNTHESIS WITH RELATED WORK SECTIONS Scientific breakthroughs often rely upon scholars synthesizing multiple published works into broad overviews to identify gaps in the Figure 7: Papeo [92] enables authors to map segments of talk videos to relevant passages in the paper, allowing readers to fluidly switch between the two formats. Color-coded bars show the mapping between the two formats, and allow readers to scrub through video segments for quick previews. current literature [81]. For this, scholars periodically compile survey articles to help other scholars gain a comprehensive overview of important research topics. For example, some fields have dedicated outlet for such articles (e.g., the Psychological Bulletin [5]). However, survey articles require significant time and effort to synthesize, and can quickly become outdated with the exponential growth of scientific publication [7]. Instead, scholars in fast-paced disciplines often rely on the related work section when they need to better understand the broader background when reading a paper. While related work sections also summarize multiple prior works, unlike comprehensive survey articles, they typically provide partial views of the larger research topic most relevant to a single paper. There is an opportunity to build better tooling for scholars to consume and synthesize multiple related work sections across many papers to gain richer and more comprehensive overviews of fast-paced domains. The Threddy [43] and Related [73] projects explored this opportunity using two different approaches: clipping and organizing research threads mentioned across papers [43], and directly exploring and reading related work sections extracted across many papers [73]. 5.1 Clipping and Synthesizing across Papers with Threddy Clipping and note-taking is one common approach to supporting synthesis across multiple documents. Prior work has pointed to the importance of tightly integrating clipping and synthesis support in the reading process, and how incurring significant contextswitching costs can be detrimental to sensemaking [51, 79, 84]. Therefore, recent work has developed tools aimed at reducing the cognitive and interaction costs of clipping [12, 60] and structuring [13, 53, 58, 59, 88] to support everyday online researchers [13], programmers [58], and students [88]. 
However, designing clipping and synthesis support tools for research papers is relatively underexplored and introduces exciting new research opportunities. For example, additional organizational structures for literature reviews \fThe Semantic Reader Project (e.g., threads of prior work instead of tables [13, 58]), and research paper discovery (e.g., based on inline citations in clipped text). For this, Threddy [43] is a thread-focused clipping tool integrated into scholars\u2019 paper reading process to support literature review and discovery. Using Threddy, readers can select and save sentences into a sidebar from the related work sections of a paper. The system maintains rich context for each clip, including its provenance and inline citations. This allows readers to navigate back to the clipped paper and cited papers afterward. In the sidebar, readers can further organize clips collected across papers into a hierarchy of threads to form their view of the research landscape. The content of the sidebar is preserved across papers that were read over time, and provides valuable context for subsequent reading based on the emerging threads of research the reader have curated. Finally, readers can further expand their coverage by exploring paper recommendations for each thread, based on the referenced papers in the corresponding clips. A lab study showed that Threddy was able to lower the interaction costs of saving clips while maintaining context, allowed participants to curate research threads without breaking reading flows, and discover interesting new papers to further grow their understanding of the research fields. 5.2 Reading and Exploring Related Work Sections across Papers with Relatedly In contrast to Threddy, which aims to improve readers\u2019 existing literature review process through enhanced in-situ clipping and synthesis [43], the Relatedly system introduced a novel workflow that allows readers to explore many related work sections across papers in an interactive search and reading interface to quickly gain a comprehensive overview of rich research topics [73]. While prior work have explored providing overview structure of multiple documents based on citations [16, 80], semantic similarity [36, 85], or human computation [14, 31, 62], they could still lead to complex structures that are hard to interpret [35] or require significant crowdsourcing efforts. Relatedly sidesteps these issues by reusing existing related work paragraphs in published papers which already cite sets of related references with descriptions connecting them [73]. As an example, consider a scholar trying to better understand the space of online misinformation. With online misinformation as the query term, Relatedly shows the reader a list of paragraphs that describe and cite multiple relevant prior work. Using a pretrained language model for summarization [56], Relatedly generates short and descriptive titles for each paragraph, and uses a diversitybased ranking algorithm so that the reader can quickly see and explore paragraphs describing different research threads, such as Fact Checking Datasets, Social Media and Misinformation, and Fake News Detection Techniques. One challenge here is that paragraphs of the same threads often cite overlapping prior work, making them hard to explore and read while keeping track of which papers were new versus already explored. For this, Relatedly provides reading and cross-referencing support by keeping track of paragraphs and references explored by the readers. 
This allows Relatedly to help readers prioritize their reading for both breadth and depth. Specifically, Relatedly dynamically re-ranks paragraphs and highlights sentences to spotlight unexplored and dissimilar references for breadth, but also allow readers to explore clusters of paragraphs that cited similar references for depth. A usability study comparing Relatedly to a strong document-centric baseline showed that Relatedly led to participants writing summaries that were rated significantly more coherent, insightful, and detailed after 20 minutes of literature review. 6 DYNAMIC DOCUMENTS FOR IMPROVED ACCESSIBILITY A range of disabilities cause people to read scientific documents using a wide variety of devices and reading tools. For example, blind and low vision readers may use assistive reading technology such as screen readers, screen magnification, or text-to-speech to read documents [90]. Furthermore, people without disabilities face situational impairments, such as the inability to view a screen while driving or may have a preference for consuming content on a small, mobile device. Many of these reading tools, such as screen readers, do not function properly on document formats designed for print such as PDF unless the document has been manually post-processed to add information about reading order, content type, etc., which is rarely performed on scientific documents [6, 98]. Further, certain content elements such as figures require the addition of alternative text in order to be read aloud at all (figure captions typically assume the reader can see the figure and do not provide the same semantic content as alt text). High magnification reduces the viewport (the amount of visible content) and can dramatically increase the amount of scrolling and panning required, especially for multi-columnar formats that are commonly used by scientific documents. Visual scanning for information may be impacted or unavailable in these settings, making it more difficult to find and navigate between content in the document [75]. One way to render legacy PDF content more accessibly is to parse and convert it into a more flexible format, such as XML or HTML, which can then be formatted for mobile devices and augmented for reading by screen readers. The SciA11y system6 demonstrates this approach, automatically converting 12M academic PDFs to HTML [97]; a user study with blind and low vision participants demonstrated strong user appreciation of the output, though some errors remain (e.g., failing in certain cases to distinguish footnotes from body text, difficulty parsing math equations) [98]. When available, alt text can be automatically categorized into semantic content types, enabling new reading experiences that allow skipping or prioritizing certain types [19]. Other approaches provide complementary benefits, such as interfaces tailored for low-vision readers (\u00a7 3.2), as well as the range of reading support systems outlined above. 7 DISCUSSION AND FUTURE WORK There are additional directions to explore to better support scholarly activities through the Semantic Reader Project. Towards a full-featured reading experience. One question is how to integrate the different kinds of functionality across these projects into one coherent user interface, especially as we migrate research features into the production interface. Another question is how 6A demo of a subsequent version is available at https://papertohtml.org/ \fLo and Chang, et al. 
to develop support for the oftentimes social and collaborative nature of scholarly reading. Scholars frequently leverage their social networks and other social signals for paper discovery [44], work in groups to conduct literature review triage and synthesis, or engage in reading group discussions to aid comprehension. Existing augmentations within the Semantic Reader product could imbue social information, such as providing signals from one\u2019s co-author network (e.g., in CiteSee \u00a72.1) or aggregate navigation traces (e.g., in Scim \u00a73.1). The publicly-available Semantic Reader tool could also scaffold the creation of novel crowdor community-sourced content, such as authoror reader-provided explanations, commentary, or verification of paper content. Finally is the question of how we can allow the scholarly community to step in where current AI systems fall short, such as by fixing improperly-extracted content or incorrect generated text which are especially problematic for interfaces such as SciA11y (\u00a76). Advancing AI for scholarly documents. The Semantic Reader Project presents an opportunity for further AI research in scholarly document processing, especially when paired with human-centered research grounded in user-validated systems and scenarios. The bar for deploying AI models to support real-world reading is high; we often found during iterative design and usability studies that even slight errors in these models can have detrimental effects on the readers. Until recently, interface design could require months of development of bespoke AI models which creates a barrier for quickly iterate different system designs. Recent advancements in scaling large language models (LLMs) has altered this landscape by enabling researchers to experiment with a wide range of new NLP capabilities at relatively low cost [10]. This has the potential of significantly lowering the cost of human-centered AI design by incorporating user feedback in earlier stages of system development to create AI systems that work in symphony with the users beyond pure automation [87]. For example, when developing Paper Plain (\u00a74.2), LLMs enabled us to quickly test different granularities and complexity-levels of plain language summaries with participants, eschewing the need for expensive changes to data requirements and model retraining. In the near-term, we will revisit interface designs relying on bespoke AI models to evaluate whether LLMs can close the gap between research prototype and ready-for-production (e.g., more accurate definition identification for ScholarPhi \u00a74.1). Longerterm, we will explore whether LLMs can power new interactions (e.g., user-provided natural language queries while reading [23, 95]). While recent work has shown that these models can occasionally make critical errors or generate factually incorrect text when processing scientific text [72], we remain cautiously optimistic about developing ways to address their limitations [25, 55]. Ethics of augmented papers. Finally, all the new interfaces for reading that we propose pose a number of important ethical considerations that will require further research and discussion. One aspect that arises with any system for elevating certain papers or certain content over others is bias. For instance, using signals such as citation counts faces the risk of a \u201crich get richer\u201d bias, which can reflect other kinds of documented biases [3, 64, 101]. 
As a result, systems such as CiteSee (\u00a72.1) or Relatedly (5.2) should carefully consider additional signals of relevance such as semantic similarity to surface newer and overlooked papers. Another tension that we have encountered is the potential discrepancy between author desires and reader desires for how a work is presented and how much control to provide authors. For instance, in our work on Papeo (\u00a74.3), we found that authors desired control over placement of their talk video snippets, even as they found automated mapping support to be helpful. In other cases, authors might not have the requisite expertise (e.g., they may not have a good sense of reader needs or what non-experts are confused by) or may have the wrong incentives. Future work should consider author perspectives on these augmented experiences. A related issue is around systems for more efficient reading or synthesis, which may encourage readers to take shortcuts that lead to incorrect understanding, sloppy research, or even outright plagiarism. Instead of simply seeking to increase reading throughput uniformly, our systems should enable triage, so that readers can dedicate time for thoughtful and careful reading when the content is important. For instance, our systems could design pathways that, while they may be more efficient, do not obfuscate the full context (e.g., Scim \u00a73.1), and that encourage good practices such as verification and provenance tracing. A final consideration is around what is ethical reuse of a paper\u2019s contents to support reader experiences outside of that paper and its licensing implications. For instance, CiteRead (\u00a72.2) extracts paper citances and places them in the cited paper, and Relatedly (\u00a75.2) extracts related work sections from different papers for users to explore. Recent trends in open science and datasets [29, 61, 67, 68] point to a promising future where we could continue to explore different ways to remix and reuse scholarly content across context so that future scientists can take fuller advantage of prior research. 8" + }, + { + "url": "http://arxiv.org/abs/1911.02782v3", + "title": "S2ORC: The Semantic Scholar Open Research Corpus", + "abstract": "We introduce S2ORC, a large corpus of 81.1M English-language academic papers\nspanning many academic disciplines. The corpus consists of rich metadata, paper\nabstracts, resolved bibliographic references, as well as structured full text\nfor 8.1M open access papers. Full text is annotated with automatically-detected\ninline mentions of citations, figures, and tables, each linked to their\ncorresponding paper objects. In S2ORC, we aggregate papers from hundreds of\nacademic publishers and digital archives into a unified source, and create the\nlargest publicly-available collection of machine-readable academic text to\ndate. We hope this resource will facilitate research and development of tools\nand tasks for text mining over academic text.", + "authors": "Kyle Lo, Lucy Lu Wang, Mark Neumann, Rodney Kinney, Dan S. Weld", + "published": "2019-11-07", + "updated": "2020-07-07", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.DL" + ], + "main_content": "Introduction Academic papers are an increasingly important textual domain for natural language processing (NLP) research. 
Aside from capturing valuable knowledge from humankind\u2019s collective research efforts, academic papers exhibit many interesting characteristics \u2013 thousands of words organized into sections, objects such as tables, \ufb01gures and equations, frequent inline references to these objects, footnotes, other papers, and more. Different types of resources have been used to support research over academic papers. Citation graphs like AMiner\u2019s Open Academic Graph (Tang et al., 2008), the Microsoft Academic Graph (MAG) (Shen et al., 2018), and the Semantic Scholar literature graph (Ammar et al., 2018), have had widespread application in bibliometrics, science-of-science, information retrieval, and network analysis. Digital archives like arXiv,2 \u2217denotes equal contribution 1Instructions for access to the data and model are available at https://github.com/allenai/s2orc/. 2https://arxiv.org Figure 1: Inline citations and references to \ufb01gures and tables are annotated in S2ORC\u2019s structured full text. Citations are linked to bibliography entries, which are linked to other papers in S2ORC. Figure and table references are linked to their captions. PubMed Central,3 CiteSeerX (Giles et al., 1998),4 and the ACL Anthology (Bird et al., 2008),5 are popular resources for deriving large text corpora for summarization and language modeling or, with further annotation, development of datasets for tasks like entity extraction, text classi\ufb01cation, parsing, and discourse analysis. We focus on bibliometrically-enhanced derivations of these corpora, such as the ACL Anthology Network (AAN) (Radev et al., 2009)6 derived from the ACL Anthology, RefSeer (Huang et al., 2015) derived from CiteSeerX, and Saier and F\u00a8 arber (2019) derived from arXiv, which combine useful aspects of citation graphs and raw text corpora. These resources provide citation mentions linked to paper identi\ufb01ers in their corresponding digital archives, such as the ACL Anthology and CiteSeerX, or to nodes in citation graphs such as MAG, enabling new forms of cross-paper discourse analysis (e.g., studying how or why papers are related). 3https://www.ncbi.nlm.nih.gov/pmc 4https://citeseerx.ist.psu.edu 5https://www.aclweb.org/anthology 6http://aan.how/ arXiv:1911.02782v3 [cs.CL] 7 Jul 2020 \fCorpus Papers w/ body text Citation contexts References to tables / \ufb01gures / equations Linked to graph Academic disciplines S2ORC (PDF-parse) 8.1M full text yes S2ORC (full) multi S2ORC (LATEX-parse) 1.5M full text yes S2ORC (full) physics, math, CS PubMed Central (OA) 2.6M full text yes PubMed bio, med AAN (Radev et al., 2009) 25k full text no ACL Anthology comp ling Saier and F\u00a8 arber (2019)\u2020 1.0M snippets no MAG physics, math, CS RefSeer (Huang et al., 2015) 1.0M snippets no CiteSeerX multi Table 1: A comparison of S2ORC with other publicly-available academic text corpora. Of the other corpora: PubMed Central (OA) links to PubMed, which contains 30M papers at the time of writing. AAN links to the ACL Anthology (which contained 25k papers at the time of dataset construction, and 54k papers at the time of writing). Saier and F\u00a8 arber (2019) is derived from arXiv and links to MAG (which contained 213M papers and other non-paper documents at the time of dataset construction, and 226M nodes at the time of writing). RefSeer links to CiteSeerX (which contained 1M papers at the time of dataset construction, and 6M papers at the time of writing). 
S2ORC contains three times more full text papers than PubMed Central (OA), the next largest corpus with bibliometric enhancements, while covering a more diverse set of academic disciplines. Citations in S2ORC are linked to the full set of S2ORC papers, 81.1M paper nodes derived from Semantic Scholar. In addition, the LATEX subset of S2ORC captures additional structure omitted by Saier and F\u00a8 arber (2019), who also parse LATEX sources from arXiv. \u2020Saier and F\u00a8 arber (2020) is an update to this work which now includes full text. It is released concurrently with this work. Yet, existing corpora are not without their limitations. Some cover a small number of papers (e.g. AAN), are domain-speci\ufb01c (e.g. AAN, PubMed Central, Saier and F\u00a8 arber (2019)), or may not provide usable full text (e.g. Saier and F\u00a8 arber (2019) and RefSeer). To address these issues, we introduce S2ORC,7 the Semantic Scholar8 Open Research Corpus, a large publicly-available collection of 81.1M academic papers covering dozens of academic disciplines. Each paper is associated with metadata and abstracts aggregated from hundreds of trusted sources such as academic publishers and literature archives like PubMed and arXiv. Notably, we release structured, machinereadable full text extracted from PDFs for 8.1M papers which we\u2019ve identi\ufb01ed as having open access status. S2ORC full text preserves meaningful structure, e.g., paragraph breaks, section headers, inline citation mentions, references to tables and \ufb01gures, and resolved citation links to other papers. Additionally, we provide 1.5M full text LATEX parses from which we have extracted, in addition to citations and references, the source text of tables and mathematical formulas. As shown in Table 1, S2ORC provides substantially more structured full text papers and covers a more diverse set of academic disciplines than other resources. 7pronounced \u201cstork\u201d 8The papers included in S2ORC are a curated subset of the papers in the Semantic Scholar literature graph (Ammar et al., 2018) that focuses only on English-language papers with abstracts or full text available. See \u00a72.5 for details on \ufb01ltering through Semantic Scholar papers. In this paper, we describe the construction of S2ORC (\u00a72). We provide summary statistics of the corpus (\u00a73) and evaluate the data quality (\u00a74). We then evaluate a BERT model pretrained on S2ORC (\u00a75), and discuss potential applications to a variety of NLP and analysis tasks over academic text (\u00a76). Finally, we compare S2ORC with other publicly-available academic text corpora (\u00a77). 2 Constructing the corpus S2ORC is constructed using data from the Semantic Scholar literature corpus (Ammar et al., 2018). Papers in Semantic Scholar are derived from numerous sources: obtained directly from publishers, from resources such as MAG, from various archives such as arXiv or PubMed, or crawled from the open Internet. Semantic Scholar clusters these papers based on title similarity and DOI overlap, resulting in an initial set of approximately 200M paper clusters. To construct S2ORC, we must overcome challenges in (i) paper metadata aggregation, (ii) identifying open access publications, and (iii) clustering papers, in addition to identifying, extracting, and cleaning the full text and bibliometric annotations associated with each paper. 
The pipeline for creating S2ORC is: 1) Process PDFs and LATEX sources to derive metadata, clean full text, inline citations and references, and bibliography entries, 2) Select the best metadata and full text parses for each paper cluster, \f3) Filter paper clusters with insuf\ufb01cient metadata or content, and 4) Resolve bibliography links between paper clusters in the corpus. Details for these steps are provided below. See Appendix \u00a7A for de\ufb01nitions of terminology. The output of this pipeline is visualized in Figure 1. 2.1 Processing PDFs We process PDFs from the Semantic Scholar corpus using SCIENCEPARSE v3.0.09 and GROBID v0.5.510 (Lopez, 2009). Our processing pipeline is described below. Selecting PDFs We remove PDFs which are less likely to be academic papers. SCIENCEPARSE and GROBID are not optimized for processing nonpaper academic documents such as dissertations, reports, slides, etc., and this \ufb01ltering step is necessary to increase output data quality. See Appendix \u00a7B for \ufb01lter details. There are around 31.3M PDFs associated with approximately 200M initial paper clusters, and 30.5M PDFs are selected for processing based on these \ufb01ltering criteria. Extracting structured data from PDFs We use SCIENCEPARSE to extract title and authors from each PDF.11 We then use GROBID to process each PDF. From the XML output of GROBID, we extract (i) metadata such as title, authors, and abstract, (ii) paragraphs from the body text organized under section headings, (iii) \ufb01gure and table captions, (iv) equations, table content, headers, and footers, which we remove from the body text, (v) inline citations in the abstract and body text, (vi) parsed bibliography entries with title, authors, year, and venue identi\ufb01ed, and (vi) links between inline citation mentions and their corresponding bibliography entries. Postprocessing GROBID output We postprocess GROBID output using regular expressions to classify the parenthetical citation style of a paper as BRACKET (e.g. [2]), NAME-YEAR (e.g. ABC, 2019), or OTHER (superscripts and other mixed styles). We focus on addressing two types of common errors in GROBID\u2019s inline citation extractions: (i) false positives resulting from superscripts or equation references being recognized as 9https://github.com/allenai/ science-parse 10https://github.com/kermitt2/grobid 11Our evaluations suggest SCIENCEPARSE outperforms GROBID for title and author extraction. inline citations in papers with BRACKET-style citations, and (ii) false negatives resulting from an inability to expand bracket citation ranges (e.g. \u201c[3]-[5]\u201d should be expanded to \u201c[3], [4], [5]\u201d before linking). False positives are detected using regular expressions and removed from GROBID output. Bracket citation ranges are manually expanded and linked to their corresponding bibliography entries. The resulting parses are expressed in JSON format.12 2.2 Processing LATEX source LATEX document source is available for a majority of arXiv submissions, and where available, are used to construct a full text parse. We retrieve body text, section headers, \ufb01gure/table captions, table representations, equations, and inline citations and references directly from LATEX source. Inspired by Saier and F\u00a8 arber (2019), we \ufb01rst convert LATEX source into XML documents and then extract structured information from the XML. 
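As an illustration of the bracket-range expansion applied in the GROBID postprocessing step above, a minimal sketch is shown below. The exact regular expressions used in the S2ORC pipeline are not published, so the pattern here is an assumption.

```python
import re

# Hypothetical pattern for bracket citation ranges such as "[3]-[5]";
# the production pipeline's actual regexes are not published.
RANGE_PATTERN = re.compile(r"\[(\d+)\]\s*-\s*\[(\d+)\]")

def expand_bracket_ranges(text: str) -> str:
    """Expand "[3]-[5]" into "[3], [4], [5]" so each citation can be linked."""
    def _expand(match):
        start, end = int(match.group(1)), int(match.group(2))
        if end <= start:
            return match.group(0)  # leave malformed or reversed ranges untouched
        return ", ".join(f"[{i}]" for i in range(start, end + 1))
    return RANGE_PATTERN.sub(_expand, text)

print(expand_bracket_ranges("as shown in [3]-[5] and [12]"))
# -> as shown in [3], [4], [5] and [12]
```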
Due to direct access to source, the accuracy of citation span, reference, caption, section header, and equation detection is near-perfect. We process 1.5M papers from LATEX source derived from arXiv, all of which are included as part of S2ORC. Surprisingly, due to the diversity of ways in which authors de\ufb01ne metadata in LATEX, the quality of metadata extracted from LATEX documents is worse than those extracted from PDF. Therefore, we do not use LATEX-derived metadata for paper clustering or metadata selection. 2.3 Selecting canonical metadata Canonical values for title, authors and other metadata \ufb01elds are selected from among the papers in a cluster. First, if a cluster contains multiple PDFs, we select one to be canonical. This can occur, for example, in a cluster containing an arXiv preprint and its eventual camera-ready version. We preferentially select PDFs from open access sources and break ties by prioritizing PDFs for which there exist richer publisher-provided metadata (e.g. abstract, year, venue, DOI). If the selected PDF is associated with publisher-provided metadata, we select those publisher-provided metadata \ufb01elds to be canonical. In cases where publisher-provided metadata is incomplete, we use majority voting to select 12The S2ORC data format is described at https:// github.com/allenai/s2orc \fcanonical metadata values. We break ties by minimizing the total number of sources from which we select metadata (e.g., if IEEE provides title, authors and abstract, DBLP provides title and authors, and arXiv provides title and abstract, we prioritize selecting IEEE over the union of DBLP and arXiv). S2ORC metadata \ufb01elds include title, author, year, venue, journal, abstract, and identi\ufb01ers (DOI, PubMed, PubMed Central (PMC), arXiv, and ACL Anthology). In cases where the title and authors are not provided by any publishers, we derive the values for these \ufb01elds from the parsed PDF, prioritizing SCIENCEPARSE over GROBID. We further comment on paper clustering as it pertains to metadata selection in Appendix \u00a7C. 2.4 Assembling the corpus We construct the \ufb01nal corpus by assembling clustered paper metadata with GROBID and LATEX parse objects. We associate the GROBID parse with the S2ORC paper object if a valid GROBID parse is produced from the PDF, and the PDF is open access. Open access status is assigned if a paper is derived from arXiv, ACL Anthology, PubMed Central (OA), and/or associated with an open-access DOI in the Unpaywall database.13 If the PDF is not open access, we only include the bibliography from the GROBID parse in S2ORC. If arXiv LATEX source is available for the paper cluster, we also associate the LATEX parse with the S2ORC paper object. 2.5 Filtering paper clusters We further \ufb01lter paper clusters to remove papers with (i) no title, (ii) no authors, (iii) fewer than 100 characters of abstract and body text, and (iv) where English is not the primary language. The \ufb01rst three \ufb01lters remove papers that provide little value for bibliometric-based or text-based analyses. The English language \ufb01lter14 reduces GROBID parsing errors. All \ufb01lters are applied in series. Subsequently, 95.5M paper clusters are \ufb01ltered out based on the aforementioned criteria and removed from the corpus. The distribution of \ufb01ltered papers is given in Table 2. 
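For illustration, the serial post-processing filters just described could be sketched as below. The `english_score` callable stands in for a language detector such as cld2 and is a placeholder assumption, not the pipeline's actual interface.

```python
# Illustrative sketch of the serial paper-cluster filters described above
# (no title, no authors, under 100 characters of combined abstract and body
# text, non-English).
def passes_filters(paper: dict, english_score, threshold: float = 0.9) -> bool:
    if not paper.get("title"):
        return False
    if not paper.get("authors"):
        return False
    text = (paper.get("abstract") or "") + (paper.get("body_text") or "")
    if len(text) < 100:
        return False
    if english_score(text) < threshold:  # placeholder for the cld2 check
        return False
    return True
```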
We note that a large number of paper clusters are \ufb01ltered out; 80.0M of these \ufb01ltered clusters have no associated publisher-provided abstract or associated PDF and 13Unpaywall 2019-04-19 data dump 14We use the cld2 tool for language detection with a threshold of 0.9 over the English language score. do not provide signi\ufb01cant value to our dataset in their current state. Although these papers that lack text may be useful as cite-able nodes in S2ORC, they are generally of lower quality and are \ufb01ltered out of the corpus to improve corpus quality. Filter Number of papers No title 20k No authors 0.3M < 100 chars of text 80.0M Not English 15.2M Table 2: Post-processing data quality \ufb01lters for papers 2.6 Linking bibliographies to papers Each bibliography entry in both GROBID and LATEX parses are linked to the most similar papers in the corpus. For linking, we score each bibliography entry and paper cluster pair using a similarity score computed between their titles. Each title is \ufb01rst normalized (i.e. white spaces stripped, lower-cased, special characters removed) and represented by its character 3-grams. The similarity score Stitle is computed as the harmonic mean between a Jaccard index and a containment metric: Stitle = 2 \u00d7 J \u00d7 C J + C (1) where the Jaccard index J and containment metric C are computed from the n-grams of the two titles N1 and N2 as: J = |N1 \u2229N2| |N1 \u222aN2| C = |N1 \u2229N2| min (|N1|, |N2|) For each bibliography entry, the bibliographypaper pair with the highest similarity score above 0.8 is output as the correct link. Otherwise, the bibliography entry remains unlinked. We perform an evaluation of linking performance in \u00a74. 3 The S2ORC dataset The resulting corpus consists of 81.1M papers. Our publisher-provided abstract coverage is 90.4%, or 73.4M papers. Our PDF coverage is 35.6%, or 28.9M papers. These PDFs are processed using the pipeline discussed in \u00a72.1. The \fTotal papers 81.1M Papers w/ PDF 28.9M (35.6%) Papers w/ bibliographies 27.6M (34.1%) Papers w/ GROBID full text 8.1M (10.0%) Papers w/ LaTeX full text 1.5M (1.8%) Papers w/ publisher abstract 73.4M (90.4%) Papers w/ DOIs 52.2M (64.3%) Papers w/ Pubmed IDs 21.5M (26.5%) Papers w/ PMC IDs 4.7M (5.8%) Papers w/ ArXiv IDs 1.7M (2.0%) Papers w/ ACL IDs 42k (0.1%) Table 3: Statistics on paper provenance. We note that categories are not mutually exclusive and do not sum to 100%. All papers in S2ORC have either a publisherprovided abstract or an associated PDF from which we derive full text and/or bibliography entries, or both. Statistic GROBID LATEX Paragraphs (abstract) 1.1 Paragraphs (body) 9.9 93.3* Inline cite spans (abstract) 0.7 Inline cite spans (body) 45.2 46.8 Bibliography entries 27.6 21.9 Linked bib. entries 19.3 6.8\u2020 Table 4: Extraction and linking statistics over PDF and LATEX parses. Reported values are averaged over all open access papers, which consist of 8.1M GROBIDparsed PDFs and 1.5M parsed LATEX sources. *LATEX preserves line breaks rather than paragraph breaks. \u2020The lower number of linked bibliography entries in LATEX parses is due to large numbers of papers (mostly in the \ufb01eld of physics) for which the bibliography entries are formatted without paper titles. Our linking algorithm strongly depends on titles and fails to link these entries. vast majority of these PDFs are successfully processed using GROBID, and we extract bibliography entries for 27.6M of the 28.9M PDFs. 
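For concreteness, a minimal sketch of the title-similarity score in Equation 1 above is shown below: character 3-grams of the normalized titles are compared via the harmonic mean of a Jaccard index and a containment metric, and a link is accepted only above 0.8. The normalization step is paraphrased from the description in the text.

```python
import re

def char_ngrams(title: str, n: int = 3) -> set:
    # Normalize: lower-case, strip whitespace and special characters, take character 3-grams.
    normalized = re.sub(r"[^a-z0-9]", "", title.lower())
    return {normalized[i:i + n] for i in range(max(len(normalized) - n + 1, 0))}

def title_similarity(title_a: str, title_b: str) -> float:
    n1, n2 = char_ngrams(title_a), char_ngrams(title_b)
    if not n1 or not n2:
        return 0.0
    overlap = len(n1 & n2)
    jaccard = overlap / len(n1 | n2)
    containment = overlap / min(len(n1), len(n2))
    if jaccard + containment == 0:
        return 0.0
    return 2 * jaccard * containment / (jaccard + containment)

# A bibliography entry is linked to the highest-scoring candidate with similarity > 0.8.
```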
We identify 8.1M of the 28.9M PDFs as open access (\u00a72.4), and we provide full text for all papers in this open access subset. For the 1.5M papers for which LATEX source is available through arXiv, we further obtain and provide LATEX parses (\u00a72.2). Using these extracted bibliographies, we resolve a total 380.5M citation links between papers (\u00a72.6), 156.5M of which can be tied back to their inline citation mentions in the full text. See Table 3 for more provenance statistics. We provide statistics for the GROBID and LATEX full text parses and bibliography linking in Figure 2: Distribution of papers by Microsoft Academic \ufb01eld of study. Table 4. On average, LATEX parses contain many more \u201cparagraphs\u201d of body text, because LATEX source \ufb01les preserve line breaks rather than paragraph breaks. We speculate that differences in bibliography entry and linking counts between the GROBID and LATEX parses are due to a combination of: (i) challenges in LATEX bibliography expansion and parsing, and (ii) differences in bibliography formatting in some math and physics venues (where bibliography entries do not include paper titles, which we depend on for bibliography linking). The distribution of academic disciplines in S2ORC is given in Figure 2 using Microsoft Academic \ufb01elds of study. Not all papers in S2ORC can be found in Microsoft Academic \u2013 those not found are denoted as Unclassi\ufb01ed. Approximately 677k papers have more than one primary Microsoft Academic \ufb01eld of study; Figure 2 represents only the top \ufb01eld of study for each paper. 4 Evaluation To evaluate the quality of our metadata selection, we randomly sample 500 paper clusters, restricting to those with PDFs. Within each sampled cluster, we determine whether the canonical title and authors match the title and authors in the selected canonical PDF. Inline citation detection and bibliography parsing are dependent on GROBID (Lopez, 2009). Ahmad and Afzal (2018) evaluate GROBID for de\fDomain Dataset Reference Task SCIBERT S2ORCSCIBERT BC5CDR Li et al. (2016) NER 90.01 90.41 \u00b1 0.06 JNLPBA Collier and Kim (2004) NER 77.28 77.70 \u00b1 0.25 NCBI-disease Do\u02d8 gan et al. (2014) NER 88.57 88.70 \u00b1 0.52 Biomed EBM-NLP Nye et al. (2018) PICO 72.28 72.35 \u00b1 0.95 GENIA Kim et al. (2003) DEP (LAS) 90.43 90.80 \u00b1 0.19 GENIA Kim et al. (2003) DEP (UAS) 91.99 92.31 \u00b1 0.18 ChemProt Krallinger et al. (2017) REL 83.64 84.59 \u00b1 0.93 SciERC Luan et al. (2018) NER 67.57 68.93 \u00b1 0.19 CS SciERC Luan et al. (2018) REL 79.97 81.77 \u00b1 1.64 ACL-ARC Jurgens et al. (2018) CLS 70.98 68.45 \u00b1 2.47 Biomed & CS SciCite Cohan et al. (2019) CLS 85.49 84.76 \u00b1 0.37 Multi-domain PaperField Beltagy et al. (2019) CLS 65.71 65.99 \u00b1 0.08 Table 5: S2ORC-SCIBERT test results are comparable with reported SCIBERT test results on the set of tasks and datasets from Beltagy et al. (2019), to which we refer the reader for descriptions. Reported statistics are spanlevel F1 for NER, token-level F1 for PICO, dependency parsing (DEP), and macro-F1 for relation (REL) and text (CLS) classi\ufb01cation. We report micro-F1 for ChemProt. All S2ORC-SCIBERT results are the mean \u00b1 standard deviation of 5 runs with different random seeds. Beltagy et al. (2019) do not report standard deviation or number of runs. tecting inline citations using a corpus of 5k CiteSeer papers, and found GROBID to have an F1score of 0.89 on this task. Tkaczyk et al. 
(2018) report GROBID as the best among 10 out-of-the-box tools for parsing bibliographies, also achieving an F1 of 0.89 in an evaluation corpus of 9.5k papers. We perform an evaluation over 200 randomly sampled papers from S2ORC and found comparable F1-scores for GROBID performance on both tasks. For bibliography linking, we randomly sample S2ORC papers (500 GROBID PDF parses and 100 LATEX parses) and select one linked bibliography entry from each sampled paper (while avoiding selecting multiple entries linked to the same paper). We determine whether the title and authors in the bibliography entry agree with the title and authors of the linked paper. We present these evaluation results in Table 6 and detail valuation criteria in Appendix \u00a7D. Evaluated task Title Authors Paper clustering 0.93 0.89 Bib. linking (GROBID) 1.00 0.96 Bib. linking (LATEX) 1.00 0.92 Table 6: Accuracy of paper clustering and bibliography linking for titles and authors in sampled evaluation sets. 5 Pretraining BERT on S2ORC To demonstrate the suitability of S2ORC for language model pretraining, we train BERT-Base (Devlin et al., 2019) on the parsed full text of S2ORC and show that the resulting model (S2ORC-SCIBERT) performs similarly to SCIBERT (Beltagy et al., 2019) on a diverse suite of scienti\ufb01c NLP tasks and datasets. While SCIBERT is a BERT-Base model also trained on multiple domains of scienti\ufb01c text, key differences in its pretraining corpus and vocabulary and those used for S2ORC-SCIBERT are: \u2022 Domain: Beltagy et al. (2019) report a pretraining corpus consisting of 82% biomedical and 18% computer science papers. Our S2ORC pretraining corpus consists of a more balanced distribution of papers across diverse academic disciplines (see Figure 2), such that biomedical (42.7%) and computer science (7.2%) papers only comprise half the corpus. \u2022 Preprocessing: S2ORC identi\ufb01es \ufb01gure captions, table text and captions, headers, footers, and footnotes. We exclude these from the pretraining corpus. We tokenize and sentencize the text using scispaCy (Neumann et al., 2019). We also use heuristic \ufb01lters to remove ill-formed paragraphs (such as those containing too many symbols). \u2022 Size: The resulting S2ORC pretraining cor\fpus contains 16.4B tokens, nearly \ufb01ve times larger than the corpus for SCIBERT. \u2022 Vocab: Following Beltagy et al. (2019), we construct a cased WordPiece (Wu et al., 2016) vocabulary of size 31k using 15% of the S2ORC pretraining corpus. The Jaccard index between the S2ORC-SCIBERT and SCIBERT vocabularies is 0.536. We follow a similar setup to Beltagy et al. (2019) for both pretraining and \ufb01ne-tuning S2ORC-SCIBERT. Like SCIBERT, S2ORCSCIBERT is pretrained from scratch using the original BERT code15 and default BERT-Base con\ufb01gurations on a single TPU v3-8 for one week. Also like SCIBERT, S2ORC-SCIBERT is \ufb01netuned on all tasks by optimizing a cross entropy loss using Adam (Kingma and Ba, 2014), a linear learning rate decay with 10% warm-up, batch size of 32, and dropout of 0.1. We search over an equal-sized grid of hyperparameters as Beltagy et al. (2019). We \ufb01ne-tune for 1 to 4 epochs with a maximum learning rate of 1e-5, 2e-5, 3e-5, or 5e-5. For each task, we select the optimal combination of these two hyperparameters using the development set and report the corresponding test set results. For details, we refer the reader to SCIBERT code,16 which we use for all experiments. 
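As a sketch of the fine-tuning grid search just described, the loop below tries every combination of epochs (1 to 4) and peak learning rate and keeps the pair that scores best on the development set. The `train_and_evaluate` callable is a placeholder assumption standing in for a full fine-tuning run with the hyperparameters listed above.

```python
from itertools import product

def grid_search(train_and_evaluate):
    # train_and_evaluate(epochs=..., learning_rate=...) -> development-set score (placeholder).
    best_score, best_config = float("-inf"), None
    for epochs, lr in product([1, 2, 3, 4], [1e-5, 2e-5, 3e-5, 5e-5]):
        score = train_and_evaluate(epochs=epochs, learning_rate=lr)
        if score > best_score:
            best_score, best_config = score, (epochs, lr)
    return best_config, best_score
```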
The results in Table 5 show that S2ORCSCIBERT outperforms SCIBERT on many tasks despite including a large percentage of data outside of the biomedical and computer science domains. As the pretraining corpus for SCIBERT is not publicly-available, S2ORC can serve as a large pretraining corpus for evaluating and comparing pretraining approaches on academic text. We also release S2ORC-SCIBERT to serve as a baseline for research. 6 Applications of S2ORC S2ORC can be used for many NLP and analysis tasks over academic text. We give a summary of potential applications below. The combination of structured full text annotated with linked inline citations makes S2ORC well-suited for a variety of citation-related textbased tasks. Without any additional supervision, S2ORC can be used directly for both inline (He 15https://github.com/google-research/ bert 16https://github.com/allenai/scibert et al., 2010; Duma and Klein, 2014; Jeong et al., 2019) and document-level (Yu et al., 2012; Liu et al., 2015; Bhagavatula et al., 2018) citation recommendation. Among document-level recommenders, S2ORC is well-suited to the setting of Liu et al. (2015), who use inline citation contexts to \ufb01lter document-level recommendations. Figure 3: Word2vec embeddings associated with 20k papers in six AI-related arXiv categories visualized using t-SNE (van der Maaten and Hinton, 2008). Example papers from two randomly selected sub-regions A and B are given in Table 7. Region A cs.LG \u201cOn Unifying Deep Generative Models\u201d stat.ML \u201cLearning Disentangled Representations with Semi-Supervised Deep Generative Models\u201d cs.LG \u201cDenoising Criterion for Variational AutoEncoding Framework\u201d cs.CV \u201cVariational methods for conditional multimodal deep learning\u201d Region B cs.CL \u201cTransA: An Adaptive Approach for Knowledge Graph Embedding\u201d cs.AI \u201cTorusE: Knowledge Graph Embedding on a Lie Group\u201d cs.CV \u201cImage-embodied Knowledge Representation Learning\u201d stat.ML \u201cNeural Embeddings of Graphs in Hyperbolic Space\u201d Table 7: Sampled papers in clusters from t-SNE embedding space in Figure 3. Region A consists of papers related to deep generative models; region B consists of papers concerned with graph representation learning. Other tasks that leverage citation contexts in\fclude classifying citation intent (Teufel et al., 2006; Jurgens et al., 2018; Cohan et al., 2019), identifying citation sentiment (Athar and Teufel, 2012), identifying meaningful citations (Valenzuela et al., 2015), extracting key phrases (Caragea et al., 2014), and citation context-based paper summarization (Teufel et al., 2006; Qazvinian and Radev, 2008; Cohan and Goharian, 2015; Mitrovi\u00b4 c and M\u00a8 uller, 2015). The models in these papers require labeled citation contexts for training. S2ORC could potentially bene\ufb01t task performance without additional annotation, for example, by pretraining language models on S2ORC citation contexts before \ufb01ne-tuning to these tasks. Cohan et al. (2019) \ufb01nd that long citation contexts (beyond sentence boundary) are important for tasks like summarization; the wider citation contexts available in S2ORC could be used to augment existing datasets for document-level tasks. Citation contexts can also be used for the more general tasks of identifying similar papers (Kanakia et al., 2019; Eto, 2019; Haruna et al., 2018; Small, 1973) or bibliometric analysis (Ding et al., 2014; Trujillo and Long, 2018; Asatani et al., 2018). 
Towards these tasks, the citation contexts in S2ORC can provide insight into how and why papers are cited. We illustrate this by following Berger et al. (2016) in training a word2vec skip-gram model (Mikolov et al., 2013) using full text citation contexts in S2ORC, where each inline citation span is replaced with its linked paper identi\ufb01er. When training over this modi\ufb01ed text, the word2vec model learns embeddings corresponding to each unique paper identi\ufb01er, which can be leveraged as paper embeddings. The resulting embeddings shown in Figure 3 and Table 7 form clusters corresponding closely to arXiv Machine Learning categories. Upon inspection, papers of different categories in the same embedding sub-region share research themes (see Table 7), indicating that these paper embeddings trained from citation contexts capture coherent topic similarity and relatedness. These paper embeddings can be used to identify similar papers, using the similarity between two papers\u2019 citing contexts as a proxy for paper similarity. The LATEX subset of S2ORC also provides unique opportunities for research. In addition to citations and references, we also extract and parse tables from LATEX source into a structured format. There is an opportunity to use these tables for corpus-level results extraction and aggregation. The LATEX subset also has \ufb01ne-grained extraction and labeling of mathematical formulas, which can be used to understand proof construction, or to assist in symbol co-reference resolution. 7 Related work The ACL Anthology Network (AAN) (Radev et al., 2009) is a bibliometric-enhanced corpus covering papers in the \ufb01eld of computational linguistics. It is built from the ACL Anthology (Bird et al., 2008) and consists of 24.6k papers manually augmented with citation information. The PubMed Central Open Access corpus is a large corpus of 2.6M papers in the biomedical domain with citations linked to PubMed identi\ufb01ers.17 CiteSeerX (Giles et al., 1998), consists of papers collected primarily via web crawl, without integrating metadata provided by sources outside of the PDF. Although citation contexts are no longer available through CiteSeerX, the RefSeer dataset (Huang et al., 2015)18 is a dataset of short citation context snippets derived from 1.0M papers from CiteSeerX. More recently, Saier and F\u00a8 arber (2019) introduce a corpus built using 1.0M arXiv publications. They use LATEX source to extract text, citation spans and bibliography entries, which are linked to papers in the Microsoft Academic Graph. The citation context they provide are extracted snippets and no bibliography parses are provided. An updated version of this dataset (Saier and F\u00a8 arber, 2020) released concurrently with this work now includes full text. Compared with these resources, S2ORC represents a signi\ufb01cantly larger dataset of linked papers covering broad domains of science by leveraging PDF parsing in addition to LATEX source. S2ORC also provides clean full text for text mining and NLP needs with additional enhancements such as annotations of table and \ufb01gure references and captions. S2ORC\u2019s wealth of metadata and structured text allows it to be \ufb02exibly adapted to a variety of downstream tasks. 
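To make the citation-context paper-embedding recipe from §6 concrete, a minimal sketch with gensim is shown below: inline citation spans are replaced with the cited paper's identifier, and a skip-gram word2vec model then learns a vector for each identifier. The corpus here is a toy placeholder, not S2ORC data.

```python
from gensim.models import Word2Vec

# Toy citation contexts in which each inline citation span has been replaced
# by the cited paper's identifier (placeholder ids, not real S2ORC ids).
contexts = [
    ["we", "adopt", "the", "parser", "of", "PAPER_1234", "for", "preprocessing"],
    ["PAPER_1234", "and", "PAPER_5678", "both", "study", "citation", "recommendation"],
]

model = Word2Vec(sentences=contexts, vector_size=100, sg=1, window=5, min_count=1)
paper_embedding = model.wv["PAPER_1234"]  # usable as a paper representation
```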
8" + } + ], + "Arman Cohan": [ + { + "url": "http://arxiv.org/abs/2004.07180v4", + "title": "SPECTER: Document-level Representation Learning using Citation-informed Transformers", + "abstract": "Representation learning is a critical ingredient for natural language\nprocessing systems. Recent Transformer language models like BERT learn powerful\ntextual representations, but these models are targeted towards token- and\nsentence-level training objectives and do not leverage information on\ninter-document relatedness, which limits their document-level representation\npower. For applications on scientific documents, such as classification and\nrecommendation, the embeddings power strong performance on end tasks. We\npropose SPECTER, a new method to generate document-level embedding of\nscientific documents based on pretraining a Transformer language model on a\npowerful signal of document-level relatedness: the citation graph. Unlike\nexisting pretrained language models, SPECTER can be easily applied to\ndownstream applications without task-specific fine-tuning. Additionally, to\nencourage further research on document-level models, we introduce SciDocs, a\nnew evaluation benchmark consisting of seven document-level tasks ranging from\ncitation prediction, to document classification and recommendation. We show\nthat SPECTER outperforms a variety of competitive baselines on the benchmark.", + "authors": "Arman Cohan, Sergey Feldman, Iz Beltagy, Doug Downey, Daniel S. Weld", + "published": "2020-04-15", + "updated": "2020-05-20", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "main_content": "Introduction As the pace of scienti\ufb01c publication continues to increase, Natural Language Processing (NLP) tools that help users to search, discover and understand the scienti\ufb01c literature have become critical. In recent years, substantial improvements in NLP tools have been brought about by pretrained neural language models (LMs) (Radford et al., 2018; Devlin et al., 2019; Yang et al., 2019). While such models are widely used for representing individual words \u2217Equal contribution 1 https://github.com/allenai/specter or sentences, extensions to whole-document embeddings are relatively underexplored. Likewise, methods that do use inter-document signals to produce whole-document embeddings (Tu et al., 2017; Chen et al., 2019) have yet to incorporate stateof-the-art pretrained LMs. Here, we study how to leverage the power of pretrained language models to learn embeddings for scienti\ufb01c documents. A paper\u2019s title and abstract provide rich semantic content about the paper, but, as we show in this work, simply passing these textual \ufb01elds to an \u201coff-the-shelf\u201d pretrained language model\u2014even a state-of-the-art model tailored to scienti\ufb01c text like the recent SciBERT (Beltagy et al., 2019)\u2014does not result in accurate paper representations. The language modeling objectives used to pretrain the model do not lead it to output representations that are helpful for document-level tasks such as topic classi\ufb01cation or recommendation. In this paper, we introduce a new method for learning general-purpose vector representations of scienti\ufb01c documents. 
Our system, SPECTER,2 incorporates inter-document context into the Transformer (Vaswani et al., 2017) language models (e.g., SciBERT (Beltagy et al., 2019)) to learn document representations that are effective across a wide-variety of downstream tasks, without the need for any task-speci\ufb01c \ufb01ne-tuning of the pretrained language model. We speci\ufb01cally use citations as a naturally occurring, inter-document incidental supervision signal indicating which documents are most related and formulate the signal into a triplet-loss pretraining objective. Unlike many prior works, at inference time, our model does not require any citation information. This is critical for embedding new papers that have not yet been cited. In experiments, we show that SPECTER\u2019s representations substantially outperform the state2SPECTER: Scienti\ufb01c Paper Embeddings using Citationinformed TransformERs arXiv:2004.07180v4 [cs.CL] 20 May 2020 \fof-the-art on a variety of document-level tasks, including topic classi\ufb01cation, citation prediction, and recommendation. As an additional contribution of this work, we introduce and release SCIDOCS3 , a novel collection of data sets and an evaluation suite for documentlevel embeddings in the scienti\ufb01c domain. SCIDOCS covers seven tasks, and includes tens of thousands of examples of anonymized user signals of document relatedness. We also release our training set (hundreds of thousands of paper titles, abstracts and citations), along with our trained embedding model and its associated code base. 2 Model 2.1 Overview Our goal is to learn task-independent representations of academic papers. Inspired by the recent success of pretrained Transformer language models across various NLP tasks, we use the Transformer model architecture as basis of encoding the input paper. Existing LMs such as BERT, however, are primarily based on masked language modeling objective, only considering intra-document context and do not use any inter-document information. This limits their ability to learn optimal document representations. To learn high-quality documentlevel representations we propose using citations as an inter-document relatedness signal and formulate it as a triplet loss learning objective. We then pretrain the model on a large corpus of citations using this objective, encouraging it to output representations that are more similar for papers that share a citation link than for those that do not. We call our model SPECTER, which learns Scienti\ufb01c Paper Embeddings using Citation-informed TransformERs. With respect to the terminology used by Devlin et al. (2019), unlike most existing LMs that are \u201c\ufb01ne-tuning based\u201d, our approach results in embeddings that can be applied to downstream tasks in a \u201cfeature-based\u201d fashion, meaning the learned paper embeddings can be easily used as features, with no need for further task-speci\ufb01c \ufb01ne-tuning. In the following, as background information, we brie\ufb02y describe how pretrained LMs can be applied for document representation and then discuss the details of SPECTER. 3https://github.com/allenai/scidocs Transformer (initialized with SciBERT) Related paper (P+) Query paper (PQ) Unrelated paper (P\u2212) Triplet loss =max n\u0010 d \u0000PQ, P+\u0001 \u2212d \u0000PQ, P\u2212\u0001 + m \u0011 , 0 o Figure 1: Overview of SPECTER. 
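For illustration, the triplet objective shown in Figure 1 can be written in a few lines of PyTorch; the margin value of 1 follows the choice reported in §2.3 below, and PyTorch's built-in nn.TripletMarginLoss(margin=1.0, p=2) implements the same formulation.

```python
import torch

def triplet_margin_loss(query, positive, negative, margin: float = 1.0):
    # L2 distances between pooled paper embeddings of the query, cited (positive),
    # and uncited (negative) papers, hinged at the margin.
    d_pos = torch.norm(query - positive, p=2, dim=-1)
    d_neg = torch.norm(query - negative, p=2, dim=-1)
    return torch.clamp(d_pos - d_neg + margin, min=0.0).mean()
```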
2.2 Background: Pretrained Transformers Recently, pretrained Transformer networks have demonstrated success on various NLP tasks (Radford et al., 2018; Devlin et al., 2019; Yang et al., 2019; Liu et al., 2019); we use these models as the foundation for SPECTER. Speci\ufb01cally, we use SciBERT (Beltagy et al., 2019) which is an adaptation of the original BERT (Devlin et al., 2019) architecture to the scienti\ufb01c domain. The BERT model architecture (Devlin et al., 2019) uses multiple layers of Transformers (Vaswani et al., 2017) to encode the tokens in a given input sequence. Each layer consists of a self-attention sublayer followed by a feedforward sublayer. The \ufb01nal hidden state associated with the special [CLS] token is usually called the \u201cpooled output\u201d, and is commonly used as an aggregate representation of the sequence. Document Representation Our goal is to represent a given paper P as a dense vector v that best represents the paper and can be used in downstream tasks. SPECTER builds embeddings from the title and abstract of a paper. Intuitively, we would expect these \ufb01elds to be suf\ufb01cient to produce accurate embeddings, since they are written to provide a succinct and comprehensive summary of the paper.4 As such, we encode the concatenated title and abstract using a Transformer LM (e.g., SciBERT) and take the \ufb01nal representation of the [CLS] token as the output representation of the paper:5 v = Transformer(input)[CLS], (1) where Transformer is the Transformer\u2019s forward function, and input is the concatenation of the [CLS] token and WordPieces (Wu et al., 2016) of the title and abstract of a paper, separated by 4We also experimented with additional \ufb01elds such as venues and authors but did not \ufb01nd any empirical advantage in using those (see \u00a76). See \u00a77 for a discussion of using the full text of the paper as input. 5It is also possible to encode title and abstracts individually and then concatenate or combine them to get the \ufb01nal embedding. However, in our experiments this resulted in sub-optimal performance. \fthe [SEP] token. We use SciBERT as our model initialization as it is optimized for scienti\ufb01c text, though our formulation is general and any Transformer language model instead of SciBERT. Using the above method with an \u201coff-the-shelf\u201d SciBERT does not take global inter-document information into account. This is because SciBERT, like other pretrained language models, is trained via language modeling objectives, which only predict words or sentences given their in-document, nearby textual context. In contrast, we propose to incorporate citations into the model as a signal of inter-document relatedness, while still leveraging the model\u2019s existing strength in modeling language. 2.3 Citation-Based Pretraining Objective A citation from one document to another suggests that the documents are related. To encode this relatedness signal into our representations, we design a loss function that trains the Transformer model to learn closer representations for papers when one cites the other, and more distant representations otherwise. The high-level overview of the model is shown in Figure 1. In particular, each training instance is a triplet of papers: a query paper PQ, a positive paper P+ and a negative paper P\u2212. The positive paper is a paper that the query paper cites, and the negative paper is a paper that is not cited by the query paper (but that may be cited by P+). 
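As a sketch of Equation 1 using the Hugging Face transformers library: the title and abstract are passed as a sentence pair, so the tokenizer inserts the [SEP] separator, and the final hidden state of the [CLS] token is taken as the paper representation. This snippet only illustrates the encoding step with off-the-shelf SciBERT weights; SPECTER additionally pretrains those weights on the citation objective described next.

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("allenai/scibert_scivocab_uncased")
model = AutoModel.from_pretrained("allenai/scibert_scivocab_uncased")

title = "An example paper title"
abstract = "An example abstract describing the paper."
inputs = tokenizer(title, abstract, return_tensors="pt", truncation=True, max_length=512)

with torch.no_grad():
    outputs = model(**inputs)
paper_vector = outputs.last_hidden_state[:, 0, :]  # final [CLS] state as the paper embedding
```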
We then train the model using the following triplet margin loss function: L = max n\u0010 d \u0000PQ, P+\u0001 \u2212d \u0000PQ, P\u2212\u0001 + m \u0011 , 0 o (2) where d is a distance function and m is the loss margin hyperparameter (we empirically choose m = 1). Here, we use the L2 norm distance: d(PA, PB) = \u2225vA \u2212vB\u22252, where vA is the vector corresponding to the pooled output of the Transformer run on paper A (Equation 1).6 Starting from the trained SciBERT model, we pretrain the Transformer parameters on the citation objective to learn paper representations that capture document relatedness. 2.4 Selecting Negative Distractors The choice of negative example papers P\u2212is important when training the model. We consider two sets of negative examples: the \ufb01rst set simply consists of randomly selected papers from the corpus. 6We also experimented with other distance functions (e..g, normalized cosine), but they underperformed the L2 loss. Given a query paper, intuitively we would expect the model to be able to distinguish between cited papers, and uncited papers sampled randomly from the entire corpus. This inductive bias has been also found to be effective in content-based citation recommendation applications (Bhagavatula et al., 2018). But, random negatives may be easy for the model to distinguish from the positives. To provide a more nuanced training signal, we augment the randomly drawn negatives with a more challenging second set of negative examples. We denote as \u201chard negatives\u201d the papers that are not cited by the query paper, but are cited by a paper cited by the query paper, i.e. if P1 cite \u2212 \u2212 \u2192P2 and P2 cite \u2212 \u2212 \u2192P3 but P1 \u0338 cite \u2212 \u2212 \u2192P3, then P3 is a candidate hard negative example for P1. We expect the hard negatives to be somewhat related to the query paper, but typically less related than the cited papers. As we show in our experiments (\u00a76), including hard negatives results in more accurate embeddings compared to using random negatives alone. 2.5 Inference At inference time, the model receives one paper, P, and it outputs the SPECTER\u2019s Transfomer pooled output activation as the paper representation for P (Equation 1). We note that for inference, SPECTER requires only the title and abstract of the given input paper; the model does not need any citation information about the input paper. This means that SPECTER can produce embeddings even for new papers that have yet to be cited, which is critical for applications that target recent scienti\ufb01c papers. 3 SCIDOCS Evaluation Framework Previous evaluations of scienti\ufb01c document representations in the literature tend to focus on small datasets over a limited set of tasks, and extremely high (99%+) AUC scores are already possible on these data for English documents (Chen et al., 2019; Wang et al., 2019). New, larger and more diverse benchmark datasets are necessary. Here, we introduce a new comprehensive evaluation framework to measure the effectiveness of scienti\ufb01c paper embeddings, which we call SCIDOCS. The framework consists of diverse tasks, ranging from citation prediction, to prediction of user activity, to document classi\ufb01cation and paper recommendation. Note that SPECTER will not be further \ufb01ne-tuned on any of the tasks; we simply plug in the embeddings as features for each task. Below, we describe each of the \ftasks in detail and the evaluation data associated with it. 
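A minimal sketch of the hard-negative selection described in §2.4 is shown below, treating the citation graph as a mapping from a paper id to the set of ids it cites: papers cited by the query's citations, but not by the query itself, are candidate hard negatives.

```python
import random

def sample_hard_negatives(query_id, cites, k=2):
    cited = cites.get(query_id, set())
    candidates = set()
    for cited_id in cited:
        candidates |= cites.get(cited_id, set())
    candidates -= cited            # exclude papers the query cites directly
    candidates.discard(query_id)   # and the query itself
    return random.sample(sorted(candidates), min(k, len(candidates)))

cites = {"Q": {"A", "B"}, "A": {"C", "D"}, "B": {"D", "E"}}
print(sample_hard_negatives("Q", cites))  # e.g. ['C', 'E'] or ['D', 'E']
```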
In addition to our training data, we release all the datasets associated with the evaluation tasks. 3.1 Document Classi\ufb01cation An important test of a document-level embedding is whether it is predictive of the class of the document. Here, we consider two classi\ufb01cation tasks in the scienti\ufb01c domain: MeSH Classi\ufb01cation In this task, the goals is to classify scienti\ufb01c papers according to their Medical Subject Headings (MeSH) (Lipscomb, 2000).7 We construct a dataset consisting of 23K academic medical papers, where each paper is assigned one of 11 top-level disease classes such as cardiovascular diseases, diabetes, digestive diseases derived from the MeSH vocabulary. The most populated category is Neoplasms (cancer) with 5.4K instances (23.3% of the total dataset) while the category with least number of samples is Hepatitis (1.7% of the total dataset). We follow the approach of Feldman et al. (2019) in mapping the MeSH vocabulary to the disease classes. Paper Topic Classi\ufb01cation This task is predicting the topic associated with a paper using the prede\ufb01ned topic categories of the Microsoft Academic Graph (MAG) (Sinha et al., 2015)8. MAG provides a database of papers, each tagged with a list of topics. The topics are organized in a hierarchy of 5 levels, where level 1 is the most general and level 5 is the most speci\ufb01c. For our evaluation, we derive a document classi\ufb01cation dataset from the level 1 topics, where a paper is labeled by its corresponding level 1 MAG topic. We construct a dataset of 25K papers, almost evenly split over the 19 different classes of level 1 categories in MAG. 3.2 Citation Prediction As argued above, citations are a key signal of relatedness between papers. We test how well different paper representations can reproduce this signal through citation prediction tasks. In particular, we focus on two sub-tasks: predicting direct citations, and predicting co-citations. We frame these as ranking tasks and evaluate performance using MAP and nDCG, standard ranking metrics. 7https://www.nlm.nih.gov/mesh/meshhome. html 8https://academic.microsoft.com/ Direct Citations In this task, the model is asked to predict which papers are cited by a given query paper from a given set of candidate papers. The evaluation dataset includes approximately 30K total papers from a held-out pool of papers, consisting of 1K query papers and a candidate set of up to 5 cited papers and 25 (randomly selected) uncited papers. The task is to rank the cited papers higher than the uncited papers. For each embedding method, we require only comparing the L2 distance between the raw embeddings of the query and the candidates, without any additional trainable parameters. Co-Citations This task is similar to the direct citations but instead of predicting a cited paper, the goal is to predict a highly co-cited paper with a given paper. Intuitively, if papers A and B are cited frequently together by several papers, this shows that the papers are likely highly related and a good paper representation model should be able to identify these papers from a given candidate set. The dataset consists of 30K total papers and is constructed similar to the direct citations task. 3.3 User Activity The embeddings for similar papers should be close to each other; we use user activity as a proxy for identifying similar papers and test the model\u2019s ability to recover this information. 
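For illustration, the ranking protocol used by the citation-prediction and user-activity tasks can be sketched as below: candidates are ordered purely by L2 distance to the query embedding, with no trainable parameters, and ranking metrics are computed over that order. The embeddings are random placeholders, and only a MAP-style average precision is shown.

```python
import numpy as np

def rank_by_l2(query_vec, candidate_vecs):
    distances = np.linalg.norm(candidate_vecs - query_vec, axis=1)
    return np.argsort(distances)  # closest candidates first

def average_precision(ranked_relevance):
    hits, score = 0, 0.0
    for rank, relevant in enumerate(ranked_relevance, start=1):
        if relevant:
            hits += 1
            score += hits / rank
    return score / max(hits, 1)

query = np.random.rand(768)
candidates = np.random.rand(30, 768)           # e.g. 5 cited + 25 uncited papers
relevance = np.zeros(30, dtype=bool)
relevance[:5] = True
order = rank_by_l2(query, candidates)
print(average_precision(relevance[order]))
```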
Multiple users consuming the same items as one another is a classic relatedness signal and forms the foundation for recommender systems and other applications (Schafer et al., 2007). In our case, we would expect that when users look for academic papers, the papers they view in a single browsing session tend to be related. Thus, accurate paper embeddings should, all else being equal, be relatively more similar for papers that are frequently viewed in the same session than for other papers. To build benchmark datasets to test embeddings on user activity, we obtained logs of user sessions from a major academic search engine. We de\ufb01ne the following two tasks on which we build benchmark datasets to test embeddings: Co-Views Our co-views dataset consists of approximately 30K papers. To construct it, we take 1K random papers that are not in our train or development set and associate with each one up to 5 frequently co-viewed papers and 25 randomly selected papers (similar to the approach for citations). Then, we require the embedding model to rank the \fco-viewed papers higher than the random papers by comparing the L2 distances of raw embeddings. We evaluate performance using standard ranking metrics, nDCG and MAP. Co-Reads If the user clicks to access the PDF of a paper from the paper description page, this is a potentially stronger sign of interest in the paper. In such a case we assume the user will read at least parts of the paper and refer to this as a \u201cread\u201d action. Accordingly, we de\ufb01ne a \u201cco-reads\u201d task and dataset analogous to the co-views dataset described above. This dataset is also approximately 30K papers. 3.4 Recommendation In the recommendation task, we evaluate the ability of paper embeddings to boost performance in a production recommendation system. Our recommendation task aims to help users navigate the scienti\ufb01c literature by ranking a set of \u201csimilar papers\u201d for a given paper. We use a dataset of user clickthrough data for this task which consists of 22K clickthrough events from a public scholarly search engine. We partitioned the examples temporally into train (20K examples), validation (1K), and test (1K) sets. As is typical in clickthrough data on ranked lists, the clicks are biased toward the top of original ranking presented to the user. To counteract this effect, we computed propensity scores using a swap experiment (Agarwal et al., 2019). The propensity scores give, for each position in the ranked list, the relative frequency that the position is over-represented in the data due to exposure bias. We can then compute de-biased evaluation metrics by dividing the score for each test example by the propensity score for the clicked position. We report propensity-adjusted versions of the standard ranking metrics Precision@1 ( \u02c6 P@1) and Normalized Discounted Cumulative Gain ( \u02c6 nDCG). We test different embeddings on the recommendation task by including cosine embedding distance9 as a feature within an existing recommendation system that includes several other informative features (title/author similarity, reference and citation overlap, etc.). Thus, the recommendation experiments measure whether the embeddings can boost the performance of a strong baseline system on an end task. For SPECTER, we also perform an online A/B test to measure whether its advantages 9Embeddings are L2 normalized and in this case cosine distance is equivalent to L2 distance. 
on the of\ufb02ine dataset translate into improvements on the online recommendation task (\u00a75). 4 Experiments Training Data To train our model, we use a subset of the Semantic Scholar corpus (Ammar et al., 2018) consisting of about 146K query papers (around 26.7M tokens) with their corresponding outgoing citations, and we use an additional 32K papers for validation. For each query paper we construct up to 5 training triples comprised of a query, a positive, and a negative paper. The positive papers are sampled from the direct citations of the query, while negative papers are chosen either randomly or from citations of citations (as discussed in \u00a72.4). We empirically found it helpful to use 2 hard negatives (citations of citations) and 3 easy negatives (randomly selected papers) for each query paper. This process results in about 684K training triples and 145K validation triples. Training and Implementation We implement our model in AllenNLP (Gardner et al., 2018). We initialize the model from SciBERT pretrained weights (Beltagy et al., 2019) since it is the stateof-the-art pretrained language model on scienti\ufb01c text. We continue training all model parameters on our training objective (Equation 2). We perform minimal tuning of our model\u2019s hyperparameters based on the performance on the validation set, while baselines are extensively tuned. Based on initial experiments, we use a margin m=1 for the triplet loss. For training, we use the Adam optimizer (Kingma and Ba, 2014) following the suggested hyperparameters in Devlin et al. (2019) (LR: 2e-5, Slanted Triangular LR scheduler10 (Howard and Ruder, 2018) with number of train steps equal to training instances and cut fraction of 0.1). We train the model on a single Titan V GPU (12G memory) for 2 epochs, with batch size of 4 (the maximum that \ufb01t in our GPU memory) and use gradient accumulation for an effective batch size of 32. Each training epoch takes approximately 1-2 days to complete on the full dataset. We release our code and data to facilitate reproducibility. 11 Task-Speci\ufb01c Model Details For the classi\ufb01cation tasks, we used a linear SVM where embedding vectors were the only features. The C hyperparameter was tuned via a held-out validation set. 10Learning rate linear warmup followed by linear decay. 11https://github.com/allenai/specter \fFor the recommendation tasks, we use a feedforward ranking neural network that takes as input ten features designed to capture the similarity between each query and candidate paper, including the cosine similarity between the query and candidate embeddings and manually-designed features computed from the papers\u2019 citations, titles, authors, and publication dates. Baseline Methods Our work falls into the intersection of textual representation, citation mining, and graph learning, and we evaluate against stateof-the-art baselines from each of these areas. We compare with several strong textual models: SIF (Arora et al., 2017), a method for learning document representations by removing the \ufb01rst principal component of aggregated word-level embeddings which we pretrain on scienti\ufb01c text; SciBERT (Beltagy et al., 2019) a state-of-the-art pretrained Transformer LM for scienti\ufb01c text; and Sent-BERT (Reimers and Gurevych, 2019), a model that uses negative sampling to tune BERT for producing optimal sentence embeddings. 
We also compare with Citeomatic (Bhagavatula et al., 2018), a closely related paper representation model for citation prediction which trains content-based representations with citation graph information via dynamically sampled triplets, and SGC (Wu et al., 2019a), a state-of-the-art graph-convolutional approach. For completeness, additional baselines are also included; due to space constraints we refer to Appendix A for detailed discussion of all baselines. We tune hyperparameters of baselines to maximize performance on a separate validation set. 5 Results Table 1 presents the main results corresponding to our evaluation tasks (described in \u00a73). Overall, we observe substantial improvements across all tasks with average performance of 80.0 across all metrics on all tasks which is a 3.1 point absolute improvement over the next-best baseline. We now discuss the results in detail. For document classi\ufb01cation, we report macro F1, a standard classi\ufb01cation metric. We observe that the classi\ufb01er performance when trained on our representations is better than when trained on any other baseline. Particularly, on the MeSH (MAG) dataset, we obtain an 86.4 (82.0) F1 score which is about a \u2206= + 2.3 (+1.5) point absolute increase over the best baseline on each dataset respectively. Our evaluation of the learned representations on predicting user activity is shown in the \u201cUser activity\u201d columns of Table 1. SPECTER achieves a MAP score of 83.8 on the co-view task, and 84.5 on coread, improving over the best baseline (Citeomatic in this case) by 2.7 and 4.0 points, respectively. We observe similar trends for the \u201ccitation\u201d and \u201cco-citation\u201d tasks, with our model outperforming virtually all other baselines except for SGC, which has access to the citation graph at training and test time.12 Note that methods like SGC cannot be used in real-world setting to embed new papers that are not cited yet. On the other hand, on cocitation data our method is able to achieve the best results with nDCG of 94.8, improving over SGC with 2.3 points. Citeomatic also performs well on the citation tasks, as expected given that its primary design goal was citation prediction. Nevertheless, our method slightly outperforms Citeomatic on the direct citation task, while substantially outperforming it on co-citations (+2.0 nDCG). Finally, for recommendation task, we observe that SPECTER outperforms all other models on this task as well, with nDCG of 53.9. On the recommendations task, as opposed to previous experiments, the differences in method scores are generally smaller. This is because for this task the embeddings are used along with several other informative features in the ranking model (described under task-speci\ufb01c models in \u00a74), meaning that embedding variants have less opportunity for impact on overall performance. We also performed an online study to evaluate whether SPECTER embeddings offer similar advantages in a live application. We performed an online A/B test comparing our SPECTER-based recommender to an existing production recommender system for similar papers that ranks papers by a textual similarity measure. In a dataset of 4,113 clicks, we found that SPECTER ranker improved clickthrough rate over the baseline by 46.5%, demonstrating its superiority. We emphasize that our citation-based pretraining objective is critical for the performance of SPECTER; removing this and using a vanilla SciBERT results in decreased performance on all tasks. 
Table 1: Results on the SCIDOCS evaluation suite consisting of 7 tasks. Columns are grouped as: Classification (MAG F1, MeSH F1); User activity prediction (Co-View MAP/nDCG, Co-Read MAP/nDCG); Citation prediction (Cite MAP/nDCG, Co-Cite MAP/nDCG); Recommendation (nDCG, P@1); and the average over all metrics.

Model                MAG F1  MeSH F1 | Co-View MAP nDCG | Co-Read MAP nDCG | Cite MAP nDCG | Co-Cite MAP nDCG | Rec. nDCG P@1 | Avg.
Random                4.8     9.4    |  25.2  51.6      |  25.6  51.9      |  25.1  51.5   |  24.9  51.4      |  51.3  16.8   | 32.5
Doc2vec (2014)       66.2    69.2    |  67.8  82.9      |  64.9  81.6      |  65.3  82.2   |  67.1  83.4      |  51.7  16.9   | 66.6
Fasttext-sum (2017)  78.1    84.1    |  76.5  87.9      |  75.3  87.4      |  74.6  88.1   |  77.8  89.6      |  52.5  18.0   | 74.1
SIF (2017)           78.4    81.4    |  79.4  89.4      |  78.2  88.9      |  79.4  90.5   |  80.8  90.9      |  53.4  19.5   | 75.9
ELMo (2018)          77.0    75.7    |  70.3  84.3      |  67.4  82.6      |  65.8  82.6   |  68.5  83.8      |  52.5  18.2   | 69.0
Citeomatic (2018)    67.1    75.7    |  81.1  90.2      |  80.5  90.2      |  86.3  94.1   |  84.4  92.8      |  52.5  17.3   | 76.0
SGC (2019a)          76.8    82.7    |  77.2  88.0      |  75.7  87.5      |  91.6  96.2   |  84.1  92.5      |  52.7  18.2   | 76.9
SciBERT (2019)       79.7    80.7    |  50.7  73.1      |  47.7  71.1      |  48.3  71.7   |  49.7  72.6      |  52.1  17.9   | 59.6
Sent-BERT (2019)     80.5    69.1    |  68.2  83.3      |  64.8  81.3      |  63.5  81.6   |  66.4  82.8      |  51.6  17.1   | 67.5
SPECTER (Ours)       82.0    86.4    |  83.6  91.5      |  84.5  92.4      |  88.3  94.9   |  88.1  94.8      |  53.9  20.0   | 80.0

6 Analysis

In this section, we analyze several design decisions in SPECTER, provide a visualization of its embedding space, and experimentally compare SPECTER's use of fixed embeddings against a fine-tuning approach.

Ablation Study. We start by analyzing how adding or removing metadata fields from the input to SPECTER alters performance. The results are shown in the top four rows of Table 2 (for brevity, here we only report the average of the metrics from each task). We observe that removing the abstract from the textual input and relying only on the title results in a substantial decrease in performance. More surprisingly, adding authors as an input (along with title and abstract) hurts performance. (We experimented with both concatenating authors with the title and abstract and considering them as an additional field; neither was helpful.) One possible explanation is that author names are sparse in the corpus, making it difficult for the model to infer document-level relatedness from them. Another possible reason is that tokenization using WordPieces might be suboptimal for author names: many author names are out-of-vocabulary for SciBERT and thus might be split into sub-words that are shared across names that are not semantically related, leading to noisy correlation. Finally, we find that adding venues slightly decreases performance, except on document classification (which makes sense, as we would expect venues to have high correlation with paper topics). (Venue information in our data came directly from publisher-provided metadata and was not normalized; venue normalization could help improve results.)

Table 2: Ablations. Numbers are averages of metrics for each evaluation task (CLS: classification, USR: user activity, CITE: citation prediction, REC: recommendation, Avg.: average over all tasks and metrics).

Variant              CLS   USR   CITE  REC   Avg.
SPECTER              84.2  88.4  91.5  36.9  80.0
  - abstract         82.2  72.2  73.6  34.5  68.1
  + venue            84.5  88.0  91.2  36.7  79.9
  + author           82.7  72.3  71.0  34.6  67.3
No hard negatives    82.4  85.8  89.8  36.8  78.4
Start w/ BERT-Large  81.7  85.9  87.8  36.1  77.5
The fact that SPECTER does not require inputs like authors or venues makes it applicable in situations where this metadata is not available, such as matching reviewers with anonymized submissions or recommending anonymized preprints (e.g., on OpenReview).

One design decision in SPECTER is to use a set of hard negative distractors in the citation-based fine-tuning objective. The fifth row of Table 2 shows that this is important: using only easy negatives reduces performance on all tasks. While there could be other potential ways to include hard negatives in the model, our simple approach of including citations of citations is effective. The sixth row of the table shows that using a strong general-domain language model (BERT-Large) instead of SciBERT in SPECTER reduces performance considerably. This is reasonable because, unlike BERT-Large, SciBERT is pretrained on scientific text.

Visualization. Figure 2 shows t-SNE (van der Maaten, 2014) projections of our embeddings (SPECTER) compared with the SciBERT baseline for a random set of papers. (Figure 2: t-SNE visualization of paper embeddings and their corresponding MAG topics; panel (a) shows SPECTER, panel (b) shows SciBERT.) When comparing SPECTER embeddings with SciBERT, we observe that our embeddings are better at encoding topical information, as the clusters appear more compact. Further, we see some examples of cross-topic relatedness reflected in the embedding space (e.g., Engineering, Mathematics, and Computer Science are close to each other, while Business and Economics are also close to each other). To quantify the comparison of the visualized embeddings in Figure 2, we apply the DBSCAN clustering algorithm (Ester et al., 1996) to this 2D projection and use the completeness and homogeneity clustering quality measures introduced by Rosenberg and Hirschberg (2007). For the points corresponding to Figure 2, the homogeneity and completeness values for SPECTER are 0.41 and 0.72 respectively, compared with SciBERT's 0.19 and 0.63, a clear improvement in separating topics using the projected embeddings.
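As a rough illustration of this quantitative comparison, the sketch below projects embeddings to 2D with t-SNE, clusters the projection with DBSCAN, and scores the clusters against topic labels using homogeneity and completeness from scikit-learn. The specific hyperparameters (perplexity, eps, min_samples) are assumptions for illustration, not the values used in the paper.

```python
import numpy as np
from sklearn.manifold import TSNE
from sklearn.cluster import DBSCAN
from sklearn.metrics import homogeneity_score, completeness_score

def cluster_quality(embeddings: np.ndarray, topic_labels: np.ndarray):
    """Project embeddings to 2D, cluster the projection, and score it against topics."""
    # 2D t-SNE projection of the (n_papers, d) embedding matrix.
    proj = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(embeddings)

    # Density-based clustering on the projected points; eps and min_samples
    # are illustrative and would need tuning for a real embedding space.
    clusters = DBSCAN(eps=2.0, min_samples=10).fit_predict(proj)

    # Higher is better for both: homogeneity asks that each cluster contain a
    # single topic, completeness that each topic fall within a single cluster.
    return homogeneity_score(topic_labels, clusters), completeness_score(topic_labels, clusters)
```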
Comparison with Task-Specific Fine-Tuning. While the fact that SPECTER does not require fine-tuning makes its paper embeddings less costly to use, the best performance from pretrained Transformers is often obtained when the models are fine-tuned directly on each end task. We experiment with fine-tuning SciBERT on our tasks and find this to be generally inferior to using our fixed representations from SPECTER. Specifically, we fine-tune SciBERT directly on task-specific signals instead of citations. To fine-tune on task-specific data (e.g., user activity), we used a dataset of co-views with 65K query papers, co-reads with 14K query papers, and co-citations (instead of direct citations) with 83K query papers. As the end tasks are ranking tasks, for all datasets we construct up to 5 triplets and fine-tune the model using a triplet ranking loss. The positive papers are sampled from the most co-viewed (co-read, or co-cited) papers corresponding to the query paper. We also include both easy and hard distractors, as when training SPECTER (for hard negatives we choose the papers with the lowest non-zero co-view, co-read, or co-citation counts). We also consider training jointly on all task-specific training data sources in a multitask training process, where the model samples training triplets from a distribution over the sources. As illustrated in Table 3, without any additional final task-specific fine-tuning, SPECTER still outperforms a SciBERT model fine-tuned on the end tasks as well as their multitask combination, further demonstrating the effectiveness and versatility of SPECTER embeddings. (We also experimented with further task-specific fine-tuning of SPECTER on the end tasks but did not observe additional improvements.)

Table 3: Comparison with task-specific fine-tuning (averages of metrics per task).

Training signal                      CLS   USR   CITE  REC   All
SPECTER                              84.2  88.4  91.5  36.9  80.0
SciBERT fine-tuned on co-view        83.0  84.2  84.1  36.4  76.0
SciBERT fine-tuned on co-read        82.3  85.4  86.7  36.3  77.1
SciBERT fine-tuned on co-citation    82.9  84.3  85.2  36.6  76.4
SciBERT fine-tuned on multitask      83.3  86.1  88.2  36.0  78.0

7 Related Work

Recent representation learning methods in NLP rely on training large neural language models on unsupervised data (Peters et al., 2018; Radford et al., 2018; Devlin et al., 2019; Beltagy et al., 2019; Liu et al., 2019). While successful at many sentence- and token-level tasks, our focus is on using these models for document-level representation learning, which has remained relatively under-explored. There have been other efforts in document representation learning, such as extensions of word vectors to documents (Le and Mikolov, 2014; Ganesh et al., 2016; Liu et al., 2017; Wu et al., 2018; Gysel et al., 2017), convolution-based methods (Liu et al., 2018; Zamani et al., 2018), and variational autoencoders (Holmer and Marfurt, 2018; Wang et al., 2019). Relevant to document embedding, sentence embedding is a relatively well-studied area of research. Successful approaches include seq2seq models (Kiros et al., 2015), BiLSTM Siamese networks (Williams et al., 2018), leveraging supervised data from other corpora (Conneau et al., 2017), using discourse relations (Nie et al., 2019), and BERT-based methods (Reimers and Gurevych, 2019). Unlike our proposed method, the majority of these approaches do not consider any notion of inter-document relatedness when embedding documents. Other relevant work combines textual features with network structure (Tu et al., 2017; Zhang et al., 2018; Bhagavatula et al., 2018; Shen et al., 2018; Chen et al., 2019; Wang et al., 2019). These works typically do not leverage recent pretrained contextual representations and, with a few exceptions such as the recent work by Wang et al. (2019), they cannot generalize to unseen documents as our SPECTER approach does. Context-based citation recommendation is another related application where models rely on citation contexts (Jeong et al., 2019) to make predictions. These works are orthogonal to ours, as the input to our model is just the paper title and abstract. Another related line of work is graph-based representation learning methods (Bruna et al., 2014; Kipf and Welling, 2017; Hamilton et al., 2017a,b; Wu et al., 2019a,b). Here, we compare to a graph representation learning model, SGC (Simple Graph Convolution) (Wu et al., 2019a), which is a state-of-the-art graph convolution approach for representation learning. SPECTER uses pretrained language models in combination with graph-based citation signals, which enables it to outperform the graph-based approaches in our experiments. SPECTER embeddings are based on only the title and abstract of the paper. Adding the full text of the paper would provide a more complete picture of the paper's content and could improve accuracy (Cohen et al., 2010; Lin, 2008; Schuemie et al., 2004).
However, the full text of many academic papers is not freely available. Further, modern language models have strict memory limits on input size, which means new techniques would be required in order to leverage the entirety of the paper within the models. Exploring how to use the full paper text within SPECTER is an item of future work. Finally, one pain point in academic paper recommendation research has been a lack of publicly available datasets (Chen and Lee, 2018; Kanakia et al., 2019). To address this challenge, we release SCIDOCS, our evaluation benchmark, which includes an anonymized clickthrough dataset from an online recommendations system." + }, + { + "url": "http://arxiv.org/abs/1909.04054v2", + "title": "Pretrained Language Models for Sequential Sentence Classification", + "abstract": "As a step toward better document-level understanding, we explore\nclassification of a sequence of sentences into their corresponding categories,\na task that requires understanding sentences in context of the document. Recent\nsuccessful models for this task have used hierarchical models to contextualize\nsentence representations, and Conditional Random Fields (CRFs) to incorporate\ndependencies between subsequent labels. In this work, we show that pretrained\nlanguage models, BERT (Devlin et al., 2018) in particular, can be used for this\ntask to capture contextual dependencies without the need for hierarchical\nencoding nor a CRF. Specifically, we construct a joint sentence representation\nthat allows BERT Transformer layers to directly utilize contextual information\nfrom all words in all sentences. Our approach achieves state-of-the-art results\non four datasets, including a new dataset of structured scientific abstracts.", + "authors": "Arman Cohan, Iz Beltagy, Daniel King, Bhavana Dalvi, Daniel S. Weld", + "published": "2019-09-09", + "updated": "2019-09-22", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "main_content": "Introduction

Inspired by the importance of document-level natural language understanding, we explore classification of a sequence of sentences into their respective roles or functions. For example, one might classify sentences of scientific abstracts according to rhetorical roles (e.g., Introduction, Method, Result, Conclusion, etc.). We refer to this task as Sequential Sentence Classification (SSC), because the meaning of a sentence in a document is often informed by context from neighboring sentences. Recently, there has been a surge of new models for contextualized language representation, resulting in substantial improvements on many natural language processing tasks. These models use multiple layers of LSTMs (Hochreiter and Schmidhuber, 1997) or Transformers (Vaswani et al., 2017), and are pretrained on unsupervised text with language modeling objectives such as next word prediction (Peters et al., 2018; Radford et al., 2018) or masked token prediction (Devlin et al., 2018; Dong et al., 2019). BERT is among the most successful models for many token- and sentence-level tasks (Devlin et al., 2018; Liu et al., 2019). In addition to a masked token objective, BERT optimizes for next sentence prediction, allowing it to capture sentential context. These objectives allow BERT to learn some document-level context through pretraining. In this work we explore the use of BERT for SSC.
For this task, prior models are primarily based on hierarchical encoders over both words and sentences, often using a Conditional Random Field (CRF) (Lafferty et al., 2001) layer to capture document-level context (Cheng and Lapata, 2016; Jin and Szolovits, 2018; Chang et al., 2019). These models encode and contextualize sentences in two consecutive steps. In contrast, we propose an input representation which allows the Transformer layers in BERT to directly leverage contextualized representations of all words in all sentences, while still utilizing the pretrained weights from BERT. Specifically, we represent all the sentences in the document as one long sequence of words with special delimiter tokens in between them. We use the contextualized representations of the delimiter tokens to classify each sentence. The Transformer layers allow the model to fine-tune the weights of these special tokens to encode the contextual information necessary for correctly classifying sentences in context. We apply our model to two instances of the SSC task in scientific text that can benefit from better contextualized representations of sentences: scientific abstract sentence classification and extractive summarization of scientific documents.

Our contributions are as follows: (i) We present a BERT-based approach for SSC that jointly encodes all sentences in the sequence, allowing the model to better utilize document-level context. (ii) We introduce and release CSABSTRUCT, a new dataset of manually annotated sentences from computer science abstracts. Unlike biomedical abstracts, which are written with explicit structure, computer science abstracts are free-form and exhibit a variety of writing styles, making our dataset more challenging than existing datasets for this task. (iii) We achieve state-of-the-art (SOTA) results on multiple datasets of two SSC tasks: scientific abstract sentence classification and extractive summarization of scientific documents. (Code and data: https://github.com/allenai/sequential_sentence_classification)

2 Model

In Sequential Sentence Classification (SSC), the goal is to classify each sentence in a sequence of n sentences in a document. We propose an approach for SSC based on BERT to encode sentences in context. The BERT model architecture consists of multiple layers of Transformers and uses a specific input representation, with two special tokens, [CLS] and [SEP], added at the beginning of the input sentence pair and between the sentences (or bags of sentences), respectively. The pretrained multi-layer Transformer architecture allows the BERT model to contextualize the input over the entire sequence, allowing it to capture the information necessary for correct classification. To utilize this for the SSC task, we propose a special input representation without any additional complex architecture augmentation. Our approach allows the model to better incorporate context from all surrounding sentences. Figure 1 gives an overview of our model. (Figure 1: Overview of the model; each [SEP] token is mapped to a contextualized representation of its sentence and then used, through an MLP, to predict a label y_i for sentence i.) Given the sequence of sentences S = ⟨S1, ..., Sn⟩, we concatenate the first sentence with BERT's delimiter, [SEP], and repeat this process for each sentence, forming a large sequence containing all tokens from all sentences. After inserting the standard [CLS] token at the beginning of this sequence, we feed it into BERT. Unlike BERT, which uses the [CLS] token for classification, we use the encodings of the [SEP] tokens to classify each sentence. We use a multi-layer feedforward network on top of the [SEP] representation of each sentence to classify it into its corresponding category. (It is also possible to add another special token, e.g., [CLS], at the beginning of each sentence and perform classification on that token; empirically, we found the two approaches to perform similarly.)
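A minimal sketch of this joint input representation using the Hugging Face transformers library is shown below. It illustrates the idea (concatenating sentences with [SEP] and classifying from the [SEP] positions) rather than the authors' AllenNLP implementation; the linear classifier head and the label count are placeholder assumptions.

```python
import torch
from transformers import BertTokenizerFast, BertModel

tokenizer = BertTokenizerFast.from_pretrained("allenai/scibert_scivocab_uncased")
bert = BertModel.from_pretrained("allenai/scibert_scivocab_uncased")
classifier = torch.nn.Linear(bert.config.hidden_size, 5)  # e.g., 5 rhetorical roles

def classify_sentences(sentences):
    """Jointly encode all sentences and classify each from its [SEP] vector."""
    # Builds "[CLS] s1 [SEP] s2 [SEP] ... sn [SEP]" (the tokenizer adds [CLS]
    # at the start and a final [SEP] at the end).
    text = " [SEP] ".join(sentences)
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    hidden = bert(**enc).last_hidden_state[0]            # (seq_len, hidden)

    sep_positions = (enc["input_ids"][0] == tokenizer.sep_token_id).nonzero(as_tuple=True)[0]
    sep_states = hidden[sep_positions]                   # one vector per sentence
    return classifier(sep_states).argmax(dim=-1)         # predicted label per sentence
```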
Intuitively, through BERT's pretraining, the [SEP] tokens learn sentence structure and relations between consecutive sentences (through the next sentence objective). The model is then fine-tuned on task-specific training data, where most of the model parameters are already pretrained using BERT and only a thin task-specific network on top is needed. (Following the terminology of Howard and Ruder (2018), "fine-tuning" refers to "training" a model that was previously pretrained; we use both terms interchangeably.) During fine-tuning, the model learns appropriate weights for the [SEP] tokens that allow them to capture the contextual information needed for classifying sentences in the sequence. This way of representing a sequence of sentences allows the self-attention layers of BERT to directly leverage contextual information from all words in all sentences, while still utilizing the pretrained weights from BERT. This is in contrast to existing hierarchical models, which encode and then contextualize sentences in two consecutive steps. (It is possible to add a CRF layer or another contextualizing layer on top of the [SEP] tokens in our model, but empirically we did not find this addition to be helpful; one explanation is that the self-attention layers of our model already capture the necessary contextual information from the document.)

Handling long sequences. The released BERT pretrained weights support sequences of up to 512 wordpieces (Wu et al., 2016). This is limiting for our model on datasets where documents are long, as we represent all sentences in one single sequence. However, the semantics of a sentence are usually more dependent on local context than on all sentences in a long document. Therefore, we set a threshold on the number of sentences in each sequence and recursively bisect the document until each split has fewer sentences than the specified threshold. At a limit of 10 sentences, only one division is needed to fit nearly all examples for the abstract sentence classification datasets. A limitation of this approach is that sentences on the edge of a split could lose context from the previous (or next) split. We leave this limitation to future work.
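A small, self-contained sketch of this recursive bisection strategy is shown below; the threshold of 10 sentences matches the limit mentioned above, while the function name and return format are illustrative assumptions.

```python
from typing import List

def bisect_document(sentences: List[str], max_sents: int = 10) -> List[List[str]]:
    """Recursively split a list of sentences in half until every split has
    at most `max_sents` sentences, preserving sentence order."""
    if len(sentences) <= max_sents:
        return [sentences]
    mid = len(sentences) // 2
    return (bisect_document(sentences[:mid], max_sents)
            + bisect_document(sentences[mid:], max_sents))

# Example: a 23-sentence document becomes splits of sizes [5, 6, 6, 6].
splits = bisect_document([f"sentence {i}" for i in range(23)])
print([len(s) for s in splits])
```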
3 Tasks and Datasets

This section describes our tasks and datasets, and any model changes that are task-specific (see Table 1 for a comparison of the evaluation datasets).

Table 1: Statistics of the evaluation datasets. The first three datasets are for the abstract sentence classification task and the last is for summarization.

Dataset    PubMed   NICTA   CSAbstruct   CSPubSum
# docs     20K      1K      2.2K         21K
# sents    225K     21K     15K          601K

3.1 Scientific abstract sentence classification. This task requires classifying sentences in scientific abstracts into their rhetorical roles (e.g., INTRODUCTION, METHOD, RESULTS, etc.). We use the following three datasets in our experiments. PUBMED-RCT (Dernoncourt and Lee, 2017) contains 20K biomedical abstracts from PubMed, with sentences classified as one of 5 categories {BACKGROUND, OBJECTIVE, METHOD, RESULT, CONCLUSION}; we use the preprocessed version of this dataset by Jin and Szolovits (2018). CSABSTRUCT is a new dataset that we introduce. It has 2,189 manually annotated computer science abstracts with sentences annotated according to their rhetorical roles in the abstract, similar to the PUBMED-RCT categories; see §3.3 for details. NICTA (Kim et al., 2011) contains 1,000 biomedical abstracts with sentences classified into PICO categories (Population, Intervention, Comparison, Outcome) (Richardson et al., 1995).

3.2 Extractive summarization of scientific documents. This task is to select a few text spans in a document that best summarize it. When the spans are sentences, the task can be viewed as SSC, classifying each sentence as a good or bad summary sentence. Choosing the best summary sentences can benefit from the context of surrounding sentences. We train on CSPUBSUMEXT (Collins et al., 2017), an extractive summarization dataset of 10k scientific papers, with sentences scored as good/bad summary sentences using ROUGE overlap scores with paper highlights. For evaluation, a separate test set, CSPUBSUM, of 150 publications and their paper highlights is used. (The dataset was generated using author-provided scripts: https://github.com/EdCo95/scientific-paper-summarisation) A key difference between the training of our model and that of Collins et al. (2017) is that they use the ROUGE scores to label the top (bottom) 20 sentences as positive (negative), with the rest neutral. However, we found it better to train our model to directly predict the ROUGE scores, using Mean Squared Error as the loss function.

3.3 CSABSTRUCT construction details. CSABSTRUCT is a new dataset of annotated computer science abstracts with sentence labels according to their rhetorical roles. The key difference between this dataset and PUBMED-RCT is that PubMed abstracts are written according to a predefined structure, whereas computer science papers are free-form; therefore, there is more variety in writing styles in CSABSTRUCT. CSABSTRUCT is collected from the Semantic Scholar corpus (Ammar et al., 2018). Each sentence is annotated by 5 workers on the Figure Eight platform (http://figure-eight.com) with one of 5 categories {BACKGROUND, OBJECTIVE, METHOD, RESULT, OTHER}. Table 2 shows characteristics of the dataset.

Table 2: Characteristics of our CSABSTRUCT dataset.

Doc length (sentences)    avg: 6.7    std: 1.99
Sentence length (words)   avg: 21.8   std: 10.0
Label distribution        BACKGROUND 0.33, METHOD 0.32, RESULT 0.21, OBJECTIVE 0.12, OTHER 0.03

We use 8 abstracts (with 51 sentences) as test questions to train crowdworkers. Annotators whose accuracy is less than 75% are disqualified from doing the actual annotation job.
The annotations are aggregated using the agreement on a single sentence weighted by the accuracy of the annotator on the initial test questions. A confidence score is associated with each instance based on the annotator's initial accuracy and the agreement of all annotators on that instance. We then split the dataset 75%/15%/10% into train/dev/test partitions, such that the test set has the highest confidence scores. The agreement rate on a random subset of 200 sentences is 75%, which is quite high given the difficulty of the task. Compared with PUBMED-RCT, our dataset exhibits a wider variety of writing styles, since its abstracts are not written with an explicit structural template.

4 Experiments

Training and Implementation. We implement our models using AllenNLP (Gardner et al., 2018). We use SciBERT pretrained weights (Beltagy et al., 2019) in both our model and the BERT-based baselines, because our datasets are from the scientific domain. As in prior work (Devlin et al., 2018; Howard and Ruder, 2018), for training we use a dropout of 0.1, the Adam (Kingma and Ba, 2015) optimizer for 2-5 epochs, and learning rates of 5e-6, 1e-5, 2e-5, or 5e-5. We use the largest batch size that fits in the memory of a Titan V GPU (between 1 and 4 depending on the dataset/model) and use gradient accumulation for an effective batch size of 32. We report the average of results from 3 runs with different random seeds for the abstract sentence classification datasets to control for the potential non-determinism associated with deep neural models (Reimers and Gurevych, 2017). For summarization, we use the best model on the validation set. We choose hyperparameters based on the best performance on the validation set. We release our code and data to facilitate reproducibility (https://github.com/allenai/sequential_sentence_classification).

Baselines. We compare our approach with two strong BERT-based baselines, fine-tuned for the task. The first baseline, BERT+Transformer, uses the [CLS] token to encode individual sentences as described in Devlin et al. (2018); we add an additional Transformer layer over the [CLS] vectors to contextualize the sentence representations over the entire sequence. The second baseline, BERT+Transformer+CRF, additionally adds a CRF layer. Both baselines split long lists of sentences into splits of length 30 using the method in §2 to fit into GPU memory. We also compare with existing SOTA models for each dataset. For the PUBMED-RCT and NICTA datasets, we report the results of Jin and Szolovits (2018), who use a hierarchical LSTM model augmented with attention and a CRF; we also apply their model to our dataset, CSABSTRUCT, using the authors' original implementation (https://github.com/jind11/HSLN-Joint-Sentence-Classification). For extractive summarization, we compare to Collins et al. (2017)'s model, SAF+F Ens, the model with the highest reported results on this dataset. This model is an ensemble of an LSTM-based model augmented with global context and abstract similarity features, and a model trained on a set of hand-engineered features.

4.1 Results

Table 3 summarizes the results for abstract sentence classification.

Table 3: Abstract sentence classification (micro F1).

Model                       PubMed   CSAbstruct   NICTA
Jin and Szolovits (2018)    92.6     81.3         84.7
BERT+Transformer            89.6     78.8         78.4
BERT+Transformer+CRF        92.1     78.5         79.1
Our model                   92.9     83.1         84.8
Our approach achieves state-of-the-art results on all three datasets, outperforming Jin and Szolovits (2018). It also outperforms our BERT-based baselines. The performance gap between our baselines and our best model is large for the small datasets (CSABSTRUCT, NICTA) and smaller for the large dataset (PUBMED-RCT), suggesting the importance of pretraining for small datasets.

Table 4 summarizes the results on CSPUBSUM. Following Collins et al. (2017), we take the top 10 predicted sentences as the summary and use ROUGE-L scores for evaluation.

Table 4: Results on CSPUBSUM.

Model                                ROUGE-L
SAF + F Ens (Collins et al., 2017)   0.313
BERT+Transformer                     0.287
Our model                            0.306
Our model + ABSTRACT-ROUGE           0.314

It is clear that our approach outperforms BERT+Transformer. The BERT+Transformer+CRF baseline is not included here because, as mentioned in Section 3, we train our model to predict ROUGE scores, not binary labels as in Collins et al. (2017). As in Collins et al. (2017), we found the ABSTRACT-ROUGE feature to be useful: our model augmented with this feature slightly outperforms Collins et al. (2017)'s model, which is a relatively complex ensemble that uses a number of carefully engineered features for the task, whereas our model is a single model with only one added feature.

Analysis. To better understand the advantage of our joint sentence encoding relative to the BERT+Transformer baseline, we qualitatively analyze examples from CSABSTRUCT that our model gets right and the baseline gets wrong. We found that 34 of 134 such examples require context to classify correctly. (Of the 1,349 examples in the test set, our model gets 134 correct that the BERT+Transformer baseline gets wrong, and the baseline gets 79 correct that our model gets wrong.) For example, sentences 2 and 3 from one abstract are as follows: "We present an improved oracle for the arc-eager transition system, which provides a set of optimal transitions [...]." and "In such cases, the oracle provides transitions that will lead to the best reachable tree [...].". In isolation, the label for sentence 3 is ambiguous, but with context from the previous sentence it clearly falls under the METHOD category. Figure 2 shows BERT self-attention weights for the above-mentioned abstract before and after fine-tuning. (Figure 2: Self-attention weights of the top 2 layers of BERT for one abstract, before fine-tuning (a) and after fine-tuning (b); the cell value in row i, column j, is the maximum attention weight of token i attending to token j across all 12 Transformer attention heads.) Before fine-tuning (Figure 2a), the attention weights do not exhibit a clear pattern. After fine-tuning (Figure 2b), we observe blocks along the matrix diagonal of sentences attending to themselves, except for the block encompassing sentences 2 and 3: the words in these two sentences attend to each other, enabling the encoding of sentence 3 to capture the information needed from sentence 2 to predict its label (see Appendix A for additional patterns).
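The kind of token-to-token map described in the Figure 2 caption (maximum attention weight across all heads of the top layers) can be computed roughly as in the sketch below, which uses the Hugging Face transformers API; the aggregation choice and model id are assumptions for illustration, not the authors' exact analysis code.

```python
import torch
from transformers import BertTokenizerFast, BertModel

tokenizer = BertTokenizerFast.from_pretrained("allenai/scibert_scivocab_uncased")
model = BertModel.from_pretrained("allenai/scibert_scivocab_uncased",
                                  output_attentions=True)

def max_attention_map(text: str, top_layers: int = 2) -> torch.Tensor:
    """Return a (seq_len, seq_len) matrix where entry (i, j) is the maximum
    attention weight of token i attending to token j over all heads of the
    top `top_layers` Transformer layers."""
    enc = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        attentions = model(**enc).attentions          # tuple of (1, heads, L, L)
    top = torch.cat(attentions[-top_layers:], dim=1)  # stack heads of top layers
    return top.max(dim=1).values[0]                   # max over heads -> (L, L)
```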
5 Related Work

Prior work on scientific Sequential Sentence Classification datasets (e.g., PUBMED-RCT and NICTA) uses hierarchical sequence encoders (e.g., LSTMs) to encode each sentence and contextualize the encodings, and applies a CRF on top (Dernoncourt and Lee, 2017; Jin and Szolovits, 2018). Hierarchical models are also used for summarization (Cheng and Lapata, 2016; Nallapati et al., 2016; Narayan et al., 2018), usually trained in a seq2seq fashion (Sutskever et al., 2014) and evaluated on newswire data such as the CNN/Daily Mail benchmark (Hermann et al., 2015). Prior work proposed generating summaries of scientific text by leveraging citations (Cohan and Goharian, 2015) and highlights (Collins et al., 2017). The highlights-based summarization dataset introduced by Collins et al. (2017) is among the largest extractive scientific summarization datasets. Prior work focuses on specific architectures designed for each of the tasks described in §3, giving them more power to model each task directly. Our approach is more general, uses minimal architecture augmentation, leverages language model pretraining, and can handle a variety of SSC tasks." + }, + { + "url": "http://arxiv.org/abs/1904.01608v2", + "title": "Structural Scaffolds for Citation Intent Classification in Scientific Publications", + "abstract": "Identifying the intent of a citation in scientific papers (e.g., background\ninformation, use of methods, comparing results) is critical for machine reading\nof individual publications and automated analysis of the scientific literature.\nWe propose structural scaffolds, a multitask model to incorporate structural\ninformation of scientific papers into citations for effective classification of\ncitation intents. Our model achieves a new state-of-the-art on an existing ACL\nanthology dataset (ACL-ARC) with a 13.3% absolute increase in F1 score, without\nrelying on external linguistic resources or hand-engineered features as done in\nexisting methods. In addition, we introduce a new dataset of citation intents\n(SciCite) which is more than five times larger and covers multiple scientific\ndomains compared with existing datasets. Our code and data are available at:\nhttps://github.com/allenai/scicite.", + "authors": "Arman Cohan, Waleed Ammar, Madeleine van Zuylen, Field Cady", + "published": "2019-04-02", + "updated": "2019-09-30", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "main_content": "Introduction

Citations play a unique role in scientific discourse and are crucial for understanding and analyzing scientific work (Luukkonen, 1992; Leydesdorff, 1998). They are also typically used as the main measure for assessing the impact of scientific publications, venues, and researchers (Li and Ho, 2008). The nature of citations can differ: some citations indicate direct use of a method, while others merely acknowledge prior work. Therefore, identifying the intent of citations (Figure 1) is critical in improving automated analysis of academic literature and scientific impact measurement (Leydesdorff, 1998; Small, 2018). (Figure 1: Example of citations with different intents, METHOD and BACKGROUND, from a citing paper on gait asymmetry in patients with Parkinson's disease and elderly fallers: a METHOD citation to Bazner et al. (2000), whose computerized force-sensitive system is used to quantify gait cycle timing, and a BACKGROUND citation to Springer et al. (2006) for further details included in earlier reports.) Other applications of citation intent classification are enhanced research experience (Moravcsik and Murugesan, 1975), information retrieval (Ritchie, 2009), summarization (Cohan and Goharian, 2015),
and studying evolution of scientific fields (Jurgens et al., 2018). In this work, we approach the problem of citation intent classification by modeling the language expressed in the citation context. A citation context includes text spans in a citing paper describing a referenced work and has been shown to be the primary signal in intent classification (Teufel et al., 2006; Abu-Jbara et al., 2013; Jurgens et al., 2018). Existing models for this problem are feature-based, modeling the citation context with respect to a set of predefined hand-engineered features (such as linguistic patterns or cue phrases) and ignoring other signals that could improve prediction. In this paper we argue that better representations can be obtained directly from data, sidestepping problems associated with external features. To this end, we propose a neural multitask learning framework to incorporate knowledge from the structure of scientific papers into citations. In particular, we propose two auxiliary tasks as structural scaffolds to improve citation intent prediction (we borrow the scaffold terminology from Swayamdipta et al. (2018) in the context of multitask learning): (1) predicting the section title in which the citation occurs, and (2) predicting whether a sentence needs a citation. Unlike the primary task of citation intent prediction, it is easy to collect large amounts of training data for the scaffold tasks, since the labels occur naturally in the process of writing a paper and thus there is no need for manual annotation. On two datasets, we show that the proposed neural scaffold model outperforms existing methods by large margins.

Our contributions are: (i) we propose a neural scaffold framework for citation intent classification that incorporates knowledge from the structure of scientific papers into citations; (ii) we achieve a new state-of-the-art of 67.9% F1 on the ACL-ARC citations benchmark, an absolute 13.3% increase over the previous state-of-the-art (Jurgens et al., 2018); and (iii) we introduce SciCite, a new dataset of citation intents which is at least five times as large as existing datasets and covers a variety of scientific domains.

2 Model

We propose a neural multitask learning framework for classification of citation intents. In particular, we introduce and use two structural scaffolds, auxiliary tasks related to the structure of scientific papers. The auxiliary tasks may not be of interest by themselves but are used to inform the main task. Our model uses a large auxiliary dataset to incorporate this structural information available in scientific documents into the citation intents. An overview of our model is illustrated in Figure 2. (Figure 2: The proposed scaffold model for identifying citation intents. An input citation is encoded with GloVe and ELMo word representations and a BiLSTM with attention, whose parameters are shared across tasks; task-specific MLPs predict the citation intent (main task) and the two scaffolds, section title and citation worthiness.) Let C denote the citation and x denote the citation context relevant to C.
We encode the tokens in the citation context of size n as x = {x_1, ..., x_n}, where x_i ∈ R^{d_1} is a word vector of size d_1 that concatenates non-contextualized word representations (GloVe; Pennington et al., 2014) and contextualized embeddings (ELMo; Peters et al., 2018), i.e.:

x_i = [x_i^{GloVe}; x_i^{ELMo}]

We then use a bidirectional long short-term memory network (BiLSTM; Hochreiter and Schmidhuber, 1997) with hidden size d_2 to obtain a contextual representation of each token vector with respect to the entire sequence (in our experiments BiGRUs resulted in similar performance):

h_i = [LSTM_forward(x, i); LSTM_backward(x, i)]

where h ∈ R^{n × 2d_2} and LSTM_forward(x, i) processes x from left to right and returns the LSTM hidden state at position i (and vice versa for the backward direction LSTM_backward). We then use an attention mechanism to obtain a single vector representing the whole input sequence:

z = Σ_{i=1}^{n} α_i h_i,   α_i = softmax(w^T h_i)

where w is a parameter vector serving as the query for dot-product attention. (We also experimented with BiLSTMs without attention; we found that BiLSTMs/BiGRUs with attention gave the best results, and other types of attention, such as additive attention, performed similarly.) So far we have obtained the citation representation as a vector z. Next, we describe our two proposed structural scaffolds for citation intent prediction.

2.1 Structural scaffolds

In scientific writing there is a connection between the structure of scientific papers and the intent of citations. To leverage this connection for more effective classification of citation intents, we propose a multitask framework with two structural scaffolds (auxiliary tasks) related to the structure of scientific documents. A key point for our proposed scaffolds is that they do not need any additional manual annotation, as labels for these tasks occur naturally in scientific writing. The structural scaffolds in our model are the following:

Citation worthiness. The first scaffold task that we consider is "citation worthiness" of a sentence, indicating whether a sentence needs a citation. The language expressed in citation sentences is likely distinctive from regular sentences in scientific writing, and such information could also be useful for better language modeling of the citation contexts. To this end, using citation markers such as "[12]" or "Lee et al (2010)", we identify sentences in a paper that include citations; the negative samples are sentences without citation markers. The goal of the model for this task is to predict whether a particular sentence needs a citation. (We note that this task may also be useful for helping authors improve their paper drafts; however, this is not the focus of this work.)

Section title. The second scaffold task relates to predicting the section title in which a citation appears. Scientific documents follow a standard structure where the authors typically first introduce the problem, describe methodology, share results, discuss findings, and conclude the paper. The intent of a citation could be relevant to the section of the paper in which the citation appears. For example, method-related citations are more likely to appear in the methods section. Therefore, we use section title prediction as a scaffold for predicting citation intents. Note that this scaffold task is different from simply adding the section title as an additional feature to the input.
We use the section titles from a larger set of data than the training data for the main task as a proxy to learn linguistic patterns that are helpful for citation intents. In particular, we leverage a large number of scientific papers for which the section information is known for each citation to automatically generate large amounts of training data for this scaffold task. (We also experimented with adding section titles as an additional feature to the input; however, it did not result in any improvements.)

Multitask formulation. Multitask learning as defined by Caruana (1997) is an approach to inductive transfer learning that improves generalization by using the domain information contained in the training signals of related tasks as an inductive bias. It requires the model to have at least some sharable parameters between the tasks. In a general setting, our model has a main task Task(1) and n-1 auxiliary tasks Task(i). As shown in Figure 2, each scaffold task has its own task-specific parameters for effective classification, while the parameters of the lower layers of the network are shared across tasks. We use a Multi-Layer Perceptron (MLP) for each task, followed by a softmax layer to obtain prediction probabilities. In particular, given the vector z, we pass it to n MLPs and obtain n output vectors y^(i):

y^(i) = softmax(MLP^(i)(z))

We are only interested in the output y^(1); the remaining outputs (y^(2), ..., y^(n)) correspond to the scaffold tasks and are used only during training to inform the model of knowledge in the structure of scientific documents. For each task, we output the class with the highest probability in y. An alternative inference method is to sample from the output distribution.

2.2 Training

Let D_1 be the labeled dataset for the main task Task(1), and let D_i denote the labeled dataset corresponding to scaffold task Task(i) with i ∈ {2, ..., n}. Similarly, let L_1 and L_i be the main loss and the loss of auxiliary task i, respectively. The final loss of the model is:

L = Σ_{(x,y)∈D_1} L_1(x, y) + Σ_{i=2}^{n} λ_i Σ_{(x,y)∈D_i} L_i(x, y),    (1)

where λ_i is a hyperparameter specifying the sensitivity of the model parameters to each specific task. Here we have two scaffold tasks, hence n=3. The λ_i can be tuned based on performance on the validation set (see §4 for details). We train this model jointly across tasks in an end-to-end fashion. In each training epoch, we construct mini-batches with the same number of instances from each of the n tasks. We compute the total loss for each mini-batch as described in Equation 1, where L_i = 0 for all instances of other tasks j ≠ i. We compute the gradient of the loss for each mini-batch and tune model parameters using the AdaDelta optimizer (Zeiler, 2012) with a gradient clipping threshold of 5.0. We stop training the model when the development macro F1 score does not improve for five consecutive epochs.
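The sketch below is a rough PyTorch rendering of the shared encoder with dot-product attention and the per-task MLP heads combined by the weighted loss of Equation 1. The dimensions (e.g., 100-dim GloVe + 1024-dim ELMo), the lambda values, and the class counts are illustrative assumptions; the authors' implementation uses AllenNLP.

```python
import torch
import torch.nn as nn

class ScaffoldModel(nn.Module):
    """Shared BiLSTM + attention encoder with one MLP head per task."""
    def __init__(self, input_dim=1124, hidden=50, num_classes=(3, 5, 2)):
        super().__init__()
        # input_dim assumes concatenated 100-d GloVe + 1024-d ELMo vectors.
        self.bilstm = nn.LSTM(input_dim, hidden, batch_first=True, bidirectional=True)
        self.w = nn.Parameter(torch.randn(2 * hidden))  # attention query vector
        self.heads = nn.ModuleList([
            nn.Sequential(nn.Linear(2 * hidden, 20), nn.ReLU(), nn.Linear(20, c))
            for c in num_classes])  # main task + two scaffold tasks

    def forward(self, x):                             # x: (batch, n_tokens, input_dim)
        h, _ = self.bilstm(x)                         # (batch, n, 2*hidden)
        alpha = torch.softmax(h @ self.w, dim=1)      # (batch, n) attention weights
        z = (alpha.unsqueeze(-1) * h).sum(dim=1)      # weighted sum -> (batch, 2*hidden)
        return [head(z) for head in self.heads]       # logits per task

def multitask_loss(logits_per_task, labels, task_id, lambdas=(1.0, 0.1, 0.05)):
    """Equation 1 for one mini-batch drawn from a single task: only that task's
    loss contributes, scaled by its lambda (lambda for the main task is 1)."""
    ce = nn.functional.cross_entropy(logits_per_task[task_id], labels)
    return lambdas[task_id] * ce
```

The lambda values in the sketch mirror the "both scaffolds" setting reported for ACL-ARC later in the paper (λ2=0.1, λ3=0.05).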
3 Data

We compare our results on two datasets from different scientific domains. While there is a long history of studying citation intents, there are only a few existing publicly available datasets for the task of citation intent classification. We use the most recent and comprehensive one, the ACL-ARC citations dataset by Jurgens et al. (2018), as a benchmark to compare the performance of our model to previous work. In addition, to address the limited scope and size of this dataset, we introduce SciCite, a new dataset of citation intents that covers multiple scientific domains and is more than five times larger than ACL-ARC. Below is a description of both datasets.

Table 1: Definitions and examples of the citation intent categories in SciCite.

Background information -- The citation states, mentions, or points to background information giving more context about a problem, concept, approach, topic, or the importance of the problem in the field. Examples: "Recent evidence suggests that co-occurring alexithymia may explain deficits [12]."; "Locally high-temperature melting regions can act as permanent termination sites [6-9]."; "One line of work is focused on changing the objective function (Mao et al., 2016)."

Method -- Making use of a method, tool, approach or dataset. Examples: "Fold differences were calculated by a mathematical model described in [4]."; "We use Orthogonal Initialization (Saxe et al., 2014)."

Result comparison -- Comparison of the paper's results/findings with the results/findings of other work. Examples: "Weighted measurements were superior to T2-weighted contrast imaging which was in accordance with former studies [25-27]."; "Similar results to our study were reported in the study of Lee et al (2010)."

Table 2: Characteristics of SciCite compared with the ACL-ARC dataset of Jurgens et al. (2018).

ACL-ARC -- Categories (distribution): Background (0.51), Extends (0.04), Uses (0.19), Motivation (0.05), Compare/Contrast (0.18), Future work (0.04); Source: Computational Linguistics; # papers: 186; # instances: 1,941
SciCite -- Categories (distribution): Background (0.58), Method (0.29), Result comparison (0.13); Source: Computer Science & Medicine; # papers: 6,627; # instances: 11,020

3.1 ACL-ARC citations dataset. ACL-ARC is a dataset of citation intents released by Jurgens et al. (2018). The dataset is based on a sample of papers from the ACL Anthology Reference Corpus (Bird et al., 2008) and includes 1,941 citation instances from 186 papers, annotated by domain experts in the NLP field. The data was split into three standard stratified sets of train, validation, and test, with 85% of the data used for training and the remaining 15% divided equally between validation and test. Each citation unit includes information about the immediate citation context, the surrounding context, and the citing and cited papers. The data includes the six intent categories outlined in Table 2.

3.2 SciCite dataset. Most existing datasets contain citation categories that are too fine-grained. Some of these intent categories are very rare or not useful for meta-analysis of scientific publications. Since some of these fine-grained categories cover only a minimal percentage of all citations, it is difficult to use them to gain insights or draw conclusions about the impact of papers. Furthermore, these datasets are usually domain-specific and relatively small (fewer than 2,000 annotated citations). To address these limitations, we introduce SciCite, a new dataset of citation intents that is significantly larger, more coarse-grained, and more general-domain than existing datasets.
Through examination of citation intents, we found that many of the categories defined in previous work, such as motivation, extension, or future work, can be considered background information providing more context for the current research topic. More interesting intent categories are the direct use of a method or the comparison of results. Therefore, our dataset provides a concise annotation scheme that is useful for navigating research topics and machine reading of scientific papers. We consider the three intent categories outlined in Table 1: BACKGROUND, METHOD and RESULTCOMPARISON. Below we describe data collection and annotation details.

3.2.1 Data collection and annotation. Citation intents of sentence extractions were labeled through the crowdsourcing platform Figure Eight (https://www.figure-eight.com/platform/). We selected a sample of papers from the Semantic Scholar corpus (https://semanticscholar.org/), consisting of papers in general computer science and medicine domains. Citation contexts were extracted using science-parse. The annotators were asked to identify the intent of a citation and were directed to select among three citation intent options: METHOD, RESULTCOMPARISON and BACKGROUND. The annotation interface also included a dummy option OTHER, which helps improve the quality of annotations of the other categories. We later removed instances annotated with the OTHER option from our dataset (less than 1% of the annotated data), many of which were due to citation contexts that are incomplete or too short for the annotator to infer the citation intent. We used 50 test questions annotated by a domain expert to ensure crowdsource workers were following directions, and disqualified annotators with accuracy less than 75%. Furthermore, crowdsource workers were required to remain on the annotation page (five annotations) for at least ten seconds before proceeding to the next page. Annotations were dynamically collected and were aggregated along with a confidence score describing the level of agreement between multiple crowdsource workers. The confidence score is the agreement on a single instance weighted by a trust score (the accuracy of the annotator on the initial 50 test questions). To collect only high-quality annotations, instances with a confidence score of ≤0.7 were discarded. In addition, a subset of the dataset with 100 samples was re-annotated by a trained expert annotator to check for quality, and the agreement rate with crowdsource workers was 86%. Citation contexts were annotated by 850 crowdsource workers who made a total of 29,926 annotations and individually made between 4 and 240 annotations. Each sentence was annotated, on average, 3.74 times. This resulted in a total of 9,159 crowdsourced instances, which were divided into training and validation sets with 90% of the data used for training. In addition to the crowdsourced data, a separate test set of size 1,861 was annotated by a trained expert annotator to ensure the high quality of the dataset.

3.3 Data for scaffold tasks. For the first scaffold (citation worthiness), we sample sentences from papers and consider the sentences with citations as positive labels. We also remove the citation markers from those sentences, such as numbered citations (e.g., [1]) or name-year combinations (e.g., Lee et al (2012)), so as not to make this scaffold task artificially easy by only detecting citation markers. For the second scaffold (citation section title), for each test dataset we sample citations from the ACL-ARC corpus and the Semantic Scholar corpus and extract the citation contexts as well as their corresponding sections. We manually define regular expression patterns that map raw section headers to normalized section titles: "introduction", "related work", "method", "experiments", "conclusion". Section titles which did not map to any of the aforementioned titles were excluded from the dataset. Overall, the size of the data for the scaffold tasks on the ACL-ARC dataset is about 47K (section title scaffold) and 50K (citation worthiness), while on SciCite it is about 91K and 73K for the section title and citation worthiness scaffolds, respectively.
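The sketch below illustrates, under assumed regex patterns, how scaffold training examples of this kind can be generated: stripping citation markers for the citation-worthiness scaffold and normalizing section titles for the section-title scaffold. The patterns and the title mapping are illustrative assumptions, not the authors' exact rules.

```python
import re

# Numbered citations like "[1]" or "[6-9]" and name-year citations like
# "Lee et al (2012)" or "(Saxe et al., 2014)" -- rough, illustrative patterns.
CITATION_MARKER = re.compile(
    r"\[\d+(?:[-,]\s*\d+)*\]|\(?[A-Z][A-Za-z-]+(?: et al\.?)?,? \(?\d{4}\)?\)?")

SECTION_PATTERNS = {
    "introduction": re.compile(r"intro", re.I),
    "related work": re.compile(r"related work|background|prior work", re.I),
    "method": re.compile(r"method|approach|model", re.I),
    "experiments": re.compile(r"experiment|evaluation|results", re.I),
    "conclusion": re.compile(r"conclu|discussion", re.I),
}

def citation_worthiness_example(sentence: str):
    """Label a sentence as citation-worthy if it contains a citation marker,
    and strip the marker so the label cannot be read off the surface form."""
    label = 1 if CITATION_MARKER.search(sentence) else 0
    return CITATION_MARKER.sub("", sentence).strip(), label

def normalize_section(title: str):
    """Map a raw section header to one of five normalized titles, or None
    (examples with unmapped titles are excluded from the scaffold data)."""
    for norm, pattern in SECTION_PATTERNS.items():
        if pattern.search(title):
            return norm
    return None
```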
For the second scaffold (citation section title), respective to each test dataset, we sample citations from the ACL-ARC corpus and Semantic Scholar corpus9 and extract the citation context as well as their corresponding sections. We manually de\ufb01ne regular expression patterns mappings to normalized section titles: \u201cintroduction\u201d, \u201crelated work\u201d, \u201cmethod\u201d, \u201cexperiments\u201d, \u201cconclusion\u201d. Section titles which did not map to any of the aforementioned titles were excluded from the dataset. Overall, the size of the data for scaffold tasks on the ACL-ARC dataset is about 47K (section title scaffold) and 50K (citation worthiness) while on SciCite is about 91K and 73K for section title and citation worthiness scaffolds, respectively. 4 Experiments 4.1 Implementation We implement our proposed scaffold framework using the AllenNLP library (Gardner et al., 2018). For word representations, we use 100-dimensional GloVe vectors (Pennington et al., 2014) trained on a corpus of 6B tokens from Wikipedia and Gigaword. For contextual representations, we use ELMo vectors released by Peters et al. (2018)10 with output dimension size of 1,024 which have been trained on a dataset of 5.5B tokens. We use a single-layer BiLSTM with a hidden dimension size of 50 for each direction11. For each of scaffold tasks, we use a single-layer MLP with 20 hidden nodes , ReLU (Nair and Hinton, 2010) activation and a Dropout rate (Srivastava et al., 2014) of 0.2 between the hidden and input layers. The hyperparameters \u03bbi are tuned for best performance on the validation set of the respective datasets using a 0.0 to 0.3 grid search. For example, the following hyperparameters are used for the ACL-ARC. Citation worthiness saffold: \u03bb2=0.08, \u03bb3=0, section title scaffold: \u03bb3=0.09, \u03bb2=0; both scaffolds: \u03bb2=0.1, \u03bb3=0.05. Batch size is 8 for ACL-ARC dataset and 32 for SciCite dataset (recall that SciCite is larger than ACL-ARC). We 9https://semanticscholar.org/ 10https://allennlp.org/elmo 11Experiments with other types of RNNs such as BiGRUs and more layers showed similar or slightly worst performance \fuse Beaker12 for running the experiments. On the smaller dataset, our best model takes approximately 30 minutes per epoch to train (training time without ELMo is signi\ufb01cantly faster). It is known that multiple runs of probabilistic deep learning models can have variance in overall scores (Reimers and Gurevych, 2017)13. We control this by setting random-number generator seeds; the reported overall results are average of multiple runs with different random seeds. To facilitate reproducibility, we release our code, data, and trained models.14 4.2 Baselines We compare our results to several baselines including the model with state-of-the-art performance on the ACL-ARC dataset. \u2022 BiLSTM Attention (with and without ELMo). This baseline uses a similar architecture to our proposed neural multitask learning framework, except that it only optimizes the network for the main loss regarding the citation intent classi\ufb01cation (L1) and does not include the structural scaffolds. We experiment with two variants of this model: with and without using the contextualized word vector representations (ELMo) of Peters et al. (2018). This baseline is useful for evaluating the effect of adding scaffolds in controlled experiments. \u2022 Jurgens et al. (2018). 
To ensure our results are competitive with the state of the art on this task, we also compare our model to Jurgens et al. (2018), which has the best reported results on the ACL-ARC dataset. Jurgens et al. (2018) incorporate a variety of features, ranging from pattern-based features to topic-modeling features to citation graph features. They also incorporate section titles and relative section position in the paper as features. Our implementation of this model achieves a macro-averaged F1 score of 0.526 using 10-fold cross-validation, which is in line with the highest reported result in Jurgens et al. (2018): 0.53 using leave-one-out cross-validation. We were not able to use leave-one-out cross-validation in our experiments, since it is impractical to re-train each variant of our deep learning models thousands of times. Therefore, we opted for a standard setup of stratified train/validation/test splits with 85% of the data used for training and the rest split equally between validation and test.

4.3 Results

Our main results for the ACL-ARC dataset (Jurgens et al., 2018) are shown in Table 3.

Table 3: Results on the ACL-ARC citations dataset.

Model                                         macro F1
Baselines
  BiLSTM-Attn                                 51.8
  BiLSTM-Attn w/ ELMo                         54.3
  Previous SOTA (Jurgens et al., 2018)        54.6
This work
  BiLSTM-Attn + section title scaffold        56.9
  BiLSTM-Attn + citation worthiness scaffold  56.3
  BiLSTM-Attn + both scaffolds                63.1
  BiLSTM-Attn w/ ELMo + both scaffolds        67.9

We observe that our scaffold-enhanced models achieve clear improvements over the state-of-the-art approach on this task. Starting with the "BiLSTM-Attn" baseline with a macro F1 score of 51.8, adding the first scaffold task in "BiLSTM-Attn + section title scaffold" improves the F1 score to 56.9 (Δ=5.1). Adding the second scaffold in "BiLSTM-Attn + citation worthiness scaffold" also results in similar improvements: 56.3 (Δ=4.5). When both scaffolds are used simultaneously in "BiLSTM-Attn + both scaffolds", the F1 score further improves to 63.1 (Δ=11.3), suggesting that the two tasks provide complementary signal that is useful for citation intent prediction. The best result is achieved when we also add ELMo vectors (Peters et al., 2018) to the input representations in "BiLSTM-Attn w/ ELMo + both scaffolds", achieving an F1 of 67.9, a major improvement over the previous state-of-the-art result of Jurgens et al. (2018), 54.6 (Δ=13.3). We note that the scaffold tasks provide major contributions on top of the ELMo-enabled baseline (Δ=13.6), demonstrating the efficacy of using structural scaffolds for citation intent prediction. We note that these results were obtained without using hand-curated features or additional linguistic resources as used in Jurgens et al. (2018). We also experimented with adding the features used in Jurgens et al.
(2018) to our best model, and not only did we not see any improvements, but we observed at least a 1.7% decline in performance. This suggests that these additional manual features do not provide the model with any additional useful signals beyond what the model already learns from the data. Table 4: Results on the SciCite dataset (macro F1). Baselines: BiLSTM-Attn 77.2; BiLSTM-Attn w/ ELMo 82.6; previous SOTA (Jurgens et al., 2018) 79.6. This work: BiLSTM-Attn + section title scaffold 77.8; BiLSTM-Attn + citation worthiness scaffold 78.1; BiLSTM-Attn + both scaffolds 79.1; BiLSTM-Attn w/ ELMo + both scaffolds 84.0. Table 4 shows the main results on the SciCite dataset, where we see similar patterns. Each scaffold task improves model performance. Adding both scaffolds results in further improvements. And the best results are obtained by using ELMo representations in addition to both scaffolds. Note that this dataset is more than five times larger than ACL-ARC; therefore the performance numbers are generally higher and the F1 gains are generally smaller, since it is easier for the models to learn optimal parameters utilizing the larger annotated data. On this dataset, the best baseline is the neural baseline with the addition of ELMo contextual vectors, achieving an F1 score of 82.6, followed by Jurgens et al. (2018), which is expected because neural models generally achieve higher gains when more training data is available and because Jurgens et al. (2018) was not designed with the SciCite dataset in mind. The breakdown of results by intent on the ACL-ARC and SciCite datasets is shown in Tables 5 and 6, respectively. Generally, we observe that results on categories with more instances are higher. For example, on ACL-ARC, the results on the BACKGROUND category are the highest, as this category is the most common. Conversely, the results on the FUTUREWORK category are the lowest. This category has the fewest data points (see the distribution of the categories in Table 2) and thus it is harder for the model to learn the optimal parameters for correct classification in this category. 4.4 Analysis To gain more insight into why the scaffolds are helping the model in improved citation intent classification, we examine the attention weights assigned to inputs for our best proposed model ('BiLSTM-Attn w/ ELMo + both scaffolds') compared with the best neural baseline ('BiLSTM-Attn w/ ELMo'). Figure 3: Visualization of attention weights corresponding to our best scaffold model compared with the best neural baseline model without scaffolds. (a) Example from ACL-ARC ("A possible future direction would be to compare the query string to retrieved results using a method similar to that of Tsuruoka and Tsujii (2003)."): the correct label is FUTUREWORK; our model correctly predicts it while the baseline predicts COMPARE. (b) Example from SciCite ("Moreover, in our analyses, the antibody responses to vaccination were also analyzed separately and our 12-week follow up to record the immune response to vaccination was much longer than those reported from previous studies where reduction in ..."): the correct label is RESULTCOMPARISON; our model correctly predicts it, while the baseline considers it BACKGROUND. We conduct this analysis for examples from both datasets. Figure 3 shows an example input citation along with a heatmap of the attention weights for this input, resulting from our model versus the baseline.
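The multitask setup behind these results can be summarized compactly. The sketch below is our own schematic of the approach described in Sections 4.1 and 4.2, not the authors' released AllenNLP implementation: a shared BiLSTM-with-attention encoder feeds the main citation-intent classifier and the two scaffold heads, and the total loss is L1 plus the scaffold losses weighted by λ2 and λ3. Layer sizes follow the numbers quoted above; everything else (class counts, the exact attention form, computing all three losses on one batch) is an illustrative assumption, since the scaffold tasks are in fact trained on their own datasets.

import torch
import torch.nn as nn

class ScaffoldModel(nn.Module):
    # Shared encoder with one main head and two auxiliary scaffold heads.
    def __init__(self, vocab_size, emb_dim=100, hidden=50, n_intents=6, n_sections=5):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)                   # additive-style attention scorer
        self.intent_head = nn.Linear(2 * hidden, n_intents)    # main task (L1)
        self.worthiness_head = nn.Sequential(                  # scaffold 1: citation worthiness (L2)
            nn.Dropout(0.2), nn.Linear(2 * hidden, 20), nn.ReLU(), nn.Linear(20, 2))
        self.section_head = nn.Sequential(                     # scaffold 2: section title (L3)
            nn.Dropout(0.2), nn.Linear(2 * hidden, 20), nn.ReLU(), nn.Linear(20, n_sections))

    def forward(self, token_ids):
        h, _ = self.encoder(self.embed(token_ids))             # (batch, seq, 2*hidden)
        alpha = torch.softmax(self.attn(torch.tanh(h)).squeeze(-1), dim=-1)
        z = torch.bmm(alpha.unsqueeze(1), h).squeeze(1)        # attended context vector
        return {"intent": self.intent_head(z),
                "worthiness": self.worthiness_head(z),
                "section": self.section_head(z),
                "attention": alpha}                            # per-token weights, as in Figure 3

def multitask_loss(out, gold, lambda2=0.1, lambda3=0.05):
    # L = L1 + lambda2 * L2 + lambda3 * L3
    ce = nn.functional.cross_entropy
    return (ce(out["intent"], gold["intent"])
            + lambda2 * ce(out["worthiness"], gold["worthiness"])
            + lambda3 * ce(out["section"], gold["section"]))

The attention weights returned by the forward pass are the quantities visualized in Figure 3 and discussed next.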
For \ufb01rst example (3a) the true label is FUTUREWORK. We observe that our model puts more weight on words surrounding the word \u201cfuture\u201d which is plausible given the true label. On the other hand, the baseline model attends most to the words \u201ccompare\u201d and consequently incorrectly predicts a COMPARE label. In second example (3b) the true label is RESULTCOMPARISON. The baseline incorrectly classi\ufb01es it as a BACKGROUND, likely due to attending to another part of the sentence (\u201canalyzed seprately\u201d). Our model correctly classi\ufb01es this instance by putting more attention weights on words that relate to comparison of the results. This suggests that the our model is more successful in learning optimal parameters for representing the citation text and classifying its respective intent compared with the baseline. Note that the only difference between our model and the neural baseline is inclusion of the structural scaffolds. Therefore, suggesting the effectiveness the scaffolds in informing the main task of relevant signals for citation intent classi\ufb01cation. Error analysis. We next investigate errors made by our best model (Figure 4 plots classi\ufb01cation errors). One general error pattern is that the model has more tendency to make false positive errors in the BACKGROUND category likely due to this category dominating both datasets. It\u2019s interesting that for the ACL-ARC dataset some prediction \fCategory (# instances) Background (71) Compare (25) Extension (5) Future (5) Motivation (7) Use (26) Average (Macro) P R F1 P R F1 P R F1 P R F1 P R F1 P R F1 P R F1 BiLSTM-Attn 78.6 77.5 78.0 44.8 52.0 48.1 50.0 40.0 44.4 33.3 40.0 36.4 50.0 28.6 36.4 65.4 65.4 65.4 53.7 50.6 51.5 BiLSTM-Attn w/ ELMo 76.5 87.3 81.6 59.1 52.0 55.3 66.7 40.0 50.0 33.3 40.0 36.4 50.0 28.6 36.4 69.6 61.5 65.3 59.2 51.6 54.2 Previous SOTA (Jurgens et al., 2018) 75.6 87.3 81.1 70.6 48.0 57.1 66.7 40.0 50.0 50.0 20.0 28.6 75.0 42.9 54.6 51.6 61.5 56.1 64.9 49.9 54.6 BiLSTM-Attn+section title scaffold 77.2 85.9 81.3 53.8 56.0 54.9 100.0 40.0 57.1 33.3 40.0 36.4 50.0 28.6 36.4 81.8 69.2 75.0 66.0 53.3 56.9 BiLSTM-Attn+citation worthiness scaffold 77.1 90.1 83.1 59.1 52.0 55.3 100.0 40.0 57.1 28.6 40.0 33.3 50.0 28.6 36.4 81.0 65.4 72.3 66.0 52.7 56.3 BiLSTM-Attn+both scaffolds 77.6 93.0 84.6 65.0 52.0 57.8 100.0 60.0 75.0 40.0 40.0 40.0 75.0 42.9 54.5 72.7 61.5 66.7 71.7 58.2 63.1 BiLSTM-Attn+both scaffolds /w ELMo 75.9 93.0 83.5 80.0 64.0 71.1 75.0 60.0 66.7 75.0 60.0 66.7 100.0 28.6 44.4 81.8 69.2 75.0 81.3 62.5 67.9 Table 5: Detailed per category classi\ufb01cation results on ACL-ARC dataset. Category (# instances) Background (1,014) Method (613) Result (260) Average (Macro) P R F1 P R F1 P R F1 P R F1 BiLSTM-Attn 82.2 83.2 82.7 80.7 74.4 77.4 67.1 76.2 71.4 76.7 77.9 77.2 BiLSTM-Attn w/ ELMo 86.6 87 86.8 87.2 79.1 83.0 71.5 85.8 78.0 81.8 84.0 82.6 Previous SOTA (Jurgens et al., 2018) 77.9 92.9 84.7 91.5 63.1 74.7 79.1 77.3 78.2 82.8 77.8 79.2 BiLSTM-Attn + section title scaffold 81.3 86.0 83.6 85.3 68.8 76.2 66.8 81.9 73.6 77.8 78.9 77.8 BiLSTM-Attn + citation worthiness scaffold 82.9 84.8 83.8 84.6 73.2 78.5 65.4 80.0 72.0 77.6 79.3 78.1 BiLSTM-Attn + both scaffolds 85.4 80.8 83.0 78.6 80.4 79.5 69.8 80.8 74.9 77.9 80.7 79.1 BiLSTM-Attn w/ ELMo + both scaffolds 85.4 90.3 87.8 89.5 80.8 84.9 79.3 79.6 79.5 84.7 83.6 84.0 Table 6: Detailed per category classi\ufb01cation results on the SciCite dataset. 
Some prediction errors are due to the model failing to properly differentiate the USE category from BACKGROUND. We found that some of these errors could possibly have been prevented by using additional context. Table 7 shows a sample of such classification errors. Table 7: A sample of the model's classification errors on the ACL-ARC dataset (example; true label; predicted label): (1) "Our work is inspired by the latent left-linking model in (CITATION) and the ILP formulation from (CITATION)."; MOTIVATION; USE. (2) "ASARES is presented in detail in (CITATION)."; USE; BACKGROUND. (3) "The advantage of tuning similarity to the application of interest has been shown previously by (CITATION)."; COMPARE; BACKGROUND. (4) "One possible direction is to consider linguistically motivated approaches, such as the extraction of syntactic phrase tables as proposed by (CITATION)."; FUTUREWORK; BACKGROUND. (5) "After the extraction, pruning techniques (CITATION) can be applied to increase the precision of the extraction."; BACKGROUND; USE. For the citation in the first row of the table, the model is likely distracted by "model in (CITATION)" and "ILP formulation from (CITATION)", deeming that the sentence refers to the use of another method from a cited paper, and it misses the first part of the sentence, which describes the motivation. This is likely due to the small number of training instances in the MOTIVATION category, preventing the model from learning such nuances. For the examples in the second and third rows, it is not clear whether it is possible to make the correct prediction without additional context. Similarly, in the last row the instance seems ambiguous without access to additional context. As shown in Figure 4a, two of the FUTUREWORK labels are wrongly classified. One of them is illustrated in the fourth row of Table 7, where perhaps additional context could have helped the model identify the correct label. One possible way to prevent this type of error is to provide the model with an additional input modeling the extended surrounding context. We experimented with encoding the extended surrounding context using a BiLSTM and concatenating it with the main citation context vector (z), but it resulted in a large decline in overall performance, likely due to the noise introduced by the additional context. A possible future work is to investigate alternative effective approaches for incorporating the surrounding extended context. 5 Related Work There is a large body of work studying the intent of citations and devising categorization systems (Stevens and Giuliano, 1965; Moravcsik and Murugesan, 1975; Garzone and Mercer, 2000; White, 2004; Ahmed et al., 2004; Teufel et al., 2006; Agarwal et al., 2010; Dong and Schäfer, 2011). Most of these efforts provide citation categories that are too fine-grained, some of which rarely occur in papers. Therefore, they are hardly useful for automated analysis of scientific publications. Figure 4: Confusion matrices showing classification errors of our best model on the two datasets, (a) ACL-ARC (test size: 139) and (b) SciCite (test size: 1,861); the diagonal is masked to bring focus only on errors. To address these problems and to unify previous efforts, in a recent work, Jurgens et al. (2018) proposed a six category system for citation intents.
In this work, we focus on two schemes: (1) the scheme proposed by Jurgens et al. (2018) and (2) an additional, more coarse-grained generalpurpose category system that we propose (details in \u00a73). Unlike other schemes that are domainspeci\ufb01c, our scheme is general and naturally \ufb01ts in scienti\ufb01c discourse in multiple domains. Early works in automated citation intent classi\ufb01cation were based on rule-based systems (e.g., (Garzone and Mercer, 2000; Pham and Hoffmann, 2003)). Later, machine learning methods based on linguistic patterns and other hand-engineered features from citation context were found to be effective. For example, Teufel et al. (2006) proposed use of \u201ccue phrases\u201d, a set of expressions that talk about the act of presenting research in a paper. Abu-Jbara et al. (2013) relied on lexical, structural, and syntactic features and a linear SVM for classi\ufb01cation. Researchers have also investigated methods of \ufb01nding cited spans in the cited papers. Examples include feature-based methods (Cohan et al., 2015), domain-speci\ufb01c knowledge (Cohan and Goharian, 2017), and a recent CNNbased model for joint prediction of cited spans and citation function (Su et al., 2018). We also experimented with CNNs but found the attention BiLSTM model to work signi\ufb01cantly better. Jurgens et al. (2018) expanded all pre-existing featurebased efforts on citation intent classi\ufb01cation by proposing a comprehensive set of engineered features, including boostrapped patterns, topic modeling, dependency-based, and metadata features for the task. We argue that we can capture necessary information from the citation context using a data driven method, without the need for handengineered domain-dependent features or external resources. We propose a novel scaffold neural model for citation intent classi\ufb01cation to incorporate structural information of scienti\ufb01c discourse into citations, borrowing the \u201cscaffold\u201d terminology from Swayamdipta et al. (2018) who use auxiliary syntactic tasks for semantic problems. 6" + }, + { + "url": "http://arxiv.org/abs/1806.05258v2", + "title": "SMHD: A Large-Scale Resource for Exploring Online Language Usage for Multiple Mental Health Conditions", + "abstract": "Mental health is a significant and growing public health concern. As language\nusage can be leveraged to obtain crucial insights into mental health\nconditions, there is a need for large-scale, labeled, mental health-related\ndatasets of users who have been diagnosed with one or more of such conditions.\nIn this paper, we investigate the creation of high-precision patterns to\nidentify self-reported diagnoses of nine different mental health conditions,\nand obtain high-quality labeled data without the need for manual labelling. We\nintroduce the SMHD (Self-reported Mental Health Diagnoses) dataset and make it\navailable. SMHD is a novel large dataset of social media posts from users with\none or multiple mental health conditions along with matched control users. We\nexamine distinctions in users' language, as measured by linguistic and\npsychological variables. 
We further explore text classification methods to\nidentify individuals with mental conditions through their language.", + "authors": "Arman Cohan, Bart Desmet, Andrew Yates, Luca Soldaini, Sean MacAvaney, Nazli Goharian", + "published": "2018-06-13", + "updated": "2018-07-10", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "main_content": "Introduction Mental health is a signi\ufb01cant challenge in healthcare. Mental disorders have the potential to tremendously affect the quality of life and wellness of individuals in society (Strine et al., 2008; Mowery et al., 2017a). Social media have become an increasingly important source of data related to mental health conditions (Cohan et al., 2017; Mowery et al., 2017b; Coppersmith et al., 2017; Yates et al., 2017), as it is now a prominent platform for individuals to engage in daily discussions, share information, seek advice and simply communicate with peers that have shared interests. In addition to its ubiquity and ease of access, the possibility to disclose mental health matters anonymously or pseudo-anonymously further drives users to online self-disclosure. At the same time, the close connection between language and mental health makes social media an invaluable resource of mental health-related data. Lack of data has been one of the key limitations to understanding and addressing the challenges the domain is facing (Coppersmith et al., 2014a). Data from social media can not only be used to potentially provide clinical help to users in need, but also to broaden our understanding of the various mental health conditions. Social media analysis has already been proven valuable for identifying depression (Coppersmith et al., 2014a; Yates et al., 2017), suicide ideation (Cohan et al., 2017; De Choudhury and K\u0131c\u0131man, 2017; Kshirsagar et al., 2017; Desmet and Hoste, 2018), and other conditions such as schizophrenia (Mitchell et al., 2015). While social media data is abundantly available, the amount of labeled data for studying mental health conditions is limited. This is due to the high cost of annotation and the dif\ufb01culty of access to experts. Prior research has investigated self-disclosure as a means of obtaining labeled data from social media. De Choudhury et al. (2013a) used it to identify new mothers and track post-partum changes in emotions. * Equal contribution. This work is licensed under a Creative Commons Attribution 4.0 International License. License details: http:// creativecommons.org/licenses/by/4.0/ arXiv:1806.05258v2 [cs.CL] 10 Jul 2018 \fCondition Twitter, (Coppersmith et al, 2015) Reddit, SMHD (ours) ADHD 102 10,098 Anxiety 216 8,783 Autism n/a 2,911 Bipolar 188 6,434 Borderline 101 n/a Depression 393 14,139 Eating 238 598 OCD 100 2,336 PTSD 403 2,894 Schizophrenia 172 1,331 Seasonal Affective 100 n/a Table 1: Comparison between the number of self-reported diagnosed users per condition in the dataset of Coppersmith et al. (2015a) and ours (SMHD). I was of\ufb01cially diagnosed with ADHD last year. I have a diagnosed history of PTSD. my dr just diagnosed me as schizo. Figure 1: Examples of self-reported diagnoses statements. Coppersmith et al. (2014a) speci\ufb01cally focused on self-reports of mental health diagnoses. In particular, Coppersmith et al. (2015a) constructed a dataset of various mental health conditions using Twitter statements. Finally, Yates et al. (2017) introduced a large dataset of depressed users obtained from Reddit. 
We extend the previous efforts on addressing the lack of large-scale mental health-related language data. Particularly, we propose improved data collection methods through which we can obtain highquality large-scale datasets of labeled diagnosed conditions paired with appropriate control users. Consequently, we introduce SMHD (Self-reported Mental Health Diagnoses), a large dataset of diverse mental health conditions that can provide further insight into the mental health-related language. We leverage self-reported diagnosis statements where a user declares to have been diagnosed with a mental health condition such as depression. Examples of self-reported diagnoses are shown in Figure 1. Our dataset can be used not only to develop methods for better identifying mental health conditions through natural language, but also allows us to investigate the characteristics of language usage within each condition. We hope the availability of this new resource will foster further research into these problems and enhance reproducibility of suggested approaches. Our work has the following signi\ufb01cant distinctions compared to existing social media datasets related to mental health. Previous work has studied self-reported diagnosis posts in Twitter (Coppersmith et al., 2015a), where the post length is limited to 140 characters.1 This makes the Twitter language use rather different from real life discussions. Instead, we use data from Reddit, an interactive discussioncentric forum without any length constraints. Our dataset contains up to two orders of magnitude more diagnosed individuals for each condition than the Twitter dataset by Coppersmith et al. (2015a), making it suitable for exploring more recent data-driven learning methods (see Table 1). We choose our control users in a systematic way that makes classi\ufb01cation experiments on the dataset realistic. We normalize language usage between the users: by removing speci\ufb01c mental health signals and discussions, we focus on patterns of language in normal (general) discussions. While our dataset creation method is close to Yates et al. (2017), we extend theirs by investigating multiple high-precision matching patterns to identify self-reported diagnoses for a range of conditions. Part of our patterns are obtained through synonym discovery. Considering relevant synonyms from reliable sources increases the variety of the diagnosed users and linguistic nuances. We also explore nine common mental health 1This limitation was doubled to 280 characters in late 2017. \fconditions while Yates et al. (2017) focus only on depression. We explore classi\ufb01cation methods for identifying mental health conditions through social media language and provide detailed analysis that helps us understand the differences in language usage between conditions, and between diagnosed users and controls. Our contributions are as follows: (i) We investigate the creation of high-precision matching patterns to identify self-reported diagnoses of nine different mental health conditions. (ii) We introduce a largescale dataset of nine mental health conditions that has signi\ufb01cant extensions to existing datasets and we make our data publicly available. Our dataset includes users who might suffer from more than one condition, thus allowing language study of interacting mental conditions. (iii) We investigate language characteristics of each mental health group. (iv) We explore classi\ufb01cation methods for detecting users with various mental health conditions. 
2 Related work Social media offers a considerable amount of accessible common language, attracting the attention of those who study the language of individuals with mental health conditions. Twitter is a natural source, being a popular platform that enables users to share short messages publicly. Early work used crowdsourcing to identify Twitter users who report a depression diagnosis in a survey, and proposed features that are able to identify depressed users prior to the onset of depression (De Choudhury et al., 2013b). Others found that these characteristics hold in both English and Japanese tweets (Tsugawa et al., 2015), indicating similar cross-cultural tendencies. Due to the cost and bias introduced by relying on surveys, work shifted to identifying mental health conditions by examining the content shared by social media users. Coppersmith et al. (2014a) identi\ufb01ed approximately 1,200 Twitter users with 4 mental health conditions (bipolar, depression, PTSD, SAD) using diagnosis statements found in tweets (e.g., \u201cI was diagnosed with depression\u201d). Following this work, detailed studies were conducted on users experiencing PTSD (Coppersmith et al., 2014b), and schizophrenia (Mitchell et al., 2015; Ernala et al., 2017). The shared task at the 2nd Computational Linguistics and Clinical Psychology Workshop (CLPsych 2015) focused on identifying depression and PTSD users on Twitter (Coppersmith et al., 2015b). This included a set of approximately 1,800 Twitter users with self-identi\ufb01ed diagnoses. Leading submissions to CLPsych 2015 relied on the LIWC lexicon (Pennebaker et al., 2015), topic modeling, manual lexicons, and other domain-dependent features (Resnik et al., 2015; Preotiuc-Pietro et al., 2015). Coppersmith et al. (2015a) expands the research on Twitter to eleven self-identi\ufb01ed mental health conditions. (Benton et al., 2017) uses this dataset (and others) with a neural multi-task learning approach to identify language characteristics. Mowery et al. (2016) investigates speci\ufb01c symptoms of depression in tweets, including depressed mood, disturbed sleep, and loss of energy. While the abundant short texts of Twitter can provide some insight into language characteristics of those with mental health conditions, long-form content can provide additional linguistic insights. Some have investigated the language of users of an online crisis forum to identify posts of users who are at highest risk to allow for faster intervention (Milne et al., 2016; Cohan et al., 2017). Losada and Crestani (2016) applied the self-reported diagnosis strategy to identify approximately 150 Reddit users who suffer from depression, and paired them with 750 control users. Yates et al. (2017) also used selfreported diagnoses to identify clinically depressed users, but applied it to a larger set of Reddit, yielding the Reddit Self-reported Depression Diagnosis (RSDD) dataset of over 9,000 users with depression and over 100,000 control users (using an improved user control identi\ufb01cation technique). The corpus was also used to study the temporal aspects of self-reported diagnoses (MacAvaney et al., 2018). Others have used data sources beyond social media to examine the language of people with mental health conditions. Resnik et al. (2013) uses topic models to predict depression and neuroticism based on student-written essays, \ufb01nding clear clusters of words when students are asked to write about their feelings in a stream-of-consciousness setting. Althoff et al. 
(2016) uses text message conversations from a mental health crisis center to improve counseling techniques. This work addresses important limitations of previous efforts. Similar to RSDD (Yates et al., 2017), we \fbuild our corpus from Reddit using self-reported diagnoses. This results in a large amount of long-form post content that is not constrained by a character limit. Furthermore, because there are no character limits (as exist for Twitter), the mode of language is more typical of general writing. Unlike Yates et al. (2017), we investigate extended self-diagnoses matching patterns derived from mental health-related synonyms. We also focus on nine mental health conditions (rather than just a single condition). This results in a collection that can be used to compare and contrast the language characteristics of each condition. 3 Data In this section we describe the construction and characteristics of the Self-reported Mental Health Diagnoses (SMHD) dataset. The studied conditions correspond to branches in the DSM-5 (American Psychiatric Association, 2013), an authoritative taxonomy for psychiatric diagnoses. Six conditions are top-level DSM-5 disorders: schizophrenia spectrum disorders (schizophrenia), bipolar disorders (bipolar), depressive disorders (depression), anxiety disorders (anxiety), obsessive-compulsive disorders (ocd) and feeding and eating disorders (eating). The three other conditions are one rank lower: post-traumatic stress disorder (ptsd) is classi\ufb01ed under traumaand stress-related disorders, and autism spectrum disorders (autism) and attention-de\ufb01cit/hyperactivity disorder (adhd) under neurodevelopmental disorders. We use these lower-rank conditions to provide more de\ufb01nition when they are clearly distinguishable from sibling disorders. 3.1 Dataset construction The SMHD dataset was created by using high precision patterns to identify Reddit users who claimed to have been diagnosed with a mental health condition (diagnosed users) and using exclusion criteria to match these diagnosed users with control users who are unlikely to have one of the mental health conditions studied (control users). SMHD consists of user labels indicating the mental health condition(s) associated with each user and all Reddit posts made by each user between January 2006 and December 2017 (inclusive). Users and posts were extracted from a publicly available Reddit corpus2. This approach is based on the method used to create the Reddit Self-reported Depression Diagnosis (RSDD) dataset (Yates et al., 2017). SMHD expands on RSDD by incorporating synonyms in matching patterns and including diagnoses for eight new conditions in addition to depression.3 Diagnosed users were identi\ufb01ed using high precision diagnosis patterns in a manner similar to that used in prior work that studied depression on Reddit (Yates et al., 2017); we describe these patterns in more detail in section 3.2. After identifying candidate diagnosed users who matched a diagnosis pattern (see Figure 1), we removed any candidates who (1) matched a negative diagnosis pattern4 or (2) had fewer than 50 posts talking about topics other than mental health (mental health posts). This is done to ensure enough data remains for a diagnosed user after removing their mental health-related content. After removing these candidate diagnosed users, 36948 diagnosed users remain. 
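The diagnosed-user selection just described reduces to a filter over candidate users: keep those matching a self-reported diagnosis pattern, then drop those matching a negative pattern or left with fewer than 50 non-mental-health posts. The snippet below is a schematic illustration under our own assumptions, not the released SMHD tooling; the real diagnosis and negation patterns are far richer, and is_mental_health_post stands in for the subreddit- and term-based test described in the next paragraph.

import re

# Toy stand-ins for the high-precision patterns used in the paper.
DIAGNOSIS = re.compile(r"\bI\s+(?:was|am|have\s+been)\s+diagnosed\s+with\b", re.I)
NEGATION = re.compile(r"\b(?:never|not)\s+(?:been\s+)?(?:clinically\s+)?diagnosed\b", re.I)

def select_diagnosed(users, is_mental_health_post, min_other_posts=50):
    # users: iterable of dicts with a "posts" list of {"text": ...} entries (assumed schema).
    kept = []
    for user in users:
        texts = [p["text"] for p in user["posts"]]
        if not any(DIAGNOSIS.search(t) for t in texts):
            continue                                   # no self-reported diagnosis found
        if any(NEGATION.search(t) for t in texts):
            continue                                   # negative diagnosis pattern excludes the candidate
        other = [p for p in user["posts"] if not is_mental_health_post(p)]
        if len(other) >= min_other_posts:              # enough data left after removing MH posts
            kept.append(user)
    return kept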
Mental health posts were defined as posts that were either made to a subreddit (i.e., a subforum devoted to a specific topic) related to mental health or that included language related to mental health, such as the name of a condition (e.g., OCD) and general terms like diagnosis, mental illness, or suffering from. We constructed a list of subreddits related to mental health by beginning with lists from prior work studying depression on Reddit (Pavalanathan and De Choudhury, 2015; Yates et al., 2017) and expanding them to include discussion and support subreddits for each of the other mental health conditions. All mental health posts are removed for diagnosed users and control users alike. Classification therefore happens on the set of posts that do not contain any of the mental health terms, and that have not been posted in any of the mental health-related subreddits. Our methodology does not guarantee, however, that all potentially relevant terms or subreddits have been excluded. Footnotes: (2) https://files.pushshift.io/reddit/ (3) Patterns are available from http://ir.cs.georgetown.edu/data/smhd/. (4) e.g., "I was never clinically diagnosed." Figure 2: Precision, recall, F1 and F0.5 of the condition diagnosis patterns as a function of the maximum allowable distance (in characters) between a diagnosis and a condition keyword; the chosen threshold is indicated at 40 characters. Figure 3: Number of users per condition for SMHD (in blue) and SMHD-RC (in orange). Figure 4: Number of users with a single or multiple co-occurring conditions. Figure 5: Concomitance of condition diagnoses; rows contain relative co-occurrences of the row condition with other conditions. Control users were chosen from a pool of candidate control users based on their similarity with the diagnosed users, as measured by their number of posts and the subreddits they posted in. This is done to prevent biases between the control and diagnosed groups in the dataset and to prevent making the task of identifying such users artificially easy. In more detail, every Reddit user who (1) had no mental health post and (2) had at least 50 posts became a candidate control user. Given this pool of candidate control users, we matched each diagnosed user with 9 candidate control users (on average) after excluding controls who had never posted in the same subreddit as the diagnosed user or who had more than twice as many posts or less than half as many posts as the target diagnosed user. The selection criteria for potential control users are stringent: users are removed if they do not have the required subreddit overlap or minimum post count, or if they use any of the mental health-related terms in any of their posts. The latter is necessary because we found that the prior probability of using an exclusion term in any given subreddit (e.g. depressed) is almost always higher for diagnosed users than for controls, even in popular subreddits on general topics (e.g. r/politics). As a result, the pool of candidate controls for matching to diagnosed users is significantly diminished.
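The matching constraints above translate into a simple greedy pairing over the candidate pool. The following sketch is our simplified rendering of that step, not the authors' implementation; the field names and the exact greedy order are assumptions, and the real procedure reaches 9 controls per diagnosed user only on average.

def match_controls(diagnosed, candidate_pool, per_user=9):
    # diagnosed / candidate_pool: dicts with "id", a set "subreddits", and "num_posts" (assumed schema).
    matched, used = {}, set()
    for d in diagnosed:
        picks = []
        for c in candidate_pool:
            if c["id"] in used:
                continue                                            # controls are drawn without replacement
            if not (d["subreddits"] & c["subreddits"]):
                continue                                            # must share at least one subreddit
            if not (0.5 * d["num_posts"] <= c["num_posts"] <= 2 * d["num_posts"]):
                continue                                            # comparable posting volume
            picks.append(c)
            used.add(c["id"])
            if len(picks) == per_user:
                break
        matched[d["id"]] = picks                                    # may end up with fewer than per_user controls
    return matched

Diagnosed users for whom fewer than 9 such controls can be found are the ones released separately, as described next.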
Because control users are picked from the pool without replacement, we were unable to meet the target of 9 appropriate controls for some of the diagnosed users. We release the users who have at least 9 control users as SMHD, and use this dataset for all analyses in this paper. It contains 20,406 diagnosed users and 335,952 matched controls. (When defining language related to mental health, we consider add to be a mental health term. However, this term is often used as a verb that does not refer to Attention Deficit Disorder. We therefore do not exclude candidate control users who have used this term; instead, we remove any post containing the term add, which matches our treatment of the diagnosed users, where all mental health posts are removed.) A secondary dataset containing the remaining 16,542 diagnosed users with fewer than 9 controls will be made available as SMHD-RC (Relaxed Controls) for studies that require additional data for analysing differences between mental health conditions, rather than between diagnosed users and controls. 3.2 Diagnosis patterns To identify Reddit users who report a diagnosis for one or more mental health conditions, we developed detection patterns with a focus on high precision. The patterns consist of two components: one that matches a self-reported diagnosis, and another that maps relevant diagnosis keywords to the 9 mental health conditions. A user is included for a condition if one of the condition keywords occurs within a certain distance of the diagnosis pattern (as discussed below). For each condition, a seed list of diagnosis keywords was collected from the corresponding DSM-5 headings. To increase the likelihood of matching diagnostic posts, we expanded each set of medical expressions to include synonyms, common misspellings, vernacular terms and abbreviations. Our steps mirror the ones of Soldaini and Yom-Tov (2017), who were also interested in identifying self-diagnosed users, albeit on query logs. In particular, we leveraged two synonym mappings to generate alternative formulations of the disorders of interest: • MedSyn (Yates and Goharian, 2013) is a laypeople-oriented synonym mapping ontology. It was generated from a subset of UMLS6 filtered to remove irrelevant semantic types. • Behavioral (Yom-Tov and Gabrilovich, 2013) maps expressions commonly used by laypeople to describe their medical condition to concepts in UMLS. The synonyms were generated by first identifying the most frequent search terms Yahoo! search used to retrieve Wikipedia medical pages. Then, frequent lexical affinities (Carmel et al., 2002) to the aforementioned were added to synonyms lists. The resulting expansions were vetted manually to remove terms that did not describe the condition precisely (e.g., hypernyms) or that were ambiguous (e.g., "add"), and missing items were added. Using the diagnosis patterns without condition keywords, all self-reported diagnosis posts were collected from the Reddit dataset, and 500 posts were randomly sampled for manual annotation. False positive matches were found in 18 of those, i.e. the precision for self-diagnosis detection was 96.4%. False positives included negations, hypotheticals (e.g., "I'm not sure if I was diagnosed"), and diagnoses that are uncertain or wrong ("I was diagnosed with autism, then undiagnosed."). Out of 500 diagnosis posts, 241 reported a mental health condition.
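The two-component pattern described above, a diagnosis trigger plus a nearby condition keyword, can be illustrated as follows. This is a hedged sketch, not the released patterns: the diagnosis regex and keyword lists are toy stand-ins for the vetted DSM-5/MedSyn/Behavioral lists, and the character window is the one tuned in the next paragraph (40 characters).

import re

DIAGNOSIS = re.compile(r"\bdiagnosed\s+(?:me\s+)?(?:with|as)\b", re.I)
CONDITION_TERMS = {
    # One keyword list per condition, built in the paper from synonym resources; these are illustrative only.
    "depression": ["depression", "depressive disorder"],
    "adhd": ["adhd", "attention deficit"],
    "ptsd": ["ptsd", "post-traumatic stress"],
}

def detect_conditions(post_text, window=40):
    # Return the set of conditions whose keywords appear within `window` characters of a diagnosis match.
    hits = set()
    for m in DIAGNOSIS.finditer(post_text):
        lo, hi = max(0, m.start() - window), m.end() + window
        span = post_text[lo:hi].lower()
        for condition, terms in CONDITION_TERMS.items():
            if any(t in span for t in terms):
                hits.add(condition)
    return hits

For example, detect_conditions("my dr just diagnosed me as having depression") would return {"depression"} under these toy lists, while a diagnosis statement with no nearby condition keyword yields an empty set.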
The ones that were annotated as belonging to one of the 9 conditions were used to tune the condition term lists and the optimal distance between a diagnosis pattern and a condition term. Figure 2 plots the effect of this distance on precision, recall, F1 and F0.5 (which emphasizes precision). A maximum distance of 40 characters was chosen, where F0.5 score is highest. We thus achieve high precision (93%) with good recall (84%). Since the optimal distance threshold was tuned on this development set, it may over\ufb01t the data and the reported scores should be considered a ceiling performance. To validate the \ufb01nal diagnosis matching approach on a held-out set, 900 posts (corresponding to 9 samples of 100 diagnosis posts, one for each condition) was manually checked for false positives. We obtain high precision, with a minimum precision of 90% for anxiety and macro-averaged precision of 95.8%. Most false positives are caused by terms for a condition occurring close to a diagnosis for another condition (e.g. \u201cMy doctor diagnosed me with depression, and I also have an anxiety problem.\u201d). While a user might also suffer from these conditions, they are not explicitly reporting a diagnosis. 3.3 Dataset statistics Figure 3 shows the distribution of diagnosed users per condition in both datasets. Users who self-reported a diagnosis of depression, ADHD, anxiety or bipolar are most common. Interestingly, 26.7% of diagnosed users in the dataset reported more than one diagnosis (Figure 4). Such concomitant conditions are not uncommon, and were also reported in the work of Coppersmith et al. (2015a). As can be seen in 6Uni\ufb01ed Medical Language System \fcondition posts tokens characters per user total per post total per post control 310.0 (157.8) 115,669k 26.2 (48.3) 3,031.6M 133.9 (252.9) depression 162.2 (84.2) 1,272k 45.1 (80.0) 57.4M 227.5 (406.9) adhd 164.7 (83.6) 872k 46.5 (82.7) 40.5M 237.5 (433.5) anxiety 159.7 (83.0) 795k 46.4 (83.0) 36.9M 233.9 (422.8) bipolar 157.6 (82.4) 575k 45.5 (86.5) 26.2M 230.6 (447.0) ptsd 160.7 (84.7) 258k 53.1 (114.0) 13.7M 267.8 (581.7) autism 168.3 (84.5) 248k 46.5 (82.3) 11.6M 237.9 (434.0) ocd 158.8 (81.4) 203k 46.4 (90.1) 9.4M 234.2 (459.5) schizophrenia 157.3 (80.5) 123k 49.2 (105.6) 6.1M 253.8 (566.6) eating 161.4 (81.0) 53k 46.3 (73.7) 2.5M 232.6 (372.8) Table 2: Average (Stdev.) and count of posts, tokens and characters for diagnosed and control users. Figure 5, depression co-occurs with high frequency with most of the other conditions, almost 30% of users with depression, OCD or PTSD also suffer from anxiety, and schizophrenia is most likely to be diagnosed alongside bipolar disorder. An important characteristic of the SMHD dataset is its scale. Reddit allows its users to write long-form posts, so unlike datasets collected on Twitter, a large amount of text is available for each diagnosed user. Table 2 gives an overview of the average and total number of posts, tokens and characters per condition, and for controls. Control users post on average twice as many posts as diagnosed users, but these posts tend to be considerably shorter. Although this may be a valid signal for certain mental health conditions, it can be removed for classi\ufb01cation experiments by truncating the length and number of posts. This is common practice for technical reasons, and truncating post length has been shown in previous work to improve classi\ufb01cation performance (Yates et al., 2017). 
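The truncation mentioned above is a simple preprocessing step; a minimal sketch, assuming whitespace tokenization and arbitrary caps that are not taken from the paper, could look like this.

def truncate_user(posts, max_posts=100, max_tokens=128):
    # Cap posts per user and tokens per post so that control users' higher post
    # counts and shorter posts do not act as a trivial classification signal.
    trimmed = []
    for post in posts[:max_posts]:
        tokens = post.split()[:max_tokens]
        trimmed.append(" ".join(tokens))
    return trimmed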
3.4 Ethics and privacy Even though we rely on publicly available Reddit posts in our work, mental health is a sensitive matter and measures to prevent risk to individuals in social media research should always be considered (Hovy and Spruit, 2016; \u02c7 Suster et al., 2017). The risks associated with the data collection methods and our resulting SMHD dataset is minimal. We refrain from publicly posting any excerpts of the data, we made no attempt to contact users, and we made no attempt to identify or link users to other social media accounts. We further replace usernames with random identi\ufb01ers to prevent users\u2019 identities from being known without the use of external information. The SMHD dataset is available through a Data Usage Agreement (DUA)7 protecting the users\u2019 privacy. In particular, the DUA speci\ufb01es that no attempt should be made to publish portions of the dataset (which could result in users being identi\ufb01ed), contact users, identify them, or link them with other user information. 4 Analysis To investigate the differences between the language of mental health condition groups and the control group, we categorize language of users based on measures of psycholinguistic attributes through the LIWC lexicon (Pennebaker et al., 2015). These categories include variables that characterize linguistic style as well as psychological aspects of language (e.g., cognitive attributes and affective attributes). For each user, we obtain LIWC categories based on their posts and then compare these categories across users in each mental health condition group versus the control group using Welch\u2019s t-test (Welch, 1947). We adjust p-values with Bonferroni correction. To better see the differences between the categories, we also report the Cohen\u2019s d statistic (Cohen, 1988). Table 3 shows the results. In general, we observe a variety of differences in language use between the diagnosed and the control groups. The effect sizes range from a very small effect to medium (0.05