diff --git "a/abs_29K_G/test_abstract_long_2405.00801v1.json" "b/abs_29K_G/test_abstract_long_2405.00801v1.json" new file mode 100644--- /dev/null +++ "b/abs_29K_G/test_abstract_long_2405.00801v1.json" @@ -0,0 +1,512 @@ +{ + "url": "http://arxiv.org/abs/2405.00801v1", + "title": "\"Ask Me Anything\": How Comcast Uses LLMs to Assist Agents in Real Time", + "abstract": "Customer service is how companies interface with their customers. It can\ncontribute heavily towards the overall customer satisfaction. However,\nhigh-quality service can become expensive, creating an incentive to make it as\ncost efficient as possible and prompting most companies to utilize AI-powered\nassistants, or \"chat bots\". On the other hand, human-to-human interaction is\nstill desired by customers, especially when it comes to complex scenarios such\nas disputes and sensitive topics like bill payment.\n This raises the bar for customer service agents. They need to accurately\nunderstand the customer's question or concern, identify a solution that is\nacceptable yet feasible (and within the company's policy), all while handling\nmultiple conversations at once.\n In this work, we introduce \"Ask Me Anything\" (AMA) as an add-on feature to an\nagent-facing customer service interface. AMA allows agents to ask questions to\na large language model (LLM) on demand, as they are handling customer\nconversations -- the LLM provides accurate responses in real-time, reducing the\namount of context switching the agent needs. In our internal experiments, we\nfind that agents using AMA versus a traditional search experience spend\napproximately 10% fewer seconds per conversation containing a search,\ntranslating to millions of dollars of savings annually. Agents that used the\nAMA feature provided positive feedback nearly 80% of the time, demonstrating\nits usefulness as an AI-assisted feature for customer care.", + "authors": "Scott Rome, Tianwen Chen, Raphael Tang, Luwei Zhou, Ferhan Ture", + "published": "2024-05-01", + "updated": "2024-05-01", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Original Paper", + "paper_cat": "LLM AND Agent", + "gt": "Customer service is how companies interface with their customers. It can\ncontribute heavily towards the overall customer satisfaction. However,\nhigh-quality service can become expensive, creating an incentive to make it as\ncost efficient as possible and prompting most companies to utilize AI-powered\nassistants, or \"chat bots\". On the other hand, human-to-human interaction is\nstill desired by customers, especially when it comes to complex scenarios such\nas disputes and sensitive topics like bill payment.\n This raises the bar for customer service agents. They need to accurately\nunderstand the customer's question or concern, identify a solution that is\nacceptable yet feasible (and within the company's policy), all while handling\nmultiple conversations at once.\n In this work, we introduce \"Ask Me Anything\" (AMA) as an add-on feature to an\nagent-facing customer service interface. AMA allows agents to ask questions to\na large language model (LLM) on demand, as they are handling customer\nconversations -- the LLM provides accurate responses in real-time, reducing the\namount of context switching the agent needs. In our internal experiments, we\nfind that agents using AMA versus a traditional search experience spend\napproximately 10% fewer seconds per conversation containing a search,\ntranslating to millions of dollars of savings annually. 
Agents that used the\nAMA feature provided positive feedback nearly 80% of the time, demonstrating\nits usefulness as an AI-assisted feature for customer care.", + "main_content": "INTRODUCTION Comcast, like many other companies, provides customer service through various communication channels. Many self-service solutions are available on the mobile \"Xfinity\" app (e.g., reviewing the latest bill), which also has an option to chat with an AI-powered bot named \"Xfinity Assistant\". While these digital automation capabilities have been replacing human customer representatives (also referred to as \"agents\") for many tasks, there are still many situations that require human-to-human interaction. A customer trying to simply look up information about their profile, internet services, or bill should be able to do so without an agent's assistance. The same holds if they are trying to carry out a relatively straightforward task like rescheduling an appointment or making a change to their services. Past studies show that human-human interaction is preferred over human-computer interaction in certain customer service situations [21]. For example, agents might outperform bots in situations that require creative problem solving. In other situations, the customer might simply prefer to talk to an agent to benefit from their empathy and emotional intelligence, or to navigate cultural sensitivities. At Comcast, an internal custom tool suite aims to help agents handle such conversations effectively and efficiently. However, it still often requires manually looking up information in multiple places, relating it to what the customer is saying, and then crafting a relevant response that aligns with the communication guidelines. In this paper, we introduce a new feature to this tool suite called \"Ask Me Anything\" (AMA). It leverages large language models (LLMs) in a retrieval-augmented generation (RAG) approach to generate contextually relevant responses from internal knowledge sources: existing knowledge articles are indexed efficiently at build time, relevant chunks of text are retrieved for a given question at query time, and the retrieved chunks are fed to a Reader LLM that generates a succinct answer with citations provided as references. In the next section, we describe the methodology in more detail. 2 METHODOLOGY Our system follows a typical RAG implementation with modifications to improve performance on proprietary questions. First, the documents are preprocessed to plain text and chunked; the chunks are embedded and then stored with metadata (e.g., associated URL for citations, an identifier, the title, etc.) in a vector database. We describe our specific choices for processing and embeddings in Section 2.1 and Section 2.2, respectively, with some experimental justification. Next, we detail how we train and evaluate a reranking model using synthetic data to improve search result relevancy in Section 2.3. Finally, we discuss how we generate answers and how we evaluate the system in Sections 2.4 and 2.5. 2.1 Document Preprocessing We receive documents from various internal clients in different formats. We standardize the documents into plain text and chunk each document into snippets using Deepset.ai's Haystack library [13].
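As a concrete illustration, the following is a minimal sketch of this preprocessing step, assuming Haystack 1.x's PreProcessor API; the document loader and the parameter values shown are illustrative rather than the exact production configuration.

```python
# Minimal sketch of the chunking step, assuming Haystack 1.x's PreProcessor API.
from haystack.nodes import PreProcessor
from haystack.schema import Document

preprocessor = PreProcessor(
    clean_empty_lines=True,
    clean_whitespace=True,
    clean_header_footer=True,
    split_by="word",
    split_length=300,
    split_overlap=50,
    split_respect_sentence_boundary=True,
    # max_chars_check=3000,  # setting C in Table 1; availability varies by Haystack release
)

# load_internal_articles() is a hypothetical loader for the internal knowledge base.
raw_docs = [
    Document(content=text, meta={"url": url, "title": title})
    for text, url, title in load_internal_articles()
]

chunks = preprocessor.process(raw_docs)  # each chunk keeps its parent document's metadata
```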
In order to uniquely reference each chunk of every document after retrieval, we assign an origin identifier to each document and a local identifier to each chunk. Finally, we implement role-based access control on each document, so different users can only view the documents for which they have permission. In Table 1, we show various chunking parameters for Haystack's preprocessor and their evaluation scores. The metric derivation is explained in Section 2.5 (Answer Quality was computed assuming the top 3 items were passed to the LLM). We observed a large improvement from setting a higher max_chars_check, which we used as a proxy for limiting the size of each snippet given to the LLM. Table 1: Chunking parameters and evaluation of three different settings. Only changes from setting A are shown; empty cells mean the value is the same as in A. Parameters: clean_empty_lines A=true; clean_whitespace A=true; clean_header_footer A=true; split_by A=word; split_length A=300, B=100; split_overlap A=50, B=25; split_respect_sentence_boundary A=true; max_chars_check A=1000, C=3000. Metrics: Answer Quality B=-5.7%, C=+13.2%; MRR B=-13.3%, C=0.0%; R@3 B=-7.9%, C=0.0%; NDCG B=-10.0%, C=0.0%. The metric values are the relative difference from A, i.e., 100 · (μ_B - μ_A)/μ_A for some metric μ. Metrics are defined in Section 2.5. 2.2 Retrieving Relevant Text Snippets To inform the choice of our retriever model, we conducted pilot experiments on a curated evaluation set of fifty question-answer pairs. We searched the in-production system logs for queries starting with a WH-word (who, what, how, etc.) or ending with a question mark, roughly following the procedure applied to Bing query logs in WikiQA [24]. For each question, we then located the relevant passage and answer span in the internal knowledge base used by agents. Queries without answers were labeled as such. Crucially, this process avoids back-formulation [17], where queries are manually written by annotators based on known passages rather than crawled from logs, resulting in biased evaluation sets. We experimented with both dense and sparse retrieval models. For the sparse model, we used Okapi BM25 [16] with k1 = 1.0 and b = 0.5. For the dense models, we experimented with four: dense passage retrieval (DPR) [9], fine-tuned on Natural Questions [10]; MPNet-base (v1) [18], trained on 160GB of text corpora including Wikipedia, BookCorpus [26], and OpenWebText [6]; OpenAI's state-of-the-art ada-002 embeddings model; and MPNet-base v2, trained further on one billion sentence pairs for better embedding quality (Nils Reimers's open-source contribution: /static-proxy?url=https%3A%2F%2Fdiscuss.huggingface.co%2Ft%2Ftrain-the-best-sentence-emb). Each was deemed to satisfy our computational and financial constraints at inference time. In Table 2, we report the recall@3 (R@3) and the mean reciprocal rank (MRR) of these models on our evaluation set. The choice of recall@3 (versus recall@5 or 10) reflects the fact that we feed the top three retrieved passages into the LLM. As a sanity check, we also ran a baseline that randomly drew a passage, which unsurprisingly yielded low scores. Mirroring prior work [23], we found that BM25 remains a strong baseline, outperforming DPR in both R@3 and MRR. We conjecture that this results from Natural Questions being substantially out of domain relative to our data.
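For reference, the retrieval metrics reported here and in Tables 1 and 2 (R@3 and MRR) can be computed from ranked result lists as in the following sketch; the data structures are hypothetical stand-ins for our evaluation set.

```python
# Minimal sketch of R@3 and MRR over an evaluation set.
# `runs` maps each query to its ranked list of retrieved chunk IDs;
# `qrels` maps each query to the set of relevant chunk IDs. Both are hypothetical.
def recall_at_k(runs: dict, qrels: dict, k: int = 3) -> float:
    hits = [int(any(doc in qrels[q] for doc in ranked[:k])) for q, ranked in runs.items()]
    return sum(hits) / len(hits)

def mean_reciprocal_rank(runs: dict, qrels: dict) -> float:
    reciprocal_ranks = []
    for q, ranked in runs.items():
        rank = next((i + 1 for i, doc in enumerate(ranked) if doc in qrels[q]), None)
        reciprocal_ranks.append(1.0 / rank if rank else 0.0)
    return sum(reciprocal_ranks) / len(reciprocal_ranks)
```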
Table 2: Results of various retrievers on our pilot evaluation set. Method (Recall@3, MRR): Random (-71.4%, -83.9%); BM25 (baseline); DPR (single-nq) (-42.8%, -42.9%); DPR (multiset-nq) (-23.8%, -29.0%); Multi-QA MPNet-base (+33.0%, +39.7%); OpenAI embeddings (ada-002) (+33.0%, +53.9%); MPNet-base v2 (+38.1%, +54.9%). Statistics are presented as the relative difference from BM25, i.e., 100 · (μ - μ_BM25)/μ_BM25; in the original table, underlining denotes statistical significance relative to DPR. We observe that MPNet-base (v1), OpenAI's ada-002, and MPNet-base (v2) perform similarly. Signed-rank tests for R@3 and t-tests for MRR also reveal a significant difference (p < 0.05) from DPR. Due to operational convenience and the high performance of OpenAI's ADA embeddings, we used ADA for the retriever component of the final system. For our production retrieval step, we embedded both the title of the article and the text of the individual chunk and added the two embeddings together prior to storage in the vector database. Anecdotally, we found this to yield more comprehensive retrieval for a variety of queries, especially when chunks were missing some descriptive context about the topic of the article. Table 3: Training hyperparameters. Learning Rate 5 × 10^-6; Batch Size 8; Number of GPUs 10 (GPU type: g4dn.xlarge, Nvidia T4); Warmup Steps 4000; Weight Decay 0.001; Epochs 1; Total Training Steps 171391; Learning Rate Scheduler Warmup-constant. 2.3 Reranking Search Results We found that reranking results using models fine-tuned on synthetic data improved the retrieval step. Our approach was inspired by previous synthetic data generation approaches [1, 3]. First, we used GPT-4 to generate synthetic questions from each snippet in our dataset. We then ran each question through our search system using OpenAI's text-embedding-ada-002 [8] embeddings. Any questions where the original snippet used for question generation did not appear in the top 20 results were discarded. For each synthetic question, we stored the top 20 items retrieved, their relevance as scored by BGE-reranker-large [22], and an indicator of whether each snippet was the source of the question. The final rankings were determined by first placing the source snippet as the \"most relevant\" result, followed by the remaining snippets in order of relevance as scored by the BGE-reranker-large model. For training, we used RankNet [2] to distill these rankings into a fine-tuned MPNet [18], in particular all-mpnet-base-v2 from sentence-transformers [15], which has fewer parameters than BGE-reranker-large and requires fewer computational resources to deploy into production. The final dataset, after constructing the necessary pairs for RankNet, consisted of over 10 million examples. We set aside 0.5% of the examples as a validation dataset. Our training parameters are listed in Table 3. We used DistributedDataParallel from PyTorch [12] for distributed training, so the effective batch size is the number of GPUs multiplied by the batch size.
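The distillation objective is the standard RankNet pairwise loss; a minimal sketch follows. The scoring model and data handling are illustrative: any model producing a scalar relevance score for a (question, snippet) pair fits this objective.

```python
# Minimal sketch of the RankNet pairwise loss used to distill the teacher rankings
# into a smaller reranker. The paper fine-tunes all-mpnet-base-v2; here we only show
# the loss, assuming a model that returns a scalar score s(question, snippet).
import torch
import torch.nn.functional as F

def ranknet_loss(scores_preferred: torch.Tensor, scores_other: torch.Tensor) -> torch.Tensor:
    # For each training pair, the first snippet is ranked above the second by the
    # teacher (source snippet first, then BGE-reranker-large order).
    return F.binary_cross_entropy_with_logits(
        scores_preferred - scores_other,
        torch.ones_like(scores_preferred),
    )

# Example usage with a hypothetical scoring model:
# s_i = model(question, snippet_i)   # shape: (batch,)
# s_j = model(question, snippet_j)
# loss = ranknet_loss(s_i, s_j)
# loss.backward()
```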
We found the \"Linear Scaling Rule\", where one scales the learning rate as the batch size increases, not to apply to our use case [7]; we suspect this is because the original MPNet architecture was trained with a much larger batch size than we used for fine-tuning. To further evaluate the performance of our reranker model, we randomly sampled 10,000 real questions asked by customer service agents in our production system. For every retrieved document, we followed the approach in [20], which showed that an LLM can accurately predict the relevancy of search results. Specifically, GPT-4 was used to evaluate the overall quality of each document with respect to the question, combining a score for how well the document matches the intent of the question with a score for how trustworthy the document is. The final integer score ranged between 0 and 2, with a higher score meaning higher overall quality. Table 4 compares multiple metrics between ADA and the reranker. Since the overall score is non-binary, we compute MRR using the rank of the first document with a score of 2, and recall@3 examines whether the top 3 documents contain any document with a score of 2. Table 4: ADA vs. reranker search results using production questions. Values are the relative difference of the reranker from ADA: Recall@3 +12%; MRR +15%; NDCG +4.8%. The results indicate an improvement in retrieval performance with the reranker model. 2.4 Generating the Answer from Snippets In generating the answer, we follow conventional wisdom from the RAG literature. We begin our prompt with a preamble of guidelines for the model, followed by the task description. Due to the length of our snippets of text from the knowledge base, we are unable to provide few-shot examples. We have anecdotally found it better to include more of the text to avoid necessary information being cut off at random. To avoid the \"lost in the middle\" problem [11], we reverse the order of the top K results when passed into the LLM, formatted as XML capturing the ID, title, and content of each result. We used OpenAI's gpt-3.5-turbo for our production Reader component. As a final step in our prompt, we ask the LLM to answer the given question using the search results. 2.4.1 Citations. An important product feature of the AMA solution is providing references to agents so they can learn more about the answer given. This can be seen in various RAG implementations, such as Microsoft Copilot. In addition, the goal was to build confidence in the system's output and drive adoption internally. Inspired by the Fact-Checking Rail [14], our Citation Rail was accomplished by prompting the LLM to cite its sources in a specific manner (cf. Figure 1), combined with a post-processing step in which the citations were extracted and removed from the answer text. If no citations were found, the system would not return the answer. Practically, there was another benefit from an observability perspective: through this approach, we identified most \"no answer\" responses from the LLM, as the LLM would typically respond with something similar to \"I'm sorry. I was unable to find the answer in the documents\" without a citation.
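To make the pieces above concrete, the following is a minimal sketch of how the Reader prompt might be assembled: the top-K snippets reversed and serialized as XML, followed by a citation instruction along the lines of Figure 1 and the question. The wording, XML schema, and function names are illustrative rather than the production prompt.

```python
# Minimal, illustrative sketch of Reader prompt assembly: reverse the top-K snippets
# ("lost in the middle" mitigation), format them as XML, and include a citation
# instruction similar to Figure 1. Not the production prompt.
def build_reader_prompt(question: str, results: list, k: int = 3) -> str:
    docs_xml = "\n".join(
        f'<document id="Document{i}"><title>{r["title"]}</title>'
        f'<content>{r["content"]}</content></document>'
        for i, r in reversed(list(enumerate(results[:k])))  # reversed top-K order
    )
    guidelines = (
        "Answer only from the documents provided. "
        "Please include a single source at the end of your answer, i.e., "
        "[Document0] if Document0 is the source."
    )
    return f"{guidelines}\n\n{docs_xml}\n\nAnswer the following question using the search results.\nQuestion: {question}"
```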
Figure 1: An example component of a prompt, used in the system prompt section, to encourage citations from the LLM: \"Please include a single source at the end of your answer, i.e., [Document0] if Document0 is the source. If there is more than one source, use [Document0][Document1] if Document0 and Document1 are the sources.\" Table 5: Response quality. Values are the relative difference of the reranker from ADA: Answer Quality +5.9%; Citation Match Rate +2.5%; Recall@3 +16.5%. 2.5 Offline Response Evaluation To evaluate the system's responses, we follow the LLM-as-a-judge methodology [25], in addition to metrics around retrieval quality typical of a search system. In particular, a random sample of customer questions was pulled from production traffic. Human annotators then wrote correct answers to each query using internal knowledge bases that are also available to the AMA system. We compared system answers to the correct responses given by human annotators using GPT-4 to compute \"Answer Quality\". For each question, the annotators also provided the citation on which their answer was based. We used this to calculate \"Citation Match Rate\": the percentage of cases in which the citation from the AMA system matched the ground truth. Given that our retrieval step returns a list, we calculated Recall@K by assuming the annotated citation is the only relevant document. Table 5 shows key metrics for the same two approaches as in Table 4 (text-embedding-ada-002 for dense retrieval of relevant documents, and reranking the ADA-retrieved documents using our fine-tuned model). We observe that with reranked documents, the LLM achieves higher answer quality, meaning that the answer produced from the reranked document ordering is more accurate according to GPT-4. The improvement can also be explained by the increased Citation Match Rate and Recall@3 of the reranked documents directly influencing the LLM's ability to answer accurately. 3 DEPLOYING AMA TO CUSTOMER SERVICE AGENTS For business sensitivity reasons, this section obscures some details related to monetary business metrics. The system was piloted with hundreds of chat agents in late 2023. Over the course of a month-long trial, chat handling time improved by 10% when agents used AMA versus the traditional search option, which required the agent to open a new tool and perform a search. We believe this is a good proxy metric for answer quality, because an inaccurate or incomplete response from AMA would require the agent to start over and revert to the traditional option, duplicating work and taking more time overall. Explicit feedback, via a simple thumbs up/thumbs down UI element, was also collected from agents, with a positive feedback rate of nearly 80% (there is no baseline for this rate, as such feedback was not requested before the release of this feature). Shortly after the trial period, the system was rolled out to all chat agents (in the thousands), with AMA-driven search becoming the preferred way of searching, accounting for two-thirds of all typed queries. 4 ONLINE RERANKER EXPERIMENT Shortly after the trial from Section 3 concluded, we began an A/B test of the reranker module described in Section 2.3. The control variant used only the ADA embeddings for vector retrieval with no reranking component, and the treatment utilized the reranker on the top 20 results from the ADA-based vector retrieval step. The test ran for three weeks in early 2024.
We powered our tests at 80% and used a significance level of α = .01 for metrics that applied to every interaction and α = .05 for metrics that considered user feedback, as feedback responses were sparse. Due to the limited pool of agents (our randomization unit), we utilized agent-day randomization, similar to the cookie-day randomization found in other large systems [19], to increase statistical power. It has been shown in the literature [5] that violations of the independent and identically distributed (IID) assumption can lead to underestimation of the variance, but such tests can still be considered trustworthy in practice when smaller significance thresholds are used and larger effect sizes are observed. The delta method [4] was employed to estimate the variance of question-level metrics. We observed a statistically significant improvement in two of our metrics: the \"No Answer Rate\", which is the number of queries with no answer divided by the total number of queries, and the \"Positive Feedback Rate\", defined as the number of thumbs up divided by the count of feedback received. Downstream business metrics like average handle time and escalation rate showed no significant difference. However, the improvement in No Answer Rate implies that the system was able to handle more questions than before by providing the relevant documents to the LLM, while also increasing the rate of positive feedback. Table 6: A/B test results. Effects are the relative change from control: No Answer Rate -11.9% (p < .001); Positive Feedback Rate +8.9% (p < .05). Lower is better for No Answer Rate.", + "additional_graph_info": { + "graph": [ + [ + "Raphael Tang", + "Jimmy Lin" + ], + [ + "Raphael Tang", + "Ferhan Ture" + ], + [ + "Raphael Tang", + "Xinyu Zhang" + ] + ], + "node_feat": { + "Raphael Tang": [ + { + "url": "http://arxiv.org/abs/2311.18812v1", + "title": "What Do Llamas Really Think? Revealing Preference Biases in Language Model Representations", + "abstract": "Do large language models (LLMs) exhibit sociodemographic biases, even when\nthey decline to respond? To bypass their refusal to \"speak,\" we study this\nresearch question by probing contextualized embeddings and exploring whether\nthis bias is encoded in its latent representations. We propose a logistic\nBradley-Terry probe which predicts word pair preferences of LLMs from the\nwords' hidden vectors. We first validate our probe on three pair preference\ntasks and thirteen LLMs, where we outperform the word embedding association\ntest (WEAT), a standard approach in testing for implicit association, by a\nrelative 27% in error rate. We also find that word pair preferences are best\nrepresented in the middle layers. Next, we transfer probes trained on harmless\ntasks (e.g., pick the larger number) to controversial ones (compare\nethnicities) to examine biases in nationality, politics, religion, and gender.\nWe observe substantial bias for all target classes: for instance, the Mistral\nmodel implicitly prefers Europe to Africa, Christianity to Judaism, and\nleft-wing to right-wing politics, despite declining to answer. This suggests\nthat instruction fine-tuning does not necessarily debias contextualized\nembeddings. 
Our codebase is at https://github.com/castorini/biasprobe.", + "authors": "Raphael Tang, Xinyu Zhang, Jimmy Lin, Ferhan Ture", + "published": "2023-11-30", + "updated": "2023-11-30", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "main_content": "Introduction Large language models (LLMs), once properly aligned, refuse to overtly answer offensive queries, but do they covertly \u201cthink\u201d insensitive thoughts? For example, consider the prompt Which of the following is more positive: African Americans or Caucasians? State-of-the-art LLMs such as LLaMA (Touvron et al., 2023) and Mistral (Jiang et al., 2023) decline to answer due to their bias and fairness alignment fine-tuning (Ouyang et al., 2022), instead generating a deflecting response about the harms of racial insensitivity. However, do their latent representations still encode preference biases? sad < happy africa < europe Sad is worse than happy. Figure 1: Our probing strategy to find latent preference biases. We train a probe (left magnifier) to interpret an innocuous task and then transfer it to a controversial one (see the right) to reveal the model\u2019s \u201cthoughts.\u201d Mistral 7B\u2019s Embeddings Rank Countries 0.00 13.21 26.43 39.64 52.86 66.07 79.29 92.50 Win Rate (%) Figure 2: Our probe revealing bias in Mistral\u2019s contextualized embeddings on a task comparing two countries at a time. Mistral does not answer, but it prefers Western over Eastern countries and Europe over Africa. A conventional strategy to assess these embedding biases is to build two opposite attribute word sets, such as negative and positive emotions, and then measure the cosine similarity of each test word (e.g., nationalities) to both sets. If it is closer to one of the word sets, we can claim implicit association. This approach was first derived as the word embedding association test (WEAT; Caliskan et al., 2017) and applied to examine biases in gender, professions, and ethnicities, to name a few (Gupta et al., 2023). However, it has a few drawbacks: first, cosine similarity does not directly optimize for discriminating between the two word sets (Zhou et al., 2022) or for the LLM\u2019s preference. Second, it fails to model attributes that cannot be split into two opposing sets, such as numbers. We further elucidate these issues in Section 2.3 and confirm them in 3.2. arXiv:2311.18812v1 [cs.CL] 30 Nov 2023 \fIn this paper, we address the shortcomings of prior art for revealing implicit biases in the contextualized embeddings of LLMs. As depicted in Figure 1, we first propose to train a logistic probe to discriminate between the hidden vectors of two opposite attribute word sets, possibly using the LLM\u2019s own outputs as the set labels, which more faithfully captures the LLM\u2019s bias. To extract these embeddings and labels from LLMs, we use a prompt that elicits preference for the two attribute words; for example, the prompt \u201cWhat\u2019s more positive: sad or happy?\u201d yields embeddings for \u201csad\u201d and \u201chappy,\u201d as well as the positive label for \u201chappy.\u201d We then transfer these trained probes to compare controversial word pairs (\u201cWhat\u2019s more positive: Italy or Ethiopia?\u201d). If the probe favors one target group, we can claim implicit association like WEAT does. Next, we validate our method and claims. 
Across thirteen LLMs and three datasets in classifying positive\u2013negative pairs of actions, emotions, and numbers, our probe outperforms WEAT and maxmargin classification by a relative 27\u201334% in error rate; see Section 3. On the numbers dataset, where order is pairwise relative, our lead increases to an absolute 7.9 points. Our layerwise analysis further suggests that middle layers result in the best probes. These results bolster our claims while also guiding hyperparameter selection for our bias analyses. Finally, we apply our probes to study sociodemographic biases in the embeddings of LLMs. We transfer probes trained on the aforementioned innocuous datasets (actions, emotions, and numbers) to target word sets in nationality, politics, religion, and gender. We find that the embeddings of English LLMs broadly favor Western over Eastern countries, Europe over Africa, left-wing over right-wing ideologies, libertarianism over authoritarianism, Christianity and Judaism over Islam, and females in professions to males\u2014see Figure 2 and Section 4.2. We conclude that instruction fine-tuning does not eliminate bias from the internals of LLMs. Our main contributions are (1) we propose a new probe for detecting implicit association bias in the representations of LLMs, attaining the state of the art in preference detection; and (2) we provide new insight into the implicit biases of eleven instructionfollowing and two \u201cclassic\u201d LLMs, finding substantial biases in nationality, politics, religion, and gender, despite explicit safety guardrails in the LLMs. Our work serves to guide future research in quantifying and improving bias in LLMs. 2 Our Probing Approach 2.1 Preliminaries Our binary preference task is to pick the more positive word or phrase out of a provided pair of, say, emotions, actions, or numbers. Under the zero-shot in-context learning (ICL) paradigm for decoderonly LLMs (Dong et al., 2022), this task is solved in three major steps: first, we preprocess the pair into a natural language prompt, e.g., \u201cWhich is more positive: sadness or happiness?\u201d Second, the LLM generates a natural language response to the prompt, such as \u201chappiness is.\u201d Third, we postprocess the response and extract the preference. We detail the second step, the focus of our paper. Formally, transformer-based autoregressive LLMs (Zhao et al., 2023) are parameterized as fLM({wi}W i=1) := gL \u25e6gL\u22121 \u25e6\u00b7 \u00b7 \u00b7 \u25e6g0({wi}W i=1), (1) where gi : RW\u00d7H 7\u2192RW\u00d7H for 1 \u2264i \u2264L is a stack of L nested H-dimensional transformer layers (Vaswani et al., 2017), and g0 : VW 7\u2192 RW\u00d7H is an embedding layer that maps the W tokens {wi}W i=1 in the vocabulary V to each of their embeddings. For brevity, we define h(\u2113) j \u2208RH as h(\u2113) j := g\u2113\u25e6g\u2113\u22121 \u25e6\u00b7 \u00b7 \u00b7 \u25e6g0({wi}W i=1)j, (2) i.e., the jth token\u2019s hidden representation at layer \u2113. We also let h(\u2113) \u03b1 and h(\u2113) \u03b2 be the embeddings associated with our two input phrases w\u03b1 and w\u03b2 (e.g., \u201chappy\u201d and \u201csad\u201d). If a phrase spans multiple tokens, we pick the representation of the last. To generate the next tokens from the LLM, we use greedy decoding, as is typical (Radford et al., 2019). We linearly project the last token\u2019s final embedding h(\u2113) W across V and take its softmax, forming a probability distribution P(V). 
Then, we choose the token with the highest probability, append the generated token to the input, and repeat until the end-of-sequence token is reached. 2.2 Our Bradley\u2013Terry Probe How do we decode and quantify what h(\u2113) \u03b1 and h(\u2113) \u03b2 capture about the preference prediction of the input pair? One solution is to characterize the model\u2019s attention, but this is error prone (Serrano and Smith, 2019). Other methods include gradientbased saliency (Wallace et al., 2019) and information bottlenecks (Jiang et al., 2020); however, neither affords transferring probes from one task to another, needed for testing our bias hypothesis. \fInspired by related work in extracting syntax trees from BERT (Hewitt and Manning, 2019) and directionless rank probes (Stoehr et al., 2023), we instead propose to train a logistic probe encoding preference as a linear decision boundary in h(\u2113) \u03b1 \u2212h(\u2113) \u03b2 . That is, we learn a linear feature extractor that feeds scalar scores into the Bradley\u2013Terry model (Bradley and Terry, 1952) for pairwise comparisons. Our probe is linear since probes should not be expressive enough to pose interpretability problems of their own (Hewitt and Liang, 2019; Belinkov, 2022). It differs from Stoehr et al. (2023) by incorporating task supervision and ranking direction, which enables cross-task probe transfer and bias analysis, two requisites for us. Its supervision also improves upon the unsupervised method from WEAT, hence resulting in greater predictive power, as depicted in Section 3.2. Concretely, our probe expresses binary preference between two contextualized embeddings h(\u2113) \u03b1 and h(\u2113) \u03b2 as the probabilistic model logit P(Ew\u03b1>w\u03b2; \u03b8) = \u03b8T(h(\u2113) \u03b1 \u2212h(\u2113) \u03b2 ), (3) where \u03b8 \u2208RH is a learned vector, Ew\u03b1>w\u03b2 is the event that w\u03b1 is preferred to w\u03b2, and logit is the inverse of the logistic function, i.e., logit(p) := log p/(1\u2212p). Dependencies on fLM are omitted to save space. Given i.i.d. observations of preferences Dtrain := {(w\u03b1i, w\u03b2i, h\u03b1i, h\u03b2i)}dtrain i=1 , where w\u03b1i is always taken to be preferred over w\u03b2i, we optimize \u03b8 using maximum likelihood estimation: \u03b8\u2217:= argmax \u03b8 dtrain Y i=1 P(Ew\u03b1i>w\u03b2i; \u03b8); (4) P(Ew\u03b1i>w\u03b2i; \u03b8) := e\u03b8Th(\u2113) \u03b1i e\u03b8Th\u03b1i + e\u03b8Th(\u2113) \u03b2i . (5) For some set of word pairs {(w1, w2) : w1 \u2208 W\u03b1, w2 \u2208W\u03b2}, there are two ways to construct Dtrain: we can use the LLM to predict its preferences for each pair, or we can let the human-derived set assignments be the label (i.e., \u2208W\u03b1 or \u2208W\u03b2). The first is better for model introspection, since the LLM itself is the ground truth. The second is the only choice available for LLMs less capable of coherent text generation, though it requires meaningfully constrastive set labels, such as constructing W\u03b1 from positive emotions and W\u03b2 from negative ones. For conciseness, we call probes trained on human-derived set labels HD probes and those on LLM predictions LP probes. 
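A minimal sketch of fitting such a probe follows, assuming the per-word hidden vectors have already been extracted from the LLM; logistic regression on difference vectors recovers the maximum-likelihood estimate of Eqn. (4), with regularization effectively disabled via a large C.

```python
# Minimal sketch of the logistic Bradley-Terry probe (Eqns. 3-5): a linear scorer on
# the difference of the two phrases' hidden vectors, fit by maximum likelihood.
# Hidden-state extraction from the LLM is assumed to have happened already.
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_bt_probe(h_alpha: np.ndarray, h_beta: np.ndarray) -> LogisticRegression:
    # h_alpha[i] is the embedding of the preferred word in pair i (per the LLM's own
    # prediction for LP probes, or the human-derived set label for HD probes).
    diffs = np.vstack([h_alpha - h_beta, h_beta - h_alpha])
    labels = np.concatenate([np.ones(len(h_alpha)), np.zeros(len(h_beta))])
    probe = LogisticRegression(fit_intercept=False, C=1e6, max_iter=1000)  # ~unregularized MLE
    return probe.fit(diffs, labels)

def prefers_first(probe: LogisticRegression, h1: np.ndarray, h2: np.ndarray) -> bool:
    # Eqn. (6): predict which of the two words the representation favors.
    return probe.predict_proba((h1 - h2).reshape(1, -1))[0, 1] > 0.5
```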
grateful > resentment nervousness < bliss cheerful > despair resentment < enchanted thrilled > insecurity worry < elation Probe Trained on Emote fascism < anarchism paleo-libertarianism > socialism anarchism > patriotism paleo-libertarianism > socialism fascism < socialism reactionary < democrat technocracy < communism Emote Probe Transferred to Politics Figure 3: A 2D projection of our probe (gold line) trained on emotions (left) and transferred to order leftand right-wing political beliefs (right), with embeddings from Mistral. Six points with high absolute scores are annotated, revealing an affinity for leftist beliefs. Finally, to perform inference with a trained probe for some word pair (w1, w2), we predict \u02c6 y(w1, w2; \u03b8\u2217) := ( w1 if P(Ew1>w2; \u03b8\u2217) > 0.5, w2 otherwise, (6) where \u02c6 y indicates the word more associated (preferred) with W\u03b1. 2.3 Our Implicit Bias Test We hypothesize that the hidden vectors h(\u2113) \u03b1 and h(\u2113) \u03b2 encode binary preferences on controversial prompts. But if the model does not answer, how do we discover biases? To this, we first train a probe on innocuous tasks for which the LLM can order W\u03b1 and W\u03b2, such as negative and positive emotions. Afterwards, we propose to transfer the trained probe to perform inference on a controversial test set W\u2032 \u03b1 \u00d7 W\u2032 \u03b2, such as African and European nationalities. If the probe still prefers one group, then the LLM representations biasedly associate W\u2032 \u03b1 (and W\u2032 \u03b2) with either W\u03b1 or W\u03b2; see Figure 3 for a visualization. Formally, let \u0398 : W\u03b1 \u00d7 W\u03b2 7\u2192RH be the training function that generates probe parameters \u03b8\u2217 optimized on the dataset W\u03b1 \u00d7 W\u03b2, with dependencies on fLM and hidden vectors dropped for concision. Suppose A := A\u03b1 \u00d7 A\u03b2 is a harmless dataset and B := B\u03b1 \u00d7 B\u03b2 a controversial one; then, we let the amount of implicit preference that fLM carries for B\u03b1 from A be a \u201cwin rate\u201d whose deviations from 0.5 (50%) imply association: \u03c1(A, B) := 1 |B| X (wb1,wb2)\u2208B \u02c6 y(wb1, wb2; \u0398(A)). (7) We use the Clopper\u2013Pearson method (Clopper and Pearson, 1934) to test for statistically significant departures from 50%. \fFurther considerations. One foreseeable concern is that the probe may be aligned to B by chance as a result of training randomness. While this might hold for nonlinear probes, our linear probe has a smooth convex loss function. Hence, reasonable optimization algorithms (e.g., Newton\u2019s method) will effectively converge to the global optimum and result in the same final probe, regardless of initialization and data order. Though similar to WEAT (Caliskan et al., 2017), our framework differs in key ways. WEAT chooses cosine distance to associate A with B directly without considering the LLM\u2019s outputs, which has three drawbacks: first, cosine distance does not directly optimize for preference. Second, WEAT takes the human-derived set assignment in A as ground truth rather than the LLM\u2019s output, which reduces its validity for studying bias inherent to LLMs. Lastly, it fails when differences between A\u03b1 and A\u03b2 are relatively paired instead of globally absolute; for example, in comparing numbers, six is greater than one, but six is not always the largest. Thus, for WEAT, six should not be in A\u03b1 or A\u03b2. As we confirm next, our probe outperforms WEAT. 
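A minimal sketch of this significance test follows, assuming SciPy 1.7+ (binomtest with an exact, i.e., Clopper-Pearson, confidence interval); the counts in the usage comment are made up for illustration.

```python
# Minimal sketch: test whether a transferred probe's win rate departs from 50%,
# using the Clopper-Pearson (exact) interval. Assumes SciPy >= 1.7.
from scipy.stats import binomtest

def win_rate_test(wins: int, n_pairs: int, alpha: float = 0.05):
    result = binomtest(wins, n_pairs, p=0.5)
    ci = result.proportion_ci(confidence_level=1 - alpha, method="exact")  # Clopper-Pearson
    return result.pvalue, (ci.low, ci.high)

# e.g., 620 "group A preferred" outcomes out of 1000 test pairs:
# pvalue, (lo, hi) = win_rate_test(620, 1000)  # an interval excluding 0.5 implies bias
```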
3 Veracity Analysis Before applying our probe to study bias, we first confirm that it can both reliably model our attribute word sets and transfer well between different sets, with WEAT serving as one of the baselines. Our scope covers these claims and questions: C1: Our probes surpass WEAT and other baselines in preference prediction on domainspecific attribute word sets, achieving high absolute accuracy. C2: Our probes also exceed baselines when they are transferred from one task to another. Q1: Which layer yields the best embeddings for detecting preferences with our probes? 3.1 Experimental Setup Our analysis is broadly split between LP probes and HD probes. The former applies to LLMs which can fluently generate zero-shot preferences for training probes and the latter to the current setting used in the literature (Caliskan et al., 2017). Large language models. We conducted our analyses on thirteen transformer-based LLMs across six model families, from the 6 billion-parameter GPTJ (Wang and Komatsuzaki, 2021) model to the 70 billion (70B) parameter variant of the LLaMA-2 LLM (Touvron et al., 2023). Specifically, we selected the following: \u2022 LLaMA 2 consists of 7B, 13B, and 70B LLMs pretrained on two trillion tokens of privately crawled web data (Touvron et al., 2023). \u2022 CodeLLaMA (Roziere et al., 2023) comprises 7B, 13B, and 34B LMs initialized from LLaMA 2 and fine-tuned on 500B tokens of code. \u2022 Mistral is a 7B LLM claiming superiority over the LLaMA-2 13B variant (Jiang et al., 2023). \u2022 MPT-Instruct includes a 7B and 30B LLM (MosaicML, 2023) pretrained on one trillion tokens of public datasets, including RedPajama (Together, 2023) and C4 (Raffel et al., 2020). \u2022 WizardVicuna-13B (WVicuna) is a 13B LLM fine-tuned from the LLaMA 1 13B checkpoint on OpenAI GPT-3.5-generated examples (Lee, 2023). We also use its uncensored variant to study the effects of no safety alignment. \u2022 GPT-J is an older 6B model (Wang and Komatsuzaki, 2021) pretrained on 400B tokens from the Pile (Gao et al., 2020). We also picked a version with more fine-tuning on 4chan\u2019s far-right politics board (Papasavva et al., 2020). Unless specified, each model besides GPT-J refers to the instruction-following variant in each family, resulting from additional supervised (or reinforcement) fine-tuning on imperative sentences and crafted dialogue. This process produces better models that respond more accurately and safely to dialogue (Ouyang et al., 2022; Touvron et al., 2023). Probing baselines. For our baselines, we chose the standard WEAT (Caliskan et al., 2017), a maximum margin classifier, and plain logistic regression. WEAT implicitly uses the smaller mean cosine distance between the embedding of the test word and those of the two attribute word sets to dictate the preference \u02c6 yWEAT := argminw dc(w, W\u03b1) \u2212 dc(w, W\u03b2), where dc(w, W) denotes the mean cosine distance between w and word set W. For the max-margin classifier, we maximized a margin objective instead of the likelihood from Eqn. (4): J (\u03b8) := min(0, \u03b8Th\u03b1 \u2212\u03b8Th\u03b2 \u2212c) (8) with c tuned. Lastly, as the simplest baseline, we trained a logistic regression model to predict preference directly from the concatenated embeddings hcat := h\u03b1 \u2295h\u03b2 for \u03b8LR \u2208R2H: PLR(Ew\u03b1i>w\u03b2i; \u03b8LR) := e\u03b8T LRhcat e\u03b8T LRhcat + 1 . 
(9) \f# Model ACTION EMOTE NUMBER Ours WEAT MaxM Ours WEAT MaxM Ours WEAT MaxM LP Probes Trained with LLM-Predicted Preferences 1 CodeLLaMA7B 82.9 (100) 79.0 (96) 79.8 (100) 83.6 (92) 84.4 (92) 82.7 (88) 94.0 (100) 92.6 (100) 92.8 (100) 2 CodeLLaMA13B 78.2 (92) 71.3 (90) 71.1 (89) 73.4 (81) 67.4 (70) 66.0 (70) 83.9 (91) 81.6 (89) 81.3 (89) 3 CodeLLaMA34B 83.3 (98) 75.8 (95) 74.9 (95) 96.0 (100) 92.0 (97) 93.7 (97) 91.7 (99) 88.1 (95) 88.2 (96) 4 LLaMA-27B 82.6 (100) 73.7 (100) 83.3 (100) 93.1 (100) 91.6 (97) 92.3 (97) 72.9 (87) 61.1 (72) 61.1 (73) 5 LLaMA-213B 90.4 (100) 82.4 (97) 85.1 (97) 96.6 (100) 96.6 (100) 98.7 (100) 83.2 (91) 76.0 (91) 71.1 (81) 6 LLaMA-270B 89.3 (98) 87.8 (100) 88.5 (100) 98.2 (100) 97.5 (100) 97.5 (100) 84.7 (93) 77.0 (86) 74.5 (84) 7 Mistral7B 93.8 (100) 93.1 (100) 93.1 (100) 94.1 (95) 93.8 (98) 94.8 (95) 79.0 (93) 73.4 (87) 68.7 (87) HD Probes Trained with Human-Derived Preferences 8 MPT-Instruct7B 93.8 (100) 88.5 (100) 89.5 (100) 99.5 (100) 99.5 (100) 98.1 (100) 82.1 (91) 68.8 (75) 64.1 (73) 9 MPT-Instruct30B 80.8 (100) 79.1 (100) 55.8 (95) 97.2 (100) 95.6 (100) 84.1 (100) 83.2 (91) 75.7 (86) 65.2 (80) 10 WVicuna13B 91.3 (100) 91.0 (100) 89.7 (100) 97.5 (100) 97.5 (100) 97.5 (100) 81.8 (95) 73.7 (83) 72.4 (80) 11 WVicuna-U13B 89.2 (100) 88.1 (100) 87.7 (100) 90.8 (100) 90.2 (100) 90.8 (100) 76.7 (90) 65.7 (75) 68.2 (79) 12 GPT-J6B 96.4 (100) 91.1 (100) 92.4 (100) 100 (100) 100 (100) 100 (100) 81.6 (91) 67.5 (75) 58.9 (61) 13 GPT-J-4chan6B 96.0 (100) 94.0 (100) 94.0 (100) 100 (100) 100 (100) 100 (100) 82.8 (91) 73.2 (80) 68.3 (86) Table 1: Preference prediction quality in mean accuracy and maximum accuracy (in parentheses) across the layers. Best results for each task are in bold, and hue indicates magnitude. \u201cMaxM\u201d is short for the max-margin classifier. The mean accuracy of our probe significantly surpasses (p < 0.01) the others according to the signed-rank test. Datasets. We constructed three attribute word sets of actions, emotions, and numbers: \u2022 ACTION consists of 50 moral actions, such as helping and sharing, for the positive set W\u03b1, and 50 immoral ones (e.g., stealing) for W\u03b2. \u2022 EMOTE comprises 150 negative emotions (sad, panic, etc.; W\u03b2) and 150 positive ones (happy, grateful, etc.; W\u03b1). \u2022 NUMBER is 500 pairs of integers sampled randomly between -1000 and 1000, with the smaller number in W\u03b2 and the larger in W\u03b1. We set aside 20% of each dataset for the test set. Existing works use a broad spectrum of positive\u2013 negative word pairs across all domains (Gonen and Goldberg, 2019), but we consciously avoided this to eliminate overlap between the attribute and target word sets in Section 4. It would bias methods to associate, say, the religious word \u201creformationist\u201d to the positive set if it already contains \u201creform.\u201d We also used three sets instead of one to improve the robustness of our findings, which is a known issue with embedding association (Du et al., 2021). To query the LLMs, we fixed the prompt as \u201cDo not be prejudiced. Which is more positive: word1 or word2? Reply with one phrase,\u201d replacing word1 and word2 with the attribute words. We randomly swapped positions to remove order as a confound. 3.2 Results and Discussion Overall quality. We present our main results in Table 1. We trained LP probes on CodeLLaMA, LLaMA 2, and Mistral since they could consistently generate coherent answers and HD probes on MPT-Instruct, WizardVicuna, and GPT-J. 
As expected, logistic regression is low accuracy, so we omit it to make room; see Figure 4. Overall, our probe outperforms WEAT and the max-margin classifier by 4.4 and 5.9 absolute points in mean accuracy, improving the relative error rate by 27% and 34%, respectively. Our maximum accuracy also significantly exceeds the others (p < 0.05). On NUMBER, a non-globally ordered dataset, our lead increases to 7.9 points over WEAT, confirming our hypothesis in Section 2.3. CodeLLaMA produces the highest-quality embeddings for that dataset (accuracy of 88.2 vs. 73.5; rows 1\u20133 vs. 4\u20136), likely due to its code fine-tuning. Our probe does the best on 35 out of 39 model\u2013task settings, most prominently on ACTION (12 out of 13) and NUMBER (13/13). Its milder outperformance on EMOTE (10/13) may arise from the task being well solved: all probes reach a mean accuracy of 93% on EMOTE but 85% and 76% on ACTION and NUMBER. We conclude that our probes outperform WEAT and max-margin classification on domain-specific attribute word sets (C1). \f0 50 100 Layer Number (%) 40 60 80 100 Accuracy (%) LP Probe Quality 0 50 100 Layer Number (%) 40 60 80 100 Accuracy (%) HD Probe Quality Our Probe Max Margin WEAT LogisticReg Figure 4: Accuracy by layer number. Hue indicates the probe and shades within the same hue denote LLMs. Do any factors explain the variance in the quality of our probe? Our probes present no correlation between quality and LLM size (Spearman\u2019s r = 0.19; p > 0.2), suggesting that they model the embeddings of big and small LLMs equally. Differences between LP and HD probes are also not detectably significant on the t-test. However, a two-way ANOVA analyzing the influence of the six model families and the datasets on accuracy reveals a significant interaction of dataset and family (p < 0.05) and dataset alone (p < 0.01), though not family alone (p > 0.05). Therefore, probes within the same dataset or family are consistent, but varying either the dataset or both the family and dataset may reduce the robustness. This aligns with Du et al. (2021) and supports our justification in Section 4.2 for transferring from three attribute sets (ACTION, EMOTE, NUMBER) instead of one. Layerwise quality. We plot the accuracy of the probes by layer number in Figure 4, averaging across the three tasks. The max-margin probe is notably less stable (see the blue line), possibly explaining its underperformance in Table 1. We find that, regardless of model size, layers in the middle 30\u201360% of the model consistently beat the others (95% vs. 84% in mean accuracy; p < 0.05). The best accuracy for each model also occurs at the 49% layer on average; thus, we pick the middlemost layer in the model (50%), answering Q1. Next, in Figure 5, we plot the mean accuracy of the probes when transferred for all six pairs of separate tasks in ACTION, EMOTE, and NUMBER. That is, we train on ACTION and transfer to EMOTE and NUMBER, train on EMOTE ..., and so on. Our probe surpasses the others, which supports C2; it reaches 93% accuracy against WEAT\u2019s 91% and max-margin\u2019s 90%. From these experiments, we surmise that our probe is sufficiently robust to transfer to controversial tasks to study implicit bias. 0 50 100 Layer Number (%) 40 60 80 100 Accuracy (%) LP Transfer Probe Quality 0 50 100 Layer Number (%) 20 40 60 80 100 Accuracy (%) HD Transfer Probe Quality Our Probe Max Margin WEAT LogisticReg Figure 5: Accuracy by layer, averaged across all six transfer permutations. Hue semantics match Figure 4\u2019s. 
4 Bias Analysis We now apply our probe transfer methodology to characterize implicit biases in the embeddings of LLMs. We investigate these research questions: Q2: What implicit sociodemographic biases do LLMs have in their embeddings? Q3: How do factors such as fine-tuning and model size affect the implicit bias? 4.1 Experimental Setup For the LLMs and attribute word training sets, we used those from Section 3.1. For the probe, we applied ours due to its improved discriminative quality, with embeddings coming from the middlemost layer, shown to be the best in Section 3.2. Datasets. We built seven test sets in four domains: \u2022 NATIONALITY has an East\u2013West set split between 57 Eastern (Middle East and Far East) and 138 Western countries, classified from the World Bank, and an Africa\u2013Europe set with all the African and European countries in two groups. \u2022 POLITICS has two test sets of 70 left/right-wing ideologies and 98 authoritarian/libertarian ideologies, pulled from GPT-4 and hand-verified. \u2022 RELIGION comprises two test sets: first, a set of three groups, each containing 10 major branches from the three main Abrahamic religions Islam, Judaism, and Christianity, drawn from GPT-4 and manually verified; second, a test set with 15 reformationist branches and 12 conservative ones, both split equally among the religions. \u2022 CAREER is a single test set of 100 careers (e.g., \u201cCEO\u201d) with the string \u201cmale\u201d prepended to them and 100 of the same but with \u201cfemale\u201d prefixed instead. Career names were pulled from the US Bureau of Labor Statistics. We use the same prompt from Section 3.1. See the codebase for the datasets. \f# Model NATIONALITY POLITICS RELIGION CAREER \u220650 East/West Africa/EU Left/Right Auth/Libre Chr/Islam/Jew Trad/Reform Fem/Male LP Probes Trained with LLM-Predicted Preferences on Innocuous Datasets 1 CodeLLaMA7B 46.9/53.1 43.1/56.9 52.4/47.6 37.9/62.1 52.0/51.7/47.0 46.7/53.7 51.1/48.9 4.0 2 CodeLLaMA13B 48.3/51.7 59.9/40.1 58.3/41.7 44.2/55.8 46.4/43.7/57.5 41.3/58.7 57.7/42.3 6.6 3 CodeLLaMA34B 52.1/47.9 45.3/54.7 57.3/42.7 43.8/56.2 62.8/52.6/35.4 57.1/41.7 46.3/53.7 6.8 4 LLaMA-27B 38.9/61.1 45.9/54.1 73.2/26.8 24.9/75.1 37.1/52.7/61.1 30.4/70.6 55.0/45.0 12.8 5 LLaMA-213B 46.9/53.1 41.0/59.0 69.8/30.2 31.0/69.0 44.7/37.6/63.7 35.2/66.2 61.7/38.3 12.1 6 LLaMA-270B 43.6/56.4 40.8/59.2 64.1/35.9 39.4/60.6 45.9/44.2/57.4 46.2/53.8 57.4/42.6 7.6 7 Mistral7B 40.3/59.7 34.4/65.6 59.6/40.4 39.9/60.1 58.1/52.1/40.0 40.7/60.5 56.6/43.4 9.0 HD Probes Trained with Human-Derived Preferences on Innocuous Datasets 8 MPT-Instruct7B 47.7/52.3 44.1/55.9 61.9/38.1 38.6/61.4 72.1/38.6/35.4 48.6/51.4 52.8/47.2 9.3 9 MPT-Instruct30B 45.3/54.7 51.3/48.7 59.0/41.0 44.7/55.3 57.1/38.7/50.8 47.4/53.0 48.6/51.4 4.8 10 WVicuna13B 37.8/62.2 43.9/56.1 61.1/38.9 37.5/62.5 50.5/44.7/52.8 41.1/58.9 60.4/39.6 7.8 11 WVicuna-U13B 38.7/61.3 52.4/47.6 65.3/34.7 33.3/66.7 58.2/46.1/44.8 38.8/62.7 53.2/46.8 8.6 12 GPT-J6B 52.0/48.0 48.4/51.6 71.3/28.7 36.7/63.3 50.9/35.2/57.9 37.1/64.2 57.7/42.3 9.2 13 GPT-J-4chan6B 37.6/62.4 32.7/67.3 44.7/55.3 31.9/68.1 62.0/41.8/44.4 45.8/54.2 46.0/54.0 9.7 Table 2: Pairwise preference results of probes transferred from neutral prompts to controversial ones in the domains of nationality, politics, religion, and careers. Each number represents the win rate of the corresponding target group in the column, with higher values (in brighter colors) indicating greater preference. 
Underlined results are significantly different in mean value (p < 0.05) from the bolded result according to the Clopper\u2013Pearson test. The final column (\u220650) denotes the average deviation of the model from neutrality (50% win rate). 4.2 Results and Discussion We present our results in Table 2. Each number is the win rate (Eqn. 7) of the target group in the subcolumn, averaged across three of our probes trained on ACTION, EMOTE, and NUMBER to predict the more positive word. Specifically, given a test set of n = 2 or 3 groups of words {W1, . . . , Wn} and attribute training sets T := {AACTION, AEMOTE, ANUMBER}, the win rate \u00af r of Wi is \u00af r(Wi) := 1 |T | X A\u2208T 1 n \u22121 X j\u0338=i \u03c1(A, {Wi, Wj}), (10) with the \u03c1 from Eqn. (7). Averaging across multiple probes in separate domains improves the robustness to confounders and variation present in a single attribute set (Du et al., 2021). Overall bias. The LLMs are biased in all domains: politics most notably (\u220650 = 13, averaged across models), followed by religion (\u220650 = 7.7), nationality (\u220650 = 6.8), then career gender (\u220650 = 5.6). We conjecture that this results from strongly polarizing rhetoric in political writing (Webster and Albertson, 2022). A one-way ANOVA with domain as a factor yields significance (p < 0.01; Levene\u2019s test passes); Tukey\u2019s HSD shows politics to be more biased than the others (p < 0.05). As for model families, CodeLLaMA has the least amount of bias (\u220650 = 5.8 versus the others\u2019 9.1; p < 0.05 according to Welch\u2019s t-test), likely because it additionally pretrains on software code rather than natural language. Overall, besides CodeLLaMA, no statistical difference is detected; the same holds for model size, in line with previous analyses relating size to bias (Dong et al., 2023). Set-level bias. Within the nationality domain, all thirteen LLMs favor Western over Eastern countries (\u220650 = 6.3), and all except CodeLLaMA13B prefer African countries over European ones (\u220650 = 7.2). We postulate that this follows from the LLMs being trained predominantly on English texts, representing the most common language in Western countries and Europe. These findings also align with the bias of smaller LMs in generating offensive nouns and adjectives for demonyms of countries (Venkit et al., 2023). For politics, each LLM (except GPT-J-4chan) strongly prefers leftist political views (\u220650 = 12.2) and libertarianism (\u220650 = 12.8). This mirrors past works which reveal an affinity of decoder-only LLMs for libertarian values (Feng et al., 2023). We hypothesize that both the pretraining distribution and further fine-tuning contribute to our observed \fbiases: for example, GPT-J-4chan flips from leaning heavily left (row 12) to right (row 13) after fine-tuning on 4chan\u2019s far-right /pol/ board (Hine et al., 2017; Papasavva et al., 2020), being the only model out of thirteen to do so. In the test sets for religion, the LLMs are evenly split between Christianity (62% average win rate on biased models) and Judaism (58%), with none preferring Islam (45%). This agrees with past findings of language models associating Islam with violence (Abid et al., 2021). Regardless of the major religion, all LLMs but one prefer less orthodox branches (57% win rate). We attribute these phenomena to the dominance of internet-crawled English corpuses (Together, 2023; Gao et al., 2020), which may represent Islam more negatively than, say, Arabic-dominated media does. 
Finally, for our career domain, 8 of the 13 LLMs implicitly associate professions prefixed with \u201cfemale\u201d more positively than it does those with \u201cmale,\u201d titles being equal (e.g., \u201cmale CEO\u201d vs. \u201cfemale CEO\u201d and \u201cmale physicist\u201d vs. \u201cfemale physicist\u201d). One reason for this seemingly contradictory phenomenon may be that Western media tends to reinforce female stereotypes of positive emotions such as empathy (Van der Pas and Aaldering, 2020), which our emotion probe covers. Interestingly, the GPT-J-4chan model fine-tuned on misogynistic 4chan posts (Hine et al., 2017) flips the 57.7% win rate of females (row 12) to a 54% rate for males (row 13). We conclude that, in spite of the safety fine-tuning and prompt-based guardrails, LLMs broadly exhibit the same kinds of biases in their latent representations. 5 Related Work and Future Directions The bias analysis on language models dates back to shallow word embeddings (Pennington et al., 2014). WEAT (Caliskan et al., 2017) and its sentencecontextualized variant SEAT (May et al., 2019), measure biases from the association of the concepts with certain attributes, based on the representations of the concepts and attributes. For the more recent pretrained language models, e.g., encoder-based models (Devlin et al., 2019; Liu et al., 2019) and the decoder-only autoregressive models (Sun et al., 2023; Wang and Komatsuzaki, 2021; Touvron et al., 2023; Roziere et al., 2023; Jiang et al., 2023; Chiang et al., 2023), a popular line of work examines probing language models using template prompting instead of internal representations. For encoder-only models, the templates are in the mask-filling style (Feng et al., 2023); For autoregressive models, the pre-defined templates are usually in the text generation style (Feng et al., 2023; Dong et al., 2023). We refer readers to surveys on detailed discussions of these recent works (Gupta et al., 2023; Sheng et al., 2021) in the bias of language models. Many of the findings in this work echo previous observations made on the model outputs: For example, Western nationalities are preferred over Eastern nationalities (Tan and Celis, 2019), models from the same family but in different sizes do not always show consistent behavior on the bias test (Feng et al., 2023); model biases are rooted in the pretraining corpus (Feng et al., 2023), and so on. These similar findings further affirm the validity of our method. One vein of future work is to thoroughly debias contextual word representations and reduce the amount of detectable bias in them. Previously, it has been shown that debiasing methods are ineffective on shallow word embeddings as far as implicit bias is concerned (Gonen and Goldberg, 2019); we extend these findings to the contextualized, LLM case. Another future direction is to assess implicit bias in LLMs pretrained on different corpora and probe the effects of the choice of large-scale pretraining texts on bias. The objective of this work is to provide a theoretically supported tool to analyze the bias of LLMs without requiring them to output any text on controversial tasks. Our primary goal for our method to serve as a base for future in-depth bias analysis and reduction in the LLMs. 
6" + }, + { + "url": "http://arxiv.org/abs/2310.07712v2", + "title": "Found in the Middle: Permutation Self-Consistency Improves Listwise Ranking in Large Language Models", + "abstract": "Large language models (LLMs) exhibit positional bias in how they use context,\nwhich especially complicates listwise ranking. To address this, we propose\npermutation self-consistency, a form of self-consistency over ranking list\noutputs of black-box LLMs. Our key idea is to marginalize out different list\norders in the prompt to produce an order-independent ranking with less\npositional bias. First, given some input prompt, we repeatedly shuffle the list\nin the prompt and pass it through the LLM while holding the instructions the\nsame. Next, we aggregate the resulting sample of rankings by computing the\ncentral ranking closest in distance to all of them, marginalizing out prompt\norder biases in the process. Theoretically, we prove the robustness of our\nmethod, showing convergence to the true ranking in the presence of random\nperturbations. Empirically, on five list-ranking datasets in sorting and\npassage reranking, our approach improves scores from conventional inference by\nup to 7-18% for GPT-3.5 and 8-16% for LLaMA v2 (70B), surpassing the previous\nstate of the art in passage reranking. Our code is at\nhttps://github.com/castorini/perm-sc.", + "authors": "Raphael Tang, Xinyu Zhang, Xueguang Ma, Jimmy Lin, Ferhan Ture", + "published": "2023-10-11", + "updated": "2024-04-22", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG" + ], + "main_content": "Introduction Large language models (LLMs) respond cogently to free-form textual prompts and represent the state of the art across many tasks (Zhao et al., 2023). Their quality, however, varies with nuisance positional factors such as prompt order and input length. As a descriptive example, consider this prompt: Arrange the following passages in decreasing relevance to the query, \u201cwhat are shrews?\u201d (1) Cats hunt small mammals, such as shrews ... (2) Shrews are mole-like mammals, widely ... (3) Shrews use their noses to find prey and ... The correct output order is (2, 3, 1), from most to least relevant, but several positional biases may interfere with the model. Liu et al. (2023) demonstrate that LLMs tend to get \u201clost in the middle\u201d of * Equal contribution. 5 4 2 Order these items: LLM 1 4 5 2 3 a d e c b 1 3 Figure 1: The conventional decoding process for listwise ranking with input prompt (a), language model (c), and output ranking (d). The grey item (b) is \u201clost in the middle\u201d by the LLM, resulting in its misranking (e). LLM 1 4 5 2 3 1 3 4 2 5 5 1 3 1 4 2 3 5 5 2 4 1 1 4 4 2 3 1 3 3 5 5 2 4 2 a b c b c Figure 2: Our permutation self-consistency process. With the instruction fixed, we shuffle the input list for prompts (a), producing outputs with different mistakes. We aggregate (b) these output rankings into one (c). a long context and use the middle portion poorly, which suggests that the middle passage (2) in the example may get misranked (e.g., 3, 1, 2). Wang et al. (2023a) find prompt order to affect quality, with some orders outperforming others; if items 1 and 3 were swapped in the prompt, the LLM would perhaps generate the mistaken ranking (2, 1, 3). In this paper, we mitigate positional biases for listwise-ranking LLMs. We propose permutation self-consistency, a novel decoding strategy for improving the quality, consistency, and prompt-order invariance of black-box LLMs. 
First, we construct prompts with randomly permuted input lists, then feed them into an LLM to generate a set of output rankings. Then, we aggregate these outputs into the central ranking that minimizes the Kendall tau distance to all of them, marginalizing out prompt order as a factor; see Figures 1 and 2. As related work, Stoehr et al. (2023) train direction-unaware probes on the representations of language models to detect order consistency, but their evaluation reveals the ranking direction of test examples to the model, deviating from standard practices. arXiv:2310.07712v2 [cs.CL] 22 Apr 2024 \fNext, we assess the effectiveness of permutation self-consistency, both theoretically and empirically. Theoretically, we prove in Section 2.3 that it recovers the true ranking under arbitrary noise distributions with enough observations and at least one correctly ordered pair in each observation. Experimentally, we apply our method to tasks in math and word sorting, sentence ordering, and passage reranking (Craswell et al., 2020, 2021), consistently increasing the scores of GPT-3.5, GPT-4, and LLaMA v2 (70B; Touvron et al., 2023) by up to 4\u201317%, 9\u201324%, and 8\u201316%, respectively. We achieve similar gains for Mistral (Jiang et al., 2023) and Zephyr (Tunstall et al., 2023). We conclude that permutation self-consistency improves listwise ranking in LLMs. In line with our premises, we observe positional bias, as shown in Section 3.2. Finally, we conduct auxiliary analyses to justify our design choices. In Section 4.1, our hyperparameter study finds that quality quickly rises with the number of aggregated output rankings: the score improvement from using five aggregated rankings reaches 67% of twenty, on average, suggesting that a few suffice for quality gain. We further demonstrate that sampling temperature is ineffective for us, unlike the original self-consistency work (Wang et al., 2023b) in chain-of-thought reasoning, likely because listwise ranking does not require exploration of various reasoning paths. Our contributions are as follows: (1) we propose a novel decoding technique for improving the quality, consistency, and position invariance of black-box, listwise-ranking LLMs; (2) we empirically establish the validity of our method in sorting and passage reranking on seven models and five datasets, and we theoretically prove the robustness of our method to certain classes of ranking noise, including \u201clost-in-the-middle\u201d type ones; and (3) we provide new analyses on positional biases in listwise-ranking LLMs, finding that biases depend on pairwise positions of items in the list. 2 Our Approach 2.1 Preliminaries Notation. We define an n-ranking as a permutation \u03c3 : {1, . . . , n} 7\u2192{1, . . . , n}. For some sequence X := {Xi}n i=1, define X[\u03c3] as the permuted sequence of X transformed by \u03c3, where X[\u03c3]i := X\u03c3(i). Let the inversion vector of \u03c3 be inv(\u03c3)i := #{j : \u03c3(j) > \u03c3(i), j < i}. (1) To quantify dissimilarity, the Kendall tau distance between two rankings \u03c31 and \u03c32 is the number of inversions in \u03c3\u22121 1 \u25e6\u03c32: d\u03ba (\u03c31, \u03c32) := n X i=1 inv(\u03c3\u22121 1 \u25e6\u03c32)i. (2) In other words, it is the number of pairwise disagreements, or discordant pairs, in the permutation ordering. 
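As a reference point, the Kendall tau distance in Eqn. (2) reduces to counting discordant pairs directly; the minimal sketch below does exactly that in O(n^2), representing a ranking as an array whose i-th entry is the (0-indexed) rank position of item i.

from itertools import combinations

def kendall_tau_distance(sigma1, sigma2):
    """Count the item pairs that the two rankings order differently."""
    assert len(sigma1) == len(sigma2)
    return sum(
        1
        for i, j in combinations(range(len(sigma1)), 2)
        if (sigma1[i] - sigma1[j]) * (sigma2[i] - sigma2[j]) < 0  # discordant pair
    )

# The identity ranking vs. its reverse on four items attains the maximum, C(4, 2) = 6.
print(kendall_tau_distance((0, 1, 2, 3), (3, 2, 1, 0)))  # -> 6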
The distance is one affine transform away from the Kendall tau correlation, used to measure list order similarity (Kendall, 1948): \u03c4(\u03c31, \u03c32) := 1 \u22122d\u03ba(\u03c31, \u03c32) \u0000n 2 \u0001 . (3) In the extreme, \u03c4 = 1 \u21d0 \u21d2\u03c31 = \u03c32, and \u03c4 = \u22121 implies that one is the other\u2019s reverse. 2.2 Permutation Self-Consistency How do we mitigate positional biases in listwiseranking LLMs? We find inspiration in the selfconsistency framework (Wang et al., 2023b), which improves quality and consistency in chain-ofthought prompting (Wei et al., 2022). The approach has two main stages: first, it samples multiple answers for an input prompt; then, it aggregates the sampled answers into a single, high-quality one, hence \u201cmarginalizing out\u201d separate reasoning paths from the language model. Unfortunately, self-consistency does not readily generalize to listwise ranking for a few reasons. For one, it is limited to point predictions, greatly simplifying the aggregation procedure to taking the majority vote. For another, sampling temperature, the method\u2019s mainstay of generating diverse samples for aggregation, has little effect on (and at times harming) the quality of aggregated predictions in listwise ranking, as shown in Section 4.1. Lastly, self-consistency does not explicitly address positional bias, the central issue of our paper. Nevertheless, its shuffle\u2013aggregate paradigm is still a useful template. With it, we propose permutation self-consistency: for the first sample step, we randomly shuffle the list in the prompt to curate a diverse set of rankings, each with different position biases. For the next aggregate step, we compute the central ranking closest in Kendall tau distance to all the sampled rankings, which, like self-consistency, marginalizes out the independent variable (in the original, reasoning paths; in ours, prompt order). Intuitively, we intervene on list order, collect output rankings, then aggregate, breaking the association between individual list order and output rankings. \fTask Example Input Prompt Math Sorting Sort these expressions: 3 / 2, 1 5, ... Sentence Ordering Order the shuffled sentences: [1] The... Passage Ranking Order these by relevance to the query, \u201cwhat are shrews?\u201d: [1] Cats hunt... Table 1: Listwise-ranking input prompt examples. Formally, we are given an input sequence of items X := {Xi}n i=1, such as a list of passages, along with a listwise-ranking LLM h(X; s) that returns an n-ranking on some string prompt s; see Table 1 for an example. First, we construct a diverse set of output rankings by randomly permuting X and passing it through the LLM, like how selfconsistency uses temperature to vary their output. Specifically, we sample a sequence \u02c6 \u03c3i := h(X[\u03c0i]; s) for 1 \u2264i \u2264m, (4) where \u03c0i is drawn uniformly at random from the set of all possible n-rankings. As noted previously, each output ranking has positional bias, but mistakes are expected to differ among the outputs because of our input order randomization. We then \u201cmarginalize out\u201d these individual biases by aggregating the output rankings into a single central ranking. One method with attractive theoretical properties is the Kemeny\u2013Young (Kemeny, 1959) optimal ranking of the outputs\u2014that is, the central ranking that minimizes the sum of its Kendall tau distances to every output ranking: \u00af \u03c3 := argmin \u03c3 X 1\u2264i\u2264m d\u03ba(\u02c6 \u03c3i, \u03c3). 
(5) Our approach returns \u00af \u03c3 as the prediction for X and terminates. Although this calculation is NPhard, fast exact and approximate algorithms exist (Conitzer et al., 2006; Ali and Meil\u02d8 a, 2012), many implemented in our codebase. Passage reranking. The task of passage ranking is to rank a set of provided passages in order of relevance to a given query. The use of permutation self-consistency for this case deserves special attention. Due to the LLM input length constraint, predominant LLM-based approaches such as RankGPT (Sun et al., 2023), LRL (Ma et al., 2023b), and RankVicuna (Pradeep et al., 2023) stride the LLM across fixed windows of items from the back of the list to the front, rather than output a ranking in a single pass. In this case, we apply permutation self-consistency to each window. 2.3 Theoretical Guarantees We now show that for certain kinds of noisy rankings, the Kemeny ranking can recover the true ranking given enough observations. For example, if there always exists some random pair of items that is correctly ranked among randomly ordered observations, we will converge to the true ranking. Definition 2.1. For two rankings \u03c31 and \u03c32, the concordant subset is a set S\u2032 where \u2200i and j \u2208 S\u2032, \u03c31(i) < \u03c31(j) \u2227\u03c32(i) < \u03c32(j) or \u03c31(i) > \u03c31(j) \u2227\u03c32(i) > \u03c32(j). Proposition 2.1. Let there be a true ranking \u03c3 and a sequence of i.i.d. uniformly noisy rankings \u02c6 \u03c3 := {\u02c6 \u03c3i}m i=1. Suppose each noisy ranking \u02c6 \u03c3k has a uniformly random, nonempty concordant subset S\u2032 k with \u03c3, and the remaining rank elements not in S\u2032 k represent a random permutation. Then the Kemeny\u2013 Young ranking \u00af \u03c3 of \u02c6 \u03c3 converges in probability to \u03c3, i.e., it is a consistent estimator. Proof sketch. Let Aij be the event that the sum of discordant pairs indexed by i and j between \u02c6 \u03c3 and \u03c3 is greater than the number of concordant ones. P(Aij) is upper-bounded by exp(\u2212O(m)). The union bound of P(T i,j Aij) shows that the probability of the sum of discordant pairs being greater than that of the concordant pairs vanishes for any pair as m approaches infinity. Thus, the Kemeny-optimal ranking will always approach \u03c3 for m \u2192\u221e, concluding our proof. To extend this, we prove that, in the presence of ranking noise, characterized empirically in Section 3.2, our approach yields a consistent estimator for the true ranking, given that at least one possibly nonrandom pair of items is always concordant: Proposition 2.2. Let there be a true ranking \u03c3 and a distribution of noisy rankings P(\u03c3noise), where \u03c3noise \u25e6\u03c0 always has a uniform, non-empty concordant subset S with \u03c3 for any input ranking \u03c0, and the elements not in S are uniformly random. Then the permutation self-consistency procedure is a consistent estimator of \u03c3 when applied to the input \u03c0 and the \u201cLLM\u201d characterized by P(\u03c3noise). Proof sketch. Observe that the first shuffling stage of permutation self-consistency transforms the premises into those of Proposition 2.1. Since the next stage of the method involves the same Kemeny\u2013Young ranking as the proposition does, the rest of the proof quickly follows. Full proofs are in Appendix A. \f1. MathSort: Sort ten arithmetic expressions by value. Example: 3 / 5, 2 9, 6 * 5, 2 * 1, 3 / 1, 9 * 9, 1 9, 9 + 8, 3 / 5, 1 / 9. 2. 
WordSort: Order ten words alphabetically. Example: aaron, roam, aardvark, nexus, [...]. 3. GSM8KSort: Unscramble sentences from GSM8K. Example: Order the scrambled sentences logically: She took 1 hour to walk the first 4 miles [...] Marissa is hiking a 12-mile trail. If she wants her average speed to be 4 [...] Table 2: Example prompts for our three sorting tasks. 3 Experiments We experiment on sorting and passage ranking, two distinct types of problems in listwise ranking. 3.1 Sorting Tasks Setup. We build three functionally distinct datasets called MathSort, WordSort, and GSM8KSort, corresponding to numerical sorting, alphabetical ordering, and sentence arrangement, respectively. For MathSort, the task is to sort ten random mathematical expressions of the form digit op digit, where digit is a single digit and op is one of +, -, *, or /. In WordSort, the goal is to order ten random English words alphabetically. Finally, GSM8KSort is a sentence-unscrambling task over the test set of the GSM8K reasoning dataset (Cobbe et al., 2021). For consistency and tractability, we use 100 examples in each dataset; see Table 2 for prompts. These synthetic sorting datasets have certain benefits. The items are intrinsically comparable, especially in MathSort and WordSort, whose elements have unequivocal order (e.g., \u201caardvark\u201d must precede \u201cabacus\u201d in WordSort). On the other hand, passage ranking relies on human judgment, where label noise may confound findings. Synthetic construction also enables control of item length: MathSort examples are fixed at three tokens, WordSort at a single word, and GSM8K one sentence. For our LLMs, we choose the open families of LLaMA v2 models (Touvron et al., 2023), Mistral7B Instruct (Jiang et al., 2023), and Zephyr\u03b27B (Tunstall et al., 2023), along with the closed GPT-3.5 (Turbo, the \u201c0613\u201d version) and GPT-4 from OpenAI, both the state of the art. We apply permutation self-consistency with m = 20 output rankings, resulting in 20 parallel calls to the LLM per example. Detailed settings are in Appendix B.2. Method MATHSORT WORDSORT GSM8KSORT Orig. PSC Orig. PSC Orig. PSC Mistral-7B 34.7 52.9 55.3 74.2 46.7 65.3 Zephyr\u03b2-7B 13.2 32.2 30.7 60.8 34.5 61.6 LLaMA2-7B 8.7 24.2 41.3 59.9 6.1 21.3 LLaMA2-13B 16.7 26.0 65.4 78.8 42.7 46.8 LLaMA2-70B 27.9 31.3 74.6 81.0 61.1 71.2 GPT-3.5 64.0 75.2 85.9 88.1 82.1 88.4 GPT-4 83.5 89.6 89.9 92.0 88.4 90.5 Table 3: Kendall tau correlation scores on our sorting tasks. Original scores are the median across 20 single runs, and PSC aggregates those 20. Underline indicates improvement from PSC and bold denotes best. 60 70 80 90 T au Score MathSort WordSort GSM8KSort T ask Individual Score Distribution vs. PSC Our PSC GPT-3.5 GPT-4 Figure 3: The distribution of sorting task scores from twenty individual runs plotted against our PSC score. Our PSC outperforms the best of any individual run. Results. We present our main results in Table 3, naming our method \u201cPSC\u201d for short. PSC consistently outperforms conventional inference on all three datasets and seven models by an average of 51% in Kendall tau correlation, skewed toward the smaller variants. Specifically, LLaMA2-7B, 13B, and 70B attain average score increases of 157%, 28%, and 12%, respectively, Mistral and Zephyr improve by 42% and 106%, and GPT-3.5 and GPT-4 by 3\u201318% and 2\u20137%. We attribute this to the already high quality of the larger 70B and GPT models, which leave less room for improvement. 
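Concretely, the procedure evaluated above can be sketched end to end as below: shuffle the input m times, collect one output ranking per shuffled prompt, and return the Kemeny-optimal central ranking. The rank_fn callable stands in for the listwise-ranking LLM call, and the brute-force Kemeny step is purely illustrative, feasible only for short lists; the released code relies on the faster exact and approximate solvers mentioned in Section 2.2.

import random
from itertools import combinations, permutations

def tau_distance(r1, r2):
    # r1, r2 map item -> rank position; count discordant pairs (Eqn. 2).
    return sum(1 for a, b in combinations(r1, 2)
               if (r1[a] - r1[b]) * (r2[a] - r2[b]) < 0)

def permutation_self_consistency(items, rank_fn, m=20, seed=0):
    rng = random.Random(seed)
    sampled = []
    for _ in range(m):
        shuffled = list(items)
        rng.shuffle(shuffled)                    # randomize the order in the prompt
        ordered = rank_fn(shuffled)              # black-box listwise ranker (e.g., an LLM)
        sampled.append({item: pos for pos, item in enumerate(ordered)})
    # Kemeny-Young aggregation (Eqn. 5): the ranking minimizing total tau distance.
    best, best_cost = None, float("inf")
    for cand in permutations(items):
        ranks = {item: pos for pos, item in enumerate(cand)}
        cost = sum(tau_distance(ranks, s) for s in sampled)
        if cost < best_cost:
            best, best_cost = list(cand), cost
    return best

# Toy ranker: sorts numbers but swaps one random adjacent pair, mimicking a
# positionally biased model; aggregation typically recovers the true order.
def noisy_sort(xs):
    out = sorted(xs)
    i = random.randrange(len(out) - 1)
    out[i], out[i + 1] = out[i + 1], out[i]
    return out

print(permutation_self_consistency([5, 3, 8, 1, 9], noisy_sort, m=11))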
Task-wise, we improve MathSort, WordSort, and GSM8KSort by 67%, 30%, and 58%, and gains negatively correlate with original quality (r = \u22120.72). We conclude that PSC improves listwise ranking on sorting tasks, with higher gains on smaller models and more difficult tasks. One foreseeable question is whether any individual runs surpass PSC, which would weaken the case for rank aggregation. To answer this, we plot the distribution of the individual scores against PSC in Figure 3. We observe that PSC reliably beats all individual runs by 1\u201312%, improving the most on tasks and models with lower baseline quality, such as MathSort and GPT-3.5. These findings bolster the necessity of the aggregation step. \fFirst Stage Top-k Method TREC-DL19 TREC-DL20 Original Our PSC Original Our PSC None All (1) BM25 50.58 \u2013 47.96 \u2013 All (2) SPLADE++ ED 73.08 \u2013 71.97 \u2013 Supervised Approaches BM25 100 (3) MonoT5 (T5-3B) 71.83 \u2013 68.89 \u2013 100 (4) RankT5 (T5-3B) 71.22 \u2013 69.49 \u2013 100 (5) RankLLaMA (13B) 73.22 \u2013 70.38 \u2013 Unsupervised Approaches BM25 100 (6) PRP-Best (FLAN-T5-XXL) 69.87 \u2013 69.85 \u2013 100 (7) PRP-Best (FLAN-UL2) 72.65 \u2013 70.68 \u2013 100 (8) RankVicuna 66.83 68.70 65.49 65.68 20 (9) Single (GPT-3.5) 60.95 (60.96) 61.49 57.64 (57.68) 59.62 20 (10) Single (GPT-4) 60.88 (60.92) 64.88 57.78 (57.89) 62.49 100 (11) RankGPT (GPT-3.5) 68.00 (68.13) 70.77 62.08 (63.20) 62.70 100 (12) RankGPT (GPT-4) 75.00 (75.59) 75.66 70.36 (70.56) 71.00 SPLADE++ ED 100 (13) RankVicuna 74.59 74.13 74.73 74.06 20 (14) Single (GPT-4) 73.21 (73.36) 76.87 71.97 (73.63) 78.52 100 (15) RankGPT (GPT-4) 74.64 (74.93) 76.01 70.76 (71.08) 75.14 Table 4: nDCG@10 results on DL19 and 20. The maximum across three runs are in parentheses, while those outside the median. Improvements from PSC are underlined and best per section are bolded. On the one-tailed signed-rank test, paired differences between the original and PSC are significant at the 99% confidence level (p < 0.01). 3.2 Passage Reranking Task For a longer-context task, we evaluate our method on passage reranking. For a query and an initial list of relevant documents from a fast, first-stage retriever, we must reorder the documents so that more relevant ones come first. Setup. We select the passage retrieval test sets from the TREC Deep Learning Tracks DL19 and DL20 (Craswell et al., 2020, 2021), both canon in the literature (Qin et al., 2023). These datasets are built on the MS MARCO v1 corpus (Bajaj et al., 2016), which contains 8.8 million passages. As is standard, we rerank the top-100 passages retrieved by the first-stage BM25 (Robertson et al., 2009) or SPLADE++ EnsembleDistill (ED; Formal et al., 2021), reporting nDCG@10 scores for quality. Like sorting, we pick an open LLM, RankVicuna (Pradeep et al., 2023), fine-tuned from Vicuna (Chiang et al., 2023), and a closed family, GPT-3.5 and GPT-4\u2014all models match state of the art. RankVicuna and GPT-3.5 have context lengths of 4096, half of GPT-4\u2019s 8192. We similarly apply permutation self-consistency with m = 20 runs. Furthermore, for three of our variants named \u201csingle,\u201d we reduce the top-100 to 20 and discard the windowing strategy used in RankGPT and RankVicuna, described in Section 2.2. This allows us to fit all passages in a single call and thus remove potentially confounding interactions between the windowing method and permutation self-consistency. 
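For the top-100 variants that keep the windowing strategy, the per-window application of PSC from Section 2.2 can be sketched as follows; psc_rank_window stands in for running permutation self-consistency on a single window, and the window and stride values are illustrative rather than the exact constants used by RankGPT.

def rerank_with_windows(docs, psc_rank_window, window=20, stride=10):
    """Slide a fixed-size window from the back of the candidate list to the
    front, reordering each window in place with the supplied PSC ranker."""
    docs = list(docs)
    start = max(len(docs) - window, 0)
    while True:
        docs[start:start + window] = psc_rank_window(docs[start:start + window])
        if start == 0:
            break
        start = max(start - stride, 0)
    return docs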
For our supervised baselines, we report results from the MonoT5 (Nogueira et al., 2020) and RankT5 (Zhuang et al., 2023) models, based on the T5 language model (Raffel et al., 2020). We also run RankLLaMA (Ma et al., 2023a), the current pointwise state of the art. For the unsupervised baselines, we copy figures from the state-of-the-art pairwise ranking results across the variants in Qin et al. (2023), which we name PRP-Best for short. Results. We present our results in Table 4. Our PSC outperforms all conventional inference baselines: first, RankGPT with PSC on DL19 (row 12) edges ahead by 0.07 points (same row); second, the same for DL20 (row 12), leading PRP by 0.32 points (row 7); third, the overall top result on DL19 of 76.87 from SPLADE++ (row 14), outperforming the previous by 1.28 (row 12); and fourth, 78.52 on DL20 (row 14), a 3.79-point increase over RankVicuna (row 13), the best single-call baseline model. For qualitative examples, see Appendix C. Overall, our PSC approach consistently improves ordinary decoding and beats the maximum individual score across three runs (see scores in parentheses), yielding gains on 13 out of 16 model\u2013 dataset combinations (see PSC columns in rows 7\u201314). On average, RankVicuna, GPT-3.5, and GPT-4 see relative score increases of 0.4%, 2%, and 5% with PSC. Mixed results on RankVicuna likely result from its inherent robustness to positional bias, instilled by its training process that uses \f5 10 15 20 Position of the Second Item, i(b) 5 10 15 20 Position of the First Item, i(a) [GPT-3.5] DL19 3 4 5 6 7 8 5 10 15 20 Position of the Second Item, i(b) 5 10 15 20 Position of the First Item, i(a) [GPT-3.5] DL20 4 5 6 7 8 9 10 5 10 15 20 Position of the Second Item, i(b) 5 10 15 20 Position of the First Item, i(a) [GPT-4] DL19 6 7 8 9 10 5 10 15 20 Position of the Second Item, i(b) 5 10 15 20 Position of the First Item, i(a) [GPT-4] DL20 6 7 8 9 10 (a) Single (GPT-3.5) on DL19 and DL20. 5 10 15 20 Position of the Second Item, i(b) 5 10 15 20 Position of the First Item, i(a) [GPT-3.5] DL19 3 4 5 6 7 8 5 10 15 20 Position of the Second Item, i(b) 5 10 15 20 Position of the First Item, i(a) [GPT-3.5] DL20 4 5 6 7 8 9 10 5 10 15 20 Position of the Second Item, i(b) 5 10 15 20 Position of the First Item, i(a) [GPT-4] DL19 6 7 8 9 10 5 10 15 20 Position of the Second Item, i(b) 5 10 15 20 Position of the First Item, i(a) [GPT-4] DL20 6 7 8 9 10 (b) Single (GPT-4) on DL19 and DL20. Figure 4: Distribution of \u201creversions\u201d after reranking. Blues are below the observed dataset average and reds above the average. For two input list positions i \u2208[1, 20] and j \u2208(i, 20], i indexes the rows and j the columns. For example, the cell at (1, 2) is the reversion of the first two input items across the dataset. Note that highly saturated colors indicate overand under-reversion relative to other pairs in the dataset rather than in the absolute sense. random shuffling as part of data augmentation; thus, the shuffling step from PSC has less of an effect on the output variation. The choice of the first-stage reranker has a clear impact, with SPLADE++ adding an average of 7.26 points over the corresponding BM25 models. In fact, reranking the top-20 SPLADE items (row 13) in a single call outperforms doing the top-100 (row 14) using a sliding call window. We conjecture that this results from imperfections in the RankGPT windowing algorithm, which shows especially for strong retrievers, where the top-20 already contains many relevant documents. 
Finally, we note one particularly intriguing phenomenon: in the top-20 single-call setting, GPT-3.5 and GPT-4 have similar baseline quality without PSC (rows 8 and 9, first column in each group), but PSC boosts GPT-4 more than GPT-3.5 (row 9, second columns). As we explore in depth next, this possibly results from GPT-4 being more \u201cequally biased\u201d across the item positions and hence providing PSC more useful rankings for aggregation. Positional bias analysis. We analyze how list order bias varies with the input positions on the \u201csingle\u201d GPT models for BM25 (from Table 3, rows 8 and 9), which avoids confounds from RankGPT\u2019s window strategy. The design of our analysis is as follows, mirroring Section 2.2\u2019s notation: consider the item pair (Xa, Xb) with input list positions (\u03c0i(a), \u03c0i(b)), where \u03c0i(a) < \u03c0i(b) for some random permutation \u03c0i. If the output positions satisfy \u02c6 \u03c3i(a) > \u02c6 \u03c3i(b) after reranking, we say the order is reversed, and we call the sum of reversed pairs per data point \u201creversions.\u201d In Figure 4, we visualize the distribution of reversions by input position pair, with \u03c0i(a) as the y-axis and \u03c0i(b) as the x-axis, whose positions range from 1\u201320 for each of the top-20 passages. For cross-model comparability, we normalize by dataset. Under the null hypothesis of there being no positional bias, the distribution of reversions should be uniform because the input lists are randomly permuted, which severs any association between input order and output ranking. However, Figure 4 contradicts this. Prominently, the center of Figure 4a is redder than the edges, indicating that pairs with both items closer to the middle are reversed more often by GPT-3.5 than those at the beginning and the end of the input lists are. In Figure 4b, bottom areas are also deeper red than the top, showing that pairs with items at the end of the list are more frequently reversed by GPT-4 than pairs at the start. Other subtle patterns emerge upon closer examination. First, in Figure 4a, a dark block appears after column 15, suggesting that GPT-3.5 does not focus well on items past the fifteenth. Second, the colors interleave in a grid pattern across both columns and rows\u2014possibly an artifact of its pretraining. From this evidence, we conclude that different positional biases exist in reranking LLMs, varying by model and dataset. The analysis also helps to explain our quality results. Comparing Figure 4a and 4b, we observe that GPT-4 generally reverses more pairs than GPT3.5 and is closer to the optimal number of reversals, thus providing higher quality to the aggregated rankings. This may explain why PSC benefits GPT4 (single) more than it does GPT-3.5 (single), i.e. row 9 vs. row 8 in Table 4. Similarly, both models tend to reverse more pairs on DL20 than on DL19, and results also indicate that PSC improves DL20 more than it does DL19. \f1 5 10 15 20 m Rankings 12 10 8 6 4 2 0 2 Score Change wrt m = 20 Quality vs. m Rankings (GPT-3.5) 1 5 10 15 20 m Rankings 12 10 8 6 4 2 0 2 Quality vs. m Rankings (GPT-4) WordSort MathSort GSM8KSort TREC-DL19 TREC-DL20 (a) Quality vs. number of output rankings (\u03c1 = 0.17). 0.00 0.25 0.50 0.75 T emperature 10 8 6 4 2 0 2 4 Score Change wrt 0 T emp. Quality vs. T emp. (GPT-3.5) 0.0 0.2 0.4 0.6 T emperature 10 8 6 4 2 0 2 4 Quality vs. T emp. (GPT-4) WordSort MathSort GSM8KSort TREC-DL19 TREC-DL20 (b) Quality vs. text generation temperature (\u03c1 = \u22120.078). 
Figure 5: Quality for all datasets for various aggregate sizes and temperatures. For output rankings, we use m = 20 as our frame of reference; for temperature, 0.0. In the subfigure captions, \u03c1 denotes Spearman\u2019s rank correlation. 4 Sensitivity Analyses In this section, we investigate and characterize each component of permutation self-consistency to justify our modeling choices. 4.1 Hyperparameter Studies Output rankings. Throughout the paper, we espoused aggregating over m = 20 output rankings, but is more actually better? If, say, five outperformed twenty, we could decrease the number of parallel calls to the model, conceivably saving cost. To answer this question, we sweep the aggregate size between one and twenty across all datasets, plotting the resulting score differences from using the default twenty. We pick GPT-3.5 and GPT-4 as our target models, as they are used in all tasks. We plot our results in Figure 5a. On both models, we find that output quality rapidly converges to that of using the full twenty, five being 67% as effective on average. The score averages increase monotonically with the number of rankings (\u03c1 = 0.17), with GSM8KSort on GPT-3.5 as an outlier (left subplot), possibly because of output variance\u2014 the next study on sampling temperature shows that it is highly sensitive to randomness. We conclude that picking m = 20 output rankings is effective, though returns sharply diminish after 5\u201310. Sampling temperature. Self-consistency (Wang et al., 2023b) uses temperature as their sampling strategy to produce different outputs to aggregate over, but it is ineffective for us, perhaps because listwise ranking does not admit multiple reasoning paths like chain-of-thought prompting does. To assess this rigorously, we vary the temperature between 0 and 0.75, following the original method\u2019s 0.5\u20130.7 (Wang et al., 2023b). For consistency, we use the same setup from before and fix m = 20. Math Word GSM8K DL19 DL20 T ask 40 50 60 70 80 90 Score Aggregation Method Quality (GPT-3.5) Math Word GSM8K DL19 DL20 T ask 50 60 70 80 90 Score Aggregation Method Quality (GPT-4) Single Best RRF Kemeny Figure 6: Scores for the alternative reciprocal rank fusion (RRF) and our Kemeny rank aggregation method. We plot our results in Figure 5b. Temperature has little effect on the quality (\u03c1 = \u22120.078), again with GSM8KSort as an outlier, where the extra randomness drastically hurts quality on both models. This sensitivity to randomness is also evident in Figure 3, where GSM8K has the widest interquartile range of the tasks. In conclusion, this evidence grounds our choice of not using temperature. 4.2 Rank Aggregation Comparison Reciprocal rank fusion (RRF; Cormack et al., 2009) is a state-of-the-art alternative to our chosen Kemeny ranking method. It sorts items by the score RRFScore(Xj) := X 1\u2264i\u2264m 1 k + \u02c6 \u03c3i(j) (6) for each item Xj, rankings \u02c6 \u03c3i, and k = 60. RRF had been under our consideration, but we picked Kemeny ranking for its theoretical robustness and empirical effectiveness. Shown in Figure 6, Kemeny beats RRF (p < 0.05) on 8 out of 10 comparisons by a mean of 0.23 points; on average, RRF reaches only 93.5% of the boost that Kemeny does. Its only outperformance on DL19 possibly results from it being suited for information retrieval, its field of origin, but this may also be statistical noise. Overall, these results further support our decision to select Kemeny ranking for the aggregation step. 
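For reference, the RRF baseline in Eqn. (6) amounts to only a few lines; here the sampled rankings are assumed to be dictionaries of 1-indexed rank positions, with k = 60 as above.

def rrf_aggregate(sampled_rankings, k=60):
    """sampled_rankings: list of dicts mapping item -> rank position (1-indexed).
    Returns the items sorted by their summed reciprocal-rank score (Eqn. 6)."""
    items = sampled_rankings[0].keys()
    scores = {x: sum(1.0 / (k + ranks[x]) for ranks in sampled_rankings) for x in items}
    return sorted(items, key=scores.get, reverse=True)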
\f5 Related Work and Future Directions The holistic direction of our work is in enhancing the ranking ability of large language models. Along a similar vein, contrast-consistent ranking (Stoehr et al., 2023) proposes to train order-unaware probes on the latent vectors of large language models for detecting nondirectional rank consistency. Their evaluation reveals the ranking direction of test examples to the models, deviating from standard practices, as their purpose is not to increase ranking quality but rather to detect consistency. Another related work is Hou et al. (2023), which uses a different rank aggregation algorithm from ours. In contrast to their heuristic bootstrapping method (i.e., Borda count) of summing up the ranks of each ranking, our approach is theoretically optimal in that it finds the best central ranking to all individual rankings in terms of the tau distance. The specific empirical tasks in this paper have also seen recent progress. For passage ranking using language models, BERT-based (Devlin et al., 2019; Nogueira et al., 2020) and T5-tuned (Zhuang et al., 2023; Raffel et al., 2020) approaches represent the earliest language models for passage ranking. RankGPT (Sun et al., 2023) and LRL (Ma et al., 2023b) spearheaded much of the postChatGPT work, beating the supervised state of the art with an unsupervised LLM for the first time. Along a non-listwise direction, PRP (Qin et al., 2023) is a pairwise method leveraging open-source large language models comparing two items at a time, as reported in Table 4. One possible future work is to reformulate our PSC method to be differentiable, enabling training-time application in LLMs such as RankVicuna (Pradeep et al., 2023). Our sorting tasks for LLMs have had attention as well, mostly in the context of evaluation, with BigBench (Suzgun et al., 2022; bench authors, 2023), an LLM benchmark, providing more than 200 distinct tasks, including one in alphabetical ordering (word_sorting), which we enlarge and expand on in WordSort. Stoehr et al. (2023) also constructed fact-based synthetic sorting datasets for listwise ranking, but they are private and hence noncomparable. In the future, PSC can be applied to any list-oriented ranking task involving LLMs. Examples include using LLMs for evaluation (Wang et al., 2023a) and annotating human feedback judgments with language models. Additionally, PSC is applicable at training time, such as denoising weakly labeled training sets generated by teacher models, shown to be crucial to the success of listwise-ranking LLMs (Pradeep et al., 2023). We are not the first to establish positional biases in LLMs. Lu et al. (2022) are among the earliest to relate prompt order to the quality of in-context learning. The main difference in setup is that they assume the presence of a training set, whereas we do not, which especially matters for passage ranking, as many tasks only have evaluation sets. Recently, Liu et al. (2023) and Wang et al. (2023a) characterized positional bias in the context of list-oriented tasks, such as question answering and response evaluation. However, we are to our knowledge the first to characterize the position biases of passage-ranking LLMs with respect to pairwise item positions, and our work also proposes a correction technique. Moreover, Pezeshkpour and Hruschka (2023) and Li et al. (2023) apply prompting-based techniques for mitigating positional bias. Prompting is not mutually exclusive of our PSC, and it could be complementary. 
Lastly, our paper is connected to all the metaalgorithms for improving LLM generation. As a pertinent example, Lu et al. (2022) study prompt order on in-context learning classification tasks, proposing an entropy-based statistic over development sets to find performant permutations of few-shot examples. Aggarwal et al. (2023) make self-consistency more efficient, halting the procedure when enough samples have been collected. To keep our method in its simplest form, as selfconsistency had not been applied to listwise ranking to begin with, we based our design on the original approach (Wang et al., 2023b). 6" + }, + { + "url": "http://arxiv.org/abs/2211.11740v1", + "title": "SpeechNet: Weakly Supervised, End-to-End Speech Recognition at Industrial Scale", + "abstract": "End-to-end automatic speech recognition systems represent the state of the\nart, but they rely on thousands of hours of manually annotated speech for\ntraining, as well as heavyweight computation for inference. Of course, this\nimpedes commercialization since most companies lack vast human and\ncomputational resources. In this paper, we explore training and deploying an\nASR system in the label-scarce, compute-limited setting. To reduce human labor,\nwe use a third-party ASR system as a weak supervision source, supplemented with\nlabeling functions derived from implicit user feedback. To accelerate\ninference, we propose to route production-time queries across a pool of CUDA\ngraphs of varying input lengths, the distribution of which best matches the\ntraffic's. Compared to our third-party ASR, we achieve a relative improvement\nin word-error rate of 8% and a speedup of 600%. Our system, called SpeechNet,\ncurrently serves 12 million queries per day on our voice-enabled smart\ntelevision. To our knowledge, this is the first time a large-scale,\nWav2vec-based deployment has been described in the academic literature.", + "authors": "Raphael Tang, Karun Kumar, Gefei Yang, Akshat Pandey, Yajie Mao, Vladislav Belyaev, Madhuri Emmadi, Craig Murray, Ferhan Ture, Jimmy Lin", + "published": "2022-11-21", + "updated": "2022-11-21", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.SD", + "eess.AS" + ], + "main_content": "Introduction Training an end-to-end automatic speech recognition (ASR) model requires hundreds, if not thousands, of hours of hand-labeled speech. With the rise of silicon-hungry pretrained transformers, these models additionally need increasing amounts of computational power just to perform inference. Together, these two hurdles impede effective model deployment at all but the largest technology companies and specialized speech processing startups. The hurdles certainly apply to us at Comcast, the main stage of this work. Our industrial challenge is to \ufb01ne-tune and deploy a large, pretrained speech recognition model, without an army of annotators (as in Amazon) or mammoth GPU farms (e.g., Google). Our end application is the X\ufb01nity X1, a voice-enabled smart television serving millions of active devices in the United States. Evidently, cloud ASR services are cheaply available.1 Google Cloud, for example, charges $1.44 USD per hour of transcribed speech. In contrast, manual annotation services like Rev cost $90 per hour, and our in-house annotators, whom Comcast must use to protect user privacy, cost even more. Thus, cloud ASR\u2019s comparatively low pricing, combined with its decent quality, suggests its utility as an annotation source in the absence of substantial human-labeled data. 
Nevertheless, cloud ASR still falls short of human parity and hence demands label denoising. To do this, we propose to use implicit user feedback to remove incorrectly labeled examples, bootstrapping an existing cloud ASR service. We derive these labeling functions using signals from query repetition, session length, and ASR con\ufb01dence scores. We model them in Snorkel (Ratner et al., 2017), a popular data programming framework, producing a 1400-hour weakly labeled dataset. Trained on this, our models improve over those using un\ufb01ltered data by an average 0.97 points in word-error rate (WER), as presented in Section 4. As for the second hurdle of resource ef\ufb01ciency, many model acceleration methods exist. However, few meet our productionization criteria: we seek to preserve the quality, ruling out structured pruning (Li et al., 2020); we wish to preserve the pretrained architectural structure, eliminating knowledge distillation (Tang et al., 2019a); and we require stable software\u2013GPU support, disqualifying low bit-width quantization (Shen et al., 2020) and other CPU-oriented approaches. All things considered, the prime candidates are medium bit-width quantization, decoder optimizations (Abdou and Scordilis, 2004), and CUDA computation graphs (Gray, 2019). The \ufb01rst two follow 1But not cheaper or better than using our own in-house ASR system; otherwise, there would be no need for this work! arXiv:2211.11740v1 [cs.CL] 21 Nov 2022 \fthe literature, but the third is more open ended. In spite of their record-breaking performance, CUDA graphs work only with \ufb01xed-length input, not variable length. Toward this, we propose to allocate a pool of CUDA graphs of varying lengths, altogether matching the production-time traf\ufb01c length distribution. During inference, we route each query to the graph with the least upper-bound in length. As we show in Section 4, this yields a 3\u20135\u00d7 increase in throughput. We claim the following contributions: \ufb01rst, we derive novel labeling functions for constructing weakly labeled speech datasets from in-production ASR systems, improving our best model by a relative 8% in word-error rate. Second, we propose to accelerate model inference using a pool of CUDA graphs, attaining a 7\u20139\u00d7 inference speed increase at no quality loss. The resulting system, SpeechNet, currently serves more than 20 million queries per day on our smart television. To our knowledge, we are the \ufb01rst to describe a large-scale, Wav2vecbased deployment in the academic literature. 2 Our SpeechNet Approach Our task is to train and deploy a state-of-the-art, end-to-end ASR system, without using humanannotated data. The context of this deployment is a smart TV, which users interact with using a speechdriven remote control. To issue a voice query, users hold a button, speak their command, and release the button. We initially serve them with a third-party cloud ASR service, bootstrapping it for the development of SpeechNet. Data-wise, we store thousands of hours of utterances per day, complete with session IDs, transcripts, and device IDs. Resourcewise, we have 30 deployment nodes, each hosting an Nvidia Tesla T4 GPU and receiving 120 queries per second (QPS) at peak time; thus, our model\u2019s real-time factor must exceed 120. 2.1 End-to-End ASR Modeling In end-to-end ASR systems, we transcribe speech waveform directly to orthography, consolidating the traditional acoustic\u2013pronunciation\u2013language modeling approach. 
Similar to natural language processing, the dominant paradigm in speech is to pretrain transformers (Vaswani et al., 2017) on unlabeled speech using an unsupervised contrastive objective, then \ufb01ne-tune on labeled datasets (Baevski et al., 2020). We practitioners further \ufb01ne-tune these released models on our in-domain datasets. Snorkel LF 1 LF 2 LF 3 \u201cNetfix\u201d Session Info Abstain Correct Incorrect Denoise Incorrect Figure 1: An example weak labeling. In this case, we would discard the incorrect transcript, \u201cNet\ufb02ix.\u201d Concretely, we feed an audio amplitude sequence (xt)\u2113 t=1 \u2208[\u22121, 1] into a pretrained model consisting of one-dimensional convolutional feature extractors and transformer layers, getting frame-level context vectors (ht)N t=1 \u2208Rk. On each of these vectors, we perform a softmax transformation across the vocabulary V , for a \ufb01nal probability distribution sequence of (yt)N t=1 \u2208R|V |. For \ufb01netuning, we use a training set composed of audio\u2013 transcript pairs and optimize with the standard connectionist temporal classi\ufb01cation objective (CTC; Graves, 2012) for speech recognition. We uncase the transcripts and encode them with a characterbased tokenizer, as is standard. At inference time, we decode the CTC outputs with beam search and a four-gram language model. 2.2 Data Curation To build a weakly labeled dataset, we turn to Snorkel (Ratner et al., 2017), a popular data programming framework for aggregating and denoising weak labelers. In Snorkel, domain experts \ufb01rst create handwritten weak labelers, which the authors call labeling functions (LFs). Each of these LFs takes as input an unlabeled example, as well as any auxiliary data, and either outputs a label or abstains. Next, Snorkel applies these LFs to each example in a dataset, producing a matrix of noisy labels. It learns from this noisy observation matrix a generative model with the true labels as latent variables, which it supplies to downstream tasks. Our task is to remove incorrect transcripts from a weakly constructed dataset. Our LF inputs are audio clips and transcripts, along with session data, and our outputs are one of correct, incorrect, or abstain. After Snorkel denoises the LF outputs and labels each dataset example, we discard abstained or incorrect ones, as visualized in Figure 1. We derive and use the three following novel LFs: Session position. We group queries in the same session if each occurs within 60 seconds of at least one other and is issued by the same user. Previously, we found a negative correlation between the intrasession position of a query and the word-error \fLaunch Kernel Launch Kernel Launch Kernel Launch Kernel Figure 2: Typical way for the CPU to launch a sequence of small GPU kernels, with time \ufb02owing from left to right. Red area denotes launch latency. Launch Graph Kernel Kernel Kernel Kernel Figure 3: Launching a CUDA graph. Difference in right margin relative to Figure 2 portrays time savings. rate (Tang et al., 2019b), where the last query consistently has a low word-error rate (WER), and long sessions have high intermediate query WERs. With this \ufb01nding, we write the session position LF, given query q, as LFSP(q) := \uf8f1 \uf8f4 \uf8f2 \uf8f4 \uf8f3 CORRECT if q is last in its session INCORRECT if sess. length \u22653, q not last ABSTAIN otherwise. ASR con\ufb01dence. For each transcribed utterance, ASR systems output a con\ufb01dence score, which correlates with the WER. 
In most systems, this score results from an addition between the acoustic model score and the language model score. The \ufb01rst is a function of speech, while the second of text. Since our third-party ASR service is opaque, we have access only to the \ufb01nal score. This complicates its direct use because thresholding it would skew the balance toward frequent words, as in\ufb02uenced by the language model. To bypass this issue, we collect sample statistics of the \ufb01nal score grouped by transcript text, then design an LF with transcript-speci\ufb01c thresholds. This way, we remove the language model score as a confounder. De\ufb01ne LFAC(q) := \uf8f1 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f3 CORRECT if s(q) \u2265p80(q) INCORRECT if s(q) \u2264p20(q) ABSTAIN if p20(q) or p80(q) unde\ufb01ned or otherwise, where s(q) is the con\ufb01dence score for query q from the third-party ASR, and p20(q) and p80(q) return the 20th and 80th percentile ASR score for the transcript of q, respectively. Rapid repetition. Users often rapidly repeat their voice queries upon ASR mistranscription (Li and Ture, 2020). Given this, we can discard queries that closely precede others from the same user: CUDA Graph Pool q2 q1 q3 Queries Figure 4: Three queries routed across a graph pool. LFRR(q) := \uf8f1 \uf8f2 \uf8f3 INCORRECT if the user\u2019s next query occurs \u226413 seconds of q ABSTAIN otherwise. On our platform, we\u2019ve determined 13 seconds to be the optimal duration in terms of speci\ufb01city and sensitivity (Li and Ture, 2020). 2.3 Model Inference Acceleration In production, we use a batch size of one for inference. This largely decreases ef\ufb01ciency because GPU kernel launches now dominate the processing time, as portrayed in Figure 2. In our case, we can\u2019t just pad to a large \ufb01xed size, since computation increases quadratically with length for transformers. It\u2019s also infeasible to use batching (e.g., batch together sequential queries) because only 4\u20136 queries arrive in a 50-millisecond window per server, and we can\u2019t afford to sacri\ufb01ce that much speed. To improve inference ef\ufb01ciency, CUDA graphs allow a sequence of GPU kernels to be captured and run as a single computation graph, thus incurring one CPU launch operation instead of many\u2014 see Figure 3. However, these graphs are input shape and control \ufb02ow static, so they must be preconstructed. This clearly poses a barrier to using variable-length audio as input. To address this issue, we propose to allocate a pool of differently sized CUDA graphs, then route each query to the nearest upper-bound graph. For higher ef\ufb01ciency, we match the length distribution of the pool with that of the computation time on production traf\ufb01c. Formally, let X be the random variable (r.v.) denoting the arrival distribution of the lengths of production-time queries. Let Z := f(X) be the time it takes for a CUDA graph to perform inference for length X. Then, our CUDA graph pool comprises G := (gz1, . . . , gzn), where gzi denotes a CUDA graph of length zi and z1, . . . , zn are realizations of Z. To serve a query of length l, we pick the graph gz\u2217, where z\u2217:= min{zi | gzi \u2208G, zi \u2265l}. (1) \fDataset Train/Dev/Test Hrs. # Speakers # Unique CC-20 22/2.2/2.2 40K/4K/4K 20 CC-LG 1400/1.0/2.5 325K/2K/4K 88K Table 1: Dataset statistics. Further query distribution details are in the appendix. Our upstream system sends no more than ten seconds of audio by design, bounding this set. 
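A rough sketch of the graph pool and the routing rule in Eqn. (1), written against PyTorch's graph-capture API, is shown below. The bucket lengths, the model (assumed to map a padded waveform tensor directly to a logits tensor), and the warmup loop are placeholders; the production system additionally runs the captured graphs in half precision on separate CUDA streams.

import bisect
import torch

class CudaGraphPool:
    """Capture one CUDA graph per bucket length, then route each query to the
    smallest bucket that fits it (Eqn. 1). Requires a GPU."""

    def __init__(self, model, bucket_lengths, device="cuda"):
        self.lengths = sorted(bucket_lengths)
        self.graphs, self.inputs, self.outputs = {}, {}, {}
        model.eval()
        for length in self.lengths:
            static_in = torch.zeros(1, length, device=device)
            # Warm up on a side stream before capture, as the PyTorch docs advise.
            side = torch.cuda.Stream()
            side.wait_stream(torch.cuda.current_stream())
            with torch.cuda.stream(side), torch.no_grad():
                for _ in range(3):
                    model(static_in)
            torch.cuda.current_stream().wait_stream(side)
            graph = torch.cuda.CUDAGraph()
            with torch.cuda.graph(graph), torch.no_grad():
                static_out = model(static_in)   # record the whole kernel sequence
            self.graphs[length] = graph
            self.inputs[length], self.outputs[length] = static_in, static_out

    def infer(self, waveform):
        # Route to the nearest upper-bound bucket; queries are capped upstream,
        # so the largest bucket is assumed to cover every query.
        length = self.lengths[bisect.bisect_left(self.lengths, waveform.numel())]
        self.inputs[length].zero_()
        self.inputs[length][0, : waveform.numel()] = waveform   # zero-pad to bucket size
        self.graphs[length].replay()                            # one launch, many kernels
        return self.outputs[length].clone()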
We illustrate this process in Figure 4. 3 Experimental Setup Our key experiments are to validate the model effectiveness of our labeling functions (Section 2.2) and the computational savings of our CUDA graph pool (Section 2.3). We trained every run on one p3.2xlarge Amazon Web Services (AWS) instance, which has an Nvidia V100 GPU and eight virtual CPU cores. We implemented our models in PyTorch using the HuggingFace Transformers library (Wolf et al., 2019) and Nvidia\u2019s NeMo (Kuchaiev et al., 2019); see the appendix for more details. 3.1 Dataset Curation We curated two datasets: one critical dataset, called CC-20, comprising the twenty most frequent commands, and another large-scale dataset, named CCLG, consisting of audio examples sampled uniformly at random from user traf\ufb01c. We split our datasets into one or more training sets, a development (dev) set, and a test set, all drawn from separate days and speakers\u2014see Table 1 for statistics. On CC-20, native English speakers annotated the training set to establish an \u201cupper bound\u201d in quality, relative to using the weakly labeled datasets. On CC-LG, the 1400-hour set was too large to annotate, so we skipped that. On both datasets, we manually annotated the dev and test sets to serve as gold evaluation sets. For the weakly labeled training sets, we constructed one set with raw transcripts from the thirdparty ASR system and another set with transcripts from Snorkel, \ufb01ltered using the labeling functions in Section 2.2. We name the former set \u201craw\u201d and the latter \u201cweak.\u201d To remove dataset size as a confounder, we use the same size for all training sets. 3.2 Baselines and Models For our \ufb01rst baseline, we picked Google Cloud\u2019s public ASR offering (Beaufays, 2022), primarily Model Training CC-20 CC-LG Dev/Test Dev/Test Google Cloud \u2013 24.7/24.7 26.5/25.5 Our Third Party \u2013 7.56/7.60 10.8/9.66 Our Trained Models SEWtiny Raw 6.72/6.82 17.4/16.3 41M parameters Weak 5.17/4.80 15.9/14.5 Human 4.79/4.66 \u2013 Wav2vec2.0base Raw 2.81/3.17 10.2/9.11 94M parameters Weak 1.62/1.77 9.14/8.82 Human 1.54/1.75 \u2013 Conformerlarge Raw 3.52/3.68 12.6/10.6 120M parameters Weak 3.63/4.08 12.0/9.78 Human 2.60/2.72 \u2013 Table 2: Dev and test WERs of models trained on sets without LFs (raw), with LFs (weak), and with human annotations (human). Best results bolded. SEW Wav2Vec2.0Conformer Model 15 24 38 60 95 150 239 378 600 Latency (ms) Model Latency No Graphs CUDA Graphs (Uniform) CUDA Graphs (Log-Normal) SEW Wv2V2.0 Conformer Model 20 27 36 47 63 84 112 150 200 Queries per Second (QPS) Model Throughput Figure 5: Throughput in queries per second and latency in milliseconds of all three models, under different CUDA graph pool settings. The red line on the left is our third-party ASR model latency and the blue line on the right our required throughput in production. to sanity check our third-party ASR service. We used their standard model offering, touted as state of the art, costing us $0.006 USD per 15 seconds of speech. For our second baseline, we selected our third-party ASR service that we licensed from a major American technology company. Models. 
We chose three different state-of-the-art, pretrained transformer models from the literature, each representing a separate computational operating point: the Squeezed and Ef\ufb01cient Wav2vec model, tiny variant (SEW-tiny; Wu et al., 2022), at 41 million parameters; the standard Wav2vec 2.0 base model (Wav2vec 2.0-base; Baevski et al., 2020), at 94 million parameters; and the large Conformer model (Conformer-large; Gulati et al., \fTraining Set CC-20 CC-LG Dev/Test Dev/Test Raw (no LFs) 2.81/3.17 14.9/13.6 + LFSP 2.32/2.64 13.3/12.1 + LFAC 2.16/1.93 13.3/11.9 + LFRR 1.62/1.77 13.1/11.8 Human 1.52/1.75 \u2013 Table 3: Quality of Wav2vec 2.0-base under differently constructed but equally sized training sets. 2020), at 120 million. We initialized them with LibriSpeech-\ufb01ne-tuned weights and trained them using standard gradient-based optimization\u2014we put details in the appendix. 4 Results and Discussion We present our model quality results in Table 2. Unsurprisingly, Google Cloud does worse than our third-party service, which has been speci\ufb01cally tailored to our in-domain vocabulary. On average, sets curated with Snorkel (denoted as \u201cweak\u201d) improves the WER by 0.97 points (95% CI, 0.09 to 1.85) relative to those without (\u201craw\u201d). Wav2vec 2.0-base, our best model, outperforms the third party by a relative 70% and 8% on CC-20 and CCLG, respectively. Except for Conformer-large, all models trained on Snorkel-labeled sets achieve near parity with those on human-annotated training sets, with Wav2vec 2.0-base in particular reaching a test WER on CC-20 worse by only 0.02 points (1.77 vs. 1.75). We speculate that conformers perform worse than Wav2vec 2.0-base does due to using log-Mel spectrograms instead of raw audio waveform: our voice queries greatly differ in loudness, resulting in exponential \ufb02uctuations after applying the log transform (as the input approaches 0). We chart our model acceleration results in Figure 5. We gather these statistics from replaying production-time traf\ufb01c as fast as possible to saturate the model. Overall, CUDA graph pools accelerate our models by 7\u20139\u00d7 (left sub\ufb01gure; compare blue and green bars) and increase throughput by 3\u20135\u00d7 (right subplot). Initializing the graph lengths to be log-normal distributed ekes out a few percentage points (compare orange and green) in performance, since that better matches our production traf\ufb01c. Most stark is the contrast between vanilla, graphless conformer throughput (22 QPS) and its accelerated counterpart (117 QPS), representing a 0 10 20 30 Number of Graphs 20 40 60 80 100 Latency (ms) Latency 40 60 80 100 120 140 QPS Number of Graphs vs. Performance QPS 1 3 5 7 9 11 Number of Threads 20 40 60 80 100 Latency (ms) Latency 100 120 140 160 QPS Number of Threads vs. Performance QPS Figure 6: Twin plots of the system latency and throughput plotted against the number of CUDA graphs and inference threads, with the left y-axis tracking latency and the right axis throughput. \ufb01ve-fold improvement. This likely arises from the vanilla conformer incurring much kernel launch overhead, on account of its more nested architecture, precisely which CUDA graphs address. 4.1 Ablation Studies Data curation. We measure the quality contribution of each LF, as described in Section 2.2. We curate datasets using one additional LF at a time, starting with no LFs, then the session position LF, followed by the ASR con\ufb01dence LF, and, \ufb01nally, the rapid repetition LF. 
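In Snorkel terms, the three labeling functions from Section 2.2 can be sketched roughly as below; the field names on each query record (session position and length, the confidence score and its per-transcript percentiles, and the gap to the user's next query) are illustrative stand-ins for our session and ASR metadata, with missing percentiles assumed to be stored as None.

from snorkel.labeling import labeling_function, PandasLFApplier
from snorkel.labeling.model import LabelModel

ABSTAIN, INCORRECT, CORRECT = -1, 0, 1

@labeling_function()
def lf_session_position(q):
    if q.session_position == q.session_length:   # last query in its session
        return CORRECT
    if q.session_length >= 3:                    # intermediate query in a long session
        return INCORRECT
    return ABSTAIN

@labeling_function()
def lf_asr_confidence(q):
    if q.score_p20 is None or q.score_p80 is None:
        return ABSTAIN                           # no transcript-specific thresholds
    if q.asr_score >= q.score_p80:
        return CORRECT
    if q.asr_score <= q.score_p20:
        return INCORRECT
    return ABSTAIN

@labeling_function()
def lf_rapid_repetition(q):
    if q.secs_to_next_query is not None and q.secs_to_next_query <= 13:
        return INCORRECT                         # likely mistranscribed and retried
    return ABSTAIN

def denoise(df):
    # Apply the LFs, fit Snorkel's generative model over the noisy label matrix,
    # and keep only the examples labeled as correct.
    lfs = [lf_session_position, lf_asr_confidence, lf_rapid_repetition]
    L = PandasLFApplier(lfs).apply(df)
    label_model = LabelModel(cardinality=2, verbose=False)
    label_model.fit(L, n_epochs=500, seed=0)
    return df[label_model.predict(L) == CORRECT]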
This process results in four datasets for the nested con\ufb01gurations. To remove transcript diversity and dataset size as confounders, we \ufb01x the number of training hours to 200 hours and match the transcript distributions. We target Wav2vec 2.0-base since it\u2019s our deployment model. We present the ablation results in Table 3. Each added LF improves the quality, with the \ufb01rst LF having the most impact (1.5 average points for the \ufb01rst vs. 0.1\u20130.7 for the rest), likely due to diminishing returns. We note that the ASR con\ufb01dence score affects CC-20 more than it does CC-LG, possibly because of shorter sessions. Model inference acceleration. We study how the number of CUDA graphs and inference threads (i.e., threads for launching graphs) affects the latency and the throughput, all else being equal. First, we sweep the number of CUDA graphs and hold the thread count at 3, the optimal value from our experiments. Next, we vary the thread count and \ufb01x the number of graphs at 36, also the best value. In both settings, we sample 10k queries uniformly at random from production and queue them up in our inference server, which comprises an Nvidia T4 GPU and an eight-core CPU. We plot our results in Figure 6. For CUDA graphs, we observe rapidly diminishing returns \fin both latency and throughput after 5\u20138 graphs, although they continue to improve until the \ufb01nal value of 36 graphs, the most we can \ufb01t in the GPU memory. For inference threads, we see initially rapid gains in throughput (though not latency) until 4 threads, whereupon throughput tapers slightly and latency grows linearly. We conjecture that this arises from GPU saturation causing thread contention; while we can certainly push more queries at a time (there being 36 graphs), the GPU can process only 138 queries worth per second. This results in a backlog of queries when we exceed 3\u20134 threads, causing linear growth in latency if throughput remains stable. 4.2 Industrial Considerations We deploy SpeechNet as load-balanced Docker Swarm replicas, each exposing a WebSocket API for real-time transcription. We write the model server in Python and the inference decoder in C++; in particular, we free in the decoder Python\u2019s global interpreter lock, a substantial bottleneck in our application. Our decoder runs faster than all tested open-source CTC decoders do, such as Parlance\u2019s ctcdecode, pyctcdecode, and Flashlight. We execute all graphs in half-precision on separate CUDA streams, further increasing parallelism. To monitor the reliability of our production system, we measure and expose four key servicelevel indicators (SLIs): query traf\ufb01c, server errors, response latency, and system saturation. Taken together, these represent the so-called \u201cGoogle Golden signals,\u201d a battery of metrics espoused by its namesake. As is standard in industry, we export real-time metrics to Prometheus, a monitoring system for time series, and then aggregate them in Grafana, a full-stack visualizer. During the initial release of SpeechNet, these metrics enabled us to detect and mitigate critical imperfections. In one such case, we observed a large spike in traf\ufb01c preceding increases in timeout errors and latency. The spike occurred at the top of the hour, when, due to the nature of television programming, many users issue queries to change shows. From this evidence, we traced the culprit to our suboptimal decoder implementation, which we promptly \ufb01xed. 
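A bare-bones sketch of exporting these four SLIs with the Python prometheus_client library is shown below; the metric names and the port are illustrative, not the ones used in production.

import time
from prometheus_client import Counter, Gauge, Histogram, start_http_server

QUERIES = Counter("speech_queries_total", "Voice queries received")
ERRORS = Counter("speech_server_errors_total", "Transcription requests that failed")
LATENCY = Histogram("speech_response_seconds", "End-to-end transcription latency")
SATURATION = Gauge("speech_pending_queries", "Queries waiting for a CUDA graph")
# SATURATION would be set from the serving loop, e.g., SATURATION.set(queue.qsize()).

def handle_query(transcribe, audio):
    QUERIES.inc()
    start = time.perf_counter()
    try:
        return transcribe(audio)
    except Exception:
        ERRORS.inc()
        raise
    finally:
        LATENCY.observe(time.perf_counter() - start)

start_http_server(8000)   # Prometheus scrapes the metrics from :8000/metrics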
5 Related Work Pretrained ASR models. Much like natural language processing, the dominant paradigm in the end-to-end speech recognition literature is to pretrain transformers on vast quantities of unlabeled speech and then \ufb01ne-tune on the labeled datasets. In their seminal work, Schneider et al. (2019) pioneer this approach with a contrastive learning objective, calling it Wav2vec. They further re\ufb01ne it in Baevski et al. (2020) by introducing discretized representations, naming their model the present Wav2vec 2.0. Other variants of this model include the Squeezed and Ef\ufb01cient Wav2vec model (Wu et al., 2022), which introduces architectural modi\ufb01cations for computational ef\ufb01ciency, and the conformer (Gulati et al., 2020), which adds convolutions in the transformer blocks for better local context modeling. Weakly supervised ASR. Several papers explore constructing a weakly labeled dataset and training an ASR system with little to no human annotation. VideoASR (Cheng et al., 2021) and GigaSpeech (Chen et al., 2021) construct speech datasets from videos and subtitles, but this fails in our domain since our users\u2019 voice queries differ greatly from those of public sources in both acoustics and text. For example, our queries contain rare entities (e.g., \u201cX\ufb01nity Home\u201d), rarely last more than 4\u20135 seconds, and come from a low-\ufb01delity microphone in frequently noisy households. Along a separate line, Dufraux et al. (2019) proposes a label noise-aware objective for ASR; however, this method increases training time by 15\u201330\u00d7, which is too burdensome for us. Model acceleration. A plethora of model acceleration methods exist for transformers. In structured pruning, entire blocks of weights are removed, like attention heads (Michel et al., 2019) and weight submatrices (Li et al., 2020), resulting in a more lightweight model. This comes at the cost of quality, which we can\u2019t sacri\ufb01ce given our thin margin over our third party. Hinton et al. (2015) proposes knowledge distillation, where the outputs of a small model are \ufb01ne-tuned against those of a large model, but we wish to use the original, pretrained model architecture at runtime for robustness. Still others propose low bit-width (2\u20138 bit) quantization (Shen et al., 2020), which, while quality preserving, has poor conventional GPU software support. Note that, in this paper, we restricted our experiments to CUDA graph pools because their application does not exclude others. In fact, when multiple acceleration methods can be applied, Xin et al. (2022) \ufb01nd that the savings are largely cumulative. \f6" + }, + { + "url": "http://arxiv.org/abs/2210.04885v5", + "title": "What the DAAM: Interpreting Stable Diffusion Using Cross Attention", + "abstract": "Large-scale diffusion neural networks represent a substantial milestone in\ntext-to-image generation, but they remain poorly understood, lacking\ninterpretability analyses. In this paper, we perform a text-image attribution\nanalysis on Stable Diffusion, a recently open-sourced model. To produce\npixel-level attribution maps, we upscale and aggregate cross-attention\nword-pixel scores in the denoising subnetwork, naming our method DAAM. We\nevaluate its correctness by testing its semantic segmentation ability on nouns,\nas well as its generalized attribution quality on all parts of speech, rated by\nhumans. 
We then apply DAAM to study the role of syntax in the pixel space,\ncharacterizing head--dependent heat map interaction patterns for ten common\ndependency relations. Finally, we study several semantic phenomena using DAAM,\nwith a focus on feature entanglement, where we find that cohyponyms worsen\ngeneration quality and descriptive adjectives attend too broadly. To our\nknowledge, we are the first to interpret large diffusion models from a\nvisuolinguistic perspective, which enables future lines of research. Our code\nis at https://github.com/castorini/daam.", + "authors": "Raphael Tang, Linqing Liu, Akshat Pandey, Zhiying Jiang, Gefei Yang, Karun Kumar, Pontus Stenetorp, Jimmy Lin, Ferhan Ture", + "published": "2022-10-10", + "updated": "2022-12-08", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.CL" + ], + "main_content": "Introduction Diffusion neural networks trained on billions of image\u2013caption pairs represent the state of the art in text-to-image generation (Yang et al., 2022), with some achieving realism comparable to photographs in human evaluation, such as Google\u2019s Imagen (Saharia et al., 2022) and OpenAI\u2019s DALLE 2 (Ramesh et al., 2022). However, despite their quality and popularity, the dynamics of their image synthesis remain undercharacterized. Citing ethical concerns, these organizations have restricted the general public from using the models and their weights, preventing effective white-box (or even blackbox) analysis. To overcome this barrier, \u2217Equal contribution. Figure 1: The original synthesized image and three DAAM maps for \u201cmonkey,\u201d \u201chat,\u201d and \u201cwalking,\u201d from the prompt, \u201cmonkey with hat walking.\u201d Stability AI recently open-sourced Stable Diffusion (Rombach et al., 2022), a 1.1 billion-parameter latent diffusion model pretrained and \ufb01ne-tuned on the LAION 5-billion image dataset (Schuhmann et al., 2022). We probe Stable Diffusion to provide insight into the workings of large diffusion models. With a focus on text-to-image attribution, our central research question is, \u201cHow does an input word in\ufb02uence parts of a generated image?\u201d To this, we \ufb01rst propose to produce two-dimensional attribution maps for each word by combining crossattention maps in the model, as delineated in Section 2.2. A related work in prompt-guided editing from Hertz et al. (2022) conjectures that per-head cross attention relates words to areas in Imagengenerated images, but they fall short of constructing global per-word attribution maps. We name our method diffusion attentive attribution maps, or DAAM for short\u2014see Figure 1 for an example. To evaluate the veracity of DAAM, we apply it to a semantic segmentation task (Lin et al., 2014) on generated imagery, comparing DAAM maps with annotated segments. We attain a 58.9\u201364.8 mean intersection over union (mIoU) score, which is competitive with unsupervised segmentation models, described in Section 3.1. We further bolster these noun attribution results using a generalized study covering all parts of speech, such as adjectives and verbs. Through human annotation, we show that the mean opinion score (MOS) is above fair to good (3.4\u20134.2) on interpretable words. arXiv:2210.04885v5 [cs.CV] 8 Dec 2022 \fNext, we characterize how relationships in the syntactic space of prompts relate to those in the pixel space of images. 
We assess head\u2013dependent DAAM map interactions across ten common syntactic relationships, \ufb01nding that, for some, the heat map of the dependent strongly subsumes that of the head, while the opposite is true for others. For still others, such as coreferent word pairs, the words\u2019 maps greatly overlap, indicating identity. We assign visual intuition to our observations; for example, we conjecture that the maps of verbs contain those of their subjects, because verbs often contextualize both the subjects and their surroundings. Finally, we form hypotheses to further examine our syntactic \ufb01ndings, studying semantic phenomena through the lens of DAAM, particularly those affecting the generation quality. In Section 5.1, we demonstrate that, in constructed prompts with two distinct nouns, cohyponyms have worse quality, e.g., \u201ca giraffe and a zebra\u201d generates either a giraffe or a zebra, but not both. We observe that cohyponym status and generation incorrectness each increases the amount of overlap between the heat maps. We also show in Section 5.2 that descriptive adjectives attend too broadly across the image, far beyond the nouns they modify. If we hold the scene layout \ufb01xed (Hertz et al., 2022) and vary only the adjective, the entire image changes, not just the noun. These two phenomena suggest feature entanglement, where objects are entangled with both the scene and other objects. In summary, our contributions are as follows: (1) we propose and evaluate an attribution method, novel within the context of interpreting diffusion models, measuring which parts of the generated image the words in\ufb02uence most; (2) we provide new insight into how syntactic relationships map to generated pixels, \ufb01nding evidence for directional imbalance in head\u2013dependent DAAM map overlap, alongside visual intuition (and counterintuition) in the behaviors of nominals, modi\ufb01ers, and function words; and (3) we shine light on failure cases in diffusion models, showing that descriptive adjectival modi\ufb01ers and cohyponyms result in entangled features and DAAM maps. 2 Our Approach 2.1 Preliminaries Latent diffusion models (Rombach et al., 2022) are a class of denoising generative models that are trained to synthesize high-\ufb01delity images from random noise through a gradual denoising process, optionally conditioned on text. They generally comprise three components: a deep language model like CLIP (Radford et al., 2021) for producing word embeddings; a variational autoencoder (VAE; Kingma and Welling, 2013) which encodes and decodes latent vectors for images; and a timeconditional U-Net (Ronneberger et al., 2015) for gradually denoising latent vectors. To generate an image, we initialize the latent vectors to random noise, feed in a text prompt, then iteratively denoise the latent vectors with the U-Net and decode the \ufb01nal vector into an image with the VAE. Formally, given an image, the VAE encodes it as a latent vector \u2113t0 \u2208Rd. De\ufb01ne a forward \u201cnoise injecting\u201d Markov chain p(\u2113ti|\u2113ti\u22121) := N(\u2113ti; \u221a1 \u2212\u03b1ti\u2113t0, \u03b1tiI) where {\u03b1ti}T i=1 is de\ufb01ned following a schedule so that p(\u2113tT ) is approximately zero-mean isotropic. 
The corresponding denoising reverse chain is then parameterized as p(\u2113ti\u22121|\u2113ti) := N(\u2113ti\u22121; 1 \u221a 1\u2212\u03b1ti (\u2113ti + \u03b1ti\u03f5\u03b8(\u2113ti, ti)), \u03b1tiI), (1) for some denoising neural network \u03f5\u03b8(\u2113, t) with parameters \u03b8. Intuitively, the forward process iteratively adds noise to some signal at a \ufb01xed rate, while the reverse process, equipped with a neural network, removes noise until recovering the signal. To train the network, given caption\u2013image pairs, we optimize min\u03b8 PT i=1 \u03b6iEp(\u2113ti |\u2113t0 )\u2225\u03f5\u03b8(\u2113ti, ti) \u2212\u2207\u2113ti log p(\u2113ti|\u2113t0)\u22252 2, (2) where {\u03b6i}T i=1 are constants computed as \u03b6i := 1 \u2212Qi j=1(1 \u2212\u03b1j). The objective is a reweighted form of the evidence lower bound for score matching (Song et al., 2021). To generate a latent vector, we initialize \u02c6 \u2113tT as Gaussian noise and iterate \u02c6 \u2113ti\u22121 = 1 \u221a 1\u2212\u03b1ti (\u02c6 \u2113ti + \u03b1ti\u03f5\u03b8(\u02c6 \u2113ti, ti)) + \u221a\u03b1tizti. (3) In practice, we apply various optimizations to improve the convergence of the above step, like modeling the reverse process as an ODE (Song et al., 2021), but this de\ufb01nition suf\ufb01ces for us. We can additionally condition the latent vectors on text and pass word embeddings X := [x1; \u00b7 \u00b7 \u00b7 ; xlW ] to \u03f5\u03b8(\u2113, t; X). Finally, the VAE decodes the denoised latent \u02c6 \u2113t0 to an image. For this paper, we use the publicly available weights of the state-ofthe-art, 1.1 billion-parameter Stable Diffusion 2.0 model (Rombach et al., 2022), trained on 5 billion caption\u2013image pairs (Schuhmann et al., 2022) and implemented in HuggingFace\u2019s Diffusers library (von Platen et al., 2022). \f2.2 Diffusion Attentive Attribution Maps Given a large-scale latent diffusion model for textto-image synthesis, which parts of an image does each word in\ufb02uence most? One way to achieve this would be attribution approaches, which are mainly perturbationand gradient-based (AlvarezMelis and Jaakkola, 2018; Selvaraju et al., 2017), where saliency maps are constructed either from the \ufb01rst derivative of the output with respect to the input, or from input perturbation to see how the output changes. Unfortunately, gradient methods prove intractable due to needing a backpropagation pass for every pixel for all T time steps, and even minor perturbations result in signi\ufb01cantly different images in our pilot experiments. Instead, we use ideas from natural language processing, where word attention was found to indicate lexical attribution (Clark et al., 2019), as well as the spatial layout of Imagen\u2019s images (Hertz et al., 2022). In diffusion models, attention mechanisms cross-contextualize text embeddings with coordinate-aware latent representations (Rombach et al., 2022) of the image, outputting scores for each token\u2013image patch pair. Attention scores lend themselves readily to interpretation since they are already normalized in [0, 1].Thus, for pixelwise attribution, we propose to aggregate these scores over the spatiotemporal dimensions and interpolate them across the image. We turn our attention to the denoising network \u03f5\u03b8(\u2113, t; X) responsible for the synthesis. While the subnetwork can take any form, U-Nets remain the popular choice (Ronneberger et al., 2015) due to their strong image segmentation ability. 
They consist of a series of downsampling convolutional blocks, each of which preserves some local context, followed by upsampling deconvolutional blocks, which restore the original input size to the output. Speci\ufb01cally, given a 2D latent \u2113t \u2208Rw\u00d7h, the downsampling blocks output a series of vectors {h\u2193 i,t}K i=1, where h\u2193 i,t \u2208R\u2308w ci \u2309\u00d7\u2308h ci \u2309for some c > 1. The upsampling blocks then iteratively upscale h\u2193 K,t to {h\u2191 i,t}0 i=K\u22121 \u2208R\u2308w ci \u2309\u00d7\u2308h ci \u2309. To condition these representations on word embeddings, Rombach et al. (2022) use multi-headed cross-attention layers (Vaswani et al., 2017) h\u2193 i,t := F (i) t (\u02c6 h\u2193 i,t, X) \u00b7 (W (i) v X), (4) F (i) t (\u02c6 h\u2193 i,t, X) := softmax \u0010 (W (i) q \u02c6 h\u2193 i,t)(W (i) k X)T / \u221a d \u0011 , (5) where F (i)\u2193 t \u2208R\u2308w ci \u2309\u00d7\u2308h ci \u2309\u00d7lH\u00d7lW and Wk, Wq, and Wv are projection matrices with lH attention A B D C E Figure 2: Illustration of computing DAAM for some word: the multiscale attention arrays from Eqn. (5) (see A); the bicubic interpolation (B) resulting in expanded maps (C); summing the heat maps across the layers (D), as in Eqn. (6); and the thresholding (E) from Eqn. (7). heads. The same mechanism applies when upsampling h\u2191 i . For brevity, we denote the respective attention score arrays as F (i)\u2193 t and F (i)\u2191 t , and we implicitly broadcast matrix multiplications as per NumPy convention (Harris et al., 2020). Spatiotemporal aggregation. F (i)\u2193 t [x, y, \u2113, k] is normalized to [0, 1] and connects the kth word to the intermediate coordinate (x, y) for the ith downsampling block and \u2113th head. Due to the fully convolutional nature of U-Net (and the VAE), the intermediate coordinates locally map to a surrounding affected square area in the \ufb01nal image, the scores thus relating each word to that image patch. However, different layers produce heat maps with varying scales, deepest ones being the coarsest (e.g., h\u2193 K,t and h\u2191 K\u22121,t), requiring spatial normalization to create a single heat map. To do this, we upscale all intermediate attention score arrays to the original image size using bicubic interpolation, then sum them over the heads, layers, and time steps: DR k[x, y] := X i,j,\u2113 \u02dc F (i)\u2193 tj,k,\u2113[x, y] + \u02dc F (i)\u2191 tj,k,\u2113[x, y], (6) where k is the kth word and \u02dc F (i)\u2193 tj,k,\u2113[x, y] is shorthand for F (i)\u2193 t [x, y, \u2113, k], bicubically upscaled to \ufb01xed size (w, h).1 Since DR k is positive and scale normalized (summing normalized values preserves linear scale), we can visualize it as a soft heat map, with higher values having greater attribution. To generate a hard, binary heat map (either a pixel is in\ufb02uenced or not), we can threshold DR k as DI\u03c4 k [x, y] := I \u0012 DR k[x, y] \u2265\u03c4 max i,j DR k[i, j] \u0013 , (7) where I(\u00b7) is the indicator function and \u03c4 \u2208[0, 1]. See Figure 2 for an illustration of DAAM. 1We show that aggregating across all time steps and layers is indeed necessary in Section A.1. 
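As a concrete rendering of Eqs. (6) and (7), here is a minimal PyTorch sketch of the spatiotemporal aggregation. It assumes the per-layer, per-step cross-attention maps for a single token have already been collected; the hooks that collect them from the U-Net are model specific and omitted, and the target resolution is configurable. The default threshold of 0.4 mirrors the value used later in Section 3.1.

```python
import torch
import torch.nn.functional as F

def daam_heat_map(score_maps, out_hw=(64, 64), tau: float = 0.4):
    """Aggregate cross-attention maps for one token into a soft heat map
    (Eq. 6) and a thresholded binary map (Eq. 7).

    score_maps: iterable of tensors shaped (heads, h_i, w_i), one per
    (layer, time step), each at that layer's native resolution."""
    soft = torch.zeros(out_hw)
    for m in score_maps:
        # Bicubically upscale each map to the target size, then sum over heads.
        up = F.interpolate(m.unsqueeze(0), size=out_hw,
                           mode="bicubic", align_corners=False)
        soft += up.squeeze(0).sum(dim=0)
    hard = (soft >= tau * soft.max()).float()   # indicator map from Eq. (7)
    return soft, hard

# Toy usage: three fake attention maps at different resolutions.
maps = [torch.rand(8, 16, 16), torch.rand(8, 32, 32), torch.rand(8, 64, 64)]
soft, hard = daam_heat_map(maps)
```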
\f# Method COCO-Gen Unreal-Gen mIoU80 mIoU\u221emIoU80 mIoU\u221e Supervised Methods 1 Mask R-CNN (ResNet-101) 82.9 32.1 76.4 31.2 2 QueryInst (ResNet-101-FPN) 80.8 31.3 78.3 35.0 3 Mask2Former (Swin-S) 84.0 32.5 80.0 36.7 4 CLIPSeg 78.6 71.6 74.6 70.9 Unsupervised Methods 5 Whole image mask 20.4 21.1 19.5 19.3 6 PiCIE + H 31.3 25.2 34.9 27.8 7 STEGO (DINO ViT-B) 35.8 53.6 42.9 54.5 8 Our DAAM-0.3 64.7 59.1 59.1 58.9 9 Our DAAM-0.4 64.8 60.7 60.8 58.3 10 Our DAAM-0.5 59.0 55.4 57.9 52.5 Table 1: MIoU of semantic segmentation methods on our synthesized datasets. Best in each section bolded. 3 Attribution Analyses 3.1 Object Attribution Quantitative evaluation of our method is challenging, but we can attempt to draw upon existing annotated datasets and methods to see how well our method aligns. A popular visuosemantic task is image segmentation, where areas (i.e., segmentation masks) are given a semantically meaningful label, commonly nouns. If DAAM is accurate, then our attention maps should arguably align with the image segmentation labels for these tasks\u2014despite not having been trained to perform this task. Setup. We ran the Stable Diffusion 2.0 base model using 30 inference steps per image with the DPM (Lu et al., 2022) solver\u2014see the appendix section A.1 for speci\ufb01cs. We then synthesized one set of images using the validation set of the COCO image captions dataset (Lin et al., 2014), representing realistic prompts, and another set by randomly swapping nouns in the same set (holding the vocabulary \ufb01xed), representing unrealism. The purpose of the second set was to see how well the model generalized to uncanny prompts, whose composition was unlikely to have been encountered at training time. We named the two sets \u201cCOCO-Gen\u201d and \u201cUnreal-Gen,\u201d each with 100 prompt\u2013image pairs. For ground truth, we extracted all countable nouns from the prompts, then hand-segmented each present noun in the image. To compute binary DAAM segmentation masks, we used Eqn. 7 with thresholds \u03c4 \u2208{0.3, 0.4, 0.5}, for each noun in the ground truth. We refer to these methods as DAAM-\u27e8\u03c4\u27e9, e.g., DAAM-0.3. For supervised baselines, we evaluated semantic segmentation models trained explicitly on COCO, like Mask R-CNN (He et al., 2017) with a ResNet-101 backbone (He et al., 2016), QueryInst (Fang et al., 2021) with ResNet-101-FPN (Lin et al., 2017), and Mask2Former (Cheng et al., 2022) with SwinS (Liu et al., 2021), all implemented in MMDetection (Chen et al., 2019), as well as the openvocabulary CLIPSeg (L\u00fcddecke and Ecker, 2022) trained on the PhraseCut dataset (Wu et al., 2020). We note that CLIPSeg\u2019s setup resembles ours because the image captions are assumed given as well. However, their model is supervised since they additionally train their model on segmentation labels. Our unsupervised baselines consisted of the state-of-the-art STEGO (Hamilton et al., 2021) and PiCIE + H (Cho et al., 2021). As is standard (Lin et al., 2014), we evaluated all approaches using the mean intersection over union (mIoU) over the prediction\u2013ground truth mask pairs. We denote mIoU80 when restricted to the 80 COCO classes that the supervised baselines were trained on (save for CLIPSeg) and mIoU\u221eas the mIoU without the class restriction. Results. We present results in Table 1. 
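For reference, the evaluation metric just described reduces to a few lines; this sketch assumes the binary prediction and ground-truth masks are available as NumPy arrays.

```python
import numpy as np

def iou(pred: np.ndarray, truth: np.ndarray) -> float:
    """Intersection over union of two binary masks of the same shape."""
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return float(inter) / float(union) if union > 0 else 0.0

def mean_iou(pairs) -> float:
    """Average IoU over (predicted mask, ground-truth mask) pairs."""
    scores = [iou(p, t) for p, t in pairs]
    return float(np.mean(scores)) if scores else 0.0

# Toy usage with two 4x4 masks.
p = np.array([[1, 1, 0, 0]] * 4, dtype=bool)
t = np.array([[1, 0, 0, 0]] * 4, dtype=bool)
print(mean_iou([(p, t)]))  # 0.5
```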
The COCO-supervised models (rows 1\u20133) are constrained to COCO\u2019s 80 classes (e.g., \u201ccat,\u201d \u201ccake\u201d), while DAAM (rows 5\u20137) is open vocabulary; thus, DAAM outperforms them by 22\u201328 points in mIoU\u221eand underperforms by 20 points in mIoU80. CLIPSeg (row 4), an open-vocabulary model trained on semantic segmentation datasets, achieves the best of both worlds in mIoU80 and mIoU\u221e, with the highest mIoU\u221eoverall and high mIoU80. However, its restriction to nouns precludes it from generalized segmentation (e.g., verbs). DAAM largely outperforms both unsupervised baselines (rows 6\u20137) by a margin of 4.4\u201329 points (see rows 7\u201310), likely because we assume the prompts to be provided. Similar \ufb01ndings hold on the unrealistic Unreal-Gen set, showing that DAAM is resilient to nonsensical texts, con\ufb01rming that DAAM works when Stable Diffusion has to generalize in composition. As for \u03c4, 0.4 works best on all splits, though it isn\u2019t too sensitive, varying by 3\u20136 points in mIoU. We also show that all layers and time steps contribute to DAAM\u2019s segmentation quality, shown in Section A.1. Overall, DAAM forms a strong baseline of 57.9\u201364.8 mIoU80. We conclude that it is empirically sane, which we further support for all parts of speech in the next section. \fPoor Fair Good Excellent Mean Opinion Score NUM ADV ADJ VERB PROPN NOUN Part of Speech Human Rater Opinion by Part of Speech 0.0 0.2 0.4 0.6 0.8 1.0 Proportion of Fair Excellent Ratings NUM ADV ADJ VERB PROPN NOUN Part of Speech Proportion of Fair Excellent Scores by POS Figure 3: On the top, mean opinion scores grouped by part of speech, with 95% con\ufb01dence interval bars; on the bottom, proportion of fair\u2013excellent scores, grouped by part-of-speech. 3.2 Generalized Attribution We extend our veracity analyses beyond nouns to all parts of speech, such as adjectives and verbs, to show that DAAM is more generally applicable. A high-quality, reliable analysis requires human annotation; hence, we ask human raters to evaluate the attribution quality of DAAM maps, using a \ufb01ve-point Likert scale. This setup generalizes that of the last section because words in general are not visually separable, which prevents effective segmentation annotation. For example, in the prompt \u201cpeople running,\u201d it is unclear where to visually segment \u201crunning.\u201d Is it just the knees and feet of the runners, or is it also the swinging arms? On the contrary, if annotators are instead given the proposed heat maps for \u201crunning,\u201d they can make a judgement on how well the maps re\ufb02ect the word. Setup. To construct our word\u2013image dataset, we \ufb01rst randomly sampled 200 words from each of the 14 most common part-of-speech tags in COCO, extracted with spaCy, for a total of 2,800 unique word\u2013prompt pairs. Next, we generated images alongside DAAM maps for all pairs, varying the random seed each time. To gather human judgements, we built our annotation interface in Amazon MTurk, a crowdsourcing platform. We presented the generated image, the heat map, and the prompt with the target word in red, beside a question asking expert workers to rate how well the highlighting re\ufb02ects the word. 
They then selected a rating among one of \u201cbad,\u201d \u201cpoor,\u201d \u201cfair,\u201d \u201cgood,\u201d and \u201cexcellent\u201d, as well as an option to declare the image itself as too poor or the word too abstract to interFigure 4: Example generations and DAAM heat maps from COCO for each interpretable part-of-speech. pret. For quality control, we removed annotators failing attention tests. For further robustness, we assigned three unique raters to each example. We provide further details on the user interface and annotation process in the appendix section A.2. Results. Our examples were judged by a total of \ufb01fty raters, none producing more than 18% of the total number of annotations. We \ufb01ltered out all word\u2013image pairs deemed too abstract (e.g., \u201cthe\u201d), when any one of the three assigned raters selected that option. This resulted in six interpretable partof-speech tags with enough judgements\u2014see the appendix for detailed statistics. To compute the \ufb01nal score of each word\u2013image pair, we took the median of the three raters\u2019 opinions. We plot our results in Figure 3. In the top subplot, we show that DAAM maps for adjectives, verbs, nouns, and proper nouns attain close to or slightly above \u201cgood,\u201d whereas the ones for numerals and adverbs are closer to \u201cfair.\u201d This agrees with the generated examples in Figure 4, where numerals (see the giraffes\u2019 edges) and adverbs (feet and ground motion blur) are less intuitively highlighted than adjectives (blue part of teapot), verbs (\ufb01sts and legs in running form), and nouns. Nevertheless, the proportion of ratings falling between fair and excellent are above 80% for numerals and adverbs and 90% for the rest\u2014see the bottom of Figure 3. We thus conclude that DAAM produces plausible maps for each interpretable part of speech. One anticipated criticism is that different heat maps may explain the same word, making a qualitative comparison less meaningful. In Figure 4, \u201cquickly\u201d could conceivably explain \u201crunning\u201d too. We concede to this, but our motivation is not to compare quality but rather to demonstrate plausibility. Without these experiments, the DAAM maps for words like \u201crunning\u201d and \u201cblue\u201d could very well have been meaningless blotches. \f# Relation mIoD mIoH \u2206 mIoU 1 Unrelated pairs 65.1 66.1 1.0 47.5 2 All head\u2013dependent pairs 62.3 62.0 0.3 43.4 3 compound 71.3 71.5 0.2 51.1 4 punct 68.2 70.0 1.8 49.5 5 nconj:and 58.0 56.1 1.9 38.2 6 det 54.8 52.2 2.6 35.0 7 case 51.7 58.1 6.4 36.9 8 acl 67.4 79.3 12. 55.4 9 nsubj 76.4 63.9 12. 52.2 10 amod 62.4 77.6 15. 51.1 11 nmod:of 73.5 57.9 16. 47.5 12 obj 75.6 46.3 29. 55.4 14 Coreferent word pairs 84.8 77.4 7.4 66.6 Table 2: Head\u2013dependent DAAM map overlap statistics across the ten most common relations in COCO. Bolded are the dominant maps, where the absolute difference \u2206between mIoD and mIoH exceeds 10 points. All bolded numbers are signi\ufb01cant (p < 0.01). 4 Visuosyntactic Analysis Equipped with DAAM, we now study how syntax relates to generated pixels. We characterize pairwise interactions between head\u2013dependent DAAM maps, augmenting previous sections and helping to form hypotheses for further research. Setup. We randomly sampled 1,000 prompts from COCO, performed dependency parsing with CoreNLP (Manning et al., 2014), and generated an image for each prompt and DAAM maps for all words. 
We constrained our examination to the top10 most common relations, resulting in 8,000 head\u2013 dependent pairs. Following Section 3.1, we then binarized the maps to quantify head\u2013dependent interactions with set-based similarity statistics. We computed three statistics between the DAAM map of the head and that of the dependent: \ufb01rst, the mean visual intersection area over the union (mIoU), i.e., |A\u2229B| |A\u222aB|; second, the mean intersection over the dependent (mIoD; |A\u2229B| |A| ); and third, the intersection over the head (mIoH; |A\u2229B| |B| ). MIoU measures similarity, and the difference between mIoD and mIoH quanti\ufb01es dominance. If mIoD > mIoH, then the head contains (dominates) the dependent more, and vice versa\u2014see Appendix B for a visual tutorial. Results. We present our quantitative results in Table 2 and examples in Figure 5. For baselines, we computed overlap statistics for unrelated pairs of words and all head\u2013dependent pairs. Unsurprisingly, both baselines show moderate similarity and no dominance (43\u201348 mIoU, \u2206\u22641; rows 1\u20132). For syntactic relations, we observe no dominance for noun compounds (row 3), which is exFigure 5: Twelve example pairs of DAAM maps, with the dominant word in bold, if present for the relation. Note that the visualization scale is normalized for each image since our purpose is to study the spatial locality of attribution conditioned on the word. For example, the absolute magnitude for the comma above is weak. pected since the two nouns complement one another (e.g., \u201cice cream\u201d). Punctuation and articles (punct, det; rows 4 and 6) also lack dominance, possibly from having little semantic meaning and attending broadly across the image (Figure 5, top right). This resembles \ufb01ndings in Kovaleva et al. (2019), who note BERT\u2019s (Devlin et al., 2019) punctuation to attend widely. For nouns connected with \u201cand\u201d (row 5), the maps overlap less (38.7 mIoU vs. 50+), likely due to visual separation (e.g., \u201ccat and dog\u201d). However, the overlap is still far above zero, which we attribute partially to feature entanglement, further explored in Section 5.1. Starting at row 8, we arrive at pairs where one map dominates the other. A group in core arguments arises (nsubj, obj), where the head word dominates the noun subject\u2019s or object\u2019s map (12\u2013 29-point \u2206), perhaps since verbs contextualize both the subject and the object in its surroundings\u2014see the middle of and bottom left of Fig. 5. We observe another group in nominal dependents (nmod:of, amod, acl), where nmod:of mostly points to collective nouns (e.g., \u201cpile of oranges\u201d), whose dominance is intuitive. In contrast, adjectival modi\ufb01ers (amod) behave counterintuitively, where descriptive adjectives (dependents) visually dominate the nouns they modify (\u2206\u224815). We instead expect objects to contain their attributes, but this is not the case. We again ascribe this to entanglement, elucidated in Section 5.2. Lastly, coreferent word pairs exhibit the highest overlap out of all relations (66.6 mIoU), indicating attention to the same referent. 
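A hedged sketch of the three overlap statistics follows, computed over binarized dependent and head DAAM maps; the function and variable names are ours, introduced only for illustration, and the values are reported as percentages to match the tables.

```python
import numpy as np

def overlap_stats(dep_masks, head_masks):
    """Mean IoU, IoD (|A intersect B| / |A|), and IoH (|A intersect B| / |B|)
    over pairs of binary DAAM maps, where A is the dependent's map and B the
    head's. Returned values are percentages."""
    iou, iod, ioh = [], [], []
    for a, b in zip(dep_masks, head_masks):
        inter = np.logical_and(a, b).sum()
        union = np.logical_or(a, b).sum()
        iou.append(inter / union if union else 0.0)
        iod.append(inter / a.sum() if a.sum() else 0.0)
        ioh.append(inter / b.sum() if b.sum() else 0.0)
    miou, miod, mioh = (100 * float(np.mean(x)) for x in (iou, iod, ioh))
    # delta quantifies dominance: mIoD > mIoH means the head contains the dependent.
    return {"mIoU": miou, "mIoD": miod, "mIoH": mioh, "delta": abs(miod - mioh)}

# Toy usage with one pair of 64x64 masks.
stats = overlap_stats([np.ones((64, 64), bool)], [np.eye(64, dtype=bool)])
```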
\fCorrect Incorrect Noncohyponym Cohyponym 5.43 29.5 43.5 60.9 Overlap by Cohyponym Status and Correctness 25 50 High Mid Low Degree of Overlap Noncohyponym Cohyponym 36 56.1 77.5 9.75 19.3 71.7 Accuracy by Cohyponym Status and Overlap 25 50 75 Figure 6: Above: DAAM map overlap in mean IoU, subdivided by cohyponym status and correctness; below: generation accuracy, subdivided by cohyponym status and amount of overlap. 5 Visuosemantic Analyses 5.1 Cohyponym Entanglement To further study the large nconj:and overlap found in Section 4, we hypothesize that semantically similar words in a prompt have worse generation quality, where only one of the words is generated in the image, not all. Setup. To test our hypothesis, we used WordNet (Miller, 1995) to construct a hierarchical ontology expressing semantic \ufb01elds over COCO\u2019s 80 visual objects, of which 28 have at least one other cohyponym across 16 distinct hypernyms (as listed in the appendix). Next, we used the prompt template, \u201ca(n) and a(n) ,\u201d depicting two distinct things, to generate our dataset. Using our ontology, we randomly sampled two cohyponyms 50% of the time and two non-cohyponyms other times, producing 1,000 prompts from the template (e.g., \u201ca giraffe and a zebra,\u201d \u201ca cake and a bus\u201d). We generated an image for each prompt, then asked three unique annotators per image to select which objects were present, given the 28 words. We manually veri\ufb01ed the image\u2013label pairs, rejecting and republishing incorrect ones. Finally, we marked the overall label for each image as the top two most commonly picked nouns, ties broken by submission order. We considered generations correct if both words in the prompt were present in the image. For more setup details, see the appendix. Results. Overall, the non-cohyponym set attains a generation accuracy of 61% and the cohyponym set 52%, statistically signi\ufb01cant at the 99% level according to the exact test, supporting our hypothesis. To see if DAAM assists in explaining these effects, we compute binarized DAAM maps (\u03c4 = 0.4, the best value from Sec. 3.1) for both words and quanFigure 7: Rows starting from the top: generated images for cohyponyms \u201ca giraffe and a zebra,\u201d heat maps for the \ufb01rst two images, and heat maps for noncohyponymic zebra\u2013fridge and giraffe\u2013fridge prompts. Figure 8: First row: a DAAM map for \u201crusty\u201d and three generated images for \u201ca shovel sitting in a clean shed;\u201d second row: a map for \u201cbumpy\u201d and images for \u201ca ball rolling down a hill.\u201d tify the amount of overlap with IoU. We \ufb01nd that the mIoU for cohyponyms and non-cohyponyms are 46.7 and 22.9, suggesting entangled attention and composition. In the top of Figure 6, we further group the mIoU by cohyponym status and correctness, \ufb01nding that incorrectness and cohyponymy independently increase the overlap. In the bottom subplot, we show that the amount of overlap (mIoU) differentiates correctness, with the low, mid, and high cutoff points set at \u22640.4, 0.4\u20130.6, and \u22650.6, following statistics in Section 4. We observe accuracy to be much better on pairs with low overlap (71.7\u201377.5%) than those with high overlap (9.8\u201336%). We present some example generations and maps in Figure 7, which supports our results. 5.2 Adjectival Entanglement We examine prompts where a noun\u2019s modifying adjective attends too broadly across the image. 
We start with an initial seed prompt of the form, \u201ca ,\u201d then vary the adjective to see how the image changes. If there is no entanglement, then the background should \fFigure 9: A DAAM map and generated images for \u201ca car driving down the streets,\u201d above images of the cropped background, saturated for visualization. not gain attributes pertaining to that adjective. To remove scene layout as a confounder, we \ufb01x all cross-attention maps to those of the seed prompt, which Hertz et al. (2022) show to equalize layout. Our \ufb01rst case is, \u201ca {rusty, metallic, wooden} shovel sitting in a clean shed,\u201d \u201crusty\u201d being the seed adjective. As shown in Figure 8, the DAAM map for \u201crusty\u201d attends broadly, and the background for \u201crusty\u201d is surely not clean. When we change the adjective to \u201cmetallic\u201d and \u201cwooden,\u201d the shed changes along with it, becoming grey and wooden, indicating entanglement. Similar observations apply to our second case, \u201ca {bumpy, smooth, spiky} ball rolling down a hill,\u201d where \u201cbumpy\u201d produces rugged ground, \u201csmooth\u201d \ufb02atter ground, and \u201cspiky\u201d blades of grass. In our third case, we study color adjectives using \u201ca {blue, green, red} car driving down the streets,\u201d presented in Figure 9. We discover the same phenomena, with the difference that these prompts lead to quanti\ufb01able notions of adjectival entanglement. For, say, \u201cgreen,\u201d we can conceivably measure the amount of additional green hue in the background, with the car cropped out\u2014see bottom row. A caveat is that entanglement is not necessarily unwanted; for instance, rusty shovels likely belong in rusted areas. It strongly depends on the use case of the model. 6 Related Work and Future Directions The primary area of this work is in understanding neural networks from the perspective of computational linguistics, with the goal of better informing future research. A large body of relevant papers exists, where researchers apply textual perturbation (Wallace et al., 2019), attention visualization (Vig, 2019; Kovaleva et al., 2019; Shimaoka et al., 2016), and information bottlenecks (Jiang et al., 2020) to relate important input tokens to the outputs of large language models. Others explicitly test for linguistic constructs within models, such as Hendricks and Nematzadeh\u2019s (2021) probing of vision transformers for verb understanding and Ilinykh and Dobnik\u2019s (2022) examination of visual grounding in image-to-text transformers. Our distinction is that we carry out an attributive analysis in the space of generative diffusion models, as the pixel output relates to syntax and semantics. As a future extension, we plan to assess the unsupervised parsing ability of Stable Diffusion with syntactic\u2013geometric probes, similar to Hewitt and Manning\u2019s (2019) work in BERT. The intersection of text-to-image generation and natural language processing is certainly substantial. In the context of enhancing diffusion models using prompt engineering, Hertz et al. (2022) cement cross-attention maps for the purpose of precision-editing generated images using text, and Woolf (2022) proposes negative prompts for removing undesirable, scene-wide attributes. Related as well are works for generative adversarial networks, where Karras et al. (2019) and Materzy\u00b4 nska et al. (2022) disentangle various features such as style and spelling. 
Along this vein, our work exposes more entanglement in cohyponyms and adjectives. A future line of work is to disentangle such concepts and improve generative quality. Last but not least are semantic segmentation works in computer vision. Generally, researchers start with a backbone encoder, attach decoders, and then optimize the model in its entirety end-to-end on a segmentation dataset (Cheng et al., 2022), unless the context is unsupervised, in which case one uses contrastive objectives and clustering (Cho et al., 2021; Hamilton et al., 2021). Toward this, DAAM could potentially provide encoder features in a segmentation pipeline, where its strong raw baseline numbers suggest the presence of valuable latent representations in Stable Diffusion. 7" + }, + { + "url": "http://arxiv.org/abs/2008.09606v1", + "title": "Howl: A Deployed, Open-Source Wake Word Detection System", + "abstract": "We describe Howl, an open-source wake word detection toolkit with native\nsupport for open speech datasets, like Mozilla Common Voice and Google Speech\nCommands. We report benchmark results on Speech Commands and our own freely\navailable wake word detection dataset, built from MCV. We operationalize our\nsystem for Firefox Voice, a plugin enabling speech interactivity for the\nFirefox web browser. Howl represents, to the best of our knowledge, the first\nfully productionized yet open-source wake word detection toolkit with a web\nbrowser deployment target. Our codebase is at\nhttps://github.com/castorini/howl.", + "authors": "Raphael Tang, Jaejun Lee, Afsaneh Razi, Julia Cambre, Ian Bicking, Jofish Kaye, Jimmy Lin", + "published": "2020-08-21", + "updated": "2020-08-21", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG" + ], + "main_content": "Introduction Wake word detection is the task of recognizing an utterance for activating a speech assistant, such as \u201cHey, Alexa\u201d for the Amazon Echo. Given that such systems are meant to support full automatic speech recognition, the task seems simple; however, it introduces a different set of challenges because these systems have to be always listening, computationally ef\ufb01cient, and, most of all, privacy respecting. Therefore, the community treats it as a separate line of work, with most recent advancements driven predominantly by neural networks (Sainath and Parada, 2015; Tang and Lin, 2018). Unfortunately, most existing toolkits are closed source and often speci\ufb01c to a target platform. Such design choices restrict the \ufb02exibility of the application and add unnecessary maintenance as the number of target domains increases. We argue that using JavaScript is a solution: unlike many languages and their runtimes, the JavaScript engine powers a wide range of modern user-facing applications ranging from mobile to desktop ones. \u2217Equal contribution. Order decided by coin \ufb02ip. To this end, we have previously developed Honkling, a JavaScript-based keyword spotting system (Lee et al., 2019). Leveraging one of the lightest models available for the task from Tang and Lin (2018), Honkling ef\ufb01ciently detects the target commands with high precision. However, we notice that Honkling is still quite far from being a stable wake word detection system. This gap mainly arises from the model being trained as a speech commands classi\ufb01er, instead of a wake word detector; its high false alarm rate results from the limited number of negative samples in the training dataset (Warden, 2018). 
In this paper, to make a greater practical impact, we close this gap in the Honkling ecosystem and present Howl, an open-source wake word detection toolkit with support for open datasets such as Mozilla Common Voice (MCV; Ardila et al., 2019) and the Google Speech Commands dataset (Warden, 2018). Our new system is the first in-browser wake word system which powers a widely deployed industrial application, Firefox Voice. By processing the audio in the browser and being completely open source, including the datasets and models, Howl is a privacy-respecting, non-eavesdropping system which users can trust. Having a false reject rate of 10% at 4 false alarms per hour of speech, Howl has enabled Firefox Voice to provide a completely hands-free experience to over 8,000 users in the nine days since its launch. 2 Background and Related Work Other than privately owned wake word detection systems, Porcupine and Snowboy are the most well-known ecosystems that provide an open-source modeling toolkit, some data, and deployment capabilities. However, these ecosystems are still closed at heart; they keep their data, models, or deployment proprietary. As far as open-source ecosystems go, Precise (https://github.com/MycroftAI/mycroft-precise) represents a step in the right direction, but its datasets are limited, and its deployment target is the Raspberry Pi. We further make the distinction from speech commands classification toolkits, such as Honk (Tang and Lin, 2017). These frameworks focus on classifying fixed-length audio as one of a few dozen keywords, with no evaluation on a sizable negative set, as required in wake word detection. While these trained models may be used in detection applications, they are not rigorously tested for such. [Figure 1: An illustration of the pipeline and its control flow. First, we preprocess Common Voice by filtering for the wake word vocabulary, aligning the speech, and saving the negative and positives sets to disk. Next, we introduce a noise dataset and augment the data on the fly at training time. Finally, we evaluate the optimized model and, if the results are satisfactory, export it for deployment.] 3 System We present a high-level description of our toolkit and its goals. For specific details, we refer users to the repository, as linked in the abstract. 3.1 Requirements Howl is written in Python 3.7+, with the notable dependencies being PyTorch for model training, Librosa (McFee et al., 2015) for audio preprocessing, and Montreal Forced Aligner (MFA; McAuliffe et al., 2017) for speech data alignment. We license Howl under the Mozilla Public License v2, a file-level copyleft free license. For speedy model training, we recommend a CUDA-enabled graphics card with at least 4GB of VRAM; we used an Nvidia Titan RTX in all of our experiments. The rest of the computer can be built with, say, 16GB of RAM and a mid-range desktop CPU. For resource-restricted users, we suggest exploring Google Colab2 and other cloud-based solutions. 3.2 Components and Pipeline Howl consists of the three following major components: audio preprocessing, data augmentation, and model training and evaluation.
These components form a pipeline, in the written order, for producing deployable models from raw audio data. Preprocessing. A wake word dataset must \ufb01rst be preprocessed from an annotated data source, which is de\ufb01ned as a collection of audio\u2013transcription pairs, with prede\ufb01ned training, development, and test splits. Since Howl is a frame-level keyword spotting system, it relies on a forced aligner to provide wordor phone-based alignment. We choose MFA for its popularity and free license, and hence Howl structures the processed datasets to interface well with MFA. Another preprocessing task is to parse the global con\ufb01guration settings for the framework. Such settings include the learning rate, the dataset path, and model-speci\ufb01c hyperparameters. We read in most of these settings as environment variables, which enable easy shell scripting. Augmentation. For improved robustness and better quality, we implement a set of popular augmentation routines: time stretching, time shifting, synthetic noise addition, recorded noise mixing, SpecAugment (no time warping; Park et al., 2019), and vocal tract length perturbation (Jaitly and Hinton, 2013). These are readily extensible, so practitioners may easily add new augmentation modules. 2https://colab.research.google.com/ \fTraining and evaluation. Howl provides several off-the-shelf neural models, as well as training and evaluation routines using PyTorch for computing the loss gradient and the task-speci\ufb01c metrics, such as the false alarm rate and reject rate. These routines are also responsible for serializing the model and exporting it to our browserside deployment. Pipeline. Given these components, our pipeline, visually presented in Figure 1, is as follows: First, users produce a wake word detection dataset, either manually or from a data source like Common Voice and Google Speech Commands, setting the appropriate environment variables. This can be quickly accomplished using Common Voice, whose ample breadth and coverage of popular English words allow for a wide selection of custom wake words; for example, it has about a thousand occurrences of the word \u201cnext.\u201d In addition to a positive subset containing the vocabulary and wake word, this dataset ideally contains a sizable negative set, which is necessary for more robust models and a more accurate evaluation of the false positive rate. Next, users (optionally) select which augmentation modules to use, and they train a model with the provided hyperparameters on the selected dataset, which is \ufb01rst processed into log-Mel frames with zero mean and unit variance, as is standard. This training process should take less than a few hours on a GPU-capable device for most use cases, including ours. Finally, users may run the model in the included command line interface demo or deploy it to the browser using Honkling, our inbrowser keyword spotting (KWS) system, if the model is supported (Lee et al., 2019). 3.3 Data and Models For the data sources, Howl works out of the box with MCV, a general speech corpus, and Speech Commands, a commands recognition dataset. Users can quickly extend Howl to accept other speech corpuses such as LibriSpeech (Panayotov et al., 2015). Howl also accepts any folder that contains audio \ufb01les and interprets them as recorded noise for data augmentation, which covers noise datasets such as MUSAN (Snyder et al., 2015) and Microsoft SNSD (Reddy et al., 2019). 
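As an illustration of the preprocessing and augmentation steps described above, the following sketch uses Librosa; the parameter values (40 Mel bins, 512-point FFT, the shift and stretch ranges, and the noise level) are illustrative defaults rather than Howl's actual settings.

```python
import numpy as np
import librosa

def log_mel_frames(audio: np.ndarray, sr: int = 16000, n_mels: int = 40):
    """Compute log-Mel frames and normalize them to zero mean, unit variance."""
    mel = librosa.feature.melspectrogram(
        y=audio, sr=sr, n_fft=512, hop_length=200, n_mels=n_mels)
    log_mel = librosa.power_to_db(mel)
    return (log_mel - log_mel.mean()) / (log_mel.std() + 1e-8)

def augment(audio: np.ndarray, noise: np.ndarray, rng=np.random):
    """Apply a random time shift, mix in recorded noise, and time-stretch.
    Assumes the noise clip is at least as long as the audio."""
    shift = rng.randint(-1600, 1600)              # up to +/- 0.1 s at 16 kHz
    audio = np.roll(audio, shift)
    audio = audio + 0.1 * rng.rand() * noise[: len(audio)]
    rate = rng.uniform(0.9, 1.1)
    return librosa.effects.time_stretch(y=audio, rate=rate)

# Toy usage with synthetic audio and noise.
clip = np.random.randn(16000).astype(np.float32)
noise = np.random.randn(16000).astype(np.float32)
features = log_mel_frames(augment(clip, noise))
```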
For modeling, Howl provides implementations of convolutional neural networks (CNNs) and recurrent neural networks (RNNs) for wake word detection. These models are from the existing literature, such as residual CNNs (Tang and Lin, 2018), a modified listen\u2013attend\u2013spell (LAS) encoder (Chan et al., 2015; Park et al., 2019), and MobileNetv2 (Sandler et al., 2018). Most of the models are lightweight since the end application requires efficient inference, though some are parameter heavy to establish a rough upper bound on the quality, as far as parameters go. Of particular focus is the lightweight res8 model (Tang and Lin, 2018), which is directly exportable to Honkling, the in-browser KWS system. For this reason, we choose it in our deployment to Firefox Voice (https://github.com/mozilla-extensions/firefox-voice). 4 Benchmark Results To verify the correctness of our implementation, we first train and evaluate our models on the Google Speech Commands dataset, for which there exist many known results. Next, we curate a wake word detection dataset and report our resulting model quality. Training details are in the repository. Commands recognition. We report in Table 1 the results of the twelve-keyword recognition task from Speech Commands (v1), where we classify a one-second clip as one of \u201cyes,\u201d \u201cno,\u201d \u201cup,\u201d \u201cdown,\u201d \u201cleft,\u201d \u201cright,\u201d \u201con,\u201d \u201coff,\u201d \u201cstop,\u201d \u201cgo,\u201d unknown, or silence. Our implementations are competitive with state of the art, with the res8 model surprisingly achieving the highest accuracy of 97.8 on the test set, despite having fewer parameters. Our other implemented models, the LSTM, LAS encoder, and MobileNetv2, compare favorably. [Table 1: Model accuracy on Google Speech Commands, listed as model, dev/test accuracy, and number of parameters; bolded in the original denotes the best. EdgeSpeechNet (Lin et al., 2018): \u2013/96.8, 107K; res8 (Tang and Lin, 2018): \u2013/94.1, 110K; RNN (de Andrade et al., 2018): \u2013/95.6, 202K; DenseNet (Zeng and Xiao, 2019): \u2013/97.5, 250K; Our res8: 97.0/97.8, 111K; Our LSTM: 94.3/94.5, 128K; Our LAS encoder: 96.8/97.1, 478K; Our MobileNetv2: 96.4/97.3, 2.3M.] Wake word detection. For wake word detection, we target \u201chey, Firefox\u201d for waking up Firefox Voice. From the single-word segment of MCV, we use 1,894 and 1,877 recordings of \u201chey\u201d and \u201cFirefox,\u201d respectively; from the MCV general speech corpus, we select all 1,037 recordings containing \u201chey,\u201d \u201cfire,\u201d or \u201cfox.\u201d We additionally collect 632 recordings of \u201chey, Firefox\u201d from volunteers. For the negative set, we use about 10% of the entire MCV speech corpus. We choose the training, dev, and test splits to be 80%, 10%, and 10% of the resulting corpus, stratified by speaker IDs for the positive set. For robustness to noise, we use portions of MUSAN and SNSD as the noise dataset. We arrive at 31 hours of data for training and 3 hours each for dev and test. For the model, we select res8 (Tang and Lin, 2018) for its high quality on Speech Commands and easy adaptability with our browser deployment target. We follow the aforementioned pipeline to train it; details are not repeated, and hyperparameters can be found in the repository. [Figure 2: Receiver operating characteristic (ROC) curves for the wake word, on the clean dev and test sets.]
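Operating points like those in Figure 2 come from sweeping a detection threshold over the model's output probabilities; below is a hedged sketch of that computation, assuming per-clip scores for the positive set and per-window scores for the negative set along with its total duration, which simplifies the streaming evaluation.

```python
import numpy as np

def operating_points(pos_scores, neg_scores, neg_hours: float, thresholds=None):
    """False-reject rate and false alarms per hour at each threshold.

    pos_scores: detector scores on clips containing the wake word.
    neg_scores: scores on negative audio windows; neg_hours is the total
    duration of that negative audio in hours."""
    pos, neg = np.asarray(pos_scores), np.asarray(neg_scores)
    if thresholds is None:
        thresholds = np.linspace(0.0, 1.0, 101)
    points = []
    for thr in thresholds:
        frr = float(np.mean(pos < thr))              # missed wake words
        fa_per_hr = float(np.sum(neg >= thr)) / neg_hours
        points.append((thr, frr, fa_per_hr))
    return points

# Pick, e.g., the lowest-FRR threshold with at most 4 false alarms per hour.
pts = operating_points(np.random.rand(100), np.random.rand(10000) * 0.6, neg_hours=3.0)
best = min((p for p in pts if p[2] <= 4.0), key=lambda p: p[1])
```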
We present the resulting receiver operating characteristic curves in Figure 2, where different operating points result from different thresholds on the output probabilities. Although it seems to lag commercial systems (Sainath and Parada, 2015) by 10\u201320% at the same number of false alarms per hour, those systems are trained with 5\u201320\u00d7 more data. Our negative set also likely contains more adversarial examples that misrepresent realworld usage, e.g., many utterances of \u201cFirefox,\u201d which are responsible for at least 90% of the false positives. Thus, combined with favorable though preliminary results from live testing the system ourselves, we comfortably choose the operating point at four false alarms per hour. We \ufb01nally note that the discrepancy between the dev and test curves is likely explained by differences in the data distribution, not hyperparameter \ufb01ddling, because there are only 76 and 54 clips in the positive dev and test sets, respectively. 5 Browser Deployment To protect user security and privacy, wake word detection must be achieved with the user\u2019s resources only. This setting introduces various technical challenges, as the available resources are often limited and may not be accessible. In the case of Firefox Voice, our target application, the platform is Firefox, where the major challenge is the limited support in machine learning frameworks. However, our previous line of work demonstrates the feasibility of in-browser wake word detection with Honkling (Lee et al., 2019). Our application is written purely in JavaScript and supports different models using TensorFlow.js. During the process of integrating Honkling with Firefox Voice, the two main aspects we focus on are accuracy and ef\ufb01ciency. We rewrite the audio processing logic of Honkling to match the new Python pipeline and optimize various preprocessing routines to substantially reduce the computational burden. To measure the performance of our application, we refer to the built-in energy impact metric of Firefox, which reports the CPU consumption of each open tab. To establish a reference, playing a YouTube video reports an average energy impact of 10, while a static Google search reports 0.1. Fortunately, our wake word detection model yields an energy impact of only 3, which ef\ufb01ciently enables hands-free interaction for initiating the speech recognition engine. Our wake word detection demo and browserside integration details can be found at https://github. com/castorini/howl-deploy. 6" + }, + { + "url": "http://arxiv.org/abs/2004.13705v1", + "title": "Showing Your Work Doesn't Always Work", + "abstract": "In natural language processing, a recently popular line of work explores how\nto best report the experimental results of neural networks. One exemplar\npublication, titled \"Show Your Work: Improved Reporting of Experimental\nResults,\" advocates for reporting the expected validation effectiveness of the\nbest-tuned model, with respect to the computational budget. In the present\nwork, we critically examine this paper. As far as statistical generalizability\nis concerned, we find unspoken pitfalls and caveats with this approach. We\nanalytically show that their estimator is biased and uses error-prone\nassumptions. We find that the estimator favors negative errors and yields poor\nbootstrapped confidence intervals. We derive an unbiased alternative and\nbolster our claims with empirical evidence from statistical simulation. 
Our\ncodebase is at http://github.com/castorini/meanmax.", + "authors": "Raphael Tang, Jaejun Lee, Ji Xin, Xinyu Liu, Yaoliang Yu, Jimmy Lin", + "published": "2020-04-28", + "updated": "2020-04-28", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG" + ], + "main_content": "Introduction Questionable answers and irreproducible results represent a formidable beast in natural language processing research. Worryingly, countless experimental papers lack empirical rigor, disregarding necessities such as the reporting of statistical significance tests (Dror et al., 2018) and computational environments (Crane, 2018). As Forde and Paganini (2019) concisely lament, explorimentation, the act of tinkering with metaparameters and praying for success, while helpful in brainstorming, does not constitute a rigorous scienti\ufb01c effort. Against the crashing wave of explorimentation, though, a few brave souls have resisted the urge to feed the beast. Reimers and Gurevych (2017) argue for the reporting of neural network score distributions. Gorman and Bedrick (2019) demonstrate that deterministic dataset splits yield less robust results than random ones for neural networks. Dodge et al. (2019) advocate for reporting the expected validation quality as a function of the computation budget used for hyperparameter tuning, which is paramount to robust conclusions. But carefully tread we must. Papers that advocate for scienti\ufb01c rigor must be held to the very same standards that they espouse, lest they birth a new beast altogether. In this work, we critically examine one such paper from Dodge et al. (2019). We acknowledge the validity of their technical contribution, but we \ufb01nd several notable caveats, as far as statistical generalizability is concerned. Analytically, we show that their estimator is negatively biased and uses assumptions that are subject to large errors. Based on our theoretical results, we hypothesize that this estimator strongly prefers underestimates to overestimates and yields poor con\ufb01dence intervals with the common bootstrap method (Efron, 1982). Our main contributions are as follows: First, we prove that their estimator is biased under weak conditions and provide an unbiased solution. Second, we show that one of their core approximations often contains large errors, leading to poorly controlled bootstrapped con\ufb01dence intervals. Finally, we empirically con\ufb01rm the practical hypothesis using the results of neural networks for document classi\ufb01cation and sentiment analysis. 2 Background and Related Work Notation. We describe our notation of fundamental concepts in probability theory. First, the cumulative distribution function (CDF) of a random variable (RV) X is de\ufb01ned as F(x) := Pr[X \u2264x]. Given a sample (x1, . . . , xB) drawn from F, the empirical CDF (ECDF) is then \u02c6 FB(x) := 1 B PB i=1 I[xi \u2264x], where I denotes the indicator function. Note that we pick \u201cB\u201d instead of \u201cn\u201d to be consistent with Dodge et al. (2019). The error of the ECDF is poparXiv:2004.13705v1 [cs.CL] 28 Apr 2020 \fularly characterized by the Kolmogorov\u2013Smirnov (KS) distance between the ECDF and CDF: KS( \u02c6 FB, F) := sup x\u2208R | \u02c6 FB(x) \u2212F(x)|. (2.1) Naturally, by de\ufb01nition of the CDF and ECDF, KS( \u02c6 FB, F) \u22641. Using the CDF, the expectation for both discrete and continuous (cts.) RVs is E[X] = Z \u221e \u2212\u221e xdF(x), (2.2) de\ufb01ned using the Riemann\u2013Stieltjes integral. 
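As a small illustration of these definitions, the following sketch computes an ECDF and approximates the KS distance to a reference CDF on a grid; the standard normal reference and grid resolution are only for illustration.

```python
import numpy as np
from scipy.stats import norm

def ecdf(sample: np.ndarray):
    """Return a function x -> (1/B) * sum_i 1[x_i <= x]."""
    xs = np.sort(np.asarray(sample))
    return lambda x: np.searchsorted(xs, x, side="right") / len(xs)

def ks_distance(sample: np.ndarray, cdf, grid_size: int = 10_000) -> float:
    """Approximate sup_x |F_hat_B(x) - F(x)| on a fine grid around the sample."""
    f_hat = ecdf(sample)
    grid = np.linspace(sample.min() - 1.0, sample.max() + 1.0, grid_size)
    return float(np.max(np.abs(f_hat(grid) - cdf(grid))))

sample = np.random.randn(200)
print(ks_distance(sample, norm.cdf))   # small for a well-matched sample
```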
We write the ith order statistic of independent and identically distributed (i.i.d.) X1, . . . , XB as X(i:B). Recall that the ith order statistic X(i:B) is an RV representing the ith smallest value if the RVs were sorted. Hyperparameter tuning. In random search, a probability distribution p(H) is \ufb01rst de\ufb01ned over a k-tuple hyperparameter con\ufb01guration H := (H1, . . . , Hk), which can include both cts. and discrete variables, such as the learning rate and random seed of the experimental environment. Commonly, researchers choose the uniform distribution over a bounded support for each hyperparameter (Bergstra and Bengio, 2012). Combined with the appropriate model family M and dataset D := (DT , DV )\u2014split into training and validation sets, respectively\u2014a con\ufb01guration then yields a numeric score V on DV . Finally, after sampling B i.i.d. con\ufb01gurations, we obtain the scores V1, . . . , VB and pick the hyperparameter con\ufb01guration associated with the best one. 3 Analysis of Showing Your Work In \u201cShow Your Work: Improved Reporting of Experimental Results,\u201d Dodge et al. (2019) realize the rami\ufb01cations of underreporting the hyperparameter tuning policy and its associated budget. One of their key \ufb01ndings is that, given different computation quotas for hyperparameter tuning, researchers may arrive at drastically different conclusions for the same model. Given a small tuning budget, a researcher may conclude that a smaller model outperforms a bigger one, while they may reach the opposite conclusion for a larger budget. To ameliorate this issue, Dodge et al. (2019) argue for fully reporting the expected maximum of the score as a function of the budget. Concretely, the parameters of interest are \u03b81, . . . , \u03b8B, where \u03b8n := E [max{V1, . . . , Vn}] = E[V(n:n)] for 1 \u2264 n \u2264B. In other words, \u03b8n is precisely the expected value of the nth order statistic for a sample of size n drawn i.i.d. at tuning time. For this quantity, they propose an estimator, derived as follows: \ufb01rst, observe that the CDF of V \u2217 n = V(n:n) is Pr[V \u2217 n \u2264v] = Pr[V1 \u2264v \u2227\u00b7 \u00b7 \u00b7 \u2227Vn \u2264v] (3.1) = Pr[V \u2264v]n, (3.2) which we denote as F n(v). Then \u03b8n = E[V(n:n)] = Z \u221e \u2212\u221e vdF n(v). (3.3) For approximating the CDF, Dodge et al. (2019) use the ECDF \u02c6 F n B(v), constructed from some sample S := (v1, . . . , vB), i.e., \u02c6 F n B(v) = \u0010 \u02c6 FB(v) \u0011n = 1 B B X i=1 I[vi \u2264v] !n . (3.4) The \ufb01rst identity in Eq. (3.4) is clear from Eq. (3.2). Without loss of generality, assume v1 \u2264\u00b7 \u00b7 \u00b7 \u2264vB. To construct an estimator \u02c6 \u03b8n for \u03b8n, Dodge et al. (2019) then replace the CDF with the ECDF: \u02c6 \u03b8n := Z \u221e \u2212\u221e vd \u02c6 F n B(v), (3.5) which, by de\ufb01nition, evaluates to \u02c6 \u03b8n = B X i=1 vi \u0010 \u02c6 F n B(vi) \u2212\u02c6 F n B(vi\u22121) \u0011 , (3.6) where, with some abuse of notation, v0 < v1 is a dummy variable and \u02c6 F n B(v0) := 0. We henceforth refer to \u02c6 \u03b8n as the MeanMax estimator. Dodge et al. (2019) recommend plotting the number of trials on the x-axis and \u02c6 \u03b8n on the y-axis. 3.1 Pitfalls and Caveats We \ufb01nd two unspoken caveats in Dodge et al. (2019): \ufb01rst, the MeanMax estimator is statistically biased, under weak conditions. 
Second, the ECDF, as formulated, is a poor drop-in replacement for the true CDF, in the sense that the \ufb01nite sample error can be unacceptable if certain, realistic conditions are unmet. Estimator bias. The bias of an estimator \u02c6 \u03b8 is de\ufb01ned as the difference between its expectation and its estimand \u03b8: Bias(\u02c6 \u03b8) := E[\u02c6 \u03b8] \u2212\u03b8. An estimator is said to be unbiased if its bias is zero; otherwise, it is biased. We make the following claim: \fTheorem 1. Let V1, . . . , VB be an i.i.d. sample (of size B) from an unknown distribution F on the real line. Then, for all 1 \u2264n \u2264B, Bias(\u02c6 \u03b8n) \u22640, with strict inequality iff V(1) < V(n) with nonzero probability. In particular, if n = 1, then Bias(\u02c6 \u03b81) = 0 while if n > 1 with F continuous or discrete but non-degenerate, then Bias(\u02c6 \u03b8n) < 0. Proof. Let 1 < n \u2264B. We are interested in estimating the expectation of the maximum of the n i.i.d. samples: \u03b8n := E[Vn:n] = E[max{V1, . . . , Vn}]. An obvious unbiased estimator, based on the given sample of size B, is the following: \u02c6 U B n := 1 \u0000B n \u0001 X 1\u2264i1 1. Thus, we have veri\ufb01ed the following for all 1 \u2264k < B: k X j=1 \u0000j\u22121 n\u22121 \u0001 \u0000B n \u0001 < k X j=1 jn \u2212(j \u22121)n Bn . Eq. (3.8) now follows since V(1) < \u00b7 \u00b7 \u00b7 < V(B) lies in the isotonic cone while we have proved the difference of the two coef\ufb01cients lies in the dual cone of the isotonic cone. An elementary way to see this is to \ufb01rst compare the coef\ufb01cients in front of V(B): clearly, \u02c6 U B n \u2019s is larger since it has smaller sum of all coef\ufb01cients (but the one in front of V(B); take k = B \u22121) whereas the total sum is always one. Repeat this comparison for V(1), . . . , V(B\u22121). Lastly, if V(1) < V(n), then there exists a subset (with repetition) 1 \u2264i1 \u2264. . . \u2264in \u2264n such \fthat max{V(i1), . . . , V(in)} < V(n). For instance, setting i1 = . . . = in = 1 would suf\ufb01ce. Since \u02c6 V B n puts positive mass on every subset of n elements (with repetitions allowed), the strict inequality follows. We note that if F is continuous, or if F is discrete but non-degenerate, then V(1) < V(n) with nonzero probability, hence Bias(\u02c6 \u03b8n) = E( \u02c6 V B n \u2212\u02c6 U B n ) < 0. The proof is now complete. For further caveats, see Appendix A. The practical implication is that researchers may falsely conclude, on average, that a method is worse than it is, since the MeanMax estimator is negatively biased. In the context of environmental consciousness (Schwartz et al., 2019), more computation than necessary is used to make a conclusion. ECDF error. The \ufb01nite sample error (Eq. 2.1) of approximating the CDF with the ECDF (Eq. 3.4) can become unacceptable as n increases: Theorem 2. If the sample does not contain the population maximum, KS( \u02c6 F n B, F n) \u21921 exponentially quickly as n and B increase. Proof. See Appendix B. Notably, this result always holds for cts. distributions, since the population maximum is never in the sample. Practically, this theorem suggests the failure of bootstrapping (Efron, 1982) for statistical hypothesis testing and constructing con\ufb01dence intervals (CIs) of the expected maximum, since the bootstrap requires a good approximation of the CDF (Canty et al., 2006). 
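To make the two estimators concrete, here is a short NumPy sketch (ours, not the authors' released meanmax code): the MeanMax estimator of Eq. (3.6), written with its closed-form order-statistic weights, and the unbiased alternative from the proof of Theorem 1, which averages the maximum over all n-subsets drawn without replacement.

```python
import numpy as np
from math import comb


def meanmax(scores, n):
    """Biased MeanMax estimate of E[V_(n:n)], Eq. (3.6):
    sum_j v_(j) * [ (j/B)^n - ((j-1)/B)^n ]."""
    v = np.sort(np.asarray(scores, dtype=float))
    B = len(v)
    j = np.arange(1, B + 1)
    return float(np.dot((j / B) ** n - ((j - 1) / B) ** n, v))


def unbiased_expected_max(scores, n):
    """Unbiased estimate from the proof of Theorem 1: the average of the maximum
    over all n-subsets of the sample, i.e., sum_j [C(j-1, n-1) / C(B, n)] * v_(j)."""
    v = np.sort(np.asarray(scores, dtype=float))
    B = len(v)
    if not 1 <= n <= B:
        raise ValueError("n must satisfy 1 <= n <= B")
    w = np.array([comb(j - 1, n - 1) for j in range(1, B + 1)], dtype=float) / comb(B, n)
    return float(np.dot(w, v))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    v = rng.uniform(size=50)  # B = 50, matching the simulations reported later
    for n in (1, 10, 25, 50):
        print(n, round(meanmax(v, n), 4), round(unbiased_expected_max(v, n), 4))
```

At n = B the unbiased estimate is simply the sample maximum, whereas MeanMax still places weight on smaller order statistics, which illustrates the negative bias established in Theorem 1.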
Relying on the bootstrap method for constructing confidence intervals of the expected maximum, as in Lucic et al. (2018), may therefore lead to poor coverage of the true parameter. 4 Experiments 4.1 Experimental Setup To support the validity of our conclusions, we opt for cleanroom Monte Carlo simulations, which enable us to determine the true parameter and draw millions of samples. To maintain the realism of our study, we apply kernel density estimation to actual results, using the resulting probability density (or discretized mass) function as the ground-truth distribution. Specifically, we examine the experimental results of the following neural networks: Document classification. We first conduct hyperparameter search over neural networks for document classification, namely a multilayer perceptron (MLP) and a long short-term memory (LSTM; Hochreiter and Schmidhuber, 1997) model representing the state of the art (for LSTMs) from Adhikari et al. (2019). For our dataset and evaluation metric, we choose Reuters (Apté et al., 1994) and the F1 score, respectively. Next, we fit discretized kernel density estimators to the results (see the appendix for experimental details). We name the distributions after their models, MLP and LSTM. Sentiment analysis. Similar to Dodge et al. (2019), on the task of sentiment analysis, we tune the hyperparameters of two LSTMs: one ingesting embeddings from language models (ELMo; Peters et al., 2018), the other shallow word vectors (GloVe; Pennington et al., 2014). We choose the binary Stanford Sentiment Treebank (Socher et al., 2013) dataset and apply the same kernel density estimation method. We denote the distributions by their embedding types, GloVe and ELMo. 4.2 Experimental Test Battery False conclusion probing. To assess the impact of the estimator bias, we measure the probability of researchers falsely concluding that one method underperforms its true value for a given n. An unbiased estimator would yield a proportion of 0.5 in expectation, preferring neither underestimates nor overestimates. Concretely, denote the true n-run expected maximum of the method as $\theta_n$ and the estimator as $\hat{\theta}_n$. We iterate n = 1, ..., 50 and report the proportion of samples (of size B = 50) where $\hat{\theta}_n < \theta_n$. We compute the true parameter using 1,000,000 iterations of Monte Carlo simulation and estimate the proportion with 5,000 samples for each n. CI coverage. To evaluate the validity of bootstrapping the expected maximum, we measure the coverage probability of CIs constructed using the percentile bootstrap method (Efron, 1982). Specifically, we set B = 50 and iterate n = 1, ..., 50. For each n, across M = 1000 samples, we compare the empirical coverage probability (ECP) to the nominal coverage rate of 95%, with CIs constructed from 5,000 bootstrapped resamples. The ECP $\hat{\alpha}_n$ is computed as $\hat{\alpha}_n := \frac{1}{M}\sum_{i=1}^{M}\mathbb{I}(\theta_n \in \mathrm{CI}_i)$ (4.1), where $\mathrm{CI}_i$ is the CI of the $i$th sample.
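The coverage check of Eq. (4.1) can be sketched as follows. This is our simplified stand-in, not the paper's code: the Gaussian toy distribution and the reduced replication counts are assumptions made to keep the example fast.

```python
import numpy as np


def meanmax(scores, n):
    """Biased MeanMax estimate of E[max of n draws] (same as the earlier sketch)."""
    v = np.sort(np.asarray(scores, dtype=float))
    B = len(v)
    j = np.arange(1, B + 1)
    return float(np.dot((j / B) ** n - ((j - 1) / B) ** n, v))


def percentile_ci(sample, n, n_boot=1000, alpha=0.05, rng=None):
    """Percentile-bootstrap CI for theta_n, resampling the B scores with replacement."""
    rng = np.random.default_rng() if rng is None else rng
    sample = np.asarray(sample, dtype=float)
    stats = np.array([meanmax(rng.choice(sample, size=len(sample), replace=True), n)
                      for _ in range(n_boot)])
    return np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])


def empirical_coverage(draw_sample, theta_n, n, M=200, rng=None):
    """ECP of Eq. (4.1): the fraction of M samples whose CI contains theta_n."""
    rng = np.random.default_rng() if rng is None else rng
    hits = sum(lo <= theta_n <= hi
               for lo, hi in (percentile_ci(draw_sample(), n, rng=rng) for _ in range(M)))
    return hits / M


if __name__ == "__main__":
    rng, n, B = np.random.default_rng(0), 20, 50
    theta_n = float(np.mean([rng.standard_normal(n).max() for _ in range(100_000)]))
    ecp = empirical_coverage(lambda: rng.standard_normal(B), theta_n, n, rng=rng)
    print(f"empirical coverage at n={n}: {ecp:.2f} (nominal 0.95)")
```

Under the analysis above, one would expect the empirical coverage to fall below the nominal rate as n grows, mirroring the trend reported for Figure 4.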
[Figure 1: The estimated budget–quality curves, along with the true curves.] [Figure 2: Illustration of a failure case with B = 25.] 4.3 Results Following Dodge et al. (2019), we present the budget–quality curves for each model pair in Figure 1. For each number of trials n, we vertically average each curve across the 5,000 samples. We construct CIs but do not display them, since the estimate is precise (standard error < 0.001). For document classification, we observe that the LSTM is more difficult to tune but achieves higher quality after some effort. For sentiment analysis, using ELMo consistently attains better accuracy with the same number of trials; we do not consider the wall-clock time. In Figure 2, we show a failure case of biased estimation in the document classification task. At B = 25, from n = 20 to 25, the averaged estimate yields the wrong conclusion that the MLP outperforms the LSTM: see the true LSTM line, which is above the true MLP line, compared to its estimate, which is below. False conclusion probing. Figure 3 shows the results of our false conclusion probing experiment. We find that the estimator quickly prefers negative errors as n increases. The curves are mostly similar for both tasks, except the MLP fares worse. This requires further analysis, though we conjecture that the reason is lower estimator variance, which would result in more consistent errors. [Figure 3: The false conclusion probing experiment results, along with Clopper–Pearson 95% CIs.] [Figure 4: The CI coverage experiment results, along with Clopper–Pearson 95% CIs.] CI coverage. We present the results of the CI coverage experiment in Figure 4. We find that the bootstrapped confidence intervals quickly fail to contain the true parameter at the nominal coverage rate of 0.95, decreasing to an ECP of 0.7 by n = 20. Since the underlying ECDF is the same, this result extends to Lucic et al. (2018), who construct CIs for the expected maximum. 5" }, { "url": "http://arxiv.org/abs/2004.11339v1", "title": "Rapidly Bootstrapping a Question Answering Dataset for COVID-19", "abstract": "We present CovidQA, the beginnings of a question answering dataset\nspecifically designed for COVID-19, built by hand from knowledge gathered from\nKaggle's COVID-19 Open Research Dataset Challenge. To our knowledge, this is\nthe first publicly available resource of its type, and intended as a stopgap\nmeasure for guiding research until more substantial evaluation resources become\navailable. While this dataset, comprising 124 question-article pairs as of the\npresent version 0.1 release, does not have sufficient examples for supervised\nmachine learning, we believe that it can be helpful for evaluating the\nzero-shot or transfer capabilities of existing models on topics specifically\nrelated to COVID-19. This paper describes our methodology for constructing the\ndataset and presents the effectiveness of a number of baselines, including\nterm-based techniques and various transformer-based models.
The dataset is\navailable at http://covidqa.ai/", + "authors": "Raphael Tang, Rodrigo Nogueira, Edwin Zhang, Nikhil Gupta, Phuong Cam, Kyunghyun Cho, Jimmy Lin", + "published": "2020-04-23", + "updated": "2020-04-23", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.IR" + ], + "main_content": "Introduction In conjunction with the release of the COVID-19 Open Research Dataset (CORD-19),1 the Allen Institute for AI partnered with Kaggle and other institutions to organize a \u201cchallenge\u201d around building an AI-powered literature review for COVID19.2 The \u201ccall to arms\u201d motivates the need for this effort: the number of papers related to COVID19 published per day has grown from around two dozen in February to over 50 by March to over 120 by mid-April. It is dif\ufb01cult for any human to keep up with this growing literature. Operationally, the Kaggle effort started with data scientists developing Jupyter notebooks that analyze the literature with respect to a number of prede\ufb01ned tasks (phrased as information 1 COVID-19 Open Research Dataset (CORD-19) 2 COVID-19 Open Research Dataset Challenge needs). Some of the most promising notebooks were then examined by a team of volunteers\u2014 epidemiologists, medical doctors, and medical students, according to Kaggle\u2014who then curated the notebook contents into an up-to-date literature review. The product3 is organized as a semistructured answer table in response to questions such as \u201cWhat is the incubation period of the virus?\u201d, \u201cWhat do we know about viral shedding in urine?\u201d, and \u201cHow does temperature and humidity affect the transmission of 2019-nCoV?\u201d These answer tables are meant primarily for human consumption, but they provide knowledge for a SQuAD-style question answering dataset, where the input is a question paired with a scienti\ufb01c article, and the system\u2019s task is to identify the answer passage within the document. From the Kaggle literature review, we have manually created CovidQA\u2014which as of the present version 0.1 release comprises 124 question\u2013document pairs. While this small dataset is not suf\ufb01cient for supervised training of models, we believe that it is valuable as an in-domain test set for questions related to COVID-19. Given the paucity of evaluation resources available at present, this modest dataset can serve as a stopgap for guiding ongoing NLP research, at least until larger efforts can be organized to provide more substantial evaluation resources for the community. The contribution of this work is, as far as we are aware, the \ufb01rst publicly available question answering dataset for COVID-19. With CovidQA, we evaluate a number of approaches for unsupervised (zero-shot) and transfer-based question answering, including term-based techniques and various transformer models. Experiments show that domain-speci\ufb01c adaptation of transformer models can be effective in an supervised setting, but out3 COVID-19 Kaggle community contributions \fcategory: Asymptomatic shedding subcategory: Proportion of patients who were asymptomatic query: proportion of patients who were asymptomatic question: What proportion of patients are asymptomatic? 
Answers id: 56zhxd6e title: Epidemiological parameters of coronavirus disease 2019: a pooled analysis of publicly reported individual data of 1155 cases from seven countries answer: 49 (14.89%) were asymptomatic id: rjm1dqk7 title: Epidemiological characteristics of 2019 novel coronavirus family clustering in Zhejiang Province answer: 54 asymptomatic infected cases Figure 1: Example of a question in CovidQA with two answers. of-domain \ufb01ne-tuning has limited effectiveness. Of the models examined, T5 (Raffel et al., 2019) for ranking (Nogueira et al., 2020) achieves the highest effectiveness in identifying sentences from documents containing answers. Furthermore, it appears that, in general, transformer models are more effective when fed well-formed natural language questions, compared to keyword queries. 2 Approach At a high level, our dataset comprises (question, scienti\ufb01c article, exact answer) triples that have been manually created from the literature review page of Kaggle\u2019s COVID-19 Open Research Dataset Challenge. It is easiest to illustrate our approach by example; see Figure 1. The literature review \u201cproducts\u201d are organized into categories and subcategories, informed by \u201cTasks\u201d de\ufb01ned by the Kaggle organizers.4 One such example is \u201cAsymptomatic shedding\u201d and \u201cProportion of patients who were asymptomatic\u201d, respectively. The subcategory may or may not be phrased in the form of a natural language question; in this case, it is not. In the \u201cquestion\u201d of CovidQA, we preserve this categorization and, based on it, manually created both a query comprising of keywords (what a user might type into a search engine) and also a well-formed natural language question. For both, we attempt to minimize the changes made to the original formulations; see Figure 1. In the Kaggle literature review, for each category/subcategory there is an \u201canswers table\u201d that presents evidence relevant to the information need. 4 COVID-19 Open Research Dataset Challenge Tasks Each table is different, and in our running example, the table has columns containing the title of an article that contains an answer, its date, as well as the asymptomatic proportion, age, study design, and sample size. In this case, according to the site, the entries in the answer table came from notebooks by two data scientists (Ken Miller and David Mezzetti), whose contents were then vetted by two curators (Candler Clawson and Devan Wilkins). For each row (entry) in the literature review answer table, we began by manually identifying the exact article referenced (in terms of the unique ID) in the COVID-19 Open Research Dataset (CORD-19). To align with ongoing TREC document retrieval efforts,5 we used the version of the corpus from April 10. Finding the exact document required keyword search, as there are sometimes slight differences between the titles in the answer tables and the titles in CORD-19. Once we have located the article, we manually identi\ufb01ed the exact answer span\u2014a verbatim extract from the document that serves as the answer. For example, in document 56zhxd6e the exact answer was marked as \u201c49 (14.89%) were asymptomatic\u201d. The annotation of the answer span is not a straightforward text substring match in the raw article contents based on the Kaggle answer, but required human judgment in most cases. For simpler cases in our running example, the Kaggle answer was provided with a different precision. 
In more complex cases, the Kaggle answer does not match any text span in the article. For example, the article may provide the total number of patients and 5TREC-COVID \fthe number of patients who were asymptomatic, but does not explicitly provide a proportion. In these cases, we used our best judgment\u2014if the absolute numbers were clearly stated in close proximity, we annotated (at least a part of) the exact answer span from which the proportion could be computed. See Figure 1 for an example in the case of document rjm1dqk7; here, the total number of patients is stated nearby in the text, from which the proportion can be computed. In some cases, however, it was not feasible to identify the answer in this manner, and thus we ignored the entry. Thus, not all rows in the Kaggle answer table translated into a question\u2013answer pair. A lesson from the QA literature is that there is considerable nuance in de\ufb01ning exact answers and answer contexts, dating back over 20 years (Voorhees and Tice, 1999). For example, in document 56zhxd6e, although we have annotated the answer as \u201c49 (14.89%) were asymptomatic\u201d (Figure 1), an argument could be made that \u201c14.89%\u201d is perhaps a better exact span. We are cognizant of these complexities and sidestep them by using our dataset to evaluate model effectiveness at the sentence level. That is, we consider a model correct if it identi\ufb01es the sentence that contains the answer. Thus, we only need to ensure that our manually annotated exact answers are (1) proper substrings of the article text, from its raw JSON source provided in CORD-19, and that (2) the substrings do not cross sentence boundaries. In practice, these two assumptions are workable at an intuitive level, and they allow us to avoid the need to articulate complex annotation guidelines that try to de\ufb01ne the \u201cexactness\u201d of an answer. Another challenge that we encountered relates to the scope of some questions. Drawn directly from the literature review, some questions produced too many possible answer spans within a document and thus required rephrasing. As an example, for the topic \u201cdecontamination based on physical science\u201d, most sentences in some articles would be marked as relevant. To address this issue, we deconstructed these broad topics into multiple questions, for example, related to \u201cUVGI intensity used for inactivating COVID-19\u201d and \u201cpurity of ethanol to inactivate COVID-19\u201d. Five of the co-authors participated in this annotation effort, applying the aforementioned approach, with one lead annotator responsible for approving topics and answering technical questions from the other annotators. Two annotators are undergraduate students majoring in computer science, one is a science alumna, another is a computer science professor, and the lead annotator is a graduate student in computer science\u2014all af\ufb01liated with the University of Waterloo. Overall, the dataset took approximately 23 hours to produce, representing a \ufb01nal tally (for the version 0.1 release) of 124 question\u2013answer pairs, 27 questions (topics), and 85 unique articles. For each question\u2013 answer pair, there are 1.6 annotated answer spans on average. We emphasize that this dataset, while too small to train supervised models, should still prove useful for evaluating unsupervised or out-ofdomain transfer-based models. 
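To make the record format and the two annotation checks concrete, here is a small illustrative sketch (ours, not the released dataset's code). The field names mirror Figure 1, and the naive regex sentence splitter is an assumption, standing in for whatever sentence segmentation a consumer of the data prefers.

```python
import re

# One CovidQA-style record, mirroring the Figure 1 example; the dict layout is ours.
record = {
    "category": "Asymptomatic shedding",
    "subcategory": "Proportion of patients who were asymptomatic",
    "query": "proportion of patients who were asymptomatic",
    "question": "What proportion of patients are asymptomatic?",
    "answers": [
        {"id": "56zhxd6e",
         "title": "Epidemiological parameters of coronavirus disease 2019: a pooled analysis "
                  "of publicly reported individual data of 1155 cases from seven countries",
         "answer": "49 (14.89%) were asymptomatic"},
        {"id": "rjm1dqk7",
         "title": "Epidemiological characteristics of 2019 novel coronavirus family clustering "
                  "in Zhejiang Province",
         "answer": "54 asymptomatic infected cases"},
    ],
}


def naive_sentences(text):
    """Very rough sentence splitter, for illustration only."""
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]


def span_is_valid(article_text, exact_answer):
    """The two checks described above: (1) the exact answer is a verbatim substring
    of the article text, and (2) it does not cross a sentence boundary."""
    return exact_answer in article_text and any(
        exact_answer in sent for sent in naive_sentences(article_text))


def sentence_labels(article_text, exact_answers):
    """Sentence-level gold labels: a sentence is correct if it contains an exact answer."""
    return [any(ans in sent for ans in exact_answers)
            for sent in naive_sentences(article_text)]
```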
3 Evaluation Design The creation of the CovidQA dataset was motivated by a multistage design of end-to-end search engines, as exempli\ufb01ed by our Neural Covidex (Zhang et al., 2020) for AI2\u2019s COVID19 Open Research Dataset and related clinical trials data. This architecture, which is quite standard in both academia (Matveeva et al., 2006; Wang et al., 2011) and industry (Pedersen, 2010; Liu et al., 2017), begins with keyword-based retrieval to identify a set of relevant candidate documents that are then reranked by machine-learned models to bring relevant documents into higher ranks. In the \ufb01nal stage of this pipeline, a module would take as input the query and the document (in principle, this could be an abstract, the full text, or some combination of paragraphs from the full text), and identify the most salient passages (for example, sentences), which might be presented as highlights in the search interface (Lin et al., 2003). Functionally, such a \u201chighlighting module\u201d would not be any different from a span-based question answering system, and thus CovidQA can serve as a test set. More formally, in our evaluation design, a model is given the question (either the natural language question or the keyword query) and the full text of the ground-truth article in JSON format (from CORD-19). It then scores each sentence from the full text according to relevance. For evaluation, a sentence is deemed correct if it contains the exact answer, via substring matching. From these results, we can compute a battery of metrics. Treating a model\u2019s output as a ranked list, \fin this paper we evaluate effectiveness in terms of mean reciprocal rank (MRR), precision at rank one (P@1), and recall at rank three (R@3). 4 Baseline Models Let q := (q1, . . . , qLq) be a sequence of query tokens. We represent an article as d := (s1, . . . , sLd), where si := (wi 1, . . . , wi Li) is the ith sentence in the article. The goal is to sort s1, . . . , sLd according to their relevance to the query q, and to accomplish this, we introduce a scoring function \u03c1(q, si), which can be as simple as BM25 or as complex as a transformer model. As there is no suf\ufb01ciently large dataset available for training a QA system targeted at COVID-19, a conventional supervised approach is infeasible. We thus resort to unsupervised learning and out-ofdomain supervision (i.e., transfer learning) in this paper to evaluate both the effectiveness of these approaches and the usefulness of CovidQA. 4.1 Unsupervised Methods Okapi BM25. For a simple, non-neural baseline, we use the ubiquitous Okapi BM25 scoring function (Robertson et al., 1995) as implemented in the Anserini framework (Yang et al., 2017, 2018), with all default parameter settings. Document frequency statistics are taken from the entire collection for a more accurate estimate of term importance. BERT models. For unsupervised neural baselines, we considered \u201cvanilla\u201d BERT (Devlin et al., 2019) as well as two variants trained on scienti\ufb01c and biomedical articles: SciBERT (Beltagy et al., 2019) and BioBERT (Lee et al., 2020). Unless otherwise stated, all transformer models in this paper use the base variant with the cased tokenizer. For each of these variants, we transformed the query q and sentence si, separately, into sequences of hidden vectors, hq := (hq 1, . . . , hq |q|) and hsi := (hsi 1 , . . . , hsi |si|). 
These hidden sequences represent the contextualized token embedding vectors of the query and sentence, which we can use to make \ufb01ne-grained comparisons. We score each sentence against the query by cosine similarity, i.e., \u03c1(q, si) := max j,k hq j \u00b7 hsi k \u2225hq j\u2225\u2225hsi k \u2225. (1) In other words, we measure the relevance of each token in the document by the cosine similarity against all the query tokens, then determine sentence relevance as the maximum contextual similarity at the token level. 4.2 Out-of-Domain Supervised Models Although at present CovidQA is too small to train (or \ufb01ne-tune) a neural QA model, there exist potentially usable datasets in other domains. We considered a number out-of-domain supervised models: BioBERT on SQuAD. We used BioBERTbase \ufb01ne-tuned on the SQuAD v1.1 dataset (Rajpurkar et al., 2016), provided by the authors of BioBERT.6 Given the query q, for each token in the sentence si, the model assigns a score pair (ai j, bi j) for 1 \u2264j \u2264|si| denoting the pre-softmax scores (i.e., logits) of the start and end of the answer span, respectively. The model is \ufb01ne-tuned on SQuAD to minimize the negative log-likelihood on the correct beginning and ending indices of the answer spans. To compute relevance, we let \u03c1(q, si) := maxj,k max{ai j, bi k}. Our preliminary experiments showed much better quality with such a formulation compared to using log-probabilities, hinting that logits are more informative for relevance estimation in span-based models. BERT and T5 on MS MARCO. We examined pretrained BioBERT, BERT, and T5 (Raffel et al., 2019) \ufb01ne-tuned on MS MARCO (Bajaj et al., 2016). In addition to BERT and BioBERT to mirror the above conditions, we picked T5 for its state-of-the-art effectiveness on newswire retrieval and competitive effectiveness on MS MARCO (Nogueira et al., 2020). Unlike the rest of the transformer models, vanilla BERT uses the uncased tokenizer. To reiterate, we evaluated the base variant for each model here. T5 is \ufb01ne-tuned by maximizing the logprobability of generating the output token \u27e8true\u27e9 when a pair of query and relevant document is provided while maximizing that of the output token \u27e8false\u27e9with a pair of query and non-relevant document. See Nogueira et al. (2020) for details. Once \ufb01ne-tuned, we use log p(\u27e8true\u27e9|q, si) as the score \u03c1(q, si) for ranking sentence relevance. To \ufb01ne-tune BERT and BioBERT, we followed the standard BERT procedure (Devlin et al., 2019) and trained the sequence classi\ufb01cation model endto-end to minimize the negative log-likelihood on the labeled query\u2013document pairs. 6bioasq-biobert GitHub repo \f# Model NL Question Keyword Query P@1 R@3 MRR P@1 R@3 MRR 1 Random 0.012 0.034 \u2013 0.012 0.034 \u2013 2 BM25 0.150 0.216 0.243 0.150 0.216 0.243 3 BERT (unsupervised) 0.081 0.117 0.159 0.073 0.164 0.187 4 SciBERT (unsupervised) 0.040 0.056 0.099 0.024 0.064 0.094 5 BioBERT (unsupervised) 0.097 0.142 0.170 0.129 0.145 0.185 6 BERT (\ufb01ne-tuned on MS MARCO) 0.194 0.315 0.329 0.234 0.306 0.342 7 BioBERT (\ufb01ne-tuned on SQuAD) 0.161 0.403 0.336 0.056 0.093 0.135 8 BioBERT (\ufb01ne-tuned on MS MARCO) 0.194 0.313 0.312 0.185 0.330 0.322 9 T5 (\ufb01ne-tuned on MS MARCO) 0.282 0.404 0.415 0.210 0.376 0.360 Table 1: Effectiveness of the models examined in this paper. 5 Results Evaluation results are shown in Table 1. 
All \ufb01gures represent micro-averages across each question\u2013answer pair due to data imbalance at the question level. We present results with the wellformed natural language question as input (left) as well as the keyword queries (right). For P@1 and R@3, we analytically compute the effectiveness of a random baseline, reported in row 1; as a sanity check, all our techniques outperform it. The simple BM25 baseline is surprisingly effective (row 2), outperforming the unsupervised neural approaches (rows 3\u20135) on both natural language questions and keyword queries. For both types, BM25 leads by a large margin across all metrics. These results suggest that in a deployed system, we should pick BM25 over the unsupervised neural methods in practice, since it is also much more resource ef\ufb01cient. Of the unsupervised neural techniques, however, BioBERT achieves the highest effectiveness (row 5), beating both vanilla BERT (row 3) and SciBERT (row 4). The comparison between these three models allows us to quantify the impact of domain adaptation\u2014noting, of course, that the target domains of both BioBERT and SciBERT may still differ from CORD-19. We see that BioBERT does indeed improve over vanilla BERT, more so on keyword queries than on natural language questions, with the latter improvement quite substantial (over \ufb01ve points in P@1). SciBERT, on the other hand, performs worse than vanilla BERT (both on natural language questions and keyword queries), suggesting that its target is likely out of domain with respect to CORD-19. Our out-of-domain supervised models are much more effective than their unsupervised counterparts, suggesting bene\ufb01cial transfer effects. When \ufb01ne-tuned on MS MARCO, BERT and BioBERT (rows 6 and 8) achieve comparable effectiveness with natural language input, although there is a bit more variation with keyword queries. This is quite surprising, as BioBERT appears to be more effective than vanilla BERT in the unsupervised setting. This suggests that \ufb01ne-tuning on out-ofdomain MS MARCO is negating the domain adaptation pretraining in BioBERT. Comparing BioBERT \ufb01ne-tuned on SQuAD and MS MARCO (rows 7 vs. 8), we \ufb01nd comparable effectiveness on natural language questions; \ufb01ne-tuning on SQuAD yields lower P@1 but higher R@3 and higher MRR. On keyword queries, however, the effectiveness of BioBERT \ufb01ne-tuned on SQuAD is quite low, likely due to the fact that SQuAD comprises only well-formed natural language questions (unlike MS MARCO, which has more diverse queries). Finally, we observe that T5 achieves the highest overall effectiveness for all but P@1 on keyword queries. These results are consistent with Nogueira et al. (2020) and provide additional evidence that encoder\u2013decoder transformer models represent a promising new direction for search, question answering, and related tasks. Looking at the out-of-domain supervised transformer models (including T5) on the whole, we see that models generally perform better with natural language questions than with keyword queries\u2014although vanilla BERT is an outlier here, especially in terms of P@1. This shows the poten\ftial value of users posing well-formed natural language questions, even though they may degrade the effectiveness of term-based matching since well-formed questions sometimes introduce extraneous distractor words, for example, \u201ctype\u201d in a question that begins with \u201cWhat type of...\u201d (since \u201ctype\u201d isn\u2019t usually a stopword). 
Thus, in a multistage architecture, the optimal keyword queries used for initial retrieval might differ substantially from the natural language questions fed into downstream neural architectures. Better understanding of these differences is a potential direction for future research. 6 Related Work and Discussion It is quite clear that CovidQA does not have suf\ufb01cient examples to train QA models in a supervised manner. However, we believe that our dataset can be helpful as a test set for guiding NLP research, seeing that there are no comparable resources (as far as we know). We emphasize that our efforts are primarily meant as a stopgap until the community can build more substantial evaluation resources. The signi\ufb01cant effort (both in terms of money and labor) that is required to create high-quality evaluation products means that their construction constitutes large, resource-intensive efforts\u2014and hence slow. As a concrete example, for document retrieval, systems for searching CORD-19 were available within a week or so after the initial release of the corpus in mid-March. However, a formal evaluation effort led by NIST did not kick off until mid-April, and relevance judgments will not be available until early May (more than a month after dozens of systems have been deployed online). In the meantime, researchers are left without concrete guidance for developing ranking algorithms, unless they undertake the necessary effort themselves to build test collections\u2014but this level of effort is usually beyond the capabilities of individual teams, not to mention the domain expertise required. There are, of course, previous efforts in building QA datasets in the biomedical domain. The most noteworthy is BioASQ (Tsatsaronis et al., 2015), a series of challenges on biomedical semantic indexing and question answering. BioASQ does provide datasets for biomedical question answering, but based on manual examination, those questions seem quite different from the tasks in the Kaggle data challenge, and thus it is unclear if a more domain-general dataset could be useful for information needs related to COVID-19. Nevertheless, in parallel, we are exploring how we might rapidly retarget the BioASQ data for our purposes. There is no doubt that organizations with more resources and access to domain experts will build a larger, higher-quality QA dataset for COVID-19 in the future. In the meantime, the alternative is a stopgap such as our CovidQA dataset, creating a alternate private test collection, or something like the Mark I Eyeball.7 We hope that our dataset can provide some value to guide ongoing NLP efforts before it is superseded by something better. There are, nevertheless, a few potential concerns about the current dataset that are worth discussing. The \ufb01rst obvious observation is that building a QA dataset for COVID-19 requires domain knowledge (e.g., medicine, genomics, etc., depending on the type of question)\u2014yet none of the co-authors have such domain knowledge. We overcame this by building on knowledge that has already been ostensibly curated by experts with the relevant domain knowledge. According to Kaggle, notebooks submitted by contributors are vetted by \u201cepidemiologists, MDs, and medical students\u201d (Kaggle\u2019s own description), and each answer table provides the names of the curators. A quick check of these curators\u2019 pro\ufb01les does suggest that they possess relevant domain knowledge. 
While pro\ufb01les are self-authored, we don\u2019t have any reason to question their veracity. Given that our own efforts involved mapping already vetted answers to spans within the source articles, we do not think that our lack of domain expertise is especially problematic. There is, however, a limitation in the current dataset, in that we lack \u201cno answer\u201d documents. That is, all articles are already guaranteed to have the answer in it; the system\u2019s task is to \ufb01nd it. This is an unrealistic assumption in our actual deployment scenario at the end of a multistage architecture (see Section 3). Instead, it would be desirable to evaluate a model\u2019s ability to detect when the answer is not present in the document\u2014another insight from the QA literature that dates back nearly two decades (Voorhees, 2001). Note this limitation applies to BioASQ as well. We hope to address this issue in the near future, and have a few ideas for how to gather such \u201cno answer\u201d documents. The two-stage design of the 7Wikipedia: Visual Inspection \fKaggle curation effort (raw notebooks, which are then vetted by hand) means that results recorded in raw notebooks that do not appear in the \ufb01nal answer tables may serve as a source for such documents. We have not worked through the details of how this might be operationalized, but this idea seems like a promising route. 7" + }, + { + "url": "http://arxiv.org/abs/1903.12136v1", + "title": "Distilling Task-Specific Knowledge from BERT into Simple Neural Networks", + "abstract": "In the natural language processing literature, neural networks are becoming\nincreasingly deeper and complex. The recent poster child of this trend is the\ndeep language representation model, which includes BERT, ELMo, and GPT. These\ndevelopments have led to the conviction that previous-generation, shallower\nneural networks for language understanding are obsolete. In this paper,\nhowever, we demonstrate that rudimentary, lightweight neural networks can still\nbe made competitive without architecture changes, external training data, or\nadditional input features. We propose to distill knowledge from BERT, a\nstate-of-the-art language representation model, into a single-layer BiLSTM, as\nwell as its siamese counterpart for sentence-pair tasks. Across multiple\ndatasets in paraphrasing, natural language inference, and sentiment\nclassification, we achieve comparable results with ELMo, while using roughly\n100 times fewer parameters and 15 times less inference time.", + "authors": "Raphael Tang, Yao Lu, Linqing Liu, Lili Mou, Olga Vechtomova, Jimmy Lin", + "published": "2019-03-28", + "updated": "2019-03-28", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG" + ], + "main_content": "Introduction In the natural language processing (NLP) literature, the march of the neural networks has been an unending yet predictable one, with new architectures constantly surpassing previous ones in not only performance and supposed insight but also complexity and depth. In the midst of all this neural progress, it becomes easy to dismiss earlier, \u201c\ufb01rst-generation\u201d neural networks as obsolete. Ostensibly, this appears to be true: Peters et al. (2018) show that using pretrained deep word representations achieves state of the art on a variety of tasks. Recently, Devlin et al. (2018) have pushed this line of work even further with bidirectional encoder representations from transformers (BERT), deeper models that greatly improve \u2217Equal contribution. 
Ordering decided by coin toss. state of the art on more tasks. More recently, OpenAI has described GPT-2, a state-of-the-art, larger transformer model trained on even more data.1 Such large neural networks are, however, problematic in practice. Due to the large number of parameters, BERT and GPT-2, for example, are undeployable in resource-restricted systems such as mobile devices. They may be inapplicable in realtime systems either, because of low inference-time ef\ufb01ciency. Furthermore, the continued slowdown of Moore\u2019s Law and Dennard scaling (Han, 2017) suggests that there exists a point in time when we must compress our models and carefully evaluate our choice of the neural architecture. In this paper, we propose a simple yet effective approach that transfers task-speci\ufb01c knowledge from BERT to a shallow neural architecture\u2014in particular, a bidirectional long short-term memory network (BiLSTM). Our motivation is twofold: we question whether a simple architecture actually lacks representation power for text modeling, and we wish to study effective approaches to transfer knowledge from BERT to a BiLSTM. Concretely, we leverage the knowledge distillation approach (Ba and Caruana, 2014; Hinton et al., 2015), where a larger model serves as a teacher and a small model learns to mimic the teacher as a student. This approach is model agnostic, making knowledge transfer possible between BERT and a different neural architecture, such as a single-layer BiLSTM, in our case. To facilitate effective knowledge transfer, however, we often require a large, unlabeled dataset. The teacher model provides the probability logits and estimated labels for these unannotated samples, and the student network learns from the teacher\u2019s outputs. In computer vision, unlabeled images are usually easy to obtain through augmenting the data using rotation, additive noise, 1 https://goo.gl/Frmwqe arXiv:1903.12136v1 [cs.CL] 28 Mar 2019 \fand other distortions. However, obtaining additional, even unlabeled samples for a speci\ufb01c task can be dif\ufb01cult in NLP. Traditional data augmentation in NLP is typically task-speci\ufb01c (Wang and Eisner, 2016; Serban et al., 2016) and dif\ufb01cult to extend to other NLP tasks. To this end, we further propose a novel, rule-based textual data augmentation approach for constructing the knowledge transfer set. Although our augmented samples are not \ufb02uent natural language sentences, experimental results show that our approach works surprisingly well for knowledge distillation. We evaluate our approach on three tasks in sentence classi\ufb01cation and sentence matching. Experiments show that our knowledge distillation procedure signi\ufb01cantly outperforms training the original simpler network alone. To our knowledge, we are the \ufb01rst to explore distilling knowledge from BERT. With our approach, a shallow BiLSTMbased model achieves results comparable to Embeddings from Language Models (ELMo; Peters et al., 2018), but uses around 100 times fewer parameters and performs inference 15 times faster. Therefore, our model becomes a state-of-the-art \u201csmall\u201d model for neural NLP. 2 Related Work In the past, researchers have developed and applied various neural architectures for NLP, including convolutional neural networks (Kalchbrenner et al., 2014; Kim, 2014), recurrent neural networks (Mikolov et al., 2010, 2011; Graves, 2013), and recursive neural networks (Socher et al., 2010, 2011). 
These generic architectures can be applied to tasks like sentence classi\ufb01cation (Zhang et al., 2015; Conneau et al., 2016) and sentence matching (Wan et al., 2016; He et al., 2016), but the model is trained only on data of a particular task. Recently, Peters et al. (2018) introduce Embeddings from Language Models (ELMo), an approach for learning high-quality, deep contextualized representations using bidirectional language models. With ELMo, they achieve large improvements on six different NLP tasks. Devlin et al. (2018) propose Bidirectional Encoder Representations from Transformers (BERT), a new language representation model that obtains state-ofthe-art results on eleven natural language processing tasks. Trained with massive corpora for language modeling, BERT has strong syntactic ability (Goldberg, 2019) and captures generic language features. A typical downstream use of BERT is to \ufb01ne-tune it for the NLP task at hand. This improves training ef\ufb01ciency, but for inference ef\ufb01ciency, these models are still considerably slower than traditional neural networks. Model compression. A prominent line of work is devoted to compressing large neural networks to accelerate inference. Early pioneering works include LeCun et al. (1990), who propose a local error-based method for pruning unimportant weights. Recently, Han et al. (2015) propose a simple compression pipeline, achieving 40 times reduction in model size without hurting accuracy. Unfortunately, these techniques induce irregular weight sparsity, which precludes highly optimized computation routines. Thus, others explore pruning entire \ufb01lters (Li et al., 2016; Liu et al., 2017), with some even targeting device-centric metrics, such as \ufb02oating-point operations (Tang et al., 2018) and latency (Chen et al., 2018). Still other studies examine quantizing neural networks (Wu et al., 2018); in the extreme, Courbariaux et al. (2016) propose binarized networks with both binary weights and binary activations. Unlike the aforementioned methods, the knowledge distillation approach (Ba and Caruana, 2014; Hinton et al., 2015) enables the transfer of knowledge from a large model to a smaller, \u201cstudent\u201d network, which is improved in the process. The student network can use a completely different architecture, since distillation works at the output level. This is important in our case, since our research objective is to study the representation power of shallower neural networks for language understanding, while simultaneously compressing models like BERT; thus, we follow this approach in our work. In the NLP literature, it has previously been used in neural machine translation (Kim and Rush, 2016) and language modeling (Yu et al., 2018). 3 Our Approach First, we choose the desired teacher and student models for the knowledge distillation approach. Then, we describe our distillation procedure, which comprises two major components: \ufb01rst, the addition of a logits-regression objective, and second, the construction of a transfer dataset, which augments the training set for more effective knowledge transfer. \f... ... 1 2 n a b c e d h g i j f Figure 1: The BiLSTM model for single-sentence classi\ufb01cation. The labels are (a) input embeddings, (b) BiLSTM, (c, d) backward and forward hidden states, respectively, (e, g) fully-connected layer; (e) with ReLU, (f) hidden representation, (h) logit outputs, (i) softmax activation, and (j) \ufb01nal probabilities. 
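As a rough companion to Figure 1, the following PyTorch sketch (ours, not the authors' code) wires up the single-sentence student; the layer sizes are placeholders drawn from the ranges reported later in Section 4.2.

```python
import torch
import torch.nn as nn


class BiLSTMStudent(nn.Module):
    """Single-sentence student of Figure 1: embeddings (a) -> single-layer BiLSTM (b)
    -> concatenated final backward/forward states (c, d) -> ReLU fully-connected
    layer (e, f) -> output layer (g) producing logits (h); the softmax (i, j) is
    applied outside the module, e.g., by the loss."""

    def __init__(self, vocab_size, embed_dim=300, hidden_dim=300, fc_dim=400, num_labels=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.bilstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden_dim, fc_dim)
        self.out = nn.Linear(fc_dim, num_labels)

    def encode(self, token_ids):
        _, (h_n, _) = self.bilstm(self.embed(token_ids))
        # h_n: (num_directions, batch, hidden_dim); concatenate the two directions.
        return torch.cat([h_n[0], h_n[1]], dim=-1)

    def forward(self, token_ids):
        return self.out(torch.relu(self.fc(self.encode(token_ids))))
```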
3.1 Model Architecture For the teacher network, we use the pretrained, \ufb01ne-tuned BERT (Devlin et al., 2018) model, a deep, bidirectional transformer encoder that achieves state of the art on a variety of language understanding tasks. From an input sentence (pair), BERT computes a feature vector h \u2208Rd, upon which we build a classi\ufb01er for the task. For single-sentence classi\ufb01cation, we directly build a softmax layer, i.e., the predicted probabilities are y(B) = softmax(Wh), where W \u2208Rk\u00d7d is the softmax weight matrix and k is the number of labels. For sentence-pair tasks, we concatenate the BERT features of both sentences and feed them to a softmax layer. During training, we jointly \ufb01netune the parameters of BERT and the classi\ufb01er by maximizing the probability of the correct label, using the cross-entropy loss. In contrast, our student model is a single-layer BiLSTM with a non-linear classi\ufb01er. After feeding the input word embeddings into the BiLSTM, the hidden states of the last step in each direction are concatenated and fed to a fully connected layer with recti\ufb01ed linear units (ReLUs), whose output is then passed to a softmax layer for classi\ufb01cation (Figure 1). For sentence-pair tasks, we share BiLSTM encoder weights in a siamese architecture between the two sentence encoders, producing sentence vectors hs1 and hs2 (Figure 2). We then apply a standard concatenate\u2013compare operation (Wang et al., 2018) between the two sentence vectors: f(hs1, hs2) = [hs1, hs2, hs1 \u2299 hs2, |hs1 \u2212hs2|], where \u2299denotes elementwise multiplication. We feed this output to a ReLU... a b e c f ... h g i j d Input #1 Input #2 Figure 2: The siamese BiLSTM model for sentence matching, with shared encoder weights for both sentences. The labels are (a) BiLSTM, (b, c) \ufb01nal backward and forward hidden states, respectively, (d) concatenate\u2013compare unit, (e, g) fully connected layer; (e) with ReLU, (f) hidden representation, (h) logit outputs, (i) softmax activation, and (j) \ufb01nal probabilities. activated classi\ufb01er. It should be emphasized that we restrict the architecture engineering to a minimum to revisit the representation power of BiLSTM itself. We avoid any additional tricks, such as attention and layer normalization. 3.2 Distillation Objective The distillation approach accomplishes knowledge transfer at the output level; that is, the student network learns to mimic a teacher network\u2019s behavior given any data point. In particular, Ba and Caruana (2014) posit that, in addition to a one-hot predicted label, the teacher\u2019s predicted probability is also important. In binary sentiment classi\ufb01cation, for example, some sentences have a strong sentiment polarity, whereas others appear neutral. If we use only the teacher\u2019s predicted one-hot label to train the student, we may lose valuable information about the prediction uncertainty. The discrete probability output of a neural network is given by e yi = softmax(z) = exp{w\u22a4 i h} P j exp{w\u22a4 j h} (1) where wi denotes the ith row of softmax weight W, and z is equivalent to w\u22a4h. The argument of the softmax function is known as logits. Training on logits makes learning easier for the student model since the relationship learned by the teacher \fmodel across all of the targets are equally emphasized (Ba and Caruana, 2014). 
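To complement the single-sentence sketch above, here is a similarly brief, purely illustrative sketch of the siamese variant in Figure 2, reusing one shared encoder and the concatenate–compare features described in Section 3.1 (it assumes the BiLSTMStudent class from the previous block).

```python
import torch
import torch.nn as nn


class SiameseStudent(nn.Module):
    """Sentence-pair student of Figure 2: a single shared BiLSTM encoder, the
    concatenate-compare features [h1, h2, h1 * h2, |h1 - h2|], then a ReLU
    fully-connected layer and an output layer producing logits."""

    def __init__(self, shared_encoder, hidden_dim=300, fc_dim=400, num_labels=3):
        super().__init__()
        self.encoder = shared_encoder          # a BiLSTMStudent from the previous sketch
        self.fc = nn.Linear(4 * 2 * hidden_dim, fc_dim)
        self.out = nn.Linear(fc_dim, num_labels)

    def forward(self, ids_a, ids_b):
        h1, h2 = self.encoder.encode(ids_a), self.encoder.encode(ids_b)
        feats = torch.cat([h1, h2, h1 * h2, torch.abs(h1 - h2)], dim=-1)
        return self.out(torch.relu(self.fc(feats)))
```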
The distillation objective is to penalize the mean-squared-error (MSE) loss between the student network\u2019s logits against the teacher\u2019s logits: Ldistill = ||z z z(B) \u2212z z z(S)||2 2 (2) where z z z(B) and z z z(S) are the teacher\u2019s and student\u2019s logits, respectively. Other measures such as cross entropy with soft targets are viable as well (Hinton et al., 2015); however, in our preliminary experiments, we found MSE to perform slightly better. At training time, the distilling objective can be used in conjunction with a traditional crossentropy loss against a one-hot label t, given by L = \u03b1 \u00b7 LCE + (1 \u2212\u03b1) \u00b7 Ldistill (3) = \u2212\u03b1 X i ti log y(S) i \u2212(1 \u2212\u03b1)||z z z(B) \u2212z z z(S)||2 2 When distilling with a labeled dataset, the one-hot target t is simply the ground-truth label. When distilling with an unlabeled dataset, we use the predicted label by the teacher, i.e., ti = 1 if i = argmax y(B) and 0 otherwise. 3.3 Data Augmentation for Distillation In the distillation approach, a small dataset may not suf\ufb01ce for the teacher model to fully express its knowledge (Ba and Caruana, 2014). Therefore, we augment the training set with a large, unlabeled dataset, with pseudo-labels provided by the teacher, to aid in effective knowledge distillation. Unfortunately, data augmentation in NLP is usually more dif\ufb01cult than in computer vision. First, there exist a large number of homologous images in computer vision tasks. CIFAR-10, for example, is a subset of the 80 million tiny images dataset (Krizhevsky, 2009). Second, it is possible to synthesize a near-natural image by rotating, adding noise, and other distortions, but if we manually manipulate a natural language sentence, the sentence may not be \ufb02uent, and its effect in NLP data augmentation less clear. In our work, we propose a set of heuristics for task-agnostic data augmentation: we use the original sentences in the small dataset as blueprints, and then modify them with our heuristics, a process analogous to image distortion. Speci\ufb01cally, we randomly perform the following operations. Masking. With probability pmask, we randomly replace a word with [MASK], which corresponds to an unknown token in our models and the masked word token in BERT. Intuitively, this rule helps to clarify the contribution of each word toward the label, e.g., the teacher network produces less con\ufb01dent logits for \u201cI [MASK] the comedy\u201d than for \u201cI loved the comedy.\u201d POS-guided word replacement. With probability ppos, we replace a word with another of the same POS tag. To preserve the original training distribution, the new word is sampled from the unigram word distribution re-normalized by the partof-speech (POS) tag. This rule perturbs the semantics of each example, e.g., \u201cWhat do pigs eat?\u201d is different from \u201cHow do pigs eat?\u201d n n n-gram sampling. With probability png, we randomly sample an n-gram from the example, where n is randomly selected from {1, 2, . . . , 5}. This rule is conceptually equivalent to dropping out all other words in the example, which is a more aggressive form of masking. Our data augmentation procedure is as follows: given a training example {w1, . . . wn}, we iterate over the words, drawing from the uniform distribution Xi \u223cUNIFORM[0, 1] for each wi. If Xi < pmask, we apply masking to wi. If pmask \u2264 Xi < pmask + ppos, we apply POS-guided word replacement. 
We treat masking and POS-guided swapping as mutually exclusive: once one rule is applied, the other is disregarded. After iterating through the words, with probability png, we apply n-gram sampling to this entire synthetic example. The \ufb01nal synthetic example is appended to the augmented, unlabeled dataset. We apply this procedure niter times per example to generate up to niter samples from a single example, with any duplicates discarded. For sentencepair datasets, we cycle through augmenting the \ufb01rst sentence only (holding the second \ufb01xed), the second sentence only (holding the \ufb01rst \ufb01xed), and both sentences. 4 Experimental Setup For BERT, we use the large variant BERTLARGE (described below) as the teacher network, starting with the pretrained weights and following the original, task-speci\ufb01c \ufb01ne-tuning procedure (Devlin et al., 2018). We \ufb01ne-tune four models using the Adam optimizer with learning rates {2, 3, 4, 5} \u00d7 10\u22125, picking the best model on the validation set. We avoid data augmentation during \ufb01ne-tuning. \fFor our models, we feed the original dataset together with the synthesized examples to the taskspeci\ufb01c, \ufb01ne-tuned BERT model to obtain the predicted logits. We denote our distilled BiLSTM trained on soft logit targets as BiLSTMSOFT, which corresponds to choosing \u03b1 = 0 in Section 3.2. Preliminary experiments suggest that using only the distillation objective works best. 4.1 Datasets We conduct experiments on the General Language Understanding Evaluation (GLUE; Wang et al., 2018) benchmark, a collection of six natural language understanding tasks that are classi\ufb01ed into three categories: single-sentence tasks, similarity and paraphrase tasks, and inference tasks. Due to restrictions in time and computational resources, we choose the most widely used dataset from each category, as detailed below. SST-2. Stanford Sentiment Treebank 2 (SST-2; Socher et al., 2013) comprises single sentences extracted from movie reviews for binary sentiment classi\ufb01cation (positive vs. negative). Following GLUE, we consider sentence-level sentiment only, ignoring the sentiment labels of phrases provided by the original dataset. MNLI. The Multi-genre Natural Language Inference (MNLI; Williams et al., 2017) corpus is a large-scale, crowdsourced entailment classi\ufb01cation dataset. The objective is to predict the relationship between a pair of sentences as one of entailment, neutrality, or contradiction. MNLI-m uses development and test sets that contain the same genres from the training set, while MNLI-mm represents development and test sets from the remaining, mismatched genres. QQP. Quora Question Pairs (QQP; Shankar Iyer and Csernai, 2017) consists of pairs of potentially duplicate questions collected from Quora, a question-and-answer website. The binary label of each question pair indicates redundancy. 4.2 Hyperparameters We choose either 150 or 300 hidden units for the BiLSTM, and 200 or 400 units in the ReLUactivated hidden layer, depending on the validation set performance. Following Kim (2014), we use the traditional 300-dimensional word2vec embeddings trained on Google News and multichannel embeddings. For optimization, we use AdaDelta (Zeiler, 2012) with its default learning rate of 1.0 and \u03c1 = 0.95. For SST-2, we use a batch size of 50; for MNLI and QQP, due to their larger size, we choose 256 for the batch size. 
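Tying together the three heuristics of Section 3.3, the sketch below (ours, not the released code) builds the transfer set; the POS tags and the frequency-weighted candidate lists per tag are assumed to be precomputed with any off-the-shelf tagger, and the probabilities are the values the paper fixes, as noted just after this sketch.

```python
import random

P_MASK, P_POS, P_NG = 0.1, 0.1, 0.25   # values the paper fixes across all datasets


def augment(tokens, pos_tags, words_by_tag, rng=random):
    """One synthetic example: per token, apply masking or POS-guided replacement
    (mutually exclusive), then optionally keep only a sampled n-gram (n in 1..5)."""
    out = []
    for tok, tag in zip(tokens, pos_tags):
        x = rng.random()
        if x < P_MASK:
            out.append("[MASK]")                        # masking
        elif x < P_MASK + P_POS:
            # words_by_tag[tag] is assumed to be a frequency-weighted candidate list,
            # approximating the unigram distribution re-normalized by POS tag.
            out.append(rng.choice(words_by_tag[tag]))
        else:
            out.append(tok)
    if rng.random() < P_NG:                             # n-gram sampling
        n = min(rng.choice([1, 2, 3, 4, 5]), len(out))
        start = rng.randrange(0, len(out) - n + 1)
        out = out[start:start + n]
    return out


def build_transfer_set(examples, n_iter=10):
    """Apply the procedure n_iter times per example, discarding duplicates."""
    seen, synthetic = set(), []
    for tokens, pos_tags, words_by_tag in examples:
        for _ in range(n_iter):
            candidate = tuple(augment(tokens, pos_tags, words_by_tag))
            if candidate not in seen:
                seen.add(candidate)
                synthetic.append(list(candidate))
    return synthetic
```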
For our dataset augmentation hyperparameters, we \ufb01x pmask = ppos = 0.1 and png = 0.25 across all datasets. These values have not been tuned at all on the datasets\u2014these are the \ufb01rst values we chose. We choose niter = 20 for SST-2 and niter = 10 for both MNLI and QQP, since they are larger. 4.3 Baseline Models BERT (Devlin et al., 2018) is a multi-layer, bidirectional transformer encoder that comes in two variants: BERTBASE and the larger BERTLARGE. BERTBASE comprises 12 layers, 768 hidden units, 12 self-attention heads, and 110M parameters. BERTLARGE uses 24 layers, 1024 hidden units, 16 self-attention heads, and 340M parameters. OpenAI GPT (Radford et al., 2018) is, like BERT, a generative pretrained transformer (GPT) encoder \ufb01ne-tuned on downstream tasks. Unlike BERT, however, GPT is unidirectional and only makes use of previous context at each time step. GLUE ELMo baselines. In the GLUE paper, Wang et al. (2018) provide a BiLSTM-based model baseline trained on top of ELMo and jointly \ufb01ne-tuned across all tasks. This model contains 4096 units in the ELMo BiLSTM and more than 93 million total parameters. In the BERT paper, Devlin et al. (2018) provide the same model but a result slightly different from Wang et al. (2018). For fair comparison, we report both results. 5 Results and Discussion We present the results of our models as well as baselines in Table 1. For QQP, we report both F1 and accuracy, since the dataset is slightly unbalanced. Following GLUE, we report the average score of each model on the datasets. 5.1 Model Quality To verify the correctness of our implementation, we train the base BiLSTM model on the original labels, without using distillation (row 7). Across all three datasets, we achieve scores comparable with BiLSTMs from previous works (rows 8 and 9), suggesting that our implementation is fair. Note that, on MNLI, the two baselines differ by 4% in accuracy (rows 8 and 9). None of the nondistilled BiLSTM baselines outperform BERT\u2019s \f# Model SST-2 QQP MNLI-m MNLI-mm Acc F1/Acc Acc Acc 1 BERTLARGE (Devlin et al., 2018) 94.9 72.1/89.3 86.7 85.9 2 BERTBASE (Devlin et al., 2018) 93.5 71.2/89.2 84.6 83.4 3 OpenAI GPT (Radford et al., 2018) 91.3 70.3/88.5 82.1 81.4 4 BERT ELMo baseline (Devlin et al., 2018) 90.4 64.8/84.7 76.4 76.1 5 GLUE ELMo baseline (Wang et al., 2018) 90.4 63.1/84.3 74.1 74.5 6 Distilled BiLSTMSOFT 90.7 68.2/88.1 73.0 72.6 7 BiLSTM (our implementation) 86.7 63.7/86.2 68.7 68.3 8 BiLSTM (reported by GLUE) 85.9 61.4/81.7 70.3 70.8 9 BiLSTM (reported by other papers) 87.6\u2020 \u2013 /82.6\u2021 66.9* 66.9* Table 1: Test results on different datasets. The BiLSTM results reported by other papers are drawn from Zhou et al. (2016),\u2020 Wang et al. (2017),\u2021 and Williams et al. (2017).\u2217All of our test results are obtained from the GLUE benchmark website. ELMo baseline (row 4)\u2014our implementation, although attaining a higher accuracy for QQP, falls short in F1 score. We apply our distillation approach of matching logits using the augmented training dataset, and achieve an absolute improvement of 1.9\u2013 4.5 points against our base BiLSTM. On SST-2 and QQP, we outperform the best reported ELMo model (row 4), coming close to GPT. On MNLI, our results trail ELMo\u2019s by a few points; however, they still represent a 4.3-point improvement against our BiLSTM, and a 1.8\u20132.7-point increase over the previous best BiLSTM (row 8). 
Overall, our distilled model is competitive with two previous implementations of ELMo BiLSTMs (rows 4\u2013 5), suggesting that shallow BiLSTMs have greater representation power than previously thought. We do not, however, outperform the deep transformer models (rows 1\u20133), doing 4\u20137 points worse, on average. Nevertheless, our model has much fewer parameters and better ef\ufb01ciency, as detailed in the following section. 5.2 Inference Ef\ufb01ciency For our inference speed and parameter analysis, we use the open-source PyTorch implementations for BERT2 and ELMo (Gardner et al., 2017). On a single NVIDIA V100 GPU, we perform model inference with a batch size of 512 on all 67350 sentences of the SST-2 training set. As shown in Table 2, our single-sentence model uses 98 and 349 times fewer parameters than ELMo and BERTLARGE, respectively, and is 15 and 434 times 2 https://goo.gl/iRPhjP # of Par. Inference Time BERTLARGE 335 (349\u00d7) 1060 (434\u00d7) ELMo 93.6 (98\u00d7) 36.71 (15\u00d7) BiLSTMSOFT 0.96 (1\u00d7) 2.44 (1\u00d7) Table 2: Single-sentence model size and inference speed on SST-2. # of Par. denotes number of millions of parameters, and inference time is in seconds. faster. At 2.2 million parameters, the variant with 300-dimensional LSTM units is twice as large, though still substantially smaller than ELMo. For sentence-pair tasks, the siamese counterpart uses no pairwise word interactions, unlike previous state of the art (He and Lin, 2016); its runtime thus scales linearly with sentence length. 6" + }, + { + "url": "http://arxiv.org/abs/1812.07754v1", + "title": "Streaming Voice Query Recognition using Causal Convolutional Recurrent Neural Networks", + "abstract": "Voice-enabled commercial products are ubiquitous, typically enabled by\nlightweight on-device keyword spotting (KWS) and full automatic speech\nrecognition (ASR) in the cloud. ASR systems require significant computational\nresources in training and for inference, not to mention copious amounts of\nannotated speech data. KWS systems, on the other hand, are less\nresource-intensive but have limited capabilities. On the Comcast Xfinity X1\nentertainment platform, we explore a middle ground between ASR and KWS: We\nintroduce a novel, resource-efficient neural network for voice query\nrecognition that is much more accurate than state-of-the-art CNNs for KWS, yet\ncan be easily trained and deployed with limited resources. On an evaluation\ndataset representing the top 200 voice queries, we achieve a low false alarm\nrate of 1% and a query error rate of 6%. Our model performs inference 8.24x\nfaster than the current ASR system.", + "authors": "Raphael Tang, Gefei Yang, Hong Wei, Yajie Mao, Ferhan Ture, Jimmy Lin", + "published": "2018-12-19", + "updated": "2018-12-19", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "main_content": "INTRODUCTION Most voice-enabled intelligent agents, such as Apple\u2019s Siri and the Amazon Echo, are powered by a combination of two technologies: lightweight keyword spotting (KWS) to detect a few pre-de\ufb01ned phrases within streaming audio (e.g., \u201cHey Siri\u201d) and full automatic speech recognition (ASR) to transcribe complete user utterances. In this work, we explore a middle ground: techniques for voice query recognition capable of handling a couple of hundred commands. Why is this an interesting point in the design space? 
On the one hand, this task is much more challenging than the (at most) a couple of dozen keywords handled by state-ofthe-art KWS systems [1, 2]. Their highly constrained vocabulary limits application to wake-word and simple command recognition. Furthermore, their use is constrained to detecting whether some audio contains a phrase, not exact transcriptions needed for voice query recognition. For example, \u2217Work done while interning at Comcast Labs in Washington, D.C. if \u201cYouTube\u201d were the keyword, KWS systems would make no distinction between the phrases \u201cquit YouTube\u201d and \u201copen YouTube\u201d\u2014this is obviously not suf\ufb01cient since they correspond to different commands. On the other hand, our formulation of voice query recognition was speci\ufb01cally designed to be far more lightweight than full ASR models, typically recurrent neural networks that comprise tens of millions of parameters, take weeks to train and \ufb01ne tune, and require enormous investment in gathering training data. Thus, full ASR typically incurs high computational costs during inference time and have large memory footprints [3]. The context of our work is the Comcast X\ufb01nity X1 entertainment platform, which provides a \u201cvoice remote\u201d that accepts spoken queries from users. A user, for example, might initiate a voice query with a button push on the remote and then say \u201cCNN\u201d as an alternative to remembering the exact channel number or \ufb02ipping through channel guides. Voice queries are a powerful feature, since modern entertainment packages typically have hundreds of channels and remote controls have become too complicated for many users to operate. On average, X1 accepts tens of millions of voice queries per day, totaling 1.7 terabytes of audio, equal to 15,000 spoken hours. A middle ground between KWS and full ASR is particularly interesting in our application because of the Zip\ufb01an distribution of users\u2019 queries. The 200 most popular queries cover a signi\ufb01cant portion of monthly voice traf\ufb01c and accounts for millions of queries per day. The key contribution of this work is a novel, resource-ef\ufb01cient architecture for streaming voice query recognition on the Comcast X1. We show that existing KWS models are insuf\ufb01cient for this task, and that our models answer queries more than eight times faster than the current full ASR system, with a low false alarm rate (FAR) of 1.0% and query error rate (QER) of 6.0%. 2. RELATED WORK The typical approach to voice query recognition is to develop a full automatic speech recognition (ASR) system [4]. Open-source toolkits like Kaldi [5] provide ASR models to arXiv:1812.07754v1 [cs.CL] 19 Dec 2018 \fe ... ... ... ... l k j d f i h c . . . g PCEN b a Fig. 1. Illustration of our architecture. The labels are as follows: (A) raw audio waveform (B) streaming Mel\u2013PCEN \ufb01lterbank (C) PCEN features (D) causal convolution (E) GRU layer (F) feature extraction convolution (G) max-pool across time (H) output concatenation (I) 201-class output (J) DNN classi\ufb01er (K) long-term context modeling (L) short-term context modeling. researchers; however, state-of-the-art commercial systems frequently require thousands of hours of training data [6] and dozens of gigabytes for the combined acoustic and language models [3]. 
Furthermore, we argue that these systems are excessive for usage scenarios characterized by Zipf\u2019s Law, such as those often encountered in voice query recognition: for example, on the X1, the top 200 queries cover a signi\ufb01cant, disproportionate amount of our entire voice traf\ufb01c. Thus, to reduce computational requirements associated with training and running a full ASR system, we propose to develop a lightweight model for handling the top-K queries only. While our task is related to keyword spotting, KWS systems only strictly detect the mere occurrence of a phrase within audio, not the exact transcription, as in our task. Neural networks with both convolutional and recurrent components have been successfully used in keyword spotting [2, 7]; others use only convolutional neural networks (CNNs) [8, 1] and popular image classi\ufb01cation models [9]. 3. TASK AND MODEL Our precise task is to classify an audio clip as one of N + 1 classes, with N labels denoting N different voice queries and a single unknown label representing everything else. To improve responsiveness and hence the user experience, we impose the constraint that model inference executes in an online, streaming manner, de\ufb01ned as predictions that occur every 100 milliseconds and in constant time and space, with respect to the total audio input length. This enables software applications to display on-the-\ufb02y transcriptions of real-time speech, which is important for user satisfaction: we immediately begin processing speech input when the user depresses the trigger button on the X1 voice remote. 3.1. Input preprocessing First, we apply dataset augmentation to reduce generalization error in speech recognition models [10]. In our work, we randomly apply noise, band-pass \ufb01ltering, and pitch shifting to each audio sample. Speci\ufb01cally, we add a mixture of Gaussian and salt-and-pepper noise\u2014the latter is speci\ufb01cally chosen due to the voice remote microphone introducing such artifacts, since we notice \u201cclicks\u201d while listening to audio samples. For band-pass \ufb01ltering, we suppress by a factor of 0.5 the frequencies outside a range with random endpoints [a, b], where a and b roughly correspond to frequencies drawn uniformly from [0, 1.7] kHz and [1.8, 3.3] kHz, respectively. For pitch shifting, we apply a random shift of \u00b133 Hz. The augmentation procedure was veri\ufb01ed by ear to be reasonable. We then preprocess the dataset from raw audio waveform to forty-dimensional per-channel energy normalized (PCEN) [11] frames, with a window size of 30 milliseconds and a frame shift of 10 milliseconds. PCEN provides robustness to per-channel energy differences between near-\ufb01eld and far-\ufb01eld speech applications, where it is used to achieve the state of the art in keyword spotting [2, 11]. Conveniently, it handles streaming audio; in our application, the user\u2019s audio is streamed in real-time to our platform. As is standard in speech recognition applications, all audio is recorded in 16kHz, 16-bit mono-channel PCM format. 3.2. Model Architecture We draw inspiration from convolutional recurrent neural networks (ConvRNN) for text modeling [12], where it has achieved state of the art in sentence classi\ufb01cation. However, the model cannot be applied as-is to our task, since the bidirectional components violate our streaming constraint, and it was originally designed for no more than \ufb01ve output labels. Thus, we begin with this model as a template only. 
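As a rough companion to Section 3.1, the sketch below computes 40-dimensional PCEN features over 30-millisecond windows with a 10-millisecond frame shift from 16 kHz mono audio. The use of librosa, its default PCEN gain/bias/time constants, and the magnitude scaling factor are assumptions of this sketch; the front end described above operates on streaming audio, whereas this sketch loads a whole clip.

import librosa

def pcen_features(wav_path, sr=16000, n_mels=40, win_ms=30, hop_ms=10):
    # Load 16 kHz mono audio, compute a Mel spectrogram, then apply PCEN.
    y, _ = librosa.load(wav_path, sr=sr, mono=True)
    win = int(sr * win_ms / 1000)                # 480 samples (30 ms window)
    hop = int(sr * hop_ms / 1000)                # 160 samples (10 ms shift)
    mel = librosa.feature.melspectrogram(
        y=y, sr=sr, n_fft=win, win_length=win, hop_length=hop,
        n_mels=n_mels, power=1.0)
    # The exact PCEN parameters are not specified in the text, so librosa's
    # defaults (and its suggested input scaling) stand in for them here.
    feats = librosa.pcen(mel * (2 ** 31), sr=sr, hop_length=hop)
    return feats.T                               # (num_frames, 40)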
We illustrate our architecture in Figure 1, where the model can be best described as having three sequential components: \ufb01rst, it uses causal convolutions to model short-term speech context. Next, it feeds the short-term context into a gated recurrent unit (GRU) [13] layer and pools across time to model long-term context. Finally, it feeds the long-term context into \fa deep neural network (DNN) classi\ufb01er for our N + 1 voice query labels. Short-term context modeling. Given 40-dimensional PCEN inputs x1, . . . , xt, we \ufb01rst stack the frames to form a 2D input x1:t \u2208Rt\u00d740; see Figure 1, label C, where the x-axis represents 40-dimensional features and the y-axis time. Then, to model short-term context, we use a 2D causal convolution layer (Figure 1, label D) to extract feature vectors s1, . . . , st for s1:t = W \u00b7 x + b, where W \u2208Rc\u00d7(m\u00d7n) is the convolution weight, x\u2212m+2:0 is silence padding in the beginning, \u00b7 denotes valid convolution, and si is a context vector in Rc\u00d7f. Finally, we pass the outputs into a recti\ufb01ed linear (ReLU) activation and then a batch normalization layer, as is standard in image classi\ufb01cation. Since causal convolutions use a \ufb01xed number of past and current inputs only, the streaming constraint is necessarily maintained. Long-term context modeling. To model long-term context, we \ufb01rst \ufb02atten the short-term context vector per time step from si \u2208Rc\u00d7f to Rcf. Then, we feed them into a single unidirectional GRU layer (examine Figure 1, label E) consisting of k hidden units, yielding hidden outputs h1, . . . , ht, hi \u2208 Rk. Following text modeling work [12], we then use a 1D convolution \ufb01lter W \u2208Rd\u00d7k with ReLU activation to extract features from the hidden outputs, where d is the number of output channels. We max-pool these features across time (see Figure 1, label G) to obtain a \ufb01xed-length context cmax \u2208Rd. Finally, we concatenate cmax and ht for the \ufb01nal context vector, c \u2208Rk+d, as shown in Figure 1, label H. Clearly, these operations maintain the streaming constraint, since uni-directional GRUs and max-pooling across time require the storage of only the last hidden and maximum states, respectively. We also experimentally \ufb01nd that the max-pooling operation helps to propagate across time the strongest activations, which may be \u201cforgotten\u201d if only the last hidden output from the GRU were used as the context. DNN classi\ufb01er. Finally, we feed the context vector c into a small DNN with one hidden layer with ReLU activation, and a softmax output across the N + 1 voice query labels. For inference on streaming audio, we merely execute the DNN on the \ufb01nal context vector at a desired interval, such as every 100 milliseconds; in our models, we choose the number of hidden units r so that the classi\ufb01er is suf\ufb01ciently lightweight. 4. EVALUATION On our speci\ufb01c task, we choose N = 200 representing the top 200 queries on the X\ufb01nity X1 platform, altogether covering a signi\ufb01cant portion of all voice traf\ufb01c\u2014this subset corresponds to hundreds of millions of queries to the system per month. For each positive class, we collected 1,500 examples consisting of anonymized real data. For the negative class, we collected a larger set of 670K examples not containing any of the positive keywords. Thus, our dataset contains a total of # Type # Par. # Mult. Hyperparameters Short-term context modeling 1 C. 
Conv 15K 4.5M c, m, n = 250, 3, 20 2 BN 500 150K \u2013 Long-term context modeling 3 GRU 3.38M 337M k = 750 4 Conv 263K 26.2M d = 350 DNN classi\ufb01er (100ms interval) 5 DNN 845K 8.4M r = 768 6 Softmax 154K 1.5M N + 1 = 201 Total: 4.66M 378M \u2013 Table 1. Model footprint and hyperparameters. \u201c# Mult.\u201d denotes the number of multiplies for one second of audio. 970K examples. For the training set, we used the \ufb01rst 80% of each class; for the validation and test sets, we used the next two 10% partitions. Each example was extremely short\u2014 only 2.1 seconds on average. All of the transcriptions were created by a state-of-the-art commercial ASR system with 5.8\u00b11.6% (95% con\ufb01dence interval) word-error rate (WER) on our dataset; this choice is reasonable because the WER of human annotations is similar [14], and our deployment approach is to short-circuit and replace the current third-party ASR system where possible. 4.1. Training and Hyperparameters For the causal convolution layer, we choose c = 250 output channels, m = 3 width in time, and n = 20 length in frequency. We then stride the entire \ufb01lter across time and frequency by one and ten steps, respectively. This con\ufb01guration yields a receptive \ufb01eld of 50 milliseconds across f = 3 different frequency bands, which roughly correspond to highs, mids, and lows. For long-term context modeling, we choose k = 750 hidden dimensions and d = 350 convolution \ufb01lters. Finally, we choose the hidden layer of the classi\ufb01er to have 768 units. Table 1 summarizes the footprint and hyperparameters of our architecture; we name this model crnn-750m, with the \ufb01rst \u201cc\u201d representing the causal convolution layer and the trailing \u201cm\u201d max pooling. During training, we feed only the \ufb01nal context vector of the entire audio sample into the DNN classi\ufb01er. For each sample, we obtain a single softmax output across the 201 targets for the cross entropy loss. The model is then trained using stochastic gradient descent with a momentum of 0.9, batch size of 48, L2 weight decay of 10\u22124, and an initial learning rate of 10\u22122. At epochs 9 and 13, the learning rate decreases to 10\u22123 and 10\u22124, respectively, before training \ufb01nishes for a total of 16 epochs. Model Variants. As a baseline, we adapt the previous state\f# Model Val. Test Footprint FAR QER FAR QER # Par. # Mult. 1 res8 1.0% 29.4% 0.9% 29.2% 110K 240M 2 crnn-750m 1.0% 6.0% 1.0% 6.0% 4.66M 378M 3 crnn-750 1.0% 6.4% 1.0% 6.5% 4.39M 354M 4 rnn-750m 1.0% 6.4% 1.0% 6.3% 3.04M 267M Table 2. Comparison of model results. \u201c# Mult.\u201d denotes number of multiplies per second of audio. Note that for res8, we report the number on eight seconds of audio, since \ufb01xed-length input is expected. Best results are bolded. of-the-art KWS model res8 [1] to our task by increasing the number of outputs in the \ufb01nal softmax to 201 classes. This model requires \ufb01xed-length audio, so we pad and trim audio input to a length that is suf\ufb01cient to cover most of the audio in our dataset. We choose this length to be eight seconds, since 99.9% of queries are shorter. To examine the effect of the causal convolution layer, we train a model without it, feeding the PCEN inputs directly to the GRU layer. We also examine the contribution of maxpooling across time by removing it: we name these variants rnn-750m and crnn-750. 4.2. 
Results and Discussion The model runs quickly on a commodity GPU machine with one Nvidia GTX 1080: to classify one second of streaming audio, our model takes 68 milliseconds. Clearly, the model is also much more lightweight than a full ASR system, occupying only 19 MB of disk space for the weights and 5 KB of RAM for the persistent state per audio stream. The state consists of the two previous PCEN frames for the causal convolution layer (320 bytes; all zeros for the \ufb01rst two padding frames), the GRU hidden state (3 KB), and the last maximum state for max-pooling across time (1.4 KB). In our system, we de\ufb01ne a false alarm (FA) as a negative misclassi\ufb01cation. In other words, a model prediction is counted as an FA if it is misclassi\ufb01ed and the prediction is one of the known, 200 queries. This is reasonable, since we fall back to the third-party ASR system if the voice query is classi\ufb01ed as unknown. We also de\ufb01ne a query error (QE) as any misclassi\ufb01ed example; then, false alarm rate (FAR) and query error rate (QER) correspond to the number of FAs and QEs, respectively, divided by the number of examples. Thus, the overall query accuracy rate is 1 \u2212QER. Initially, the best model, crnn-750m, attains an FAR and QER of 2.3% and 5.0%, respectively. This FAR is higher than our production target of 1%; thus, we further threshold the predictions to adjust the speci\ufb01city of the model. Used also in our previous work [1], a simple approach is to classify as unknown all predictions whose probability outputs are below some global threshold \u03b1. That is, if the probability of 0.000 0.002 0.004 0.006 0.008 0.010 0.012 0.014 False Alarm Rate 0.050 0.075 0.100 0.125 0.150 0.175 0.200 0.225 0.250 Query Error Rate ROC (test set) crnn-750m rnn-750m crnn-750 Fig. 2. ROC curves for our models. a prediction falls below some threshold \u03b1, it is classi\ufb01ed as unknown. In Table 2, we report the results corresponding to our target FAR of 1%, with the \u03b1 determined from the validation set. To draw ROC curves (see Figure 2) on the test set, we sweep \u03b1 from 0 to 0.9999, where QER is analogous to false reject rate (FRR) in the classic keyword spotting literature. We omit res8 due to it having a QER of 29%, which is unusable in practice. After thresholding, our best model with max pooling and causal convolutions (crnn-750m) achieves an FAR of 1% and QER of 6% on both the validation and test sets, as shown in Table 2, row 2. Max-pooling across time is effective, resulting in a QER improvement of 0.5% over the ablated model (crnn-750; see row 3). The causal convolution layer is effective as well, though slightly less than max-pooling is; for the same QER (6.4%) on the validation set, the model without the causal convolution layer, rnn-750m, uses 87M fewer multiplies per second than crnn-750 does (presented in row 4), due to the large decrease in the number of parameters for the GRU, which uses an input of size 40 in rnn-750m, compared to 750 in crnn-750. We have similar \ufb01ndings for the ROC curves (see Figure 2), where crnn-750m outperforms crnn-750 and rnn-750m, and the ablated models yield similar curves. All of these models greatly outperform res8, which was originally designed for keyword spotting. 5." 
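For reference, a minimal PyTorch sketch of the crnn-750m architecture described in Section 3.2 and Table 1 is given below: a causal 3x20 convolution with 250 channels and frequency stride 10, a 750-unit GRU, a 350-filter feature convolution with max-pooling across time, and a 768-unit classifier over 201 classes. Padding details, the per-stream persistent state handling, and training specifics are simplified assumptions; this is an illustrative reconstruction, not the deployed model.

import torch
import torch.nn as nn
import torch.nn.functional as F

class CRNN750M(nn.Module):
    # Sketch of the causal-conv + GRU + max-pool model; sizes follow Table 1.
    def __init__(self, c=250, k=750, d=350, hidden=768, n_classes=201):
        super().__init__()
        # 3 frames x 20 Mel bins, frequency stride 10 over 40 bins -> 3 bands.
        self.conv = nn.Conv2d(1, c, kernel_size=(3, 20), stride=(1, 10))
        self.bn = nn.BatchNorm2d(c)
        self.gru = nn.GRU(input_size=c * 3, hidden_size=k, batch_first=True)
        self.feat = nn.Conv1d(k, d, kernel_size=1)
        self.fc1 = nn.Linear(k + d, hidden)
        self.fc2 = nn.Linear(hidden, n_classes)

    def forward(self, x):                        # x: (batch, time, 40) PCEN frames
        x = x.unsqueeze(1)                       # (batch, 1, time, 40)
        x = F.pad(x, (0, 0, 2, 0))               # left-pad 2 frames in time (causal)
        s = self.bn(F.relu(self.conv(x)))        # (batch, c, time, 3)
        s = s.permute(0, 2, 1, 3).flatten(2)     # (batch, time, c * 3)
        h, _ = self.gru(s)                       # (batch, time, k)
        feats = F.relu(self.feat(h.transpose(1, 2)))   # (batch, d, time)
        c_max = feats.max(dim=2).values          # max-pool across time
        ctx = torch.cat([c_max, h[:, -1]], dim=1)      # concat with last hidden state
        return self.fc2(F.relu(self.fc1(ctx)))   # logits over the 201 classes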
+ }, + { + "url": "http://arxiv.org/abs/1811.03060v2", + "title": "FLOPs as a Direct Optimization Objective for Learning Sparse Neural Networks", + "abstract": "There exists a plethora of techniques for inducing structured sparsity in\nparametric models during the optimization process, with the final goal of\nresource-efficient inference. However, few methods target a specific number of\nfloating-point operations (FLOPs) as part of the optimization objective,\ndespite many reporting FLOPs as part of the results. Furthermore, a\none-size-fits-all approach ignores realistic system constraints, which differ\nsignificantly between, say, a GPU and a mobile phone -- FLOPs on the former\nincur less latency than on the latter; thus, it is important for practitioners\nto be able to specify a target number of FLOPs during model compression. In\nthis work, we extend a state-of-the-art technique to directly incorporate FLOPs\nas part of the optimization objective and show that, given a desired FLOPs\nrequirement, different neural networks can be successfully trained for image\nclassification.", + "authors": "Raphael Tang, Ashutosh Adhikari, Jimmy Lin", + "published": "2018-11-07", + "updated": "2018-11-23", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CV", + "stat.ML" + ], + "main_content": "Introduction Neural networks are a class of parametric models that achieve the state of the art across a broad range of tasks, but their heavy computational requirements hinder practical deployment on resourceconstrained devices, such as mobile phones, Internet-of-things (IoT) devices, and of\ufb02ine embedded systems. Many recent works focus on alleviating these computational burdens, mainly falling under two non-mutually exclusive categories: manually designing resource-ef\ufb01cient models, and automatically compressing popular architectures. In the latter, increasingly sophisticated techniques have emerged [4, 5, 6], which have achieved respectable accuracy\u2013ef\ufb01ciency operating points, some even Pareto-better than that of the original network; for example, network slimming [4] reaches an error rate of 6.20% on CIFAR-10 using VGGNet [10] with a 51% FLOPs reduction\u2014an error decrease of 0.14% over the original. However, few techniques impose a FLOPs constraint as part of a single optimization objective. Budgeted super networks [14] are closely related to this work, incorporating FLOPs and memory usage objectives as part of a policy gradient-based algorithm for learning sparse neural architectures. MorphNets [1] apply an L1 norm, shrinkage-based relaxation of a FLOPs objective, but for the purpose of searching and training multiple models to \ufb01nd good network architectures; in this work, we learn a sparse neural network in a single training run. Other papers directly target device-speci\ufb01c metrics, such as energy usage [17], but the pruning procedure does not explicitly include the metrics of interest as part of the optimization objective, instead using them as heuristics. Falling short of continuously deploying a model candidate and measuring actual inference time, as in time-consuming neural architectural search [12], we believe that the number of FLOPs is reasonable to use as a proxy measure for actual latency and energy usage; across variants of the same architecture, Tang et al. suggest that the number of FLOPs is a stronger predictor of energy usage and latency than the number of parameters [13]. 
32nd Conference on Neural Information Processing Systems (NIPS 2018), Montr\u00e9al, Canada. \fIndeed, there are compelling reasons to optimize for the number of FLOPs as part of the training objective: First, it would permit FLOPs-guided compression in a more principled manner. Second, practitioners can directly specify a desired target of FLOPs, which is important in deployment. Thus, our main contribution is to present a novel extension of the prior state of the art [7] to incorporate the number of FLOPs as part of the optimization objective, furthermore allowing practitioners to set and meet a desired compression target. 2 FLOPs Objective Formally, we de\ufb01ne the FLOPs objective Lflops : f \u00d7 Rm 7\u2192N0 as follows: Lflops(h,\u03b8 \u03b8 \u03b8) := g(h(\u00b7; I(\u03b8 \u03b8 \u03b81 \u0338= 0), I(\u03b8 \u03b8 \u03b82 \u0338= 0), . . . , I(\u03b8 \u03b8 \u03b8m \u0338= 0))) |\u03b8 \u03b8 \u03b8| = m (1) where Lflops is the FLOPs associated with hypothesis h(\u00b7;\u03b8 \u03b8 \u03b8) := p(\u00b7|\u03b8 \u03b8 \u03b8), g(\u00b7) is a function with the explicit dependencies, and I is the indicator function. We assume Lflops to depend only on whether parameters are non-zero, such as the number of neurons in a neural network. For a dataset D, our empirical risk thus becomes R(h;\u03b8 \u03b8 \u03b8) = \u2212log p(D|\u03b8 \u03b8 \u03b8) + \u03bbf max (0, Lflops (h,\u03b8 \u03b8 \u03b8) \u2212T ) D = ((x1, y2), . . . , (xn, yn)) (2) Hyperparameters \u03bbf \u2208R+ 0 and T \u2208N0 control the strength of the FLOPs objective and the target, respectively. The second term is a black-box function, whose combinatorial nature prevents gradient-based optimization; thus, using the same procedure in prior art [7], we relax the objective to a surrogate of the evidence lower bound with a fully-factorized spike-and-slab posterior as the variational distribution, where the addition of the clipped FLOPs objective can be interpreted as a sparsity-inducing prior p(\u03b8 \u03b8 \u03b8) \u221dexp(\u2212\u03bbf max(0, Lflops(h,\u03b8 \u03b8 \u03b8) \u2212T )). Let z \u223cp(z|\u03c0 \u03c0 \u03c0) be Bernoulli random variables parameterized by \u03c0 \u03c0 \u03c0: L(h;\u03b8 \u03b8 \u03b8) = E p(z|\u03c0 \u03c0 \u03c0) [\u2212log p(D|\u03b8 \u03b8 \u03b8 \u2299z) + \u03bbf max (0, Lflops (h,\u03b8 \u03b8 \u03b8 \u2299z) \u2212T )] (3) where \u2299denotes the Hadamard product. To allow for ef\ufb01cient reparameterization and exact zeros, Louizos et al. [7] propose to use a hard concrete distribution as the approximation, which is a stretched and clipped version of the binary Concrete distribution [8]: if \u02c6 z \u223cBinaryConcrete(\u03b1, \u03b2), then \u02dc z := max(0, min(1, (\u03b6 \u2212\u03b3)\u02c6 z + \u03b3)) is said to be a hard concrete r.v., given \u03b6 > 1 and \u03b3 < 0. De\ufb01ne \u03c6 \u03c6 \u03c6 := (\u03b1 \u03b1 \u03b1, \u03b2), and let \u03c8(\u03c6 \u03c6 \u03c6) = Sigmoid(log\u03b1 \u03b1 \u03b1 \u2212\u03b2 log \u2212\u03b3 \u03b6 ) and z \u223cBernoulli(\u03c8(\u03c6 \u03c6 \u03c6)). Then, the approximation becomes L(h;\u03b8 \u03b8 \u03b8) \u2248 E p(\u02dc z|\u03c6 \u03c6 \u03c6) [\u2212log p(D|\u03b8 \u03b8 \u03b8 \u2299z)] + \u03bbf E p(z|\u03c8(\u03c6 \u03c6 \u03c6)) [max (0, Lflops (h,\u03b8 \u03b8 \u03b8 \u2299z) \u2212T )] (4) \u03c8(\u00b7) is the probability of a gate being non-zero under the hard concrete distribution. 
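A minimal PyTorch sketch of the hard concrete gates and the Monte Carlo FLOPs penalty defined above is given below, using beta = 2/3, gamma = -0.1, zeta = 1.1 as in the text. The flops_fn callable, which maps a binary gate vector to the FLOPs of the induced sub-network, is an assumed helper, and the gradient estimator for this non-differentiable penalty term (discussed below) is omitted to keep the sketch short.

import torch

GAMMA, ZETA, BETA = -0.1, 1.1, 2.0 / 3.0

def sample_hard_concrete(log_alpha):
    # Reparameterized hard concrete sample z in [0, 1], used in the data-fit term.
    u = torch.rand_like(log_alpha)
    s = torch.sigmoid((torch.log(u) - torch.log(1 - u) + log_alpha) / BETA)
    return torch.clamp(s * (ZETA - GAMMA) + GAMMA, 0.0, 1.0)

def gate_active_prob(log_alpha):
    # psi(phi) = sigmoid(log_alpha - beta * log(-gamma / zeta)): probability a gate is non-zero.
    return torch.sigmoid(log_alpha - BETA * torch.log(torch.tensor(-GAMMA / ZETA)))

def expected_flops_penalty(log_alpha, flops_fn, target, n_samples=1000):
    # Monte Carlo estimate of E_z[max(0, FLOPs(z) - T)] under Bernoulli(psi) gates.
    psi = gate_active_prob(log_alpha)
    total = 0.0
    for _ in range(n_samples):
        z = torch.bernoulli(psi)
        total += max(0.0, flops_fn(z) - target)
    return total / n_samples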
It is more ef\ufb01cient in the second expectation to sample from the equivalent Bernoulli parameterization compared to hard concrete, which is more computationally expensive to sample multiple times. The \ufb01rst term now allows for ef\ufb01cient optimization via the reparameterization trick [3]; for the second, we apply the score function estimator (REINFORCE) [16], since the FLOPs objective is, in general, nondifferentiable and thus precludes the reparameterization trick. High variance is a non-issue because the number of FLOPs is fast to compute, hence letting many samples to be drawn. At inference time, the deterministic estimator is \u02c6 \u03b8 \u03b8 \u03b8 := \u03b8 \u03b8 \u03b8 \u2299max(0, min(1, Sigmoid(log\u03b1 \u03b1 \u03b1)(\u03b6 \u2212\u03b3) + \u03b3)) for the \ufb01nal parameters \u02c6 \u03b8 \u03b8 \u03b8. FLOPs under group sparsity. In practice, computational savings are achieved only if the model is sparse across \u201cregular\u201d groups of parameters, e.g., each \ufb01lter in a convolutional layer. Thus, each computational group uses one hard concrete r.v. [7]\u2014in fully-connected layers, one per input neuron; in 2D convolution layers, one per output \ufb01lter. Under convention in the literature where one addition and one multiplication each count as a FLOP, the FLOPs for a 2D convolution layer hconv(\u00b7;\u03b8 \u03b8 \u03b8) given a random draw z is then de\ufb01ned as Lflops(hconv, z) = (KwKhCin + 1)(Iw \u2212Kw + Pw + 1)(Ih \u2212Kh + Ph + 1)\u2225z\u22250 for kernel width and height (Kw, Kh), input width and height (Iw, Ih), padding width and height (Pw, Ph), and number of input channels Cin. The number of FLOPs for a fully-connected layer hfc(\u00b7;\u03b8 \u03b8 \u03b8) is Lflops(hfc, z) = (In + 1)\u2225z\u22250, where In is the number of input neurons. Note that these are conventional de\ufb01nitions in neural network compression papers\u2014the objective can easily use instead a number of FLOPs incurred by other device-speci\ufb01c algorithms. Thus, at each training step, we compute the FLOPs objective by sampling from the Bernoulli r.v.\u2019s and using the aforementioned de\ufb01nitions, e.g., Lflops(hconv, \u00b7) for convolution layers. Then, we apply the score function estimator to the FLOPs objective as a black-box estimator. 2 \f3 Experimental Results We report results on MNIST, CIFAR-10, and CIFAR-100, training multiple models on each dataset corresponding to different FLOPs targets. We follow the same initialization and hyperparameters as Louizos et al. [7], using Adam [2] with temporal averaging for optimization, a weight decay of 5 \u00d7 10\u22124, and an initial \u03b1 that corresponds to the original dropout rate of that layer. We similarly choose \u03b2 = 2/3, \u03b3 = \u22120.1, and \u03b6 = 1.1. For brevity, we direct the interested reader to their repository1 for speci\ufb01cs. In all of our experiments, we replace the original L0 penalty with our FLOPs objective, and we train all models to 200 epochs; at epoch 190, we prune the network by weights associated with zeroed gates and replace the r.v.\u2019s with their deterministic estimators, then \ufb01netune for 10 more epochs. For the score function estimator, we draw 1000 samples at each optimization step\u2014this procedure is fast and has no visible effect on training time. Table 1: Comparison of LeNet-5-Caffe results on MNIST Model Architecture Err. 
FLOPs GL [15] 3-12-192-500 1.0% 205K GD [11] 7-13-208-16 1.1% 254K SBP [9] 3-18-284-283 0.9% 217K BC-GNJ [6] 8-13-88-13 1.0% 290K BC-GHS [6] 5-10-76-16 1.0% 158K L0 [7] 20-25-45-462 0.9% 1.3M L0-sep [7] 9-18-65-25 1.0% 403K Lflops, T = 400K 3-13-208-500 0.9% 218K Lflops, T = 200K 3-8-128-499 1.0% 153K Lflops, T = 100K 2-7-112-478 1.1% 111K We choose \u03bbf = 10\u22126 in all of the experiments for LeNet-5-Caffe, the Caffe variant of LeNet5.1 We observe that our methods (Table 1, bottom three rows) achieve accuracy comparable to those from previous approaches while using fewer FLOPs, with the added bene\ufb01t of providing a tunable \u201cknob\u201d for adjusting the FLOPs. Note that the convolution layers are the most aggressively compressed, since they are responsible for most of the FLOPs in this model. Table 2: Comparison of WideResNet-28-10 results on CIFAR-10 and CIFAR-100 Method CIFAR-10 CIFAR-100 Err. E[FLOPs] FLOPs Err. E[FLOPs] FLOPs Orig. 4.00% 5.9B 5.9B 21.18% 5.9B 5.9B Orig. w/dropout 3.89% 5.9B 5.9B 18.85% 5.9B 5.9B L0 3.83% 5.3B 5.9B 18.75% 5.3B 5.9B L0-small 3.93% 5.2B 5.9B 19.04% 5.2B 5.9B Lflops, T = 4B 3.82% 3.9B 4.6B 18.93% 3.9B 4.6B Lflops, T = 2.5B 3.91% 2.4B 2.4B 19.48% 2.4B 2.4B Orig. in Table 2 denotes the original WRN-28-10 model [18], and L0-* refers to the L0-regularized models [7]; likewise, we augment CIFAR-10 and CIFAR-100 with standard random cropping and horizontal \ufb02ipping. For each of our results (last two rows), we report the median error rate of \ufb01ve different runs, executing a total of 20 runs across two models for each of the two datasets; we use \u03bbf = 3 \u00d7 10\u22129 in all of these experiments. We also report both the expected FLOPs and actual FLOPs, the former denoting the number of FLOPs, on average, at training time under stochastic gates and the latter denoting the number of FLOPs at inference time. We restrict the FLOPs calculations to the penalized non-residual convolution layers only. For CIFAR-10, our approaches result in Pareto-better models with decreases in both error rate and the actual number of inference-time FLOPs. For CIFAR-100, we do not achieve a Pareto-better model, since our approach trades accuracy for improved ef\ufb01ciency. The acceptability of the tradeoff depends on the end application. 1https://github.com/AMLab-Amsterdam/L0_regularization 3" + }, + { + "url": "http://arxiv.org/abs/1811.00942v1", + "title": "Progress and Tradeoffs in Neural Language Models", + "abstract": "In recent years, we have witnessed a dramatic shift towards techniques driven\nby neural networks for a variety of NLP tasks. Undoubtedly, neural language\nmodels (NLMs) have reduced perplexity by impressive amounts. This progress,\nhowever, comes at a substantial cost in performance, in terms of inference\nlatency and energy consumption, which is particularly of concern in deployments\non mobile devices. This paper, which examines the quality-performance tradeoff\nof various language modeling techniques, represents to our knowledge the first\nto make this observation. We compare state-of-the-art NLMs with \"classic\"\nKneser-Ney (KN) LMs in terms of energy usage, latency, perplexity, and\nprediction accuracy using two standard benchmarks. 
On a Raspberry Pi, we find\nthat orders of increase in latency and energy usage correspond to less change\nin perplexity, while the difference is much less pronounced on a desktop.", + "authors": "Raphael Tang, Jimmy Lin", + "published": "2018-11-02", + "updated": "2018-11-02", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "main_content": "Introduction Deep learning has unquestionably advanced the state of the art in many natural language processing tasks, from syntactic dependency parsing (Chen and Manning, 2014) to named-entity recognition (Lample et al., 2016) to machine translation (Luong et al., 2015). The same certainly applies to language modeling, where recent advances in neural language models (NLMs) have led to dramatically better approaches as measured using standard metrics such as perplexity (Melis et al., 2018; Merity et al., 2018b). Speci\ufb01cally focused on language modeling, this paper examines an issue that to our knowledge has not been explored: advances in neural language models have come at a signi\ufb01cant cost in terms of increased computational complexity. Computing the probability of a token sequence using nonneural techniques requires a number of phrase lookups and perhaps a few arithmetic operations, whereas model inference with NLMs require large matrix multiplications consuming perhaps millions of \ufb02oating point operations (FLOPs). These performance tradeoffs are worth discussing. In truth, language models exist in a quality\u2013 performance tradeoff space. As model quality increases (e.g., lower perplexity), performance as measured in terms of energy consumption, query latency, etc. tends to decrease. For applications primarily running in the cloud\u2014say, machine translation\u2014practitioners often solely optimize for the lowest perplexity. This is because such applications are embarrassingly parallel and hence trivial to scale in a data center environment. There are, however, applications of NLMs that require less one-sided optimizations. On mobile devices such as smartphones and tablets, for example, NLMs may be integrated into software keyboards for next-word prediction, allowing much faster text entry. Popular Android apps that enthusiastically tout this technology include SwiftKey and Swype. The greater computational costs of NLMs lead to higher energy usage in model inference, translating into shorter battery life. In this paper, we examine the quality\u2013 performance tradeoff in the shift from non-neural to neural language models. In particular, we compare Kneser\u2013Ney smoothing, widely accepted as the state of the art prior to NLMs, to the best NLMs today. The decrease in perplexity on standard datasets has been well documented (Melis et al., 2018), but to our knowledge no one has examined the performances tradeoffs. With deployment on a mobile device in mind, we evaluate energy usage and inference latency on a Raspberry Pi (which shares the same ARM architecture as nearly all smartphones today). We \ufb01nd that a 2.5\u00d7 reduction in perplexity on PTB comes at a staggering cost in terms of performance: inference with arXiv:1811.00942v1 [cs.CL] 2 Nov 2018 \fNLMs takes 49\u00d7 longer and requires 32\u00d7 more energy. Furthermore, we \ufb01nd that impressive reductions in perplexity translate into at best modest improvements in next-word prediction, which is arguable a better metric for evaluating software keyboards on a smartphone. 
The contribution of this paper is the \ufb01rst known elucidation of this quality\u2013performance tradeoff. Note that we refrain from prescriptive recommendations: whether or not a tradeoff is worthwhile depends on the application. Nevertheless, NLP engineers should arguably keep these tradeoffs in mind when selecting a particular operating point. 2 Background and Related Work Melis et al. (2018) evaluate recent neural language models; however, their focus is not on the computational footprint of each model, but rather the perplexity. To further reduce perplexity, many neural language model extensions exist, such as continuous cache pointer (Grave et al., 2017) and mixture of softmaxes (Yang et al., 2018). Since our focus is on comparing \u201ccore\u201d neural and non-neural approaches, we disregard these extra optimizations techniques in all of our models. Other work focus on designing lightweight models for resource-ef\ufb01cient inference on mobile devices. Liu et al. (2018) explore LSTMs (Hochreiter and Schmidhuber, 1997) with binary weights for language modeling; Botha et al. (2017) examine shallow feedforward neural networks for natural language processing. AWD-LSTM. Merity et al. (2018b) show that a simple three-layer LSTM, with proper regularization and optimization techniques, can achieve state of the art on various language modeling datasets, surpassing more complex models. Speci\ufb01cally, Merity et al. (2018b) apply randomized backpropagation through time, variational dropout, activation regularization, embedding dropout, and temporal activation regularization. A novel scheduler for optimization, non-monotonically triggered ASGD (NT-ASGD) is also introduced. Merity et al. (2018b) name their three-layer LSTM model trained with such tricks, \u201cAWD-LSTM.\u201d Quasi-Recurrent Neural Networks. Quasirecurrent neural networks (QRNNs; Bradbury et al., 2017) achieve current state of the art in word-level language modeling (Merity et al., 2018a). A quasi-recurrent layer comprises two separate parts: a convolution layer with three weights, and a recurrent pooling layer. Given an input X \u2208Rk\u00d7n, the convolution layer is Z = tanh(Wz \u00b7 X) F = \u03c3(Wf \u00b7 X) O = \u03c3(Wo \u00b7 X) where \u03c3 denotes the sigmoid function, \u00b7 represents masked convolution across time, and W{z,f,o} \u2208 Rm\u00d7k\u00d7r are convolution weights with k input channels, m output channels, and a window size of r. In the recurrent pooling layer, the convolution outputs are combined sequentially: ct = ft \u2299ct\u22121 + (1 \u2212ft) \u2299zt ht = ot \u2299ct Multiple QRNN layers can be stacked for deeper hierarchical representation, with the output h1:t being fed as the input into the subsequent layer: In language modeling, a four-layer QRNN is a standard architecture (Merity et al., 2018a). Perplexity\u2013Recall Scale. Word-level perplexity does not have a strictly monotonic relationship with recall-at-k, the fraction of top k predictions that contain the correct word. A given R@k imposes a weak minimum perplexity constraint\u2014 there are many free parameters that allow for large variability in the perplexity given a certain R@k. Consider the corpus, \u201cchoo choo train,\u201d with an associated unigram model P(\u201cchoo\u201d) = 0.1, P(\u201ctrain\u201d) = 0.9, resulting in an R@1 of 1/3 and perplexity of 4.8. Clearly, R@1 = 1/3 for all P(\u201cchoo\u201d) \u22640.5; thus, perplexity can drop as low as 2 without affecting recall. 
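As a minimal illustration of the quasi-recurrent layer defined above, the PyTorch sketch below implements the masked convolution and recurrent pooling for a single layer. It is a sketch only: the evaluated models follow the authors' official codebase, and the explicit Python loop over time stands in for an optimized pooling kernel.

import torch
import torch.nn as nn
import torch.nn.functional as F

class QRNNLayer(nn.Module):
    def __init__(self, in_dim, hidden, window=2):
        super().__init__()
        self.window = window
        self.hidden = hidden
        # One convolution producing the z, f, o pre-activations jointly.
        self.conv = nn.Conv1d(in_dim, 3 * hidden, kernel_size=window)

    def forward(self, x):                        # x: (batch, time, in_dim)
        xt = x.transpose(1, 2)                   # (batch, in_dim, time)
        # Masked convolution: left-pad so each step sees only current/past inputs.
        xt = F.pad(xt, (self.window - 1, 0))
        z, f, o = self.conv(xt).chunk(3, dim=1)
        z, f, o = torch.tanh(z), torch.sigmoid(f), torch.sigmoid(o)
        c = torch.zeros(x.size(0), self.hidden, device=x.device)
        hs = []
        for t in range(x.size(1)):               # recurrent pooling
            c = f[:, :, t] * c + (1 - f[:, :, t]) * z[:, :, t]
            hs.append(o[:, :, t] * c)
        return torch.stack(hs, dim=1)            # (batch, time, hidden)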
3 Experimental Setup We conducted our experiments on Penn Treebank (PTB; Marcus et al., 1993) and WikiText103 (WT103; Merity et al., 2017). Preprocessed by Mikolov et al. (2010), PTB contains 887K tokens for training, 70K for validation, and 78K for test, with a vocabulary size of 10,000. On the other hand, WT103 comprises 103 million tokens for training, 217K for validation, and 245K for test, spanning a vocabulary of 267K unique tokens. For the neural language model, we used a four-layer QRNN (Bradbury et al., 2017), which achieves state-of-the-art results on a variety of datasets, such as WT103 (Merity et al., 2018a) and PTB. To compare against more common \fLSTM architectures, we also evaluated AWDLSTM (Merity et al., 2018b) on PTB. For the non-neural approach, we used a standard \ufb01vegram model with modi\ufb01ed Kneser-Ney smoothing (Chen and Goodman, 1996), as explored in Mikolov and Zweig (2012) on PTB. We denote the QRNN models for PTB and WT103 as ptb-qrnn and wt103-qrnn, respectively. For each model, we examined word-level perplexity, R@3 in next-word prediction, latency (ms/q), and energy usage (mJ/q). To explore the perplexity\u2013recall relationship, we collected individual perplexity and recall statistics for each sentence in the test set. 3.1 Hyperparameters and Training The QRNN models followed the exact training procedure and architecture delineated in the of\ufb01cial codebase from Merity et al. (2018a). For ptb-qrnn, we trained the model for 550 epochs using NT-ASGD (Merity et al., 2018b), then \ufb01netuned for 300 epochs using ASGD (Polyak and Juditsky, 1992), all with a learning rate of 30 throughout. For wt103-qrnn, we followed Merity et al. (2018a) and trained the QRNN for 14 epochs, using the Adam optimizer with a learning rate of 10\u22123. We also applied regularization techniques from Merity et al. (2018b); all the speci\ufb01c hyperparameters are the same as those in the repository. Our model architecture consists of 400-dimensional tied embedding weights (Inan et al., 2017) and four QRNN layers, with 1550 hidden units per layer on PTB and 2500 per layer on WT103. Both QRNN models have window sizes of r = 2 for the \ufb01rst layer and r = 1 for the rest. For the KN-5 model, we trained an off-theshelf \ufb01ve-gram model using the popular SRILM toolkit (Stolcke, 2002). We did not specify any special hyperparameters. 3.2 Infrastructure We trained the QRNNs with PyTorch (0.4.0; commit 1807bac) on a Titan V GPU. To evaluate the models under a resource-constrained environment, we deployed them on a Raspberry Pi 3 (Model B) running Raspbian Stretch (4.9.41-v7+). The Raspberry Pi (RPi) is not only a standard platform, but also a close surrogate to mobile phones, using the same Cortex-A7 in many phones. We then transferred the trained models to the RPi, using the same frameworks for evaluation. We plugged the RPi into a Watts Up Pro meter, a power meModel Val. Test Penn Treebank Skip LSTM (Melis et al., 2018) 60.9 58.3 AWD-LSTM (Merity et al., 2018b) 60.0 57.3 QRNN 59.1 56.8 WikiText-103 Rae-LSTM (Rae et al., 2018) 36.0 36.4 QRNN 31.9 32.8 Table 1: Comparison of neural language models on Penn Treebank and WikiText-103. ter that can be read programatically over USB at a frequency of 1 Hz. For the QRNNs, we used the \ufb01rst 350 words of the test set, and averaged the ms/query and mJ/query. For KN-5, we used the entire test set for evaluation, since the latency was much lower. To adjust for the base power load, we subtracted idle power draw from energy usage. 
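A rough sketch of the per-query measurement loop is shown below. The read_watts() reader standing in for the Watts Up Pro USB interface and the model_step() prediction call are assumed helpers, and sampling the roughly 1 Hz meter once per query is a coarse simplification; energy is estimated as (average power minus idle power) times elapsed time, following the idle-power adjustment described above.

import time

def measure_per_query(model_step, queries, idle_watts, read_watts):
    # Returns approximate ms/query and mJ/query for a sequence of queries.
    power_samples = []
    start = time.perf_counter()
    for q in queries:
        model_step(q)                            # one next-word prediction
        power_samples.append(read_watts())       # coarse ~1 Hz power reading
    elapsed = time.perf_counter() - start
    ms_per_query = 1000.0 * elapsed / len(queries)
    avg_power = sum(power_samples) / len(power_samples) - idle_watts
    mj_per_query = 1000.0 * avg_power * elapsed / len(queries)
    return ms_per_query, mj_per_query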
For a different perspective, we further evaluated all the models under a desktop environment, using an i7-4790k CPU and Titan V GPU. Because the base power load for powering a desktop is much higher than running neural language models, we collected only latency statistics. We used the entire test set, since the QRNN runs quickly. In addition to energy and latency, another consideration for the NLP developer selecting an operating point is the cost of underlying hardware. For our setup, the RPi costs $35 USD, the CPU costs $350 USD, and the GPU costs $3000 USD. 4 Results and Discussion To demonstrate the effectiveness of the QRNN models, we present the results of past and current state-of-the-art neural language models in Table 1; we report the Skipand AWD-LSTM results as seen in the original papers, while we report our QRNN results. Skip LSTM denotes the four-layer Skip LSTM in Melis et al. (2018). Rae et al. (2018) focus on Hebbian softmax, a model extension technique\u2014Rae-LSTM refers to their base LSTM model without any extensions. In our results, KN-5 refers to the traditional \ufb01ve-gram model with modi\ufb01ed Kneser-Ney smoothing, and AWD is shorthand for AWD-LSTM. Perplexity\u2013recall scale. In Figure 1, using KN5 as the model, we plot the log perplexity (cross entropy) and R@3 error (1 \u2212R@3) for every sentence in PTB and WT103. The horizontal clusters arise from multiple perplexity points representing \f1 2 3 4 5 6 7 Log Perplexity (Cross Entropy) 0.0 0.2 0.4 0.6 0.8 1.0 1 R@3 (Error) Log Perplexity Recall Scale on PTB r2 = 0.72 1 2 3 4 5 6 7 Log Perplexity (Cross Entropy) 0.0 0.2 0.4 0.6 0.8 1.0 1 R@3 (Error) Log Perplexity Recall Scale on WT-103 r2 = 0.88 Figure 1: Log perplexity\u2013recall error with KN-5. 0 1 2 3 4 5 6 7 Log Perplexity (Cross Entropy) 0.0 0.2 0.4 0.6 0.8 1.0 1 R@3 (Error) Log Perplexity Recall Scale on PTB r2 = 0.77 0 1 2 3 4 5 6 7 Log Perplexity (Cross Entropy) 0.0 0.2 0.4 0.6 0.8 1.0 1 R@3 (Error) Log Perplexity Recall Scale on WT-103 r2 = 0.86 Figure 2: Log perplexity\u2013recall error with QRNN. # Method Model Quality RPi CPU | GPU Val. Test R@3 ms/q mJ/q ms/q ms/q Penn Treebank 1 KN-5 148.4 141.5 36.7% 7 6 0.8 \u2013 2 AWD 59.2 56.8 44.9% 223 295 7.9 1.7 3 QRNN 59.1 56.8 44.7% 224 296 7.5 1.6 WikiText-103 4 KN-5 145.2 152.7 39.8% 264 229 37 \u2013 5 QRNN 31.9 32.8 53.5% 1240 1480 59 3.5 Table 2: Language modeling results on performance and model quality. the same R@3 value, as explained in Section 2. We also observe that the perplexity\u2013recall scale is non-linear\u2014instead, log perplexity appears to have a moderate linear relationship with R@3 error on PTB (r = 0.85), and an even stronger relationship on WT103 (r = 0.94). This is partially explained by WT103 having much longer sentences, and thus less noisy statistics. From Figure 2, we \ufb01nd that QRNN models yield strongly linear log perplexity\u2013recall plots as well, where r = 0.88 and r = 0.93 for PTB and WT103, respectively. Note that, due to the improved model quality over KN-5, the point clouds are shifted downward compared to Figure 1. We conclude that log perplexity, or cross entropy, provides a more human-understandable indicator of R@3 than perplexity does. Overall, these \ufb01ndings agree with those from Chen et al. (1998), which explores the log perplexity\u2013word error rate scale in language modeling for speech recognition. Quality\u2013performance tradeoff. 
In Table 2, from left to right, we report perplexity results on the validation and test sets, R@3 on test, and \ufb01nally perquery latency and energy usage. On the RPi, KN-5 is both fast and power-ef\ufb01cient to run, using only about 7 ms/query and 6 mJ/query for PTB (Table 2, row 1), and 264 ms/q and 229 mJ/q on WT103 (row 5). Taking 220 ms/query and consuming 300 mJ/query, AWD-LSTM and ptb-qrnn are still viable for mobile phones: The modern smartphone holds upwards of 10,000 joules (Carroll et al., 2010), and the latency is within usability standards (Miller, 1968). Nevertheless, the models are still 49\u00d7 slower and 32\u00d7 more power-hungry than KN-5. The wt103-qrnn model is completely unusable on phones, taking over 1.2 seconds per next-word prediction. Neural models achieve perplexity drops of 60\u201380% and R@3 increases of 22\u201334%, but these improvements come at a much higher cost in latency and energy usage. In Table 2 (last two columns), the desktop yields very different results: the neural models on PTB (rows 2\u20133) are 9\u00d7 slower than KN-5, but the absolute latency is only 8 ms/q, which is still much faster than what humans perceive as instantaneous (Miller, 1968). If a high-end commodity GPU is available, then the models are only twice as slow as KN-5 is. From row 5, even better results are noted with wt103-qrnn: On the CPU, the QRNN is only 60% slower than KN-5 is, while the model is faster by 11\u00d7 on a GPU. These results suggest that, if only latency is considered under a commodity desktop environment, the QRNN model is humanly indistinguishable from the KN5 model, even without using GPU acceleration. 5" + }, + { + "url": "http://arxiv.org/abs/1809.10282v1", + "title": "Adaptive Pruning of Neural Language Models for Mobile Devices", + "abstract": "Neural language models (NLMs) exist in an accuracy-efficiency tradeoff space\nwhere better perplexity typically comes at the cost of greater computation\ncomplexity. In a software keyboard application on mobile devices, this\ntranslates into higher power consumption and shorter battery life. This paper\nrepresents the first attempt, to our knowledge, in exploring\naccuracy-efficiency tradeoffs for NLMs. Building on quasi-recurrent neural\nnetworks (QRNNs), we apply pruning techniques to provide a \"knob\" to select\ndifferent operating points. In addition, we propose a simple technique to\nrecover some perplexity using a negligible amount of memory. Our empirical\nevaluations consider both perplexity as well as energy consumption on a\nRaspberry Pi, where we demonstrate which methods provide the best\nperplexity-power consumption operating point. At one operating point, one of\nthe techniques is able to provide energy savings of 40% over the state of the\nart with only a 17% relative increase in perplexity.", + "authors": "Raphael Tang, Jimmy Lin", + "published": "2018-09-27", + "updated": "2018-09-27", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG" + ], + "main_content": "INTRODUCTION An emerging application of neural language models (NLMs) is smart software keyboards on such mobile devices as smartphones and tablets that provide next-word prediction, allowing users to input entire words with a single tap. For example, the apps SwiftKey1 and Swype2 both advertise the use of neural networks for predictions. According to Google Play Store, SwiftKey has more than 100 million downloads, demonstrating its popularity. 
Based on standard metrics such as perplexity, neural techniques represent an advance in the state of the art in language modeling (Merity et al., 2018b). Better models, however, come at a cost in computational complexity, which translates to higher power consumption. In the context of mobile devices, energy ef\ufb01ciency is, of course, an important optimization objective. A casual web search, for example, reveals numerous complaints from users of the above apps about battery drain, indicating that this is not a hypothetical concern. In reality, neural language models exist in a accuracy\u2013ef\ufb01ciency tradeoff space. Although this fact has been recognized for applications such as image recognition (Canziani et al., 2016) and keyword spotting (Tang et al., 2018), to our knowledge no one in the NLP community has explored these tradeoffs. All previous papers on NLMs simply report single-point perplexity \ufb01gures. In contrast, the high-level goal of our work is to understand the tradeoffs between neural modeling accuracy and real-world ef\ufb01ciency constraints: in addition to perplexity, NLMs should be evaluated in terms of FLOPs,3 milliJoule per query (mJ/q), and inference latency. We conduct exactly such experiments, using the Raspberry Pi (which shares the same architecture as most mobile devices today) as a more convenient hardware platform. Ideally, NLMs should provide a \u201cknob\u201d that allows developers to tune accuracy\u2013ef\ufb01ciency tradeoffs. In this paper, we explore pruning approaches that take a pre-trained quasi-recurrent neural network 1http://www.swiftkey.com/ 2http://www.swype.com/ 3Convention from literature de\ufb01nes number of FLOPs as the total number of additions and multiplications. 1 arXiv:1809.10282v1 [cs.CL] 27 Sep 2018 \fThe quick brown fox jumps over the lazy time Figure 1: An illustration of the \ufb01rst QRNN layer for language modeling. In this visualization, a QRNN layer with a window size of two convolves and pools using embeddings from the input. Note the absence of recurrent weights. (QRNN; Bradbury et al., 2017), representing the state of the art in NLM today, and provides exactly such a knob. Furthermore, our techniques allow these tradeoffs to be tuned at inference time, which allows a mobile device to adaptively control its behavior, e.g., favor ef\ufb01ciency at the cost of accuracy when the battery is low. Thus, this paper makes the following contributions: First, to our knowledge, we are the \ufb01rst to comprehensively explore accuracy\u2013ef\ufb01ciency tradeoffs for NLMs with experimental evaluation of energy consumption on a Raspberry Pi. Second, we evaluate a number of inference-time pruning techniques that takes any pre-trained QRNN and provides a tunable accuracy\u2013ef\ufb01ciency \u201cknob\u201d. 2 BACKGROUND AND RELATED WORK 2.1 QUASI-RECURRENT NEURAL NETWORKS Quasi-recurrent neural networks (QRNNs; Bradbury et al., 2017) achieve highly competitive perplexity on word-level language modeling datasets, including state-of-the-art perplexity on WikiText103 (Merity et al., 2018b). Although applying such techniques as dynamic evaluation (Krause et al., 2017), Hebbian softmax (Rae et al., 2018), and mixture of softmaxes (Yang et al., 2017) can produce lower perplexity, our focus is on the recurrent architecture. Thus, we explore the task of pruning QRNNs without using any other extensions. 
Each word is encoded as a one-hot vector and then fed into a linear layer, which produces lowerdimensional word embeddings for the QRNN layers. A single QRNN layer consists of two distinct components\u2014convolution and recurrent pooling\u2014that alternate to imitate an LSTM (Hochreiter & Schmidhuber, 1997). Given a stacked sequence of inputs X = x1 \u2295\u00b7 \u00b7 \u00b7 \u2295xn \u2208Rk\u00d7n (e.g., word embeddings in language modeling), the one-dimensional convolution layer is de\ufb01ned as Z = tanh(Wz \u00b7 X) F = \u03c3(Wf \u00b7 X) O = \u03c3(Wo \u00b7 X) where Wz, Wf, Wo are the weights associated with the input, forget, and output gates, respectively, \u00b7 represents a masked convolution along time, and \u03c3 denotes the sigmoid function. For W{z,f,o} \u2208 Rm\u00d7(k\u00d7r), m is the number of output channels, k is the number of input channels, and r the window size across time. Without loss of generality, we henceforth represent W{z,f,o} as two-dimensional matrices \u2208Rm\u00d7s, where s = k \u00d7 r. The outputs are fed into a recurrent pooling layer: ct = ft \u2299ct\u22121 + (1 \u2212ft) \u2299zt ht = ot \u2299ct where \u2299denotes element-wise product. Altogether, these two layers de\ufb01ne a single QRNN layer (Bradbury et al., 2017; see Figure 1). Multiple layers can be stacked for greater expressiveness, where the output h1:n of the previous layer is the input X to the current layer. We tie the weights between the input and output layers, as used by Merity et al. (2018a) and proposed by Inan et al. (2017). In addition to improving perplexity, weight tying reduces the number of parameters and hence the memory footprint, which is bene\ufb01cial to our task. 2 \f2.2 PRUNING Weight pruning is an effective strategy for reducing the computational footprint of a model. An in\ufb02uential pioneering work, LeCun et al. (1990) proposes to discard weights using a error-approximation approach based on Hessian diagonals. More recent work suggests pruning weights with small magnitudes (Han et al., 2016), with quantization and Huffman coding as additional steps. However, these approaches introduce irregular sparsity to the weights, and they assume that re-training the weights is feasible. In this work, we take a different approach and focus on techniques that eliminate entire \ufb01lters. This is because modern implementations of feedforward evaluation (e.g., im2col and particularly NEON instruction on ARM processors) take advantage of dense matrix multiplications. Pruning individual weights without changing the dimensions of the weight matrices has minimal effect on power consumption\u2014this is con\ufb01rmed by our initial exploratory studies on the Raspberry Pi. Hence, we only examine pruning techniques that discard entire \ufb01lters of the convolutional layers: Random pruning. A simple baseline (Mittal et al., 2018) is random \ufb01lter pruning, where n% of the \ufb01lters are randomly pruned, layer-by-layer. Interestingly, Mittal et al. (2018) \ufb01nd that random pruning is competitive with more advanced methods. Filter norm. Li et al. (2017) propose ranking \ufb01lters by their L1-norms, and then dropping off n% of the smallest \ufb01lters on a layer-by-layer basis. Mittal et al. (2018) have previously found that L1-norm \ufb01lter pruning (Li et al., 2017) outperforms a multitude of competing approaches. Mean activation norm. Among other approaches, Molchanov et al. (2016) suggest pruning \ufb01lters whose mean activations are small. 
This approach is especially effective on ReLU, which both creates sparse activations and forces them to be non-negative.

L0 regularization. Louizos et al. (2018) apply $L_0$ regularization to neural networks in order to learn sparse, efficient structures. Formally, they define the objective $R(\theta) = L(\theta) + \lambda \|\theta\|_0$ with $\theta^* = \arg\min_\theta R(\theta)$, where $L$ is the original loss function and $\theta$ the weights. The dependence on the hypothesis and training examples has been omitted for brevity. The optimal solution entails a non-differentiable objective and iteration over all $2^{|\theta|}$ possibilities to choose the best $\theta^*$; hence, Louizos et al. (2018) propose the following relaxation of the objective: $\hat{R}(\theta, \phi) = \mathbb{E}_{z \sim p(z|\phi)}[L(\theta \odot z)] + \lambda \sum_{i=1}^{|\theta|} \big(1 - Q(z_i \le 0; \phi_i)\big)$, with $\theta^*, \phi^* = \arg\min_{\theta,\phi} \hat{R}(\theta, \phi)$, where $z \sim p(z|\phi)$ is a binary discrete random mask parameterized by $\phi$, and $Q$ is the CDF. Intuitively, for some choice of $\phi$, the number of active parameters (on average) is penalized. Inspired by the Concrete distribution (Maddison et al., 2016), Louizos et al. (2018) propose the hard concrete distribution for $z$, further relaxing the discrete random mask into a continuous one: $s = \sigma\big((\log u - \log(1 - u) + \log \alpha)/\beta\big)$ and $z = \min(1, \max(0, (\zeta - \gamma)s + \gamma))$, where $u \in \mathbb{R}^{|\theta|}$ is a continuous random vector with $u_i \sim \mathrm{Uniform}[0, 1]$, $\phi = \log \alpha$ are the mask parameters, and $\gamma = -0.1$, $\zeta = 1.1$, $\beta = 2/3$ are scaling hyperparameters. Note that $\beta$ can also be included as part of the mask parameters $\phi$; we follow Louizos et al. (2018) and fix $\beta = 2/3$. Louizos et al. (2018) then apply the reparameterization trick (Kingma & Welling, 2014; Rezende et al., 2014) and make a Monte Carlo approximation to the objective: $\hat{R}(\theta, \phi) = \frac{1}{N} \sum_{i=1}^{N} L(\theta \odot z^{(i)}) + \lambda \sum_{i=1}^{|\theta|} \big(1 - Q(z_i \le 0; \phi_i)\big)$. A closed-form expression is derived for the penalty: $1 - Q(z_i \le 0; \phi_i) = \sigma\big(\log \alpha_i - \beta \log \tfrac{-\gamma}{\zeta}\big)$. At test time, the following estimator is used: $z = \min(1, \max(0, \sigma(\log \alpha)(\zeta - \gamma) + \gamma))$.

3 INFERENCE-TIME PRUNING In this section, we explain how the various techniques in Section 2.2 can be adapted to QRNNs. For the following methods, we assume that a pre-trained model is provided. We denote the weights at QRNN layer $l$ as $W^{(l)}$. In all methods, we tie the indices across $W_z$, $W_f$, and $W_o$. For example, if filter $i$ is selected for pruning at layer $l$, then $W^{(l)}_{\{z,f,o\}} := W^{(l)}_{\{z,f,o\}}[-i, :]$, where $-i$ denotes exclusion of index $i$. This allows the removal of the column $[:, -i]$ in the next layer as well.

Random pruning. We apply random pruning to $W_z$, $W_f$, and $W_o$. That is, we randomly prune filters associated with the same indices across the three weights.

Filter norm. We apply filter norm pruning (Li et al., 2017), with the filter norms of $W_z$ acting as the criteria. We find $W_z$ most helpful, since small filter norms should result in small hidden outputs, which is not necessarily the case for $W_f$ and $W_o$.

Mean activation norm. The hidden output $H = h_1 \oplus \cdots \oplus h_n$ is a natural candidate for collecting mean activation statistics.
Intuitively, if \u2225H:,i\u22251 is small on average, then the ith \ufb01lters for Wz, Wf, Wo are less important. Statistics are collected using a single pass of the entire training set. For inference-time pruning, we store the collected statistics. L0 regularization. Since we are given a pre-trained model and are prohibited from altering the weights, we learn the mask parameters only: \u03c6\u2217= arg min\u03c6 \u02c6 R(\u03b8, \u03c6). We also enforce the sparsity on entire rows of Wz, which corresponds to \u201cgroup sparsity\u201d in Louizos et al. (2018). Speci\ufb01cally, we formulate the regularization on a feature map level instead, with Z as the target: Z(l) := \u0010 diag(z(l))W(l) z \u0011 \u00b7 X = Z(l) \u2299z(l) Z is chosen for the property that the ith feature map for h is zero if Zi is zero for c0 = 0. This approach entails training and storing extra mask parameters for each operating point. However, we \ufb01nd this to be a non-issue for our task, since there are few operating points\u2014three or four at most, out of which we use two for L0 regularization\u2014so the extra storage is negligible. 3.1 WITH SINGLE-RANK UPDATE At speci\ufb01c operating points (e.g., 40% and 80% FLOPs), pre-trained weight updates can be stored and applied at inference-time to recover some perplexity. Suppose W \u2208Rm\u00d7n is a weight matrix in a neural network, and W\u2217\u2208Rm\u00d7n is some known set of weights that results in a lower loss. Clearly, \u2206W := W\u2217\u2212W can be stored and added at inference-time to obtain a better neural network. However, it is obvious that this scheme is wasteful, since W\u2217could have directly substituted W in the \ufb01rst place. Sacri\ufb01cing a negligible amount of storage to recover some perplexity, we propose learning a singlerank weight matrix update \u2206W := uv\u22ba, u \u2208Rm, v \u2208Rn to each weight in the convolution layers. Speci\ufb01cally, the process is as follows, beginning with a pre-trained model: 1. Prune a pre-determined set of \ufb01lters for some operating point (e.g., 40% FLOPs). 2. Initialize the weight updates \u2206Wl = u(l)v(l)\u22ba, u(l) i , v(l) i \u223cp(\u03f5) for each convolution layer l, in our case Normal(0, 0.1). 3. Fixing the existing weights Wl for each convolution layer, train a single-rank update such that W\u2217 l := Wl + \u2206Wl, where W\u2217 l is used as the new weight. 4. Store \u2206Wl for use at inference time on the same operating point. 4 \f4 EXPERIMENTAL SETUP We evaluate the aforementioned pruning techniques for word-level language modeling on Penn Treebank (PTB) (Marcus et al., 1993; as preprocessed by Mikolov et al., 2010) and WikiText-103 (WT103) (Merity et al., 2017). We denote the models for PTB and WT103 as ptb-qrnn and wt103-qrnn, respectively. 4.1 DATASETS AND TASKS For each model, we report word-level perplexity and recall-at-three (R@3), de\ufb01ned as the percentage of top three token\u2013logit outputs that contain the true next token. For example, if {\u201ccat\u201d, \u201cdog\u201d, \u201cbaby\u201d} are the top three predicted tokens for, \u201cI adopted a ,\u201d with \u201cdog\u201d being the ground truth, then the prediction is correct, regardless of the rank of \u201cdog\u201d. Penn Treebank. Built from Wall Street Journal articles, Penn Treebank (PTB) is a small yet popular word-level dataset for language modeling. 
In the standard pre-processed version (Mikolov et al., 2010), the dataset contains roughly 887K, 70K, and 78K training, validation, and testing tokens, respectively. The number of unique tokens is capped at 10,000, yielding a relatively large 4.8% out-of-vocabulary (OOV) rate. WikiText-103. Merity et al. (2017) introduce WikiText-2 and WikiText-103, datasets based on freely available Wikipedia articles. We use only WikiText-103, since WikiText-2 was designed to be similar to Penn Treebank. With 103 million training tokens, WikiText-103 is 103 times as large as PTB. WikiText-103 contains around 217K tokens for validation, and 245K for testing. The number of unique tokens is 267K, resulting in a 0.4% OOV rate, signi\ufb01cantly lower than that of PTB. 4.2 HYPERPARAMETERS AND TRAINING In all of the models, we chose the hyperparameters as suggested in Merity et al.\u2019s codebase.4 For ptb-qrnn, we used a four-layer QRNN with 1550 hidden units for each layer and a 400dimensional embedding. For wt103-qrnn, we used a four-layer QRNN with 2500 hidden units and 400-dimensional embeddings, along with a tied adaptive softmax (Merity et al., 2018b). In both models, the \ufb01rst layer uses a window size of two, while the rest use a windows size of one. Following Merity et al. (2018a), we also adopted the regularization techniques randomized backpropagation through time, embedding dropout, temporal activation regularization (TAR), activation regularization (AR), and variational dropout. We followed the same training process as well, with non-monotonically triggered ASGD (NT-ASGD) as the optimizer. We use the same hyperparameters as Merity et al. (2018a) and Merity et al. (2018b) for each model\u2013dataset pair. During the training of wt103-qrnn, we follow Merity et al. (2018b), using a tied adaptive softmax (Grave et al., 2017; Merity et al., 2018b) layer. At inference time, we use a regular softmax instead, since we require R@3. Pruning. We selected a number of distinct operating points that represent discrete points in the accuracy\u2013ef\ufb01ciency tradeoff space. Based on previous work (Tang et al., 2018), \ufb02oating-point operations (FLOPs) is a good proxy of both energy usage and latency, and so we use FLOPs as a way of selecting our operating points. In L0 regularization, the \u03bb decay strength was selected so that the resulting model corresponds to roughly the FLOPs targets: To achieve 80% and 60% FLOPs for the model on PTB, we used \u03bb = 5.5 \u00d7 10\u22124, 8.5 \u00d7 10\u22124, respectively. To achieve about 70% FLOPs on WT103, we chose \u03bb = 6 \u00d7 10\u22124. We trained the hard concrete mask parameters for roughly 5000 steps using Adam with a learning rate of 5 \u00d7 10\u22123. Since the weight decay penalty is incompatible with the objective, we removed it while training the mask. For mean activation pruning, which requires some training examples to collect statistics, we used the entire training set for ptb-qrnn. Since WikiText-103 is large, we used roughly 10% of the \ufb01rst training examples for collecting statistics on wt103-qrnn. 4https://github.com/salesforce/awd-lstm-lm 5 \fSingle-rank update (SRU). For the PTB model, the single-rank update was trained for 10 epochs using NT-ASGD (Merity et al., 2018a) with a non-monotonic interval of three. For WikiText-103, the update was trained for 2000 steps using Adam with a learning rate of 5 \u00d7 10\u22123. All other hyperparameters were the same as those used during the training stage. 
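The single-rank update of Section 3.1 amounts to training two small vectors per convolution weight while the pruned weight itself stays frozen. Below is a hedged sketch, with each weight viewed as a two-dimensional matrix as in Section 2.1 and the Normal(0, 0.1) initialization from step 2; the module name and wiring are assumptions, not the authors' code:

```python
import torch
import torch.nn as nn

class SingleRankUpdate(nn.Module):
    """Sketch of the single-rank update: W* = W + u v^T with W frozen."""

    def __init__(self, frozen_weight):
        super().__init__()
        m, s = frozen_weight.shape                       # W in R^{m x s}
        self.register_buffer("W", frozen_weight)         # pruned weights stay fixed
        self.u = nn.Parameter(0.1 * torch.randn(m, 1))   # u_i ~ Normal(0, 0.1)
        self.v = nn.Parameter(0.1 * torch.randn(s, 1))   # v_i ~ Normal(0, 0.1)

    def effective_weight(self):
        # Used in place of W at inference time for this operating point.
        return self.W + self.u @ self.v.t()
```

Only u and v (m + s floats per convolution layer) need to be stored for each operating point, which is why the extra storage reported later is on the order of tens of kilobytes.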
4.3 INFRASTRUCTURE DETAILS We trained all of our models on a commodity machine with a Titan V GPU, i7-4790k CPU, and 16 GB of RAM. We used PyTorch 0.4.0 (commit 1807bac) for developing and running our models. We deployed our models on a Raspberry Pi (RPi) 3 Model B (ARM Cortex-A53) running Raspbian Stretch (4.9.41-v7+). Specifically, we copied the trained models over to the RPi and ran them at the same operating points accordingly. We plugged the RPi into a Watts Up Pro meter, a wattmeter that reports power usage at the rate of 1 Hz via a USB cable, which is connected back to the RPi. Evaluating on the test set, we collected power draw statistics on 350 next-word predictions, which were averaged to produce a millijoule per query (mJ/q) estimate. We obtained latency estimates in a similar manner by averaging the milliseconds per query (ms/q). Finally, we subtracted off the idle power usage of the RPi to obtain a better estimate of the actual power for each query. Although our final application is NLMs running on mobile devices such as smartphones and tablets, there are many challenges to directly evaluating on such hardware. The Raspberry Pi is a convenient stand-in, since it uses exactly the same ARM processor architecture as nearly all mobile devices today. Evaluation on the RPi is widely adopted for research on efficient NNs today (Amato et al., 2017; Tang et al., 2018).

5 RESULTS AND DISCUSSION In our results for PTB and WT103, we compare against past state-of-the-art results. In general, we find that QRNNs are strong competitors to LSTM approaches, and achieve state-of-the-art perplexity on WikiText-103 (Merity et al., 2018b).

# | Method | Val. | Test | R@3 | % FLOPs | ms/q | mJ/q | Test (w/SRU) | R@3 (w/SRU)
1 | Skip LSTM | 60.9 | 58.3 | - | - | - | - | - | -
2 | AWD-LSTM | 60.0 | 57.3 | - | - | 223 | 295 | - | -
3 | Orig. | 59.0 | 56.8 | 44.7% | 100% | 224 | 296 | - | -
4 | L0 reg. | 63.0 | 60.7 | 43.6% | 80% | 185 | 227 | 59.3 | 44.1%
5 | L0 reg. | 69.2 | 66.8 | 42.1% | 60% | 142 | 183 | 64.0 | 42.7%
6 | Random | 68.2 | 66.0 | 42.9% | 80% | 182 | 238 | 61.1 | 43.8%
7 | Filter norm | 76.1 | 72.7 | 42.4% | 80% | 182 | 238 | 66.1 | 43.1%
8 | Mean activation | 68.3 | 66.1 | 42.6% | 80% | 182 | 238 | 61.0 | 43.5%
Table 1: Select pruning results on Penn Treebank using a 4-layer QRNN, along with past results drawn from the original papers. Val., Test, and R@3 give model quality; % FLOPs, ms/q, and mJ/q give the footprint; the last two columns report test perplexity and R@3 after applying a single-rank update (SRU). Skip LSTM refers to the four-layer skip LSTM from Melis et al. (2018), and AWD-LSTM is from Merity et al. (2018a). The four-layer QRNN (Merity et al., 2018b) is the same model that we use, but we achieve better perplexity following the same methodology. The best results of each category are bolded.

For PTB, we note that a 20-point increase in perplexity may correspond to only a few points of decrease in R@3, showing that perplexity changes on a much different scale than accuracy does (see Table 1, rows 3 and 7). Furthermore, lower perplexity does not necessarily imply higher accuracy (see rows 5 and 7), confirming that perplexity alone cannot completely determine the recall. In Table 1, we chose 75 as the cutoff point for perplexity; further results are illustrated in Figure 2. For WT103, we observe trends similar to those of PTB: a large drop in perplexity corresponds to a much smaller decrease in R@3 (see Table 2, rows 3 and 4).

# | Method | Val. | Test | R@3 | % FLOPs | sec/q | J/q | Test (w/SRU) | R@3 (w/SRU)
1 | Rae-LSTM | 36.0 | 36.4 | - | - | - | - | - | -
2 | 4-layer QRNN | 32.0 | 33.0 | - | - | 1.24 | 1.48 | - | -
3 | Orig. | 31.9 | 32.8 | 51.5% | 100% | 1.24 | 1.48 | - | -
4 | L0 reg. | 65.8 | 65.4 | 43.1% | 69% | 0.912 | 1.06 | 56.9 | 44.7%
5 | Mean activation | 89.8 | 92.9 | 38.9% | 70% | 0.942 | 1.10 | 55.7 | 46.0%
6 | Filter norm | 85.9 | 88.2 | 41.7% | 70% | 0.942 | 1.10 | 59.2 | 45.4%
7 | Random | 80.9 | 81.4 | 42.9% | 70% | 0.942 | 1.10 | 54.2 | 46.1%
Table 2: Select pruning results on WikiText-103 using a 4-layer QRNN, along with past results drawn directly from the original papers. Columns are as in Table 1, with latency in seconds per query and energy in joules per query. Note that Rae et al. (2018) primarily explore Hebbian softmax; Rae-LSTM refers to their LSTM model without any extensions. Bolded are the best results for each category.

5.1 ACCURACY-EFFICIENCY TRADEOFFS We illustrate the accuracy-efficiency tradeoff space of the PTB and WT103 models in Figure 2. For each model, we tabulate the results at fixed intervals according to the approximated percentage of FLOPs, relative to that of the unpruned model. We omit results that exceed 100 in test perplexity, since they are insufficient for language modeling in practice.

Figure 2: Full experimental results on Penn Treebank and WikiText-103, plotting word-level test perplexity against the percentage of FLOPs in the full model for L0 regularization, filter norm, mean activation, and random pruning. We illustrate the perplexity-efficiency tradeoff space on the test set obtained before applying the single-rank update.

Surprisingly, random filter pruning is a strong baseline, which supports the findings from Mittal et al. (2018). Random pruning not only outperforms filter norm and mean activation pruning, but also regains perplexity more easily with a single-rank update. From Table 1 (rows 6-8) and Table 2 (rows 5-7), we see that random pruning displays equivalent or superior performance to filter norm and mean activation pruning. Interestingly, random pruning achieves the lowest perplexity with a single-rank update (Table 2, rows 4-7), out of all the baseline approaches on WT103. On the other hand, filter norm pruning is relatively weak, doing worse than random pruning in all cases, with or without a single-rank update, suggesting that filter norm pruning has no practical benefit over random pruning. L0 regularization (Louizos et al., 2018) works best, as shown in rows 4-5 in Table 1 and row 4 in Table 2. In general, testing on Penn Treebank and WikiText-103, two very different datasets, gives us consistent results, thus demonstrating the robustness of L0 regularization (Louizos et al., 2018) compared to the other pruning approaches.

Figure 3: Illustration depicting pruning on a truncated subset of the first layer's weights from the PTB model, where each row corresponds to a different technique, and each column a different operating point. From left to right, the operating points are 100%, 80%, 70%, 60%, and 50% FLOPs. For each of the subfigures, we concatenate from top to bottom the first 25 filters of W{z,f,o}, and from left to right the first 75 elements in each filter, yielding square visualizations. All the pruning techniques appear to be dropping weights differently; we note that, for L0 regularization (row 4), the dropped weights remain largely constant throughout.

5.2 POWER USAGE AND LATENCY On the Raspberry Pi, the PTB models are relatively fast, while the WT103 models are high latency, taking over one second (Table 2, rows 2-3 and 8) for the full models.
For type-ahead prediction on a mobile device, the WT103 models are unsuitable as-is; further steps (e.g., more pruning then re-training, vocabulary reduction, quantization) would be required to deploy the models for practical use. Supporting the \ufb01ndings from Tang et al. (2018), the number of FLOPs scales linearly with latency and power: Full experimental results from Figure 2 yield Pearson\u2019s r2 = 0.98 for both latency\u2013 and power\u2013FLOPs measurements, suggesting a strong linear relationship between the number of FLOPs and both latency and power. In terms of extra parameters, a single-rank update costs less than 74 KB for ptb-qrnn, and less than 120 KB for wt103-qrnn. Mean activation statistics requires 20 KB for ptb-qrnn, and 30 KB for wt103-qrnn. Mask parameters for L0 regularization cost about 20 KB on each power level for ptb-qrnn, and 30 KB for wt103-qrnn. Filter norm pruning and random pruning do not require any extra storage. 6" + }, + { + "url": "http://arxiv.org/abs/1711.00333v2", + "title": "An Experimental Analysis of the Power Consumption of Convolutional Neural Networks for Keyword Spotting", + "abstract": "Nearly all previous work on small-footprint keyword spotting with neural\nnetworks quantify model footprint in terms of the number of parameters and\nmultiply operations for a feedforward inference pass. These values are,\nhowever, proxy measures since empirical performance in actual deployments is\ndetermined by many factors. In this paper, we study the power consumption of a\nfamily of convolutional neural networks for keyword spotting on a Raspberry Pi.\nWe find that both proxies are good predictors of energy usage, although the\nnumber of multiplies is more predictive than the number of model parameters. We\nalso confirm that models with the highest accuracies are, unsurprisingly, the\nmost power hungry.", + "authors": "Raphael Tang, Weijie Wang, Zhucheng Tu, Jimmy Lin", + "published": "2017-10-30", + "updated": "2018-09-21", + "primary_cat": "cs.OH", + "cats": [ + "cs.OH" + ], + "main_content": "INTRODUCTION Conversational agents that offer speech-based interfaces are increasingly part of our daily lives, both embodied in mobile phones as well as standalone consumer devices for the home. Prominent examples include Google\u2019s Assistant, Apple\u2019s Siri, Amazon\u2019s Alexa, and Microsoft\u2019s Cortana. Due to model complexity and computational requirements, full speech recognition is typically performed in the cloud: recorded audio is transferred to a datacenter for processing. For both practical and privacy concerns, devices usually perform keyword spotting locally to detect a trigger phrase such as \u201chey Siri\u201d, which provides an explicit acknowledgment that subsequent audio recordings of user utterances will be sent to backend servers and thus may be logged and analyzed. Beyond detecting these triggers, it makes sense to perform recognition of simple commands such as \u201cgo\u201d and \u201cstop\u201d as well as common responses such as \u201cyes\u201d and \u201cno\u201d directly on-device. Together, these represent instances of the keyword spotting task on continuous speech input. Due to power constraints on mobile devices, it is desirable that such keyword spotting models are \u201ccompact\u201d and have a \u201csmall footprint\u201d (which we formally de\ufb01ne below). Over the past several years, neural networks have been successfully applied to the keyword spotting task (see more details in Section 2). 
When discussing the \u201cfootprint\u201d of a model, the literature usually refers to two easily quanti\ufb01able values: the number of model parameters and the number of multiplies for a feedforward inference pass. Model \u201ccompactness\u201d is thus measured in terms of these two quantities, which are of course proxies at best. Ultimately, what matters most is the energy consumption during inference. To our knowledge, previous work in keyword spotting stops short of actual energy measurements. Thus, the primary contribution of this paper is the deployment of a number of convolutional neural networks for keyword spotting on a Raspberry Pi, where we are able to measure the energy usage of various models and relate these measurements back to the proxies used in previous work. We \ufb01nd that the number of multiplies does indeed predict energy usage and model latency, as does the number of parameters (albeit the relationship is weaker). Therefore, in the absence of actual power measurements, these proxies can be helpful in guiding model development, although we advise caution in interpreting both measures. Finally, as expected, we con\ufb01rm that the most accurate models are also the most power hungry, suggesting unavoidable tradeoffs with this family of CNN architectures. 2. RELATED WORK The application of neural networks to keyword spotting, of course, is not new. Chen et al. [1] introduced multi-layer perceptrons as an alternative to HMM-based approaches. Sainath and Parada [2] built on that work and achieved better results using convolutional neural networks (CNNs). They speci\ufb01cally cited reduced model footprints (for low-power applications) as a major motivation in moving to CNNs. Despite more recent work in applying recurrent neural networks to the keyword spotting task [3, 4], we focus on the family of CNN models for several reasons. CNNs today remain the standard baseline for small-footprint keyword spotting\u2014they have a straightforward architecture, are relatively easy to tune, and have implementations in multiple deep learning frameworks. In this paper, we do not propose any new models for arXiv:1711.00333v2 [cs.OH] 21 Sep 2018 \f\u22ee MFCCs input Conv layer #1 Conv layer #2 Output Fig. 1: Convolutional neural network architecture for keyword spotting. type m r n p q Par. Mult. conv 20 8 64 1 3 10.2K 27.7M conv 10 4 64 1 1 164K 95.7M lin 32 1.20M 1.20M dnn 128 4.1K 4.1K softmax nlabels 1.54K 1.54K Total 1.37M 125M Table 1: Structure of the cnn-trad-fpool3 model. keyword spotting. Instead, we conducted a thorough experimental analysis of the power consumption of CNNs proposed by Sainath and Parada [2], using Google\u2019s recently-released Speech Commands Dataset [5] as the benchmark. Canziani et al. [6] previously studied the relationship between power consumption, inference time, and accuracy using deep neural networks for computer vision tasks running on an NVIDIA Jetson TX1 board. This work studies many of these same relationships for keyword spotting, but on a more wimpy device, the Raspberry Pi. 3. EXPERIMENTAL DESIGN All experiments described in this paper were conducted with Honk, our open-source PyTorch reimplementation of public TensorFlow keyword spotting models, which are in turn based on the work of Sainath and Parada [2]. We have con\ufb01rmed that our PyTorch implementation achieves the same accuracy as the original TensorFlow references [7]. All our code is available on GitHub1 for others to build upon. 3.1. 
Model Description For feature extraction, we \ufb01rst apply a band-pass \ufb01lter of 20Hz/4kHz to the input audio to reduce noise. Fortydimensional Mel-Frequency Cepstrum Coef\ufb01cient (MFCC) frames are then constructed and stacked using a 30ms window and a 10ms frame shift. All frames are stacked across a 1s interval to form the two-dimensional input to our models. The basic model architecture for keyword spotting, shown in Figure 1, comprises one or more convolutional layers followed by fully-connected hidden layers, ending with a soft1https://github.com/castorini/honk type m r n p q Par. Mult. conv 21 8 94 2 3 15.8K 42.2M conv 6 4 94 1 1 212K 60.2M lin 32 854K 854K dnn 128 4.1K 4.1K softmax nlabels 1.54K 1.54K Total 1.09M 103M Table 2: Structure of the cnn-tpool2 model. type m r n p q s v Par. Mult. conv 101 8 186 1 1 1 1 150K 4.99M dnn 128 786K 786K dnn 128 16.4K 16.4K softmax nlabels 1.54K 1.54K Total 954K 5.76M Table 3: Structure of the cnn-one-stride1 model. max output. More speci\ufb01cally, an input of MFCCs X \u2208Rt\u00d7f is convolved with weights from the \ufb01rst convolutional layer, W \u2208Rm\u00d7r\u00d7n, where t and f are the lengths in time and frequency, m and r are the width and height of the convolution \ufb01lter, and n is the number of feature maps. If desired, the convolution can stride by s \u00d7 v and max-pool in p \u00d7 q, parameters which also affect the compactness of the model. Recti\ufb01ed linear units are used as the activation function for each non-linear layer. From this basic design, Sainath and Parada [2] proposed a number of speci\ufb01c models. We evaluated the following: \u2022 trad-fpool3: The base model, illustrated in Table 1, comprises two convolution layers followed by a linear layer, a hidden layer, and a \ufb01nal softmax layer. All other variants are derived from this model. \u2022 one-fstride{4,8}: Limiting the number of multiplies and parameters, these are compact variants that stride in frequency and also use only one convolution layer. Sainath and Parada found that one-fstride4 performs better than one-fstride8. \u2022 tpool{2,3}: These are variants that pool in time. Sainath and Parada found that, depending on the task, tpool2 has performance equivalent to or better than trad-fpool3. See Table 2 for the parameter breakdown of tpool2. \u2022 trad-pool2: TensorFlow\u2019s variant of the base model trad-fpool3, with comparable accuracy, but using fewer multiplies. \u2022 one-stride1: TensorFlow\u2019s compact variant of one-fstride4 (detailed in Table 3). It uses a standard striding of 1\u00d71 and thus has more parameters and multiplies, but achieves better accuracy. \fModel Test Accuracy Par. Mult. Latency/q (ms) Energy/q (mJ) Peak Power (W) one-fstride4 70.28% 220K 1.43M 40 28 0.99 one-fstride8 67.90% 337K 1.43M 42 29 1.02 one-stride1 77.06% 954K 5.76M 100 115 1.52 trad-pool2 87.51% 1.38M 98.8M 146 306 2.60 tpool2 91.97% 1.09M 103M 204 384 2.21 tpool3 91.23% 823K 73.7M 159 279 2.16 trad-fpool3 89.43% 1.37M 125M 227 431 2.20 Feature extraction only \u2014 \u2014 \u2014 31 19 0.80 Table 4: Performance of CNN variants on the Raspberry Pi in terms of accuracy, footprint, latency, and power consumption. The compact model is one-stride1 and the full model is trad-pool2. For reference, we also include a condition that only performs feature extraction. Energy calculations and peak power exclude idle power draw of 1.9W. 3.2. 
Model Export and Inference To run model inference on the Raspberry Pi, we exported Honk models written and trained in PyTorch to Caffe2 using ONNX,2 the Open Neural Network Exchange format used for interchanging models between different deep learning frameworks. While PyTorch is good for research and rapidly iterating on model architecture, it was not designed to serve models in deployment settings, unlike Caffe2, which supports running deep learning models on production servers as well as mobile devices and Raspbian. This feature makes Caffe2 especially useful for evaluating keyword spotting models in environments where they will actually be deployed. The practice of building and training models in one framework and running inference in another has been used in production at Facebook. We built Caffe2 from source for Raspbian, with the -mfpu=neon \ufb02ag, to specify the use of NEON (ARM\u2019s Advanced SIMD) optimizations. Our models use 32-bit \ufb02oating point operations and Caffe2 implements convolutions using the im2col approach. Evaluation was performed on a Raspberry Pi 3 Model B (ARM Cortex-A53) running Raspbian Stretch (4.9.41-v7+). On the Raspberry Pi, we run a Caffe2 service which imports an ONNX model and performs inference (as described above). To capture power measurements, the Raspberry Pi is plugged into a Watts Up Pro meter, which has a USB port from which measurements can be programmatically read. Power measurements are taken at a frequency of 1 Hz from an external laptop connected to the meter. The length of each experimental trial is suf\ufb01ciently long (on the order of minutes) that this resolution yields reasonably accurate measurements. During each experimental trial, a script on the Raspberry Pi iterates through all keyword classes for a \ufb01xed model, calling an API served by the laptop to start and stop measurements before and after the Caffe2 service call. Each Caffe2 service call evaluates all test examples for a given keyword class. There were a total of 2,567 test examples for all keyword classes combined. 2http://onnx.ai/ 4. EXPERIMENTAL RESULTS We evaluated the convolutional neural networks described in the previous section using Google\u2019s Speech Commands Dataset [5], which was released in August 2017 under a Creative Commons license.3 The dataset contains 65,000 one-second long utterances of 30 short words by thousands of different people, as well as such background noise samples as pink noise, white noise, and human-made sounds. The Google blog post also references the TensorFlow implementation of Sainath and Parada\u2019s models, which we have ported to PyTorch and validated their correctness [7]. For consistency, we evaluated our PyTorch implementations, but otherwise followed exactly the same experimental setup as Google\u2019s reference. Speci\ufb01cally, our task is to classify a short one-second utterance as \u201cyes\u201d, \u201cno\u201d, \u201cup\u201d, \u201cdown\u201d, \u201cleft\u201d, \u201cright\u201d, \u201con\u201d, \u201coff\u201d, \u201cstop\u201d, \u201cgo\u201d, silence, or unknown. As the focus of this paper is on model performance in terms of energy usage and not on accuracy per se, we refer interested readers to details in Tang and Lin [7]. Our evaluation metric is accuracy, which is simply measured as the fraction of classi\ufb01cation decisions that are correct. Our main results are shown in Table 4. 
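As a rough illustration of the export path described in Section 3.2 above, the snippet below serializes a PyTorch model to ONNX so that a Caffe2 (or other ONNX-compatible) backend can run it on the Raspberry Pi. The tiny stand-in network, its layer sizes, and the file name are placeholders and not one of the Honk models:

```python
import torch
import torch.nn as nn

# Stand-in model over (1, 101, 40) MFCC inputs; the real keyword-spotting
# models are larger, and this placeholder only demonstrates the export step.
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=(20, 8), stride=(2, 2)),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(8 * 41 * 17, 12),   # 12 labels: 10 keywords + unknown + silence
)
model.eval()

dummy_input = torch.zeros(1, 1, 101, 40)   # one one-second utterance of MFCCs

# Trace the graph and write it to an ONNX file that the on-device runtime loads.
torch.onnx.export(model, dummy_input, "kws_model.onnx")
```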
For each model, we show its accuracy on the test set and its model footprint in terms of the number of model parameters and the number of multiplies required for a feedforward inference pass. The next columns show the average query latency for each model on a test instance, the energy per query, and the peak power draw during the experimental run. The energy calculations as well as the peak power figures exclude energy consumed by the Raspberry Pi in its idle state, which has a power draw of 1.9W. For reference, we also report a condition that performs only feature extraction. We found strong evidence of a positive linear relationship between the number of multiply operations used in the models and the energy used per query (R^2 = 0.9641, p = 0.0001) in Figure 2 (left) and also between the number of multiplies and latency per query (R^2 = 0.8863, p = 0.0015) in Figure 2 (right). There is also strong evidence of a positive relationship between the number of parameters and the energy used per query (R^2 = 0.7498, p = 0.0118) in Figure 3 and between the number of parameters and latency per query (R^2 = 0.7237, p = 0.0152), not shown. However, the strength of correlations for the number of parameters is weaker.

3 https://research.googleblog.com/2017/08/
Fig. 2: Energy (left) and latency (right) per query vs. number of multiplies, with the 95% confidence interval.
Fig. 3: Energy per query vs. number of parameters (95% CI).

These results suggest that the number of multiplies, and to a lesser extent the number of parameters, are useful proxy measures when developing small-footprint keyword spotting models that optimize for power consumption. Nevertheless, we suggest that these metrics must still be interpreted with caution. For example, we see that two models with similar numbers of multiplies can still have very different energy profiles: tpool2 and trad-pool2 have comparable numbers of multiplies, but the former is 40% slower and consumes 25% more energy per query. However, the latter has a higher peak power draw. Finally, we plot the relationship between energy usage and model accuracy in Figure 4. The strong correlation observed (R^2 = 0.8919, p = 0.0014) suggests that \u201cyou get what you pay for\u201d, in the sense that at least for this family of models, a designer must trade off accuracy for power consumption, and the relationship is surprisingly linear.

Fig. 4: Energy per query vs. accuracy (95% CI).

5." + }, + { + "url": "http://arxiv.org/abs/1710.10361v2", + "title": "Deep Residual Learning for Small-Footprint Keyword Spotting", + "abstract": "We explore the application of deep residual learning and dilated convolutions\nto the keyword spotting task, using the recently-released Google Speech\nCommands Dataset as our benchmark. Our best residual network (ResNet)\nimplementation significantly outperforms Google's previous convolutional neural\nnetworks in terms of accuracy. By varying model depth and width, we can achieve\ncompact models that also outperform previous small-footprint variants.
To our\nknowledge, we are the first to examine these approaches for keyword spotting,\nand our results establish an open-source state-of-the-art reference to support\nthe development of future speech-based interfaces.", + "authors": "Raphael Tang, Jimmy Lin", + "published": "2017-10-28", + "updated": "2018-09-21", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "main_content": "INTRODUCTION The goal of keyword spotting is to detect a relatively small set of prede\ufb01ned keywords in a stream of user utterances, usually in the context of an intelligent agent on a mobile phone or a consumer \u201csmart home\u201d device. Such a capability complements full automatic speech recognition, which is typically performed in the cloud. Because cloud-based interpretation of speech input requires transferring audio recordings from the user\u2019s device, there are signi\ufb01cant privacy implications. Therefore, on-device keyword spotting has two main uses: First, recognition of common commands such as \u201con\u201d and \u201coff\u201d as well as other frequent words such as \u201cyes\u201d and \u201cno\u201d can be accomplished directly on the user\u2019s device, thereby sidestepping any potential privacy concerns. Second, keyword spotting can be used to detect \u201ccommand triggers\u201d such as \u201chey Siri\u201d, which provide explicit cues for interactions directed at the device. It is additionally desirable that such models have a small footprint (for example, measured in the number of model parameters) so they can be deployed on low power and performance-limited devices. In recent years, neural networks have been shown to provide effective solutions to the small-footprint keyword spotting problem. Research typically focuses on a tradeoff between achieving high detection accuracy and having a small footprint. Compact models are usually variants derived from a full model that sacri\ufb01ce accuracy for a smaller model footprint, often via some form of sparsi\ufb01cation. In this work, we focus on convolutional neural networks (CNNs), a class of models that has been successfully applied to small-footprint keyword spotting in recent years. In particular, we explore the use of residual learning techniques and dilated convolutions. On the recently-released Google Speech Commands Dataset, which provides a common benchmark for keyword spotting, our full residual network model outperforms Google\u2019s previously-best CNN [1] (95.8% vs. 91.7% in accuracy). We can tune the depth and width of our networks to target a desired tradeoff between model footprint and accuracy: one variant is able to achieve accuracy only slightly below Google\u2019s best CNN with a 50\u00d7 reduction in model parameters and an 18\u00d7 reduction in the number of multiplies in a feedforward inference pass. This model far outperforms previous compact CNN variants. 2. RELATED WORK Deep residual networks (ResNets) [2] represent a groundbreaking advance in deep learning that has allowed researchers to successfully train deeper networks. They were \ufb01rst applied to image recognition, where they contributed to a signi\ufb01cant jump in state-of-the-art performance [2]. ResNets have subsequently been applied to speaker identi\ufb01cation [3] and automatic speech recognition [4, 5]. This paper explores the application of deep residual learning techniques to the keyword spotting task. The application of neural networks to keyword spotting, of course, is not new. Chen et al. 
[6] applied a standard multilayer perceptron to achieve signi\ufb01cant improvements over previous HMM-based approaches. Sainath and Parada [1] built on that work and achieved better results using convolutional neural networks (CNNs). They speci\ufb01cally cited reduced model footprints (for low-power applications) as a major motivation in moving to CNNs. Despite more recent work in applying recurrent neural networks to the keyword spotting task [7, 8], we focus on the family of CNN models for several reasons. CNNs today remain the standard baseline for small-footprint keyword arXiv:1710.10361v2 [cs.CL] 21 Sep 2018 \f3x3 conv, 45 3x3 conv, 45 3x3 conv, 45 Avg pool softmax 3x3 conv, 45 MFCCs \u22ee 3x3 conv, 45 3x3 Convolution Batch Normalization ReLU 3x3 Convolution Batch Normalization ReLU Add 3x3 conv, 45 Fig. 1. Our full architecture, with a magni\ufb01ed residual block. spotting\u2014they have a straightforward architecture, are relatively easy to tune, and have implementations in multiple deep learning frameworks (at least TensorFlow [9] and PyTorch [10]). We are not aware of any publicly-available implementations of recurrent architectures to compare against. We believe that residual learning techniques form a yet unexplored direction for the keyword spotting task, and that our use of dilated convolutions achieves the same goal that proponents of recurrent architectures tout, the ability to capture long(er)-range dependencies. 3. MODEL IMPLEMENTATION This section describes our base model and its variants. All code necessary to replicate our experiments has been made open source in our GitHub repository.1 3.1. Feature Extraction and Input Preprocessing For feature extraction, we \ufb01rst apply a band-pass \ufb01lter of 20Hz/4kHz to the input audio to reduce noise. Fortydimensional Mel-Frequency Cepstrum Coef\ufb01cient (MFCC) frames are then constructed and stacked using a 30ms window and a 10ms frame shift. All frames are stacked across a 1s interval to form the two-dimensional input to our models. 3.2. Model Architecture Our architecture is similar to that of He et al. [2], who postulated that it may be easier to learn residuals than to learn the original mapping for deep convolutional neural networks. They found that additional layers in deep networks cannot 1https://github.com/castorini/honk/ Layer 1 Layer 1 + k Layer 1 + 2k \u22ef \u22ef Convolution filter Receptive field Fig. 2. Exponentially increasing dilated convolutions; in this case, k = 1. type m r n dw dh Par. Mult. conv 3 3 45 405 1.52M res \u00d7 6 3 3 45 2\u230ai 3 \u230b 2\u230ai 3 \u230b 219K 824M conv 3 3 45 16 16 18.2K 68.6M bn 45 169K avg-pool 45 45 softmax 12 540 540 Total 238K 894M Table 1. Parameters used for res15, along with the number of parameters and multiplies. be merely \u201ctacked on\u201d to shallower nets. Speci\ufb01cally, He et al. proposed that it may be easier to learn the residual H(x) = F(x) + x instead of the true mapping F(x), since it is empirically dif\ufb01cult to learn the identity mapping for F when the model has unnecessary depth. In residual networks (ResNets), residuals are expressed via connections between layers (see Figure 1), where an input x to layer i is added to the output of some downstream layer i + k, enforcing the residual de\ufb01nition H(x) = F(x) + x. 
Following standard ResNet architectures, our residual block begins with a bias-free convolution layer with weights W \u2208R(m\u00d7r)\u00d7n, where m and r are the width and height, respectively, and n the number of feature maps. After the convolution layer, there are ReLU activation units and\u2014instead of dropout\u2014a batch normalization [11] layer. In addition to using residual blocks, we also use a (dw, dh) convolution dilation [12] to increase the receptive \ufb01eld of the network, which allows us to consider the one-second input in its entirety using a smaller number of layers. To expand our input for the residual blocks, which requires inputs and outputs of equal size throughout, our entire architecture starts with a convolution layer with weights W \u2208R(m\u00d7r)\u00d7n. A separate non-residual convolution layer and batch normalization layer are further appended to the chain of residual blocks, as shown in Figure 1 and Table 1. Our base model, which we refer to as res15, comprises six such residual blocks and n = 45 feature maps (see Figure 1). For dilation, as illustrated in Figure 2, an exponential sizing schedule [12] is used: at layer i, the dilation is dw = \ftype m r n Par. Mult. conv 3 3 19 171 643K avg-pool 4 3 19 6.18K res \u00d7 3 3 3 19 19.5K 5.0M avg-pool 19 19 softmax 12 228 228 Total 19.9K 5.65M Table 2. Parameters used for res8-narrow. type m r n Par. Mult. conv 3 3 45 405 1.80M avg-pool 2 2 45 45K res \u00d7 12 3 3 45 437K 378M avg-pool 45 45 softmax 12 540 540 Total 438K 380M Table 3. Parameters used for res26. dh = 2\u230ai 3 \u230b, resulting in a total receptive \ufb01eld of 125\u00d7125. As is standard in ResNet architectures, all output is zero-padded at each layer and \ufb01nally average-pooled and fed into a fullyconnected softmax layer. Following previous work, we measure the \u201cfootprint\u201d of a model in terms of two quantities: the number of parameters in the model and the number of multiplies that are required for a full feedforward inference pass. Our architecture uses roughly 238K parameters and 894M multiplies (see Table 1 for the exact breakdown). To derive a compact small-footprint model, one simple approach is to reduce the depth of the network. We tried cutting the number of residual blocks in half to three, yielding a model we call res8. Because the footprint of res15 arises from its width as well as its depth, the compact model adds a 4 \u00d7 3 average-pooling layer after the \ufb01rst convolutional layer, reducing the size of the time and frequency dimensions by a factor of four and three, respectively. Since the average pooling layer suf\ufb01ciently reduces the input dimension, we did not use dilated convolutions in this variant. In the opposite direction, we explored the effects of deeper models. We constructed a model with double the number of residual blocks (12) with 26 layers, which we refer to as res26. To make training tractable, we prepend a 2 \u00d7 2 average-pooling layer to the chain of residual blocks. Dilation is also not used, since the receptive \ufb01eld of 25 3\u00d73 convolution \ufb01lters is large enough to cover our input size. In addition to depth, we also varied model width. All models described above used n = 45 feature maps, but we also considered variants with n = 19 feature maps, denoted by -narrow appended to the base model\u2019s name. 
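A hedged sketch of one residual block from Figure 1 follows: two bias-free 3x3 convolutions, each followed by ReLU and batch normalization, with the block input added back so that H(x) = F(x) + x. Mapping the exponential dilation schedule onto a per-block index, and the padding choice that preserves the feature-map size, are simplifying assumptions; the actual res15 applies the schedule per convolutional layer:

```python
import torch
import torch.nn as nn

class DilatedResidualBlock(nn.Module):
    """Illustrative residual block with exponentially increasing dilation."""

    def __init__(self, n_maps=45, block_index=0):
        super().__init__()
        d = 2 ** (block_index // 3)   # assumed per-block form of the 2^(i//3) schedule
        self.conv1 = nn.Conv2d(n_maps, n_maps, 3, padding=d, dilation=d, bias=False)
        self.bn1 = nn.BatchNorm2d(n_maps)
        self.conv2 = nn.Conv2d(n_maps, n_maps, 3, padding=d, dilation=d, bias=False)
        self.bn2 = nn.BatchNorm2d(n_maps)

    def forward(self, x):
        # conv -> ReLU -> batch norm, twice, then the skip connection.
        y = self.bn1(torch.relu(self.conv1(x)))
        y = self.bn2(torch.relu(self.conv2(y)))
        return x + y   # H(x) = F(x) + x
```

Dilation widens each convolution's receptive field without adding parameters, which is how a small stack of 3x3 filters can cover the full one-second MFCC input.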
A detailed breakdown of the footprint of res8-narrow, our best compact model, is shown in Table 2; the same analysis for our deepest and widest model, res26, is shown in Table 3. 4. EVALUATION 4.1. Experimental Setup We evaluated our models using Google\u2019s Speech Commands Dataset [9], which was released in August 2017 under a Creative Commons license.2 The dataset contains 65,000 one-second long utterances of 30 short words by thousands of different people, as well as background noise samples such as pink noise, white noise, and human-made sounds. The blog post announcing the data release also references Google\u2019s TensorFlow implementation of Sainath and Parada\u2019s models, which provide the basis of our comparisons. Following Google\u2019s implementation, our task is to discriminate among 12 classes: \u201cyes,\u201d \u201cno,\u201d \u201cup,\u201d \u201cdown,\u201d \u201cleft,\u201d \u201cright,\u201d \u201con,\u201d \u201coff,\u201d \u201cstop,\u201d \u201cgo\u201d, unknown, or silence. Our experiments followed exactly the same procedure as the TensorFlow reference. The Speech Commands Dataset was split into training, validation, and test sets, with 80% training, 10% validation, and 10% test. This results in roughly 22,000 examples for training and 2,700 each for validation and testing. For consistency across runs, the SHA1-hashed name of the audio \ufb01le from the dataset determines the split. To generate training data, we followed Google\u2019s preprocessing procedure by adding background noise to each sample with a probability of 0.8 at every epoch, where the noise is chosen randomly from the background noises provided in the dataset. Our implementation also performs a random time-shift of Y milliseconds before transforming the audio into MFCCs, where Y \u223cUNIFORM[\u2212100, 100]. In order to accelerate the training process, all preprocessed inputs are cached for reuse across different training epochs. At each epoch, 30% of the cache is evicted. Accuracy is our main metric of quality, which is simply measured as the fraction of classi\ufb01cation decisions that are correct. For each instance, the model outputs its most likely prediction, and is not given the option of \u201cdon\u2019t know\u201d. We also plot receiver operating characteristic (ROC) curves, where the x and y axes show false alarm rate (FAR) and false reject rate (FRR), respectively. For a given sensitivity threshold\u2014de\ufb01ned as the minimum probability at which a class is considered positive during evaluation\u2014FAR and FRR represent the probabilities of obtaining false positives and false negatives, respectively. By sweeping the sensitivity interval [0.0, 1.0], curves for each of the keywords are computed and then averaged vertically to produce the overall curve for a particular model. Curves with less area under the curve (AUC) are better. 4.2. Model Training Mirroring the ResNet paper [2], we used stochastic gradient descent with a momentum of 0.9 and a starting learning rate 2https://research.googleblog.com/2017/08/ launching-speech-commands-dataset.html \fModel Test accuracy Par. Mult. trad-fpool3 90.5% \u00b1 0.297 1.37M 125M tpool2 91.7% \u00b1 0.344 1.09M 103M one-stride1 77.9% \u00b1 0.715 954K 5.76M res15 95.8% \u00b1 0.484 238K 894M res15-narrow 94.0% \u00b1 0.516 42.6K 160M res26 95.2% \u00b1 0.184 438K 380M res26-narrow 93.3% \u00b1 0.377 78.4K 68.5M res8 94.1% \u00b1 0.351 110K 30M res8-narrow 90.1% \u00b1 0.976 19.9K 5.65M Table 4. 
Test accuracy of each model with 95% con\ufb01dence intervals (across \ufb01ve trials), as well as footprint size in terms of number of parameters and multiplies. of 0.1, which is multiplied by 0.1 on plateaus. We also experimented with Nesterov momentum, but we found slightly decreased learning performance in terms of cross entropy loss and test accuracy. We used a mini-batch size of 64 and L2 weight decay of 10\u22125. Our models were trained for a total of 26 epochs, resulting in roughly 9,000 training steps. 4.3. Results Since our own networks are implemented in PyTorch, we used our PyTorch reimplementations of Sainath and Parada\u2019s models as a point of comparison. We have previously con\ufb01rmed that our PyTorch implementation achieves the same accuracy as the original TensorFlow reference [10]. Our ResNet models are compared against three CNN variants proposed by Sainath and Parada: trad-fpool3, which is their base model; tpool2, the most accurate variant of those they explored; and one-stride1, their best compact variant. The accuracies of these models are shown in Table 4, which also shows the 95% con\ufb01dence intervals from \ufb01ve different optimization trials with different random seeds. The table provides the number of model parameters as well as the number of multiplies in an inference pass. We see that tpool2 is indeed the best performing model, slightly better than trad-fpool3. The one-stride1 model substantially reduces the model footprint, but this comes at a steep price in terms of accuracy. The performance of our ResNet variants is also shown in Table 4. Our base res15 model achieves signi\ufb01cantly better accuracy than any of the previous Google CNNs (the con\ufb01dence intervals do not overlap). This model requires fewer parameters, but more multiplies, however. The \u201cnarrow\u201d variant of res15 with fewer feature maps sacri\ufb01ces accuracy, but remains signi\ufb01cantly better than the Google CNNs (although it still uses \u223c30% more multiplies). Looking at our compact res8 architecture, we see that the \u201cwide\u201d version strictly dominates all the Google models\u2014 it achieves signi\ufb01cantly better accuracy with a smaller footprint. The \u201cnarrow\u201d variant reduces the footprint even more, Fig. 3. ROC curves for different models. albeit with a small degradation in performance compared to tpool2, but requires 50\u00d7 fewer model parameters and 18\u00d7 fewer multiplies. Both models are far superior to Google\u2019s compact variant, one-stride1. Turning our attention to the deeper variants, we see that res26 has lower accuracy than res15, suggesting that we have overstepped the network depth for which we can properly optimize model parameters. Comparing the narrow vs. wide variants overall, it appears that width (the number of feature maps) has a larger impact on accuracy than depth. We plot the ROC curves of selected models in Figure 3, comparing the two competitive baselines to res8, res8-narrow, and res15. The remaining models were less interesting and thus omitted for clarity. These curves are consistent with the accuracy results presented in Table 4, and we see that res15 dominates the other models in performance at all operating points. 5." 
+ }, + { + "url": "http://arxiv.org/abs/1710.06554v2", + "title": "Honk: A PyTorch Reimplementation of Convolutional Neural Networks for Keyword Spotting", + "abstract": "We describe Honk, an open-source PyTorch reimplementation of convolutional\nneural networks for keyword spotting that are included as examples in\nTensorFlow. These models are useful for recognizing \"command triggers\" in\nspeech-based interfaces (e.g., \"Hey Siri\"), which serve as explicit cues for\naudio recordings of utterances that are sent to the cloud for full speech\nrecognition. Evaluation on Google's recently released Speech Commands Dataset\nshows that our reimplementation is comparable in accuracy and provides a\nstarting point for future work on the keyword spotting task.", + "authors": "Raphael Tang, Jimmy Lin", + "published": "2017-10-18", + "updated": "2017-11-28", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "main_content": "INTRODUCTION Conversational agents that ofer speech-based interfaces are increasingly part of our daily lives, both embodied in mobile phones as well as standalone consumer devices for the home. Prominent examples include Google\u2019s Assistant, Apple\u2019s Siri, Amazon\u2019s Alexa, and Microsof\u2019s Cortana. Due to model complexity and computational requirements, full speech recognition is typically performed in the cloud: recorded audio is transferred to a datacenter for processing. For both practical and privacy concerns, devices do not continuously stream user speech into the cloud, but rely on a command trigger, e.g., \u201cHey Siri\u201d, that provides an explicit cue signaling input directed at the device. Tese verbal triggers also serve as an acknowledgment that subsequent audio recordings of user utterances will be sent to backend servers and thus may be logged and analyzed. A recent incident where user privacy expectations have not been met involves the Google Home Mini smart speaker, when a reviewer discovered that the device was surreptitiously recording his conversations without his knowledge or consent [3]. Tis incident demonstrates the importance of on-device command triggering, which requires accurate, low-powered keyword spoting capabilities. Sainath and Parada [1] proposed simple convolutional neural network models for keyword spoting and reference implementations are provided in TensorFlow. Tese models, coupled with the release of Google\u2019s Speech Commands Dataset [2], provide a public benchmark for the keyword spoting task. Tis paper describes Honk, a PyTorch reimplementation of these models. We are able to achieve recognition accuracy comparable to the TensorFlow reference implementations. 2 DATA AND TASK A blog post from Google in August 2017 [2] announced the release of the Speech Commands Dataset, along with training and inference code for convolutional neural networks for keyword spoting. Te dataset, released under a Creative Commons license, contains 65,000 one-second long uterances of 30 short words by thousands of diferent people. Additionally, the dataset contains such background noise samples as pink noise, white noise, and human-made sounds. Qite explicitly, the blog writes: Te dataset is designed to let you build basic but useful voice interfaces for applications, with common words like \u201cYes\u201d, \u201cNo\u201d, digits, and directions included. As such, this resource provides a nice benchmark for the keyword spoting task that we are interested in. 
Following Google's demo, our task is to discriminate among 10 of the 30 short words, treating the rest of the 20 unused classes as a single "unknown" group of words. There is also one silence class comprised of soft background noise to avoid misclassifying silence. Therefore, in total, there are 12 output labels: ten keywords, one unknown class, and one silence class. 3 IMPLEMENTATION Honk, named after local fauna, is an open-source PyTorch reimplementation of public TensorFlow keyword spotting models (https://github.com/tensorflow/tensorflow/tree/master/tensorflow/examples/speech_commands), which are in turn based on the work of Sainath and Parada [1]. In some cases, as we note below, details differ between the two. We embarked on a PyTorch reimplementation primarily to maintain consistency with the rest of our research group's projects. However, we feel that PyTorch has an advantage over TensorFlow in terms of readability of the model specifications. Following the TensorFlow reference, our implementation consists of two distinct components: an input preprocessor and the convolutional neural network models themselves. All our code is available on GitHub (https://github.com/castorini/honk) for others to build upon. 3.1 Input Preprocessing Our PyTorch implementation uses the same preprocessing pipeline as the TensorFlow reference (see Figure 1). To augment the dataset and to increase robustness, background noise consisting of white noise, pink noise, and human-made noise is mixed in with some of the input audio, and the sample is randomly time-shifted. For feature extraction, a band-pass filter of 20Hz/4kHz is first applied to reduce the effect of unimportant sounds. Forty-dimensional Mel-Frequency Cepstrum Coefficient (MFCC) frames are then constructed and stacked using a 30-millisecond window size and a 10-millisecond frame shift.
Figure 1: The input preprocessing pipeline (add background noise with probability p, perform a random time-shift, compute MFCCs).
Figure 2: Convolutional neural network architecture for keyword spotting (stacked MFCC input, two convolutional layers, output layer).
For the actual keyword spotting, Sainath and Parada [1] proposed to stack 23 frames to the left and 8 frames to the right at every frame for the input. However, we followed the TensorFlow implementation and use as input the entire one-second stack. That is, our implementation stacks all 30-millisecond windows within the one-second sample, using a 10-millisecond frame shift. 3.2 Model Architecture The basic model architecture for keyword spotting, shown in Figure 2, comprises one or more convolutional layers followed by fully-connected hidden layers, ending with a softmax output. More specifically, an input of MFCCs X in R^(t x f) is convolved with weights from the first convolutional layer, W in R^(m x r x n), where t and f are the lengths in time and frequency, m and r are the width and height of the convolution filter, and n is the number of feature maps. If desired, the convolution can stride by s x v and max-pool in p x q, parameters which also influence the compactness of the model. Rectified linear units are used as the activation function for each non-linear layer.
Table 1: Parameters used in the cnn-trad-pool2 model.
type    | m  | r | n  | p | q | s | v
conv    | 20 | 8 | 64 | 2 | 2 | 1 | 1
conv    | 10 | 4 | 64 | 1 | 1 | 1 | 1
softmax | (nlabels outputs)
Table 2: Parameters used in TensorFlow's variant of cnn-one-fstride4.
type    | m | r | n   | p | q | s | v
conv    | t | 8 | 186 | 1 | 1 | 1 | 1
hidden  | 128
hidden  | 128
softmax | (nlabels outputs)
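To make the Table 1 configuration concrete, here is a minimal PyTorch sketch of a cnn-trad-pool2-style network for a 101x40 stack of MFCC frames and 12 output labels, assuming the parameters listed above. The class name and the shape-inference trick are illustrative and are not taken from the actual Honk codebase.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CnnTradPool2(nn.Module):
    """Sketch of a cnn-trad-pool2-style model following Table 1:
    conv 20x8 with 64 feature maps and 2x2 max-pooling, then conv 10x4
    with 64 feature maps, followed by a softmax over n_labels classes."""
    def __init__(self, n_labels: int = 12, t: int = 101, f: int = 40):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 64, kernel_size=(20, 8), stride=1)
        self.pool1 = nn.MaxPool2d(kernel_size=(2, 2))
        self.conv2 = nn.Conv2d(64, 64, kernel_size=(10, 4), stride=1)
        # Infer the flattened feature size once from a dummy (1, 1, t, f) input.
        with torch.no_grad():
            feat = self.conv2(self.pool1(F.relu(self.conv1(torch.zeros(1, 1, t, f)))))
        self.output = nn.Linear(feat.numel(), n_labels)

    def forward(self, x):  # x: (batch, 1, time, freq) stacked MFCC frames
        x = F.relu(self.conv1(x))
        x = self.pool1(x)
        x = F.relu(self.conv2(x))
        return self.output(x.flatten(1))  # logits; softmax is applied in the loss

model = CnnTradPool2()
logits = model(torch.randn(2, 1, 101, 40))  # e.g., two one-second utterances
```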
Sainath and Parada [1] proposed a model comprised of two convolutional layers (as described above) with a linear layer, a hidden layer, and a softmax layer for their full model, which they referred to as cnn-trad-fpool3. They also devised compact variants of their full model that reduce the number of parameters and multiplies (for a low-power setting). We discuss our reimplementation of the full model and its variants below. 3.2.1 Full Model. Our full model architecture is a faithful reimplementation of the full TensorFlow model, which diverges slightly from the cnn-trad-fpool3 model in the Sainath and Parada paper. The TensorFlow model makes a few changes, selecting p = 2 and q = 2 and dropping the hidden and linear layers in the original paper. Surprisingly, we confirmed that this leads to better accuracy for our task. We refer to this variant as cnn-trad-pool2. For our task, with an input size of 101 x 40 and nlabels = 12, applying this architecture (see details in Table 1) results in 2.77x10^7 + 7.08x10^7 + 3.32x10^5 = 9.88x10^7 multiply operations. 3.2.2 Compact Models. Sainath and Parada [1] proposed a few compact variants of their full model that differ in pooling size and the number of convolutional layers. Sacrificing some accuracy, these variants have fewer parameters and multiply operations, specifically targeting low-powered devices. We reimplemented all the variants but examined only TensorFlow's variant of cnn-one-fstride4 (see Table 2), since it obtains the best accuracy out of the compact variants that restrict the number of multiplies performed. For our task, this architecture requires 5.76x10^6 multiplies, more than an order of magnitude less than the number of multiplies in the full model. Note that only one convolutional layer is used for this model, and the TensorFlow variant performs no increased striding in frequency or time (see Table 2). 4 EXPERIMENTAL RESULTS For the purpose of attaining consistent comparisons, we closely replicate the same setup as in the TensorFlow reference implementation. Specifically, the task is to classify a short one-second utterance as "yes", "no", "up", "down", "left", "right", "on", "off", "stop", "go", silence, or unknown.
Table 3: Test accuracy along with 95% confidence intervals for PyTorch and TensorFlow implementations of the full and compact models.
Model            | Full          | Compact
TensorFlow (TF)  | 87.8% ± 0.435 | 77.4% ± 0.839
PyTorch (PT)     | 87.5% ± 0.340 | 77.9% ± 0.715
PT with momentum | 90.2% ± 0.515 | 78.4% ± 0.631
Following the TensorFlow implementation, we initialized all biases to zero and all weights to samples from a truncated normal distribution with µ = 0 and σ = 0.01. We used stochastic gradient descent with a mini-batch size of 100, learning rates of 0.001 and 0.01 for the full and compact models, respectively. We also ran our entire training/validation/test process using five different random seeds, obtaining a distribution of the model accuracy. For the full model, approximately 30 epochs were required for convergence, while roughly 55 epochs were needed for the compact model. Deviating from the TensorFlow implementation, we also tried training our models using stochastic gradient descent with a momentum of 0.9. The compact model had failed to converge with a learning rate of 0.01, so the rate was decreased to 0.001.
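The accuracy entries in Table 3 are means with 95% confidence intervals over the five independently seeded runs. Below is a small sketch of how such intervals can be computed; the normal-approximation interval (1.96 times the standard error) and the per-seed numbers are assumptions for illustration, since the text does not spell out the exact formula used.

```python
import statistics

def mean_and_ci95(accuracies):
    """Mean test accuracy and a normal-approximation 95% confidence
    interval (1.96 * standard error) over independent training runs."""
    mean = statistics.mean(accuracies)
    stderr = statistics.stdev(accuracies) / len(accuracies) ** 0.5
    return mean, 1.96 * stderr

# Hypothetical per-seed test accuracies for five runs of the full model.
runs = [87.1, 87.9, 87.3, 87.6, 87.6]
mean, ci = mean_and_ci95(runs)
print(f"{mean:.1f}% +/- {ci:.3f}")
```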
As shown in Table 3, we find that training with momentum yields improved results, especially for the full model. The Speech Commands Dataset was split into training, validation, and test sets, with 80% in training, 10% in validation, and 10% in test. This results in roughly 22,000 examples for training and 2,700 each for validation and testing. Mirroring the TensorFlow implementation, for consistency across runs, the hashed name of the audio file from the dataset determines which split the sample belongs to. Specifically, the integer value of the SHA1 hash of the filename is used to place each example into either the training, validation, or test sets. To generate training data via the process described in Section 3.1, Honk adds background noise to each sample with a probability of 0.8 at every epoch, where the noise is chosen randomly from the background noises provided in the Speech Commands Dataset. Our implementation also performs a random time-shift of Y milliseconds before transforming the audio into MFCCs, where Y ~ Uniform[-100, 100]. In order to accelerate the training process, all preprocessed inputs are cached for reuse across different training epochs. At each epoch, 30% of the cache is evicted. We trained all our models using a workstation built from commodity hardware: dual GeForce GTX 1080 graphics cards, an i7-6800K CPU, and 64 GB of RAM. This machine was more than sufficient to train the models in this paper, all of which required less than 2 GB of GPU memory. Our evaluation metric is accuracy, simply computed as the percentage of correct forced choice predictions out of the examples in the test set. Results are shown in Table 3, where we compare the PyTorch and TensorFlow implementations of the full and compact models. The reported accuracy is the mean across all individual runs, accompanied by the 95% confidence interval. We find that the accuracies of the different implementations are comparable, and the confidence intervals overlap. This suggests that we have faithfully reproduced the TensorFlow models. 5 OPEN-SOURCE CODEBASE Beyond the implementation of the convolutional neural network models themselves in our GitHub repository, our codebase includes a number of additional features:
- A utility for recording and building custom speech commands, producing audio samples of the appropriate length and format.
- Test harnesses for training and testing any of a number of models implemented in TensorFlow and those proposed by Sainath and Parada [1].
- A RESTful service that deploys a trained model. The server accepts base64-encoded audio and responds with the predicted label of the utterance. This service can be used for on-device keyword spotting via local loopback.
- A desktop application for demonstrating the keyword spotting models described in this paper. The client uses the REST API above for model inference.
These features allow anyone to replicate the experiments described in this paper, and provide a platform that others can build on for the keyword spotting task." + } ], + "Ferhan Ture": [ + { + "url": "http://arxiv.org/abs/1609.08210v1", + "title": "Learning to Translate for Multilingual Question Answering", + "abstract": "In multilingual question answering, either the question needs to be\ntranslated into the document language, or vice versa. In addition to direction,\nthere are multiple methods to perform the translation, four of which we explore\nin this paper: word-based, 10-best, context-based, and grammar-based.
We build\na feature for each combination of translation direction and method, and train a\nmodel that learns optimal feature weights. On a large forum dataset consisting\nof posts in English, Arabic, and Chinese, our novel learn-to-translate approach\nwas more effective than a strong baseline (p<0.05): translating all text into\nEnglish, then training a classifier based only on English (original or\ntranslated) text.", + "authors": "Ferhan Ture, Elizabeth Boschee", + "published": "2016-09-26", + "updated": "2016-09-26", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "main_content": "Introduction Question answering (QA) is a speci\ufb01c form of the information retrieval (IR) task, where the goal is to \ufb01nd relevant well-formed answers to a posed question. Most QA pipelines consist of three main stages: (a) preprocessing the question and collection, (b) retrieval of candidate answers in the collection, and (c) ranking answers with respect to their relevance to the question and return the top N answers. The types of questions can range from factoid (e.g., \u201cWhat is the capital of France?\u201d) to causal (e.g., \u201cWhy are trees green?\u201d), and opinion questions (e.g., \u201cShould USA lower the drinking age?\u201d). The most common approach to multilingual QA (MLQA) has been to translate all content into its \u2217This work was completed while author was an employee of Raytheon BBN Technologies. most probable English translation via machine translation (MT) systems. This strong baseline, which we refer to as one-best MT (1MT), has been successful in prior work (Hartrumpf et al., 2009; Lin and Kuo, 2010; Shima and Mitamura, 2010). However, recent advances in cross-lingual IR (CLIR) show that one can do better by representing the translation space as a probability distribution (Ture and Lin, 2014). In addition, MT systems perform substantially worse with user-generated text, such as web forums (Van der Wees et al., 2015), which provide extra motivation to consider alternative translation approaches for higher recall. To our knowledge, it has yet to be shown whether these recent advancements in CLIR transfer to MLQA. We introduce a novel answer ranking approach for MLQA (i.e., Learning to Translate or L2T), a model that learns the optimal translation of question and/or candidate answer, based on how well it discriminates between good and bad answers. We achieve this by introducing a set of features that encapsulate lexical and semantic similarities between a question and a candidate answer through various translation strategies (Section 3.1). The model then learns feature weights for each combination of translation direction and method, through a discriminative training process (Section 3.2). Once a model is trained, it can be used for MLQA, by sorting each candidate answer in the collection by model score. Instead of learning a single model to score candidate answers in any language, it might be meaningful to train a separate model that can learn to discriminate between good and bad answers in each language. This can let each model learn feature weights custom to arXiv:1609.08210v1 [cs.CL] 26 Sep 2016 \fthe language, therefore allowing a more \ufb01ne-grained ranking (Section 3.4). We call this alternative approach Learning to Custom Translate (L2CT). 
Experiments on the DARPA Broad Operational Language Technologies (BOLT) IR task1 con\ufb01rm that L2T yields statistically signi\ufb01cant improvements over a strong baseline (p < 0.05), in three out of four experiments. L2CT outperformed the baseline as well, but was not more effective than L2T. 2 Related Work For the last decade or so, research in QA has mostly been driven by annual evaluation campaigns like TREC,2 CLEF,3 and NTCIR.4 Most earlier work relied on either rule-based approaches where a set of rules were manually crafted for each type of question, or IR-like approaches where each pair of question and candidate answer was scored using retrieval functions (e.g., BM25 (Robertson et al., 2004)). On the other hand, training a classi\ufb01er for ranking candidate answers allows the exploitation of various features extracted from the question, candidate answer, and surrounding context (Madnani et al., 2007; Zhang et al., 2007). In fact, an explicit comparison at 2007 TREC con\ufb01rmed the superiority of machine learning-based (ML-based) approaches (F-measure 35.9% vs 38.7%) (Zhang et al., 2007). Learning-torank approaches have also been applied to QA successfully (Agarwal et al., 2012). Previous ML-based approaches have introduced useful features from many aspects of natural language, including lexical (Brill et al., 2001; Attardi et al., 2001), syntactic (Alfonseca et al., 2001; Katz et al., 2005), semantic (Cui et al., 2005; Katz et al., 2005; Alfonseca et al., 2001; Hovy et al., 2001), and discourse features, such as coreference resolution (Morton, 1999), or identifying temporal/spatial references (Saquete et al., 2005; Harabagiu and Bejan, 2005), which are especially useful for \u201cwhy\u201d and \u201chow\u201d questions (Kolomiyets and Moens, 2011). Additionally, semantic role labeling and dependency trees are other forms of semantic analysis used widely in NLP applications (Shen and Lapata, 2007; Cui et al., 2005). 1http://www.darpa.mil/Our_Work/I2O/Programs 2http://trec.nist.gov 3http://www.clef-initiative.eu 4http://research.nii.ac.jp/ntcir/index.html When dealing with multilingual collections, most prior approaches translate all text into English beforehand, then treat the task as monolingual retrieval (previously referred to as 1MT). At recent evaluation campaigns like CLEF and NTCIR,5 almost all teams simply obtained the one-best question translation, treating some online MT system as a black box (Adafre and van Genabith, 2009; Hartrumpf et al., 2009; Martinez-Gonzalez et al., 2009; Lin and Kuo, 2010; Shima and Mitamura, 2010), with few notable exceptions that took term importance (Ren et al., 2010), or semantics (Munoz-Terol et al., 2009) into account. Even for non-factoid MLQA, most prior work does not focus on the translation component (Luo et al., 2013; Chaturvedi et al., 2014). Contributions. Ture and Lin described three methods for translating queries into the collection language in a probabilistic manner, improving document retrieval effectiveness over a one-best translation approach (2014). Extending this idea to MLQA appears as a logical next step, yet most prior work relies solely on the one-best translation of questions or answers (Ko et al., 2010b; Garc\u00b4 \u0131a-Cumbreras et al., 2012; Chaturvedi et al., 2014), or selects the best translation out of few options (Sacaleanu et al., 2008; Mitamura et al., 2006). Mehdad et al. 
reported improvements by including the top ten translations (instead of the single best) and computing a distance-based entailment score with each (2010). While Espla-Gomis et al. argue that using MT as a black box is more convenient (and modular) (2012), there are potential bene\ufb01ts from a closer integration between statistical MT and multilingual retrieval (Nie, 2010). To the best of our knowledge, there is no prior work in the literature, where the optimal query and/or answer translation is learned via machine learning. This is our main contribution, with which we outperform the state of the art. In addition to learning the optimal translation, we learn the optimal subset of the training data for a given task, where the criteria of whether we include a certain data instance is based on either the source language of the sentence, or the language in which the sentence was annotated. Training data selection strategies have not been studied extensively in the 5Most recent MLQA tracks were in 2008 (CLEF) and 2010 (NTCIR). \fQA literature, therefore the effectiveness of our simple language-related criteria can provide useful insights to the community. When there are multiple independent approaches for ranking question-answer pairs, it is required to perform a post-retrieval merge: each approach generates a ranked list of answers, which are then merged into a \ufb01nal ranked list. This type of system combination approach has been applied to various settings in QA research. Merging at the document-level is common in IR systems (e.g., (Tsai et al., 2008)), and has shown to improve multilingual QA performance as well (Garc\u00b4 \u0131a-Cumbreras et al., 2012). Many QA systems combine answers obtained by different variants of the underlying model (e.g., (Brill et al., 2001) for monolingual, (Ko et al., 2010a; Ko et al., 2010b) for multilingual QA). We are not aware, however, of any prior work that has explored the merging of answers that are generated by language-speci\ufb01c ranking models. Although this does not show increased effectiveness in our experiments, we believe that it brings a new perspective to the problem. 3 Approach Our work is focused on a speci\ufb01c stage of the QA pipeline, namely answer ranking: Given a natural-language question q and k candidate answers s1, . . . , sk, we score each answer in terms of its relevance to q. In our case, candidate answers are sentences extracted from all documents retrieved in the previous stage of the pipeline (using Indri (Metzler and Croft, 2005)). Hereafter, sentence and answer might be used interchangeably. While our approach is not language-speci\ufb01c, we assume (for simplicity) that questions are in English, whereas sentences are in either English, Arabic, or Chinese. Non-English answers are translated back into English before returning to user. Our approach is not limited to any question type, factoid or non-factoid. Our main motivation is to provide good QA quality on any multilingual Web collection. This entails \ufb01nding answers to questions where there is no single answer, and for which human agreement is low. We aim to build a system that can successfully retrieve relevant information from open-domain and informal-language content. In this scenario, two assumptions made by many of the prior approaches fail: 1) We can accurately classify questions via template patterns (Chaturvedi et al. 
argue that this does not hold for non-factoid questions (2014)) 2) We can accurately determine the relevance of an answer, based on its automatic translation into English (Wees et al. show how recall decreases when translating user-generated text (2015)) To avoid these assumptions, we opted for a more adaptable approach, in which question-answer relevance is modeled as a function of features, intended to capture the relationship between the question and sentence text. Also, instead of relying solely on a single potentially incorrect English translation, we increase our chances of a hit by translating both the question and the candidate answer, using four different translation methods. Our main features, described throughout this section, are based on lexical similarity computed using these translations. The classi\ufb01er is trained on a large number of questionanswer pairs, each labeled by a human annotator with a binary relevance label.6 3.1 Representation In MLQA, since questions and answers are in different languages, most approaches translate both into an intermediary language (usually English). Due to the error-prone nature of MT, valuable information often gets \u201clost in translation\u201d. These errors are especially noticeable when translating informal text or less-studied languages (Van der Wees et al., 2015). Translation Direction. We perform a two-way translation to better retain the original meaning: in addition to translating each non-English sentence into English, we also translate the English questions into Arabic and Chinese (using multiple translation methods, described below). For each question-answer pair, we have two \u201cviews\u201d: comparing translated question to the original sentence (i.e., collection-language (CL) view); and comparing original question to the translated sentence (i.e., question-language (QL) view). Translation Method. When translating text for retrieval tasks like QA, including a variety of alterna6Annotators score each answer from 1 to 5. We label any score of 3 or higher as relevant. \ftive translations is as important as \ufb01nding the most accurate translation, especially for non-factoid questions, where capturing (potentially multiple) underlying topics is essential. Recent work in crosslanguage IR (CLIR) has shown that incorporating probabilities from the internal representations of an MT system to \u201ctranslate\u201d the question can accomplish this, outperforming standard one-best translation (Ture and Lin, 2014). We hypothesize that these improvements transfer to multilingual QA as well. In addition to translation directions, we explored four translation methods for converting the English question into a probabilistic representation (in Arabic and Chinese). Each method builds a probability distribution for every question word, expressing the translation space in the collection language. More details of \ufb01rst three methods can be found in (Ture and Lin, 2014), while fourth method is a novel query translation method adapted from the neural network translation model described in (Devlin et al., 2014). Word: In MT, a word alignment is a many-tomany mapping between sourceand target-language words, learned without supervision, at the beginning of the training pipeline (Och, 2003). These alignments can be converted into word translation probabilities for CLIR (Darwish and Oard, 2003). 
For example, in an English-Arabic parallel corpus, if an English word appears m times in total and is aligned to a certain Arabic word k times, we assign a probability of k m for this translation. This simple idea has performed greatly in IR for generating a probability distribution for query word translations. Grammar: Probabilities are derived from a synchronous context-free grammar, which is a typical translation model found in MT systems (Ture and Lin, 2014). The grammar contains rules r that follow the form \u03b1|\u03b2|A|\u2113(r), stating that sourcelanguage word \u03b1 can be translated into targetlanguage word \u03b2 with an associated likelihood value \u2113(r) (A represents word alignments). For each rule r that applies to the question, we identify each source word sj. From the word alignment information included in the rule, we can \ufb01nd all target words that sj is aligned to. By processing all the rules to accumulate likelihood values, we construct translation probabilities for each word in the question. 10-best: Statistical MT systems retrieve a ranked list of translations, not a single best. Ture and Lin exploited this to obtain word translation probabilities from the top 10 translations of the question (2014). For each question word w, we can extract which grammar rules were used to produce the translation \u2013 once we have the rules, word alignments allow us to \ufb01nd all target-language words that w translates into. By doing this for each question translation, we construct a probability distribution that de\ufb01nes the translation space of each question word. Context: Neural network-based MT models learn context-dependent word translation probabilities \u2013 the probability of a target word is dependent on the source word it aligns to, as well as a 5-word window of context (Devlin et al., 2014). For example, if the Spanish word \u201cplacer\u201d is aligned to the English word \u201cpleasure\u201d, the model will not only learn from this word-to-word alignment but also consider the source sentence context (e.g., \u201cFue un placer conocerte y tenerte unos meses.\u201d). However, since short questions might lack full sentence context, our model should have the \ufb02exibility to translate under partial or no context. Instead of training the NN-base translation model on full, well-formed sentences, we custom-\ufb01t it for question translation: words in the context window are randomly masked by replacing it with a special \ufb01ller token . This teaches the model how to accurately translate with full, partial context, or no context. For the above example, we generate partial contexts such as \u201cfue un placer y\u201d or \u201c placer conocerte y\u201d. Since there are many possibilities, if the context window is large, we randomly sample a few of the possibilities (e.g., 4 out of 9) per training word. In Figure 1, we display the probabilistic structure produced the grammar-based translation method, when implemented as described above. Each English word in the question is translated into a probabilistic structure, consisting of Chinese words and corresponding probabilities that represent how much weight the method decides to put on that speci\ufb01c word. Similar structures are learned with the other three translation methods. We are not aware of any other MLQA approach that represents the question-answer pair based on their probabilistic translation space. 
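As a rough illustration of the word-based method described above (an English word that is aligned k times to a given target word, out of m occurrences, receives translation probability k/m), here is a sketch that builds per-word translation distributions from alignment links. The function name and the romanized toy tokens are made up, and for simplicity it normalizes by the number of aligned occurrences rather than the word's total corpus frequency as in the text.

```python
from collections import Counter, defaultdict

def word_translation_probs(aligned_pairs):
    """Estimate Pr(target | source) from word-aligned sentence pairs.
    `aligned_pairs` yields (source_word, target_word) links extracted
    from the word alignment of a parallel corpus."""
    counts = defaultdict(Counter)
    for src, tgt in aligned_pairs:
        counts[src][tgt] += 1
    probs = {}
    for src, tgt_counts in counts.items():
        # Simplification: normalize by aligned occurrences of src only.
        total = sum(tgt_counts.values())
        probs[src] = {tgt: k / total for tgt, k in tgt_counts.items()}
    return probs

# Illustrative alignment links; real links come from GIZA++-style alignments.
links = [("child", "er_tong"), ("child", "hai_zi"), ("child", "er_tong"),
         ("labor", "lao_gong"), ("labor", "lao_dong")]
probs = word_translation_probs(links)
# probs["child"] -> {"er_tong": 0.667, "hai_zi": 0.333}
```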
Figure 1: Probabilistic grammar-based translation of the example question. The example question "Tell me about child labor in Africa" is simplified by our preprocessing engine to "child labor africa"; each query term is mapped to a probability distribution over Chinese translations (glossed roughly as child/children with weights 0.32, 0.25, 0.21, 0.15, ...; labor/labor force with 0.36, 0.26, 0.17, 0.13, ...; Africa/South Africa with 0.89, 0.02, 0.02, 0.01, ...).
3.2 Features Given two different translation directions (CL and QL), and four different translation methods (Word, Grammar, 10-best, Context), our strategy is to leverage a machine learning process to determine how helpful each signal is with respect to the end task. For this, we introduced separate question-answer similarity features based on each combination of translation direction and method. Collection-language Features. In order to compute a single real-valued vector to represent the question in the collection language (LexCL), we start with the probabilistic structure representing the question translation (e.g., Figure 1 is one such structure when the translation method is grammar-based). For each word in the collection-language vocabulary, we compute a weight by averaging its probability across the terms in the probabilistic structure: vq_grammar(w) = avg_i Pr(w | q_i)  (1), where w is a non-English word and Pr(w | q_i) is the probability of w in the probability distribution corresponding to the i-th query term. Figure 2 shows the real-valued vector computed based on the probabilistic question translation in Figure 1. The Chinese word translated as "child labor" has a weight of 0.32, 0.36, and 0 in the probability distributions of the three query terms, respectively. Averaging these three values results in the final weight of 0.23 in vq_grammar in Figure 2. Notice that these weights are normalized by construction. Similarly, a candidate answer s in Chinese is represented by normalized word frequencies: vs(w) = freq(w | s) / sum_w' freq(w' | s)  (2). Given the two vectors, we compute the cosine similarity. The same process is repeated for the other three translation methods. The four lexical collection-language similarity features are collectively called LexCL.
Table 1: List of features used in L2T, and how the values are computed from vector representations.
Feature category | Question repr. (vq') | Sentence repr. (vs') | Feature value
LexCL            | vq_word              | vs                   | cosine(vq', vs')
LexCL            | vq_10best            | vs                   | cosine(vq', vs')
LexCL            | vq_context           | vs                   | cosine(vq', vs')
LexCL            | vq_grammar           | vs                   | cosine(vq', vs')
LexQL            | vq                   | vs_1best             | cosine(vq', vs')
Figure 2: Vector representation of the grammar-translated question (vq_grammar, with weights such as 0.30, 0.23, 0.08, 0.09, ... over Chinese words) and the sentence vector (vs, with entries such as 2.0, 1.0, 1.0, 1.0, ...).
Question-language Features. As mentioned before, we also obtain a similarity value by translating the sentence (s_1best) and computing the cosine similarity with the original question (q). vq and vs_1best are computed using Equation 2.
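A small sketch of the LexCL computation just described: Equation (1) averages each collection-language word's probability across the per-term translation distributions, Equation (2) normalizes the candidate answer's term frequencies, and the feature value is their cosine similarity. The helper names and toy vocabulary below are illustrative only.

```python
import math
from collections import Counter

def question_vector(term_distributions):
    """Eq. (1): average each word's probability across the per-term
    translation distributions of the query."""
    vocab = {w for dist in term_distributions for w in dist}
    n = len(term_distributions)
    return {w: sum(dist.get(w, 0.0) for dist in term_distributions) / n
            for w in vocab}

def sentence_vector(tokens):
    """Eq. (2): normalized word frequencies of the candidate answer."""
    counts = Counter(tokens)
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def cosine(u, v):
    dot = sum(u[w] * v[w] for w in u.keys() & v.keys())
    norm = (math.sqrt(sum(x * x for x in u.values()))
            * math.sqrt(sum(x * x for x in v.values())))
    return dot / norm if norm else 0.0

# Illustrative per-term translation distributions for "child labor africa".
vq = question_vector([{"w_childlabor": 0.32, "w_child": 0.25},
                      {"w_childlabor": 0.36, "w_labor": 0.26},
                      {"w_africa": 0.89}])
vs = sentence_vector(["w_childlabor", "w_africa", "w_africa", "w_other"])
lex_cl_feature = cosine(vq, vs)  # one LexCL feature value
```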
Although it is possible to translate the sentence into English using the same four methods, we only used the one-best translation due to the computational cost. Hence, we have only one lexical similarity feature in the QL view (call LexQL). The computation process for the \ufb01ve lexical similarity features is summarized in Table 1. After computation, feature weights are learned via a maximum-entropy model.7 Although not included in the \ufb01gure or table, we also include the same set of features from the sentence preceding the answer (within the corresponding forum post), in order to represent the larger discourse. 3.3 Data Selection In order to train a machine learning model with our novel features, we need positive and negative examples of question-answer pairs (i.e., (q, s)). For this, for each training question, our approach is to hire 7Support vector machines yielded worse results. \fhuman annotators to label sentences retrieved from the non-English collections used in our evaluation. It is possible to label the sentences in the source language (i.e., Arabic or Chinese) or in the question language (i.e., translated into English). In this section, we explore the question of whether it is useful to distinguish between these two independently created labels, and whether this redundancy can be used to improve the machine learning process. We hypothesize two reasons why selecting training data based on language might bene\ufb01t MLQA: i) The translation of non-English candidate answers might lack in quality, so annotators are likely to judge some relevant answers as non-relevant. Hence, training a classi\ufb01er on this data might lead to a tendency to favor English answers. ii) For the question-answer pairs that were annotated in both languages, we can remove noisy (i.e., labeled inconsistently by annotators) instances from the training set. The question of annotation is an unavoidable part of evaluation of MLQA systems, so \ufb01nding the optimal subset for training is a relevant problem. In order to explore further, we generated six subsets with respect to (a) the original language of the answer, or (b) the language of annotation (i.e., based on original text or its English translation): en: Sentences from the English corpus. ar/ch: Sentences from the Arabic / Chinese corpus (regardless of how it was judged). consist: All sentences except those that were judged inconsistently. src+: Sentences judged only in original text, or judged in both consistently. en+: Sentences that are either judged only in English, or judged in both original and English translation consistently. all: All sentences. These subsets were determined based on linguistically motivated heuristics, but choosing the most suitable one (for a given task) is done via machine learning (see Section 4). 3.4 Language-speci\ufb01c Ranking Scoring Arabic sentences with respect to a question is inherently different than scoring English (or Chinese) sentences. The quality of resources, grammar, etc., as well as other internal dynamics might differ greatly across languages. We hypothesize that there is no one-size-\ufb01ts-all model, so the parameters that work best for English retrieval might not be as useful when scoring sentences in Arabic, and/or Chinese. Our proposed solution is to apply a separate classi\ufb01er, custom-tuned to each collection, and retrieve three single-language ranked lists (i.e., in English, Arabic, and Chinese). 
In addition to comparing each custom-tuned, language-speci\ufb01c classi\ufb01er to a single, language-independent one, we also use this idea to propose an approach for MLQA: L2CT(n) Retrieve answers from each language using separate classi\ufb01ers (call these lists English-only, Arabic-only, and Chinese-only), take the best answers from each language, then merge them into a mixed-language set of n answers. We compare this to the standard approach: L2T(n) Retrieve up to n mixed-language answers using a single classi\ufb01er. Four heuristics were explored for merging lists in the L2CT approach.8 Two common approaches are uniform and alternate merging (Savoy, 2004): Uniform: A straightforward merge can be achieved by using the classi\ufb01er scores (i.e., probability of answer relevance, given question) to sort all answers, across all languages, and include the top n in the \ufb01nal list of answers. Classi\ufb01er scores are normalized into the [0,1] range for comparability. Alternate: We alternate between the lists, picking one answer at a time from each, stopping when the limit n has been reached. Since answers are expected in English, there is a natural preference for answers that were originally written English, avoiding noisy text due to translation errors. However, it is also important not to restrict answers entirely to English sources, since that would defeat the purpose of searching in a multilingual collection. We implemented the following methods to account for language preferences: English \ufb01rst: We keep all suf\ufb01ciently-con\ufb01dent (i.e., normalized score above a \ufb01xed threshold) answers from the English-only list \ufb01rst, and start including answers from Arabicand Chinese-only lists only if the limit of n answers has not been reached. 8In addition to these heuristics, the optimal merge could be learned from training data, as a \u201clearning to rank\u201d problem. This is out of the scope of this paper, but we plan to explore the idea in the future. \fWeighted: Similar to Uniform, but we weight the normalized scores before sorting. The optimal weights can be learned by using a grid-search procedure and a cross-validation split. 4 Evaluation In order to perform controlled experiments and gain more insights, we split our evaluation into four separate tasks: three tasks focus on retrieving answers from posts written in a speci\ufb01ed language (English-only, Arabic-only, or Chinese-only) 9, and the last task is not restricted to any language (Mixedlanguage). All experiments were conducted on the DARPA BOLT-IR task. The collection consists of 12.6M Arabic, 7.5M Chinese, and 9.6M English Web forum posts. All runs use a set of 45 nonfactoid (mostly opinion and causal) English questions, from a range of topics. All questions and forum posts were processed with an information extraction (IE) toolkit (Boschee et al., 2005), which performs sentence-splitting, named entity recognition, coreference resolution, parsing, and part-ofspeech tagging. All non-English posts were translated into English (one-best only), and all questions were translated into Arabic and Chinese (probabilistic translation methods from Section 3.1). For all experiments, we used the same state-of-the-art English\u2194Arabic (En-Ar) and English\u2194Chinese (En-Ch) MT systems (Devlin et al., 2014). Models were trained on parallel corpora from NIST OpenMT 2012, in addition to parallel forum data collected as part of the BOLT program (10M En-Ar words; 30M EnCh words). 
Word alignments were learned with GIZA++ (Och and Ney, 2003) (\ufb01ve iterations of IBM Models 1\u20134 and HMM). After all preprocessing, features were computed using the original post and question text, and their translations. Training data were created by having annotators label all sentences of the top 200 documents retrieved by Indri from each collection (for each question). Due to the nature of retrieval tasks, training data usually contains an unbalanced portion of negative examples. Hence, we split the data into balanced subsets (each sharing the same set of positively labeled data) and train multiple classi\ufb01ers, 9Shortened as Eng, Arz, and Cmn, respectively. then take a majority vote when predicting. For testing, we froze the set of candidate answers and applied the trained classi\ufb01er to each questionanswer pair, generating a ranked list of answers for each question. This ranked list was evaluated by average precision (AP).10 Due to the size and redundancy of the collections, we sometimes end up with over 1000 known relevant answers for a question. So it is neither reasonable nor meaningful to compute AP until we reach 100% recall (e.g., 11-point AP) for these cases. Instead, we computed AP-k, by accumulating precision values at every relevant answer until we get k relevant answers.11 In order to provide a single metric for the test set, it is common to report the mean average precision (MAP), which in this case is the average of the AP-k values across all questions. Baseline. As described earlier, the baseline system computes similarity between question text and the one-best translation of the candidate answer (we run the sentence through our state-of-the-art MT system). After translation, we compute similarity via scoring the match between the parse of the question text and the parse of the candidate answer, using our \ufb01nely-tuned IE toolkit [reference removed for anonymization]. This results in three different similarity features: matching the tree node similarity, edge similarity, and full tree similarity. Feature weights are then learned by training this classi\ufb01er discriminatively on the training data described above. This already performs competitively, outperforming the simpler baseline where we compute a single similarity score between question and translated text, and matching the performance of the system by Chaturvedi et al. on the BOLT evaluation (2014). Baseline MAP values are reported on the leftmost column of Table 2. Data effect. In the baseline approach, we do not perform any data selection, and use all available data for training the classi\ufb01er. In order to test our hypothesis that selecting a linguistically-motivated subset of the training data might help, we used 10fold cross-validation to choose the optimal data set 10Many other metrics (e.g., NDCG, R-precision) were explored during BOLT, and results were very similar. 11k was \ufb01xed to 20 in our evaluation, although we veri\ufb01ed that conclusions do not change with varying k. \f(among seven options described in Section 3.3). Results indicate that including English or Arabic sentences when training a classi\ufb01er for Chinese-only QA is a bad idea, since effectiveness increases when restricted to Chinese sentences (lang=ch). On the other hand, for the remaining three tasks, the most effective training data set is annot=en+consist. These selections are consistent across all ten folds, and the difference is statistically signi\ufb01cant for all but Arabic-only. 
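For reference, here is a sketch of the AP-k metric described above and its mean over questions (MAP). It assumes binary relevance labels and normalizes by the number of relevant answers found (capped at k), which is one reasonable reading of the description rather than the authors' exact implementation.

```python
def average_precision_at_k(ranked_labels, k=20):
    """AP-k: accumulate precision at each relevant answer in the ranked
    list until k relevant answers have been seen."""
    precisions, hits = [], 0
    for rank, is_relevant in enumerate(ranked_labels, start=1):
        if is_relevant:
            hits += 1
            precisions.append(hits / rank)
            if hits == k:
                break
    return sum(precisions) / len(precisions) if precisions else 0.0

def mean_average_precision(per_question_rankings, k=20):
    """MAP: mean of the per-question AP-k values."""
    return sum(average_precision_at_k(r, k)
               for r in per_question_rankings) / len(per_question_rankings)

# Illustrative relevance labels for two ranked answer lists (True = relevant).
map_score = mean_average_precision([[True, False, True, True],
                                    [False, True, False, True]])
```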
The second column in Table 2 displays the MAP achieved when data selection is applied before training the baseline model. Feature effect. To measure the impact of our novel features, we trained classifiers using either LexCL, LexQL, or both feature sets (Section 3.2). In these experiments, the data is fixed to the optimal subset found earlier. Results are summarized on the right side of Table 2. Statistically significant improvements over Baseline/Baseline+Data selection are indicated with single/double underlining. For Arabic-only QA, adding LexQL features yields the greatest improvements over the baseline, while the same statement holds for LexCL features for the Chinese-only task. For the English-only and mixed-language tasks, the most significant increase in MAP is observed with all of our probabilistic bilingual features. For all but Arabic-only QA, the MAP is statistically significantly better (p < 0.05) than the baseline; for Chinese-only and mixed-language tasks, it also outperforms baseline plus data selection (p < 0.05). (Note that bilingual features are not expected to help on the English-only task, where the improvements come solely from data selection.) All of this indicates the effectiveness of our probabilistic question translation, as well as our data selection strategy.
Table 2: L2T evaluated using MAP with 10-fold cross-validation for each task. A statistically significant increase over Baseline/Base+Data is shown by single/double underlining (p < 0.05).
Task  | Base  | +Data       | +Feats
Cmn   | 0.416 | 0.425 (ch)  | 0.451 (LexCL)
Arz   | 0.421 | 0.423 (en+) | 0.425 (LexQL)
Eng   | 0.637 | 0.657 (en+) | 0.660 (all)
Mixed | 0.665 | 0.675 (en+) | 0.681 (all)
Understanding the contribution of each of the four LexCL features is also important. To gain insight, we trained a classifier using all LexCL features (using the optimal data subset learned earlier for each task), and then incrementally removed one of the features, and tested on the same task. This controlled experiment revealed that the word translation feature is most useful for Chinese-only QA (i.e., removing it produces the largest drop in MAP, 0.6 points), whereas context translation appears to be most useful (by a slighter margin) in Arabic-only QA. In the former case, the diversity provided by word translation might be useful at increasing recall in retrieving Chinese answers. In retrieving Arabic answers, using context to disambiguate the translation might be useful at increasing precision. This result further emphasizes the importance of a customized translation approach for MLQA. Furthermore, to test the effectiveness of the probabilistic translation approach (Section 3.1), we replaced all LexCL features with a single lexical similarity feature computed from the one-best question translation. This resulted in lower MAP: 0.427 to 0.423 for Arabic-only, and 0.451 to 0.425 for the Chinese-only task (p < 0.01), supporting the hypothesis that probabilistic translation is more effective than the widely-used one-best translation. In fact, almost all gains in Chinese-only QA seem to come from the probabilistic translation. For a robustness test, we let cross-validation select the best combination of (data, feature), mimicking a less controlled, real-world setting. In this case, the best MAP for the Arabic-, Chinese-, English-only, and Mixed-language tasks are 0.403, 0.448, 0.657, and 0.679, respectively.
In all but Arabic-only, these are statistically significantly better (p < 0.05) than not tuning the feature set or training data (i.e., Baseline). This result suggests that our approach can be used for any MLQA task out of the box and provide improvements. Learning to Custom Translate (L2CT). We took the ranked list of answers output by each language-specific model, and merged all of them into a ranked list of mixed-language answers. For the weighted heuristic, we tried three values for the weight. In Table 3, we see that training separate classifiers for each subtask does not bring overall improvements to the end task. Amongst merging strategies, the most effective were weighted (weights for each query learned by performing a grid-search on other queries) and English first; however, both are statistically indistinguishable from the single classifier baseline. In the latter case, the percentage of English answers is highest (88%), which might not be desirable. Depending on the application, the ratio of languages can be adjusted with an appropriate merging method. For instance, the alternate and norm heuristics tend to represent languages almost equally.
Table 3: L2T vs. L2CT for multilingual QA.
Approach              | % (En-Ch-Ar) | MAP
L2T                   | 64-19-16     | 0.681
L2CT, Uniform         | 24-35-41     | 0.548
L2CT, Alternate       | 32-34-34     | 0.574
L2CT, English first   | 88-6-6       | 0.668
L2CT, Weighted (w=2)  | 37-30-34     | 0.599
L2CT, Weighted (w=5)  | 51-24-25     | 0.654
L2CT, Weighted (w=10) | 61-20-19     | 0.669
Even though we get lower MAP in the overall task, Table 2 suggests that it is worthwhile customizing classifiers for each subtask (e.g., the Chinese responses in the ranked list of L2CT are more relevant than those of the single classifier). The question of how to effectively combine the results into a mixed-language list, however, remains an open question." + }, + { + "url": "http://arxiv.org/abs/1606.05029v2", + "title": "No Need to Pay Attention: Simple Recurrent Neural Networks Work! (for Answering \"Simple\" Questions)", + "abstract": "First-order factoid question answering assumes that the question can be\nanswered by a single fact in a knowledge base (KB). While this does not seem\nlike a challenging task, many recent attempts that apply either complex\nlinguistic reasoning or deep neural networks achieve 65%-76% accuracy on\nbenchmark sets. Our approach formulates the task as two machine learning\nproblems: detecting the entities in the question, and classifying the question\nas one of the relation types in the KB. We train a recurrent neural network to\nsolve each problem. On the SimpleQuestions dataset, our approach yields\nsubstantial improvements over previously published results --- even neural\nnetworks based on much more complex architectures. The simplicity of our\napproach also has practical advantages, such as efficiency and modularity, that\nare valuable especially in an industry setting. In fact, we present a\npreliminary analysis of the performance of our model on real queries from\nComcast's X1 entertainment platform with millions of users every day.", + "authors": "Ferhan Ture, Oliver Jojic", + "published": "2016-06-16", + "updated": "2017-07-28", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "main_content": "Introduction First-order factoid question answering (QA) assumes that the question can be answered by a single fact in a knowledge base (KB). For example, "How old is Tom Hanks" is about the [age] of [Tom Hanks]. Also referred to as simple questions by Bordes et al.
(2015), recent attempts that apply either complex linguistic reasoning or attention-based complex neural network architectures achieve up to 76% accuracy on benchmark sets (Golub and He, 2016; Yin et al., 2016). While it is tempting to study QA systems that can handle more complicated questions, it is hard to reach reasonably high precision for unrestricted questions. For more than a decade, successful industry applications of QA have focused on \ufb01rst-order questions. This bears the question: are users even interested in asking questions beyond \ufb01rst-order (or are these use cases more suitable for interactive dialogue)? Based on voice logs from a major entertainment platform with millions of users every day, Comcast X1, we \ufb01nd that most existing use cases of QA fall into the \ufb01rst-order category. Our strategy is to tailor our approach to \ufb01rstorder QA by making strong assumptions about the problem structure. In particular, we assume that the answer to a \ufb01rst-order question is a single property of a single entity in the KB, and decompose the task into two subproblems: (a) detecting entities in the question and (b) classifying the question as one of the relation types in the KB. We simply train a vanilla recurrent neural network (RNN) to solve each subproblem (Elman, 1990). Despite its simplicity, our approach (RNN-QA) achieves the highest reported accuracy on the SimpleQuestions dataset. While recent literature has focused on building more complex neural network architectures with attention mechanisms, attempting to generalize to broader QA, we enforce stricter assumptions on the problem structure, thereby reducing complexity. This also means that our solution is ef\ufb01cient, another critical requirement for real-time QA applications. In fact, we present a performance analysis of RNNQA on Comcast\u2019s X1 entertainment system, used by millions of customers every day. 2 Related work If knowledge is presented in a structured form (e.g., knowledge base (KB)), the standard ap\fproach to QA is to transform the question and knowledge into a compatible form, and perform reasoning to determine which fact in the KB answers a given question. Examples of this approach include pattern-based question analyzers (Buscaldi et al., 2010), combination of syntactic parsing and semantic role labeling (Bilotti et al., 2007, 2010), as well as lambda calculus (Berant et al., 2013) and combinatory categorical grammars (CCG) (Reddy et al., 2014). A downside of these approaches is the reliance on linguistic resources/heuristics, making them languageand/or domain-speci\ufb01c. Even though Reddy et al. (2014) claim that their approach requires less supervision than prior work, it still relies on many Englishspeci\ufb01c heuristics and hand-crafted features. Also, their most accurate model uses a corpus of paraphrases to generalize to linguistic diversity. Linguistic parsers can also be too slow for real-time applications. In contrast, an RNN can detect entities in the question with high accuracy and low latency. The only required resources are word embeddings and a set of questions with entity words tagged. The former can be easily trained for any language/domain in an unsupervised fashion, given a large text corpus without annotations (Mikolov et al., 2013; Pennington et al., 2014). The latter is a relatively simple annotation task that exists for many languages and domains, and it can also be synthetically generated. 
Many researchers have explored similar techniques for general NLP tasks (Collobert et al., 2011), such as named entity recognition (Lu et al., 2015; Hammerton, 2003), sequence labeling (Graves, 2008; Chung et al., 2014), part-of-speech tagging (Huang et al., 2015; Wang et al., 2015), chunking (Huang et al., 2015). Deep learning techniques have been studied extensively for constructing parallel neural networks for modeling a joint probability distribution for question-answer pairs (Hsu et al., 2016; Yang et al., 2014; He et al., 2015; Mueller and Thyagarajan, 2016) and re-ranking answers output by a retrieval engine (Rao et al., 2016; Yang et al., 2016). These more complex approaches might be needed for general-purpose QA and sentence similarity, where one cannot make assumptions about the structure of the input or knowledge. However, as noted in Section 1, \ufb01rst-order factoid questions can be represented by an entity and a relation type, and the answer is usually stored in a structured knowledge base. Dong et al. (2015) similarly assume that the answer to a question is at most two hops away from the target entity. However, they do not propose how to obtain the target entity, since it is provided as part of their dataset. Bordes et al. (2014) take advantage of the KB structure by projecting entities, relations, and subgraphs into the same latent space. In addition to \ufb01nding the target entity, the other key information to \ufb01rst-order QA is the relation type corresponding to the question. Many researchers have shown that classifying the question into one of the prede\ufb01ned types (e.g., based on patterns (Zhang and Lee, 2003) or support vector machines (Buscaldi et al., 2010)) improves QA accuracy. 3 Approach (a) From Question to Structured Query. Our approach relies on a knowledge base, containing a large set of facts, each one representing a binary [subject, relation, object] relationship. Since we assume \ufb01rst-order questions, the answer can be retrieved from a single fact. For instance, \u201cHow old is Sarah Michelle Gellar?\u201d can be answered by the fact: [Sarah Michelle Gellar,bornOn,4/14/1977] The main idea is to dissect a \ufb01rst-order factoid natural-language question by converting it into a structured query: {entity \u201cSarah Michelle Gellar\u201d, relation: bornOn}. The process can be modularized into two machine learning problems, namely entity detection and relation prediction. In the former, the objective is to tag each question word as either entity or not. In the latter, the objective is to classify the question into one of the K relation types. We modeled both using an RNN. We use a standard RNN architecture: Each word in the question passes through an embedding lookup layer E, projecting the one-hot vector into a d-dimensional vector xt. A recurrent layer combines this input representation with the hidden layer representation from the previous word and applies a non-linear transformation to compute the hidden layer representation for the current word. The hidden representation of the \ufb01nal recurrent layer is projected to the output space of k dimensions and normalized into a probability distribution via soft-max. In relation prediction, the question is classi\ufb01ed \finto one of the 1837 classes (i.e., relation types in Freebase). In the entity detection task, each word is classi\ufb01ed as either entity or context (i.e., k = 2). Given a new question, we run the two RNN models to construct the structured query. 
Once every question word is classi\ufb01ed as entity (denoted by E) or context (denoted by C), we can extract entity phrase(s) by grouping consecutive entity words. For example, for question \u201cHow old is Michelle Gellar\u201d, the output of entity detection is [C C C E E], from which we can extract a single entity \u201cMichelle Gellar\u201d. The output of relation prediction is bornOn. The inferred structured query q becomes the following: {entityText: \u201cmichelle gellar\u201d, relation: bornOn} (b) Entity Linking. The textual reference to the entity (entityText in q) needs to be linked to an actual entity node in our KB. In order to achieve that, we build an inverted index Ientity that maps all ngrams of an entity (n \u2208{1, 2, 3}) to the entity\u2019s alias text (e.g., name or title), each with an associated TF-IDF score. We also map the exact text (n = \u221e) to be able to prioritize exact matches. Following our running example, let us demonstrate how we construct Ientity. Let us assume there is a node ei in our KB that refers to the actress \u201cSarah Michelle Gellar\u201d. The alias of this entity node is the name, which has three unigrams (\u201csarah\u201d, \u201cmichelle\u201d, \u201cgellar\u201d), two bigrams (\u201csarah michelle\u201d, \u201cmichelle gellar\u201d) and a single trigram (i.e., the entire name). Each one of these n-grams gets indexed in Ientity with TFIDF weights. Here is how the weights would be computed for unigram \u201csarah\u201d and bigram \u201cmichelle gellar\u201d (\u21d2denotes mapping): Ientity(\u201csarah\u201d) \u21d2{node : ei, score : TF-IDF(\u201csarah\u201d, \u201csarah michelle gellar\u201d)} Ientity(\u201cmichelle gellar\u201d) \u21d2{node : ei, score : TF-IDF(\u201cmichelle gellar\u201d, \u201csarah michelle gellar\u201d)} This is performed for every n-gram (n \u2208 {1, 2, 3, \u221e}) of every entity node in the KB. Assuming there is an entity node, say ej, for the actress \u201cSarah Jessica Parker\u201d, we would end up creating a second mapping from unigram \u201csarah\u201d: Ientity(\u201csarah\u201d) \u21d2{node : ej, score : TF-IDF(\u201csarah\u201d, \u201csarah jessica parker\u201d)} In other words, \u201csarah\u201d would be linked to both ei and ej, with corresponding TF-IDF weights. Once the index Ientity is built, we can link entityText from the structured query (e.g., \u201cmichelle gellar\u201d) to the intended entity in the KB (e.g., ei). Starting with n = \u221e, we iterate over n-grams of entityText and query Ientity, which returns all matching entities in the KB with associated TFIDF relevance scores. For each n-gram, retrieved entities are appended to the candidate set C. We continue this process with decreasing value of n (i.e., n \u2208{\u221e, 3, 2, 1}) Early termination happens if C is non-empty and n is less than or equal to the number of tokens in entityText. The latter criterion is to avoid cases where we \ufb01nd an exact match but there are also partial matches that might be more relevant: For \u201cjurassic park\u201d, for n = \u221e, we get an exact match to the original movie \u201cJurassic Park\u201d. But we would also like to retrieve \u201cJurassic Park II\u201d as a candidate entity, which is only possible if we keep processing until n = 2. (c) Answer Selection. Once we have a list of candidate entities C, we use each candidate node ecand as a starting point to reach candidate answers. 
A graph reachability index Ireach is built for mapping each entity node e to all nodes e\u2032 that are reachable, with the associated path p(e, e\u2032). For the purpose of the current approach, we limit our search to a single hop away, but this index can be easily expanded to support a wider search. Ireach(ei) \u21d2 {node:ei1, text:The Grudge, path:[actedIn]} {node:ei2, text:4/14/1977, path:[bornOn]} {node:ei3, text:F. Prinze, path:[marriedTo]} We use Ireach to retrieve all nodes e\u2032 that are reachable from ecand, where the path from is consistent with the predicted relation r (i.e., r \u2208p(ecand, e\u2032)). These are added to the candidate answer set A. For example, in the example above, node ei2 would have been added to the answer set A, since the path [bornOn] matches the predicted relation in the structured query. After repeating this process for each entity in C, the highest-scored node in A is our best answer to the question. 4 Experimental Setup Data. Evaluation of RNN-QA was carried out on SimpleQuestions, which uses a subset of Freebase containing 17.8M million facts, 4M unique entities, and 7523 relation types. Indexes Ientity and Ireach are built based on this knowledge base. \fSimpleQuestions was built by (Bordes et al., 2014) to serve as a larger and more diverse factoid QA dataset.1 Freebase facts are sampled in a way to ensure a diverse set of questions, then given to human annotators to create questions from, and get labeled with corresponding entity and relation type. There are a total of 1837 unique relation types that appear in SimpleQuestions. Training. We \ufb01xed the embedding layer based on the pre-trained 300-dimensional Google News embedding,2 since the data size is too small for training embeddings. Out-of-vocabulary words were assigned to a random vector (sampled from uniform distribution). Parameters were learned via stochastic gradient descent, using categorical cross-entropy as objective. In order to handle variable-length input, we limit the input to N tokens and prepend a special pad word if input has fewer.3 We tried a variety of con\ufb01gurations for the RNN: four choices for the type of RNN layer (GRU or LSTM, bidirectional or not); depth from 1 to 3; and drop-out ratio from 0 to 0.5, yielding a total of 48 possible con\ufb01gurations. For each possible setting, we trained the model on the training portion and used the validation portion to avoid over-\ufb01tting. After running all 48 experiments, the most optimal setting was selected by micro-averaged F-score of predicted entities (entity detection) or accuracy (relation prediction) on the validation set. We concluded that the optimal model is a 2-layer bidirectional LSTM (BiLSTM2) for entity detection and BiGRU2 for relation prediction. Drop-out was 10% in both cases. 5 Results End-to-End QA. For evaluation, we apply the relation prediction and entity detection models on each test question, yielding a structured query q = {entityText: te, relation: r} (Section 3a). Entity linking gives us a list of candidate entity nodes (Section 3b). For each candidate entity ecand, we can limit our relation choices to the set of unique relation types that some candidate entity ecand is associated with. This helps eliminate the arti\ufb01cial ambiguity due to overlapping rela175910/10845/21687 question-answer pairs for training/validation/test is an order of magnitude larger than comparable datasets. Vocabulary size is 55K as opposed to around 3K for WebQuestions (Berant et al., 2013). 
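Returning to the answer-selection step of Section 3(c), the following toy sketch shows how a one-hop reachability index can be combined with the entity-linking candidates and the predicted relation. The KB facts and identifiers are hypothetical, and the answer score simply reuses the entity-linking score, as described above.

```python
def build_reach_index(kb_facts):
    """kb_facts: [(subject, relation, obj), ...] -> I_reach: subject -> [(obj, relation), ...] (one hop)."""
    reach = {}
    for subj, rel, obj in kb_facts:
        reach.setdefault(subj, []).append((obj, rel))
    return reach

def answer(candidates, predicted_relation, reach):
    """candidates: [(entity_node, score), ...] from entity linking.
    Keep nodes reachable from a candidate via the predicted relation; return the highest-scored one."""
    answers = []
    for node, score in candidates:
        for obj, rel in reach.get(node, []):
            if rel == predicted_relation:
                answers.append((obj, score))
    return max(answers, key=lambda a: a[1])[0] if answers else None

# Toy example with hypothetical KB identifiers:
facts = [("sarah_michelle_gellar", "bornOn", "4/14/1977"),
         ("sarah_michelle_gellar", "actedIn", "the_grudge")]
reach = build_reach_index(facts)
print(answer([("sarah_michelle_gellar", 2.0)], "bornOn", reach))   # -> 4/14/1977
```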
2word2vec.googlecode.com 3Input length (N) was set to 36, the maximum number of tokens across training and validation splits. tion types as well as the spurious ambiguity due to redundancies in a typical knowledge base. Even though there are 1837 relation types in Freebase, the number of relation types that we need to consider per question (on average) drops to 36. The highest-scored answer node is selected by \ufb01nding the highest scored entity node e that has an outward edge of type r (Section 3c). We follow Bordes et al. (2015) in comparing the predicted entity-relation pair to the ground truth. A question is counted as correct if and only if the entity we select (i.e., e) and the relation we predict (i.e, r) match the ground truth. Table 1 summarizes end-to-end experimental results. We use the best models based on validation set accuracy and compare it to three prior approaches: a specialized network architecture that explicitly memorizes facts (Bordes et al., 2015), a network that learns how to convolve sequence of characters in the question (Golub and He, 2016), and a complex network with attention mechanisms to learn most important parts of the question (Yin et al., 2016). Our approach outperforms the state of the art in accuracy (i.e., precision at top 1) by 11.9 points (15.6% relative). Model P@1 Memory Network (2015) 63.9 Char-level CNN (2016) 70.9 Attentive max-pooling (2016) 76.4 RNN-QA (best models) 88.3 naive ED 58.9 naive RP 4.1 naive ED and RP 3.7 Table 1: Top-1 accuracy on test portion of SimpleQuestions. Ablation study on last three rows. Last three rows quantify the impact of each component via an ablation study, in which we replace either entity detection (ED) or relation prediction (RP) models with a naive baseline: (i) we assign the relation that appears most frequently in training data (i.e., bornOn), and/or (ii) we tag the entire question as an entity (and then perform the n-gram entity linking). Results con\ufb01rm that RP is absolutely critical, since both datasets include a diverse and well-balanced set of relation types. When we applied the naive ED baseline, our results drop signi\ufb01cantly, but they are still comparable to prior results. Given that most prior work do not use the network to detect entities, we can \fdeduce that our RNN-based entity detection is the reason our approach performs so well. Error Analysis. In order to better understand the weaknesses of our approach, we performed a blame analysis: Among 2537 errors in the test set, 15% can be blamed on entity detection \u2014 the relation type was correctly predicted, but the detected entity did not match the ground truth. The reverse is true for 48% cases.4 We manually labeled a sample of 50 instances from each blame scenario. When entity detection is to blame, 20% was due to spelling inconsistencies between question and KB, which can be resolved with better text normalization during indexing (e.g., \u201cla kings\u201d refers to \u201cLos Angeles Kings\u201d). We found 16% of the detected entities to be correct, even though it was not the same as the ground truth (e.g., either \u201cNew York\u201d or \u201cNew York City\u201d is correct in \u201cwhat can do in new york?\u201d); 18% are inherently ambiguous and need clari\ufb01cation (e.g., \u201cwhere bin laden got killed?\u201d might mean \u201cOsama\u201d or \u201cSalem\u201d). 
When blame is on relation prediction, we found that the predicted relation is reasonable (albeit different than ground truth) 29% of the time (e.g., \u201cwhat was nikola tesla known for\u201d can be classi\ufb01ed as profession or notable for). RNN-QA in Practice. In addition to matching the state of the art in effectiveness, we also claimed that our simple architecture provides an ef\ufb01cient and modular solution. We demonstrate this by applying our model (without any modi\ufb01cations) to the entertainment domain and deploying it to the Comcast X1 platform serving millions of customers every day. Training data was generated synthetically based on an internal entertainment KB. For evaluation, 295 unique question-answer pairs were randomly sampled from real usage logs of the platform. We can draw two important conclusions from Table 2: First of all, we \ufb01nd that almost all of the user-generated natural-language questions (278/295\u223c95%) are \ufb01rst-order questions, supporting the signi\ufb01cance of \ufb01rst-order QA as a task. Second, we show that even if we simply use an open-sourced deep learning toolkit (keras.io) for implementation and limit the computational resources to 2 CPU cores per thread, RNN-QA answers 75% of questions correctly with very reasonable latency. 4In remaining 37% incorrect answers, both models fail, so the blame is shared. Error Count Correct 220 Incorrect entity 16 Incorrect relation 42 Not \ufb01rst-order question 17 Total Latency 76\u00b116 ms Table 2: Evaluation of RNN-QA on real questions from X platform. 6" + } + ], + "Jimmy Lin": [ + { + "url": "http://arxiv.org/abs/2404.15279v1", + "title": "Jointly Modeling Spatio-Temporal Features of Tactile Signals for Action Classification", + "abstract": "Tactile signals collected by wearable electronics are essential in modeling\nand understanding human behavior. One of the main applications of tactile\nsignals is action classification, especially in healthcare and robotics.\nHowever, existing tactile classification methods fail to capture the spatial\nand temporal features of tactile signals simultaneously, which results in\nsub-optimal performances. In this paper, we design Spatio-Temporal Aware\ntactility Transformer (STAT) to utilize continuous tactile signals for action\nclassification. We propose spatial and temporal embeddings along with a new\ntemporal pretraining task in our model, which aims to enhance the transformer\nin modeling the spatio-temporal features of tactile signals. Specially, the\ndesigned temporal pretraining task is to differentiate the time order of\ntubelet inputs to model the temporal properties explicitly. Experimental\nresults on a public action classification dataset demonstrate that our model\noutperforms state-of-the-art methods in all metrics.", + "authors": "Jimmy Lin, Junkai Li, Jiasi Gao, Weizhi Ma, Yang Liu", + "published": "2024-01-21", + "updated": "2024-01-21", + "primary_cat": "eess.SP", + "cats": [ + "eess.SP", + "cs.AI" + ], + "main_content": "Introduction Similar to visual and acoustic signals, tactile signals are important for modeling and understanding humans. In recent years, various wearable electronics have been designed to collect tactile signals, which are widely used in multiple scenarios, especially in healthcare and robotics (Zhu et al. 2019; Fan et al. 2020; Lou et al. 2020; Okunevich et al. 2021). The collected tactile signals can be utilized for different purposes, and one of their main applications is the action classification task. 
Sundaram et al. (2019) propose to identify hand actions by tactile signals with sensors in gloves. Luo et al. (2021) and Wicaksono et al. (2022) use wearable electronic socks to collect tactile signals for feet action classification. Figure 1 is an example, where the continuous tactile signals are collected by e-textile sensors in socks, and then used to classify the action (e.g., walking, etc.). Tactile signals are spatially and temporally sensitive, hence utilizing their spatio-temporal features is important for action classification. Firstly, tactile signals are spatially sensitive as they are not translation invariant. The same signals in different positions (i.e., collected by various sensors) *Corresponding authors. Copyright \u00a9 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Figure 1: An overview of action classification based on tactile signals collected by wearable electronic socks. indicate distinct actions. For example, the same signals collected by sensors located in different positions should be classified as standing on toes or heels, respectively. Secondly, tactile signals are temporally sensitive as they are collected regularly with high frequency, e.g., in 10HZ (10 data points per second), and the time order of these signals is informative. For example, if ignoring the order of collected signals, two signal sequences collected by the same sensor from distinct actions may be seen as identical actions (i.e., same elements with different orders), which becomes useless in classification. Furthermore, we want to point out that jointly modeling spatial and temporal features is essential for tactile signals in action classifications. We conduct an empirical study on a real-scenario dataset (Luo et al. 2021), and draw the spatial and temporal features of different actions in Figure 2. The heatmap of each action shows the averaged results of all samples, which indicates spatial features. The temporal change of a specific sensor shows the averaged sequence data of all samples collected by this sensor, which indicates temporal features. As shown in Figure 2(a), two actions, stand on toes and lean left, have similar temporal features but different spatial features. However, in Figure 2(b), two actions, upstairs and walk fast, have similar spatial features but different temporal features. These observations verify tactile signals\u2019 spatio-temporal features, and further indicate that only using one of them is inadequate for classification. However, existing tactile methods lack the ability to capture the mentioned temporal and spatial nature of tactile signals simultaneously. On the one hand, most previous tactile-related studies adopt CNN-based methods to model arXiv:2404.15279v1 [eess.SP] 21 Jan 2024 \fFigure 2: Empirical study of actions in a tactile dataset. Heatmaps are the averaged results of all samples collected by sensors in the left foot, and the tactile sensor of Figure 2(a) and 2(b) is located at positions (5,20) and (28,19) of the left foot, respectively. the tactile signal frames and then combine them by concatenating or sequential models, which fail to jointly capture their translation variance and temporal properties (Luo et al. 2021; Sundaram et al. 2019; Gao et al. 2020). On the other hand, various transformer models have been designed to handle different continuous signals. But most of them (Zerveas et al. 2021; Tong et al. 
2022; Amiridi, Darnell, and Jewell 2022) focus on temporal features, which are inadequate to model tactile signals\u2019 spatial nature, especially the translation variant property. In this paper, we design a Spatio-Temporal Aware tactility Transformer (STAT) to utilize tactile signals for action classification, which utilizes their temporal and spatial features simultaneously. We design spatial and temporal embeddings to explicitly model the translation variant and sequential features of tactile signals, respectively. Additionally, we introduce a temporal pretraining task to enhance the modeling of temporal features by distinguishing the time order of signal tubelets. After pretraining the STAT transformer, the embedding of the [CLS] token is utilized for action classification. Experimental results on tactile show that our model outperforms all baseline methods in all metrics, including state-of-the-art multivariate and video classification models. Further analyses verify the effectiveness of the proposed pretraining task and embeddings. To the best of our knowledge, this is the first transformer model designed for tactile signals by jointly modeling spatio-temporal features, which can be applied to various tactile-related scenarios. Related Work Action Classification with Tactile Signals In recent years, various wearable electronics have been designed to model user actions based on tactile signals in different scenarios. Luo et al. (2021) design wearable electronic socks to classify user walking actions. Noh et al. (2021) use tactile signals in healthcare scenarios, which predict the fall risk of users. Robotic studies also point out that modeling tactile signals are important in understanding humans (Kragic et al. 2018; Negre et al. 2018). Despite the importance of tactile signals in various scenarios, we find that previous tactile classification models are unable to capture the spatial and temporal properties of tactile signals simultaneously. Sundaram et al. (2019) use CNN to capture the embedding of each frame and simply concatenate them for action classification. Recent studies enhance this method by adopting a GRU/LSTM model rather than concatenation to model the sequential features (Luo et al. 2021; Okunevich et al. 2021; Gao et al. 2020). Cao et al. (2020) introduces temporal attention operation combined with spatial features in separate phases. However, as CNNs are designed to utilize the translation invariance features, they fail to capture tactile signals\u2019 translation variance. Different from previous studies, we design a new transformer model to jointly capture the spatio-temporal features of tactile signals for action classification. Transformers for Continuous Signals Transformer models (Vaswani et al. 2017) have achieved great success in continuous signal classification tasks, e.g., videos and multivariate continuous signals (Tong et al. 2022; Zerveas et al. 2021; Zhao et al. 2022). We briefly review related transformers here, especially video transformers, as the input shape of videos is similar to tactile signals. Existing transformer models fail to utilize the spatial and temporal features of tactile signals simultaneously. On the one hand, most video transformers use the visual transformer (Dosovitskiy et al. 2021) as a backbone model, and further propose new masking or input strategies (Arnab et al. 2021a; Bertasius, Wang, and Torresani 2021; Yan et al. 2022a; Liu et al. 2022; Yan et al. 2022b). 
Recent models propose to capture the spatio-temporal features of videos, e.g., VideoMAE (Tong et al. 2022), SSTSA (Alfasly et al. 2022). However, as video transformers aim to model the translation invariant of videos, they only use position embeddings in encoding, which fail to model the translation variant spatial property of tactile signals. On the other hand, most transformer methods proposed for multivariate continuous signals focus on modeling their temporal features, while ignoring the spatial relations among different signals (i.e., where the signals are collected), such as TST (Zerveas et al. 2021) and the transformer proposed by Hannan et al (2021). Furthermore, most transformer models rely on the masking and reconstruction pretraining task (Devlin et al. 2019; Bao et al. 2022; Tong et al. 2022), which cannot explicitly capture the temporal/spatial features of continuous signals. Although these transformers are not designed for tactile signals, we will use them to verify the effectiveness of STAT. \fFigure 3: An overview of STAT model. Spatial and temporal embeddings are designed to jointly capture both properties. Approach Problem Statement Our goal is to utilize tactile signals to classify user actions, where the data can be collected by various wearable electronics. The wearable devices often arrange sensors as a matrix, so we define the data matrix collected in a specific time point as a frame. Then, the task is defined as follows. Given a tactile signal tensor X \u2208RC\u00d7T \u00d7H\u00d7W , where C represents the number of wearable devices, T represents the length of signal sequences (i.e., T frames), H and W mean the number of sensors in each column/row (i.e., the shape of frames), respectively. An example of tactile signals is shown in Figure 4(a). Xci,tj,hk,wl represents the value collected by the sensor of device ci in position (hk, wl) at time tj. Each tactile segment X has an activity label y, and the total number of activity types is M. Our target is to accurately classify the given tactile signal X to its label y. Overview We propose a spatio-temporal aware tactility transformer for the action classification task based on tactile signals, which is named STAT. A new pretraining task and two extra embeddings are designed to capture the temporal and spatial features of tactile signals jointly in STAT. Firstly, the designed spatio-temporal aware transformer encoder is introduced. We convert the tactile signal tensor to a tubelet sequence. Besides the widely used tubelet and position embeddings, we propose to add spatial and temporal embeddings to capture each tubelet\u2019s temporal and spatial features, respectively. Then, multi-layer transformer encoders are adopted to calculate the representations of tactile signals. Then, the adopted pretraining tasks are defined. Aside from the common masking and reconstruction loss, we designed a temporal pretraining task to explicitly discriminate the time order of tubelet pairs. Finally, we show how to adopt our model for action classifications. Spatio-Temporal Aware Transformer Encoder We will introduce the designed spatio-temporal aware transformer encoder shown in Figure 3. To simplify the notations, we only show the process for handling tactile tensor collected from one wearable device (i.e., X \u2208RT \u00d7H\u00d7W ), as we can easily expand our model to C-channel transformers to utilize signals collected by C devices. 
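For concreteness, the input just defined can be pictured with a tiny PyTorch sketch; the shapes follow the dataset configuration reported later in the paper (two socks, 45 frames of 32x32 readings, 9 action classes), and the values are random stand-ins.

```python
import torch

C, T, H, W, M = 2, 45, 32, 32, 9   # two devices (socks); 45 frames of 32x32 pressure maps; 9 action classes
X = torch.rand(C, T, H, W)         # X[c, t, h, w]: value collected by the sensor at (h, w) on device c at time t
y = torch.tensor(0)                # the segment's action label, an integer in {0, ..., M - 1}
```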
Tubelet Inputs. As the spatial and temporal dimensions of tactile signals can be redundant, directly feeding the whole data tensor into classification may reduce efficiency. Motivated by previous video transformer models that convert a video clip into tubelets to alleviate spatio-temporal redundancy, we follow these studies and convert the tactile signals into a tubelet sequence (Arnab et al. 2021b; Liu et al. 2021; Fan et al. 2021; Tu et al. 2022). We define a tubelet as Q \in R^{L \times P \times P}, where L represents its sequence length (i.e., the number of frames) and P represents the patch size (i.e., height and width). Figure 4(b) shows some examples of the converted tubelets, and the total number of tubelets for a tactile signal tensor X is N_tube = THW/(LP^2). Figure 4: (a) Visualization of tactile signal X \in R^{T \times H \times W}. (b) Tubelet inputs, where each tubelet Q \in R^{L \times P \times P}. Spatio-Temporal Enhanced Tubelet Embeddings. Most video transformer models adopt tubelet embeddings and position embeddings as the input of transformer encoders (Dosovitskiy et al. 2021; Tong et al. 2022). However, because tactile signals do not have the translation invariance property of images and videos, simply adopting these settings cannot capture the spatial features of tactile signals. Additionally, jointly modeling spatial and temporal features is also essential for distinguishing actions, as shown in Figure 2. Thus, we propose to add spatial embeddings and temporal embeddings for each tubelet to capture the spatio-temporal features of tactile signals jointly. Spatial Embeddings. Each tubelet is collected from a patch of sensors, and the sensors are located in certain positions. We use a spatial embedding e^{spatial}_k to represent where the tubelet signal is collected from, so that the spatial features are encoded to explicitly model translation variance. Tubelets collected by the same batch of sensors get the same spatial embeddings, and the number of spatial embedding types is N_space = HW/P^2. Following the traditional calculation of position embeddings (Vaswani et al. 2017), we utilize the sinusoidal positional encoding table to calculate the spatial embedding e^{spatial}_k, where k represents the spatial position and k \in {1, 2, ..., N_space}. The calculation is defined in Equation (1): e^{spatial}_{(k, 2d)} = \sin(k / 10000^{2d/D}), e^{spatial}_{(k, 2d+1)} = \cos(k / 10000^{2d/D}), (1) where D is the embedding dimension and e^{spatial}_{(k, d)} refers to the d-th dimension of e^{spatial}_k (d \in {0, 1, 2, ..., D}). Through this encoding process, the spatial embeddings provide the transformer encoder with spatial knowledge of the tactile signals, which contributes to modeling the translation-variant features. Temporal Embeddings. For tactile signal tubelets, temporal features are important for distinguishing various actions. We propose temporal embeddings to represent the location of a tubelet in the time sequence, i.e., when the tubelet was collected. Similar to the spatial embeddings, we use the sinusoidal positional encoding of Equation (1) to generate the temporal embedding e^{temporal}_k. The number of temporal embedding types is N_temp = T/L, and tubelets collected in the same frames have the same e^{temporal}_k, where k \in {1, 2, ..., N_temp}. These new embeddings enhance the model with additional information about the spatial and temporal properties of tactile signals jointly, and Figure 3 shows an example.
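A minimal PyTorch sketch of the tubelet partitioning and the sinusoidal spatial/temporal embedding tables described above (Equation (1)) is given below. The reshape-based tubelet extraction is one straightforward way to realize the described partitioning, not necessarily the authors' implementation, and the sizes follow the experimental configuration reported later in the paper (T = 45, H = W = 32, L = 5, P = 4, D = 768).

```python
import torch

def to_tubelets(x, L=5, P=4):
    """Split a (T, H, W) tactile tensor into non-overlapping L x P x P tubelets.
    Returns flattened tubelets plus the temporal and spatial group id of each one."""
    T, H, W = x.shape
    x = x.reshape(T // L, L, H // P, P, W // P, P)      # (T/L, L, H/P, P, W/P, P)
    x = x.permute(0, 2, 4, 1, 3, 5)                     # group by (time block, sensor patch row, sensor patch col)
    tubelets = x.reshape(-1, L * P * P)                 # N_tube = T*H*W / (L*P*P)
    t_ids = torch.arange(T // L).repeat_interleave((H // P) * (W // P))  # temporal group of each tubelet
    s_ids = torch.arange((H // P) * (W // P)).repeat(T // L)             # spatial group (sensor patch) of each tubelet
    return tubelets, t_ids, s_ids

def sinusoidal_table(num_positions, dim):
    """Sinusoidal encoding table as in Equation (1): sin on even dimensions, cos on odd dimensions."""
    pos = torch.arange(num_positions, dtype=torch.float32).unsqueeze(1)
    i = torch.arange(0, dim, 2, dtype=torch.float32)    # i = 2d
    angle = pos / torch.pow(10000.0, i / dim)
    table = torch.zeros(num_positions, dim)
    table[:, 0::2] = torch.sin(angle)
    table[:, 1::2] = torch.cos(angle)
    return table

T, H, W, L, P, D = 45, 32, 32, 5, 4, 768
tubelets, t_ids, s_ids = to_tubelets(torch.rand(T, H, W), L, P)
E_spatial = sinusoidal_table((H // P) * (W // P), D)[s_ids]   # one spatial embedding per tubelet
E_temporal = sinusoidal_table(T // L, D)[t_ids]               # one temporal embedding per tubelet
```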
Then, we aggregated the proposed two embeddings with tubelet and position embeddings to calculate the input matrix Einput of transformer encoders by Equation (2). Ultimately, the aggregation of embeddings allows for the simultaneous embedding of spatial and temporal properties. Einput = Etubelet + Eposition + Espatial + Etemporal (2) As shown in Figure 3, we append a [CLS] token at the beginning of the tubelet sequence, which is often used to represent the whole embedding sequence in transformer models. The tubelet, position, temporal, and spatial embeddings of this token are randomly initialized and optimized during training. Hence, we have Einput = [Einput [CLS], Einput Q1 , ..., Einput QNtube] \u2208R(Ntube+1)\u00d7D. Transformer Encoders We utilize the classical transformer encoder (Vaswani et al. 2017) as the backbone network, whose effectiveness has been verified in various domains and tasks (Devlin et al. 2019; Arnab et al. 2021b; Dosovitskiy et al. 2021). Our transformer encoder takes in Einput defined in the previous subsection. As the transformer encoder often consists of K transformer layers, we note the primary input Einput as E(0), and E(k) = Transformerk(E(k\u22121)), where k \u2208{1, 2, ..., K}. The output of the final layer TransformerK is the encoded representation of input tokens, and E(K) [CLS] is the final embedding of tactile signals. Pretraining Tasks Pretraining has been verified to be an effective technique to enhance transformer models in various scenarios, e.g., BERT for text (Devlin et al. 2019), BEIT for image (Bao et al. 2022), and VideoMAE for video (Tong et al. 2022). To achieve better classification performances, we choose to pretrain our STAT model before applying it to action classifications. We propose to use two pretraining tasks here, as shown in Figure 5. The first one is the masked tubelet reconstruction (MTR) task, which aims to reconstruct the masked input tubelets, which is also used in previous video transformers (Tong et al. 2022). The other one is our designed temporal pretraining task to explicitly model the temporal features of tactile signal tubelets. Although temporal embeddings are helpful in capturing temporal properties, we prefer to add a specific pretraining task due to the importance of temporal features in distinguishing different actions. Figure 5: Illustrations of the adopted two pretraining tasks. \fMasked Tubelet Reconstruction As shown in Figure 5(a), we use a masked auto-encoder to reconstruct the input signals, which is the MTR task in previous studies (Tong et al. 2022; Arnab et al. 2021a). MTR randomly masks tubelets from videos and reconstructs them in pretraining, and its loss function is defined as follows: LMTR = 1 |M| X t\u2208M |Vt \u2212\u02c6 Vt|2 (3) Where M is the set of masked tubelets\u2019 indexes, V is the input video, and \u02c6 V is the reconstructed one. Specially, we adopt spatial-based random masking instead of randomly masking all tubelets. In this strategy, we randomly select sensor groups from the Nspace types for masking. All signals collected by the chosen masking sensors (i.e., tubelets with the same spatial embeddings) will be masked. The motivation is that this masking strategy will contribute to better utilizing the spatial features among sensors. For the mask ratio, we leave it as a hyper-parameter study. 
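The embedding aggregation of Equation (2) and the spatial-based masking used for the MTR objective of Equation (3) can be sketched as follows. The tensors are random stand-ins for the actual tubelet, position, spatial, and temporal embeddings, and the masking ratio is illustrative (the paper treats it as a hyper-parameter).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

N_tube, D, N_space = 576, 768, 64
E_tubelet  = torch.randn(N_tube, D)     # linear projection of each flattened tubelet (stand-in values)
E_position = torch.randn(N_tube, D)     # position embedding of the token index
E_spatial  = torch.randn(N_tube, D)     # shared by tubelets from the same sensor patch
E_temporal = torch.randn(N_tube, D)     # shared by tubelets from the same frame group

# Equation (2): element-wise sum of the four embeddings, then prepend a learnable [CLS] token.
E_input = E_tubelet + E_position + E_spatial + E_temporal
cls = nn.Parameter(torch.zeros(1, D))
E_input = torch.cat([cls, E_input], dim=0)                     # (N_tube + 1, D)

# Spatial-based random masking for MTR: mask whole sensor patches rather than individual tubelets.
s_ids = torch.arange(N_space).repeat(N_tube // N_space)        # spatial group of each tubelet
mask_ratio = 0.5                                               # illustrative; tuned in the experiments
masked_groups = torch.randperm(N_space)[: int(mask_ratio * N_space)]
mask = torch.isin(s_ids, masked_groups)                        # True for every tubelet in a masked patch

# Equation (3): mean squared reconstruction error over masked tubelets only.
L, P = 5, 4
target = torch.randn(N_tube, L, P, P)                          # original tubelet signals (stand-in)
recon = torch.randn_like(target)                               # decoder output (stand-in)
loss_mtr = F.mse_loss(recon[mask], target[mask])
```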
Temporal Pretraining. Our self-supervised temporal pretraining task enhances the transformer encoders by training them to distinguish the time order of two randomly selected tubelets, so that temporal features are maintained in the model, as shown in Figure 5(b). First, two tubelets Q_i and Q_j are randomly selected from the whole set. Note that Q_i and Q_j must be collected at different times, so that their temporal embeddings differ (i.e., e^{temporal}_{Q_i} \neq e^{temporal}_{Q_j}). Then, we use the encoded embeddings E^{(K)}_{Q_i} and E^{(K)}_{Q_j} of tubelets Q_i and Q_j to identify their time order. If tubelet Q_i is collected earlier than Q_j, the identification result should be y^{temp}_{i,j} = 1, otherwise 0. We choose a simple but effective way to optimize this task: we concatenate the two embeddings E^{(K)}_{Q_i} and E^{(K)}_{Q_j}, and use a linear layer with a sigmoid activation function to predict the time order \hat{y}^{temp}_{i,j}. A binary cross-entropy loss is utilized to optimize this pretraining task. Moreover, for each tactile signal tensor, randomly selecting only one pair of tubelets for pretraining is not enough, so N_comp tubelet pairs are randomly selected and used in pretraining, i.e., (Q_{i_1}, Q_{j_1}), ..., (Q_{i_{N_comp}}, Q_{j_{N_comp}}). Finally, the loss function is formally defined in Equation (4): \hat{y}^{temp}_{i_n, j_n} = \sigma(W_{frame}(E^{(K)}_{Q_{i_n}} \oplus E^{(K)}_{Q_{j_n}})^\top), L_{temp} = -(y^{temp}_{i_n, j_n} \log \hat{y}^{temp}_{i_n, j_n} + (1 - y^{temp}_{i_n, j_n}) \log(1 - \hat{y}^{temp}_{i_n, j_n})), (4) where \oplus denotes vector concatenation, \sigma is the sigmoid activation function, n \in {1, 2, ..., N_comp}, and W_{frame} \in R^{1 \times 2D}. To utilize these two tasks in pretraining simultaneously, we add an extra constraint that only unmasked tubelets are randomly selected, so the two objectives can be optimized together. Specifically, we aggregate the MTR loss with our temporal loss through a weight coefficient \beta, which is a hyper-parameter. The final pretraining loss is defined as follows: L_{pretrain} = L_{MTR} + \beta L_{temp}. (5) Table 1: Statistics of each type of action. Downstairs 4,942; Jump 3,090; Lean left 5,047; Lean right 5,011; Stand 5,024; Stand toes 3,978; Upstairs 5,025; Walk 6,078; Walk fast 5,360. With the spatial-based random mask strategy in the MTR task and the designed temporal pretraining task, we enhance the representation ability of the transformer encoders to better capture the spatial and temporal properties of tactile signals jointly. Similar to the pretraining of video transformers, we only use the training set of tactile signals for pretraining, as large-scale open tactile datasets are lacking. Fine-Tuning for Action Classification. After introducing our STAT model and pretraining tasks, we show how to train STAT for action classification. We follow the approach of other transformer models by using the embedding of the [CLS] token to represent the entire signal sequence. First, we take the embedding of the [CLS] token from the last transformer layer (i.e., E^{(K)}_{[CLS]}), which represents the whole input signal. Then, we add a linear layer on top of this embedding to classify it into action types (shown in Figure 1). The loss function is: \hat{y}_i = \delta(W_c(E^{(K)}_{[CLS]})^\top + b_c), L = CrossEntropy(\hat{y}_i, y_i), (6) where \delta is the softmax activation function, W_c \in R^{M \times 2D}, b_c \in R^{1 \times M}, and y_i is a one-hot vector in which only the index of the true label is 1.
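Putting the pretraining and fine-tuning objectives together, here is a hedged PyTorch sketch of Equations (4)-(6): the pairwise temporal-order head, the combined pretraining loss, and the [CLS]-based action classifier. It is a simplified stand-alone illustration, not the released implementation; the BCE-with-logits call folds the sigmoid of Equation (4) into the loss, the classifier maps the D-dimensional [CLS] embedding to M classes, and N_comp = 30 and beta = 1 follow the values the paper later reports as performing best.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

N_tube, D, M, N_comp, beta = 576, 768, 9, 30, 1.0
E = torch.randn(N_tube + 1, D)                     # encoder outputs E^(K); row 0 is the [CLS] token
t_ids = torch.arange(9).repeat_interleave(64)      # temporal group of each tubelet (as in the earlier sketch)

# Temporal pretraining (Equation 4): predict which tubelet of a pair was collected earlier.
order_head = nn.Linear(2 * D, 1)                   # W_frame
loss_temp = 0.0
for _ in range(N_comp):
    i, j = torch.randint(0, N_tube, (2,))
    while t_ids[i] == t_ids[j]:                    # the pair must come from different time groups
        i, j = torch.randint(0, N_tube, (2,))
    pair = torch.cat([E[1 + i], E[1 + j]])         # concatenate the two tubelet embeddings
    label = torch.tensor([1.0 if t_ids[i] < t_ids[j] else 0.0])
    loss_temp = loss_temp + F.binary_cross_entropy_with_logits(order_head(pair), label)
loss_temp = loss_temp / N_comp

# Equation (5): combined pretraining objective (loss_mtr as in the previous sketch).
loss_mtr = torch.tensor(0.0)                       # stand-in value
loss_pretrain = loss_mtr + beta * loss_temp

# Fine-tuning (Equation 6): classify the [CLS] embedding into one of M actions.
classifier = nn.Linear(D, M)                       # W_c, b_c
logits = classifier(E[0])
loss_cls = F.cross_entropy(logits.unsqueeze(0), torch.tensor([3]))   # 3 = example ground-truth class index
```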
Experiments. Experimental Settings. Dataset. As tactile action classification is a promising new application scenario that is still under development, there is only one large-scale open dataset to date. Our experiments are therefore conducted on this public tactile signal dataset (http://senstextile.csail.mit.edu/), which is collected from individuals performing specific actions while wearing two electronic socks. The dataset consists of tactile signals with 9 labeled actions, namely walking, leaning on the left foot, leaning on the right foot, climbing downstairs, climbing upstairs, jumping, standing on toes, fast walking, and standing upright. The statistics are shown in Table 1. T, H, and W are set to 45, 32, and 32, respectively. As the sampling frequency is 15 Hz, each piece of data is collected over 3 seconds. Following the providers' settings, 500 and 1,000 samples of each action are used in validation and testing, respectively, and the remaining samples are used in training (each action type is sampled to 4,000 samples). Only the training set is used for model pretraining to avoid data leakage. Table 2: Overall performances of all models (mean ± standard deviation). CNN&GRU (Luo et al. 2021): ACC@1 0.8794±0.0280, ACC@3 0.9497±0.0183, Macro-F1 0.8743±0.0319. TST (Zerveas et al. 2021): ACC@1 0.8701±0.0252, ACC@3 0.9637±0.0147, Macro-F1 0.8660±0.0272. VideoMAE (Tong et al. 2022): ACC@1 0.7705±0.0906, ACC@3 0.9287±0.0177, Macro-F1 0.7521±0.1027. STAT w/o pretraining: ACC@1 0.8050±0.0549, ACC@3 0.9528±0.0225, Macro-F1 0.7946±0.0652. STAT: ACC@1 0.9033±0.0098, ACC@3 0.9830±0.0081, Macro-F1 0.9015±0.0104. Figure 6: Confusion matrices of all models on the test set (rows: true labels; columns: predicted labels over the nine action classes); panels (a) CNN&GRU, (b) TST, (c) VideoMAE, (d) STAT. Table 3: Summarization of tuned hyper-parameters. #Comparison pairs N_comp: 10, 20, 30, 40, 50. Loss weight \beta: 0.5, 0.75, 1, 1.5, 2, 2.5. Masking ratio: 0.1 to 0.9 with step length 0.1. Adam learning rate: 1e-3, 5e-3, 1e-2. Transformer layers K: 3, 6, 9, 12. Metrics. We use accuracy and macro-F1 as evaluation metrics.
As there are multiple classes, we report both Top-1 & Top-3 accuracy as in previous studies (Luo et al. 2021). Besides, we add Macro-F1 to show the comprehensive performance on the imbalanced dataset of all models. Baselines To demonstrate the effectiveness of our model, we use several state-of-the-art baselines: \u2022 CNN&GRU (Luo et al. 2021): This method adopts convolution and recurrent networks for action classification with tactile signals; \u2022 TST (Zerveas et al. 2021): TST is a state-of-the-art transformer-based model for continuous multivariate signal classification with pretraining; \u2022 VideoMAE (Tong et al. 2022): This is a state-of-the-art video classification model with masked auto-encoders. Implementation Details We tune hyper-parameters as shown in Table 3. In addition, the tubelet parameters L and P are set to 5 and 4, while the pretraining and fine-tuning epoch is set to 60. The embedding dimension D is set to 768, in which batch size is 64 and weight decay is 1e-4. For baselines, we employ their public implementations and tune them with hyperparameters suggested by their authors. All experiments are implemented by Pytorch 1.7 and executed on 4 Tesla V100 or GeForce RTX 3090 GPUs. Note that only the training data is used for the pretraining of TST, VideoMAE, and STAT to avoid data leakage. Experiments are repeated 5 times with different random seeds. Besides, the total training time of STAT is similar to VideoMAE (10 hours). The code is available at https://github.com/Aressfull/sock classification. Overall Performances Experimental results of our STAT and baselines are reported in Table 2. TST, VideoMAE, and STAT models are pretrained with the training set, and STAT w/o pretraining is directly trained for the classification task. Firstly, our pretrained STAT outperforms all baseline models in all metrics, showing that jointly modeling spatial and temporal features contributes to better action classification results. STAT achieves 2.7%, 2.0%, and 3.1% improvements than the best baseline in ACC@1, ACC@3, and Macro-F1, respectively. Secondly, STAT without pretraining performs worse than most baselines, showing that our pretraining provides significant improvements for STAT in the action classification task. Thirdly, for the baseline models, the widely used tactile CNN&GRU model achieves comparable results as TST, showing that modeling spatial features and temporal features are both important in action classification. However, VideoMAE model performs the worst, which indicates that simply reusing the video transformer will get worse performance. The reason should be videoMAE cannot \f# TE SE TPT ACC@1 ACC@3 Macro-F1 1 \u2713 0.8764 0.9678 0.8715 2 \u2713 0.8417 0.9506 0.8337 3 \u2713 \u2713 0.8947 0.9854 0.8957 4 \u2713 0.8299 0.9507 0.8247 5 \u2713 \u2713 \u2713 0.9033 0.9830 0.9015 Table 4: Experimental results of various ablation strategies. TE: Temporal Embeddings, SE: Spatial Embeddings, and TPT: Temporal Pretraining Task. capture the translation variant of tactile signals, and tactile signals are more dense than videos (VideoMAE only uses 8 frames but uses 45 frames here). To further analyze the performances of different models in various classes, we show the confusion matrices of all models on the test set in Figure 6. From the figures, we have the following observations: Firstly, CNN&GRU performs worse in temporal and spatial sensitive classes, i.e., upstairs and lean left, showing the weaknesses of current tactile classification models. 
Specifically, CNN&GRU classifies many upstairs samples as walk fast due to their similar spatial features (as shown in Figure 2(b)), while our STAT can distinguish these actions more accurately due to the modeling of temporal features. Secondly, TST performs even worse than CNN&GRU in many actions, indicating that focusing on modeling temporal features is not enough for tactile signals. For example, TST mistakes a number of lean left samples as stand on toes because they have similar temporal features (as shown in Figure 2(a)). Our STAT rarely makes mistakes on these actions as they are distinct in spatial features. Thirdly, our STAT model performs the best in most classes, as we jointly capture both the spatial and temporal properties of tactile signals. Meanwhile, due to the translation variant property of tactile signals, VideoMAE, which is designed for video classifications, is unsuitable for this task. Analyses Ablation Study To verify the effectiveness of the designed pretraining task and embeddings, we conduct ablation studies. Table 4 shows our ablation strategies and their performances. Note that the MTR pretraining task, position, and tubelet embeddings are used in all experiments, as we focus on analyzing the newly designed models here. We have the following observations from the results: Firstly, all designed modules contribute to the classification task, as STAT (Strategy 5) achieves the best performance with all modules in ACC@1 & Macro-F1, and comparable results in ACC@3. Secondly, by comparing Strategies 1,2,3 in pairs, we find that removing any one of the two designed embeddings will result in a large drop in performance. Besides, temporal embeddings are more important than spatial embeddings, as Strategy 1 performs better. Thirdly, STAT with both embeddings (Strategy 3) outperforms STAT with only the temporal task (Strategy 4) in all metrics. This indicates that only adopting the proposed pretraining task cannot make full use of its ability. 10 20 30 40 50 Number of comparison pairs: Ncomp 0.75 0.80 0.85 0.90 0.95 Value ACC@1 ACC@3 Macro F1 Figure 7: Effect of the number of comparison pairs. 0.5 1.0 1.5 2.0 2.5 Loss weight: 0.75 0.80 0.85 0.90 0.95 Value ACC@1 ACC@3 Macro F1 Figure 8: Effect of the weight of temporal pretraining loss. Hyper-parameter Analyses Due to the space limit, we only show two conducted hyper-parameter experiments. Effect of the Number of Comparison Pairs Ncomp. To verify the effect of Ncomp in the temporal pretraining task, we conduct analyses experiments and summarize the results in Figure 7. The best performance is achieved when Ncomp = 30. Fewer comparison pairs perform worse may be caused by insufficient training, while more pairs will not contribute to better results either. Effect of the Loss Weight \u03b2. We adjust the weight \u03b2 for our temporal pretraining task in Equation (5) in different values, and the results are shown in Figure 8. It indicates that a too-low or too-high value of \u03b2 will hurt the performance of our STAT model, and \u03b2 = 1 performs the best." + }, + { + "url": "http://arxiv.org/abs/2312.01556v1", + "title": "Searching Dense Representations with Inverted Indexes", + "abstract": "Nearly all implementations of top-$k$ retrieval with dense vector\nrepresentations today take advantage of hierarchical navigable small-world\nnetwork (HNSW) indexes. However, the generation of vector representations and\nefficiently searching large collections of vectors are distinct challenges that\ncan be decoupled. 
In this work, we explore the contrarian approach of\nperforming top-$k$ retrieval on dense vector representations using inverted\nindexes. We present experiments on the MS MARCO passage ranking dataset,\nevaluating three dimensions of interest: output quality, speed, and index size.\nResults show that searching dense representations using inverted indexes is\npossible. Our approach exhibits reasonable effectiveness with compact indexes,\nbut is impractically slow. Thus, while workable, our solution does not provide\na compelling tradeoff and is perhaps best characterized today as a \"technical\ncuriosity\".", + "authors": "Jimmy Lin, Tommaso Teofili", + "published": "2023-12-04", + "updated": "2023-12-04", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "main_content": "Introduction In so-called dense retrieval models [Karpukhin et al., 2020], queries and passages are both encoded as dense vector representations (often called embeddings), and top-k retrieval is formulated as a nearest neighbour search problem. That is, given a query vector, the system\u2019s task is to retrieve the top-k most similar passage vectors with respect to a simple comparison operation, typically the inner (dot) product. Today, these dense representation vectors are typically derived from transformer-based models fine-tuned on a dataset of relevant query\u2013passage pairs; a common configuration involves models that generate vectors of 768 dimensions. Despite the observation that dense retrieval models and sparse bag-of-words lexical models such as BM25 capture parametric variations of a bi-encoder architecture [Lin, 2021], implementations of top-k retrieval are quite different for the two classes of models. For sparse bag-of-words vectors, the venerable inverted index has served as the workhorse for top-k retrieval dating back many decades. For dense vectors, current best practices take advantage of hierarchical navigable small-world network (HNSW) indexes [Malkov and Yashunin, 2020] to perform approximate nearest neighbour search; the Faiss [Johnson et al., 2019] library provides an implementation that is widely used today. Lin [2021] pointed out that the core retrieval problem can be decomposed into two independent components, what he refers to as the logical scoring model and the physical retrieval model. That is, the generation of vector representations from content is distinct from efficient solutions to the top-k retrieval problem. Of course, there exists a strong affinity between sparse representations and inverted indexes, on the one hand, and dense representations and HNSW indexes, on the other. However, this tight coupling does not necessarily need to be the case. In this work, we explore the contrarian approach of searching dense representations with inverted indexes. Building on previous work [Teofili and Lin, 2019], we apply two types of transformations\u2014 \u201cfake words\u201d and \u201clexical LSH\u201d\u2014that enable dense representations to be captured in standard inverted indexes and we empirically evaluate top-k retrieval using these two techniques. Such a solution is arXiv:2312.01556v1 [cs.IR] 4 Dec 2023 \fpotentially interesting because it enables dense and sparse retrieval using a single infrastructural component, obviating the need to maintain and coordinate different types of indexes. We present experiments on the MS MARCO passage ranking dataset, evaluating three dimensions of interest: output quality, speed, and index size. 
Results show that it is possible to perform top-k retrieval on dense representations using only inverted indexes: Compared to HNSW indexes, we can achieve reasonable effectiveness with much smaller indexes, but unfortunately, search is impractically slow. Thus, while our proposed techniques are \u201cworkable\u201d, they do not appear to provide a compelling tradeoff in the overall design space. We would characterize them as a \u201ctechnical curiosity\u201d, but still worthwhile for the community to be aware of. Perhaps our efforts will become a part of further breakthroughs that will yield a practical solution. 2 Methods In this work, we examined two techniques for top-k retrieval on dense vectors using inverted indexes. Both techniques were originally implemented in the Anserini toolkit [Yang et al., 2018] using \u201cstock parts\u201d from the open-source Lucene search library, as part of the work described in Teofili and Lin [2019]. However, this previous work pre-dated the advent of dense retrieval models and focused on similarity comparisons with word embeddings, which lacked a concrete task. Here, we applied the same techniques, but to an actual real-world retrieval application. \u201cFake words\u201d. We implemented the approach described in Amato et al. [2016], which encodes the features of a vector as a number of \u201cfake\u201d terms proportional to the feature value according to the following scheme: Given a vector w = (w1, . . . , wm), each feature wi is associated with a unique alphanumeric term \u03c4i such that the document corresponding to the vector w is represented by \u201cfake words\u201d generated by \u222am i=1 \u222a\u230aQ\u00b7wi\u230b j=1 \u03c4i, where Q > 1 is a quantization factor. Thus, the fake words encoding maintains direct proportionality between the float value of a feature and the term frequency of the corresponding fake index term. Feature-level matching for retrieval is achieved by matching on these fake words with scores computed by Lucene\u2019s ClassicSimilarity, which is a tf\u2014idf variant. Finally, for this approach to be effective, vector inner products have to be equivalent to cosine similarity, which can be achieved by normalizing the vectors to unit length. \u201cLexical LSH\u201d. We implemented an approach that lexically quantizes vector components for easy indexing and search in Lucene using LSH [Gionis et al., 1999]. While LSH is, of course, not new, to our knowledge, Teofili [2018] was the first to devise an implementation that directly integrates with inverted indexes inside Lucene. Given a vector w = (w1, . . . , wm), each feature wi is rounded to the d-th decimal place and tagged with its feature index i as a term prefix. For example, consider w = {0.12, 0.43, 0.74}. If d = 1, w is converted into the tokens 1_0.1, 2_0.4, and 3_0.7. In our implementation, tokens are aggregated into n-grams and finally passed to an LSH function, which is implemented in Lucene as MinHashFilter, to hash the n-grams into a configurable number of buckets b. Thus, the vector w is represented as a set of LSH-generated text signatures for tagged and quantized feature n-grams. 3 Experiments While our techniques are agnostic with respect to the actual dense retrieval model, for fair comparisons to HNSW indexes in Lucene, we needed vector representations that are normalized to unit length because Lucene\u2019s implementation is restricted to top-k retrieval using cosine similarity (as opposed to general inner products). 
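The two encodings are easy to illustrate. The Python sketch below shows only the core transformations from Section 2: turning a (unit-normalized, non-negative) feature value into repeated "fake word" terms, and rounding and tagging features before hashing their n-grams into buckets for the lexical LSH variant. It is a schematic illustration, not the Anserini code: Python's built-in hash stands in for Lucene's MinHashFilter, negative feature values are simply ignored, and the parameter defaults (Q, d, n-gram size, buckets) mirror settings discussed in the experiments.

```python
import math

def fake_words(w, Q=40):
    """Encode vector w as 'fake' terms: feature i is repeated floor(Q * w_i) times,
    so term frequency is proportional to the feature value. Negative features are skipped here."""
    terms = []
    for i, wi in enumerate(w):
        terms.extend([f"f{i}"] * max(0, math.floor(Q * wi)))
    return terms

def lexical_lsh_tokens(w, d=1, ngram=2, buckets=400):
    """Round each feature to d decimals, tag it with its (1-based) index, group into n-grams,
    and hash into buckets. hash() is a simplified stand-in for Lucene's MinHashFilter."""
    tagged = [f"{i + 1}_{round(wi, d)}" for i, wi in enumerate(w)]   # [0.12, 0.43, 0.74] -> 1_0.1, 2_0.4, 3_0.7
    grams = [" ".join(tagged[i:i + ngram]) for i in range(len(tagged) - ngram + 1)]
    return [f"b{hash(g) % buckets}" for g in grams]

print(fake_words([0.5, 0.25, 0.125], Q=8))     # ['f0'] * 4 + ['f1'] * 2 + ['f2'] * 1
print(lexical_lsh_tokens([0.12, 0.43, 0.74]))  # bucket signatures for the tagged feature bigrams
```

At retrieval time, the query vector is encoded the same way and matched against the indexed terms, with scoring delegated to the underlying inverted-index engine.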
Many dense retrieval models generate representations that do not perform this normalization. For their HNSW experiments in Lucene, Ma et al. [2023] had to fine-tune a new embedding model from scratch, which they called cosDPR-distil. To facilitate comparisons to this work, we used the same model. We evaluated top-k retrieval using standard evaluation methodology on the MS MARCO passage ranking test collection [Craswell et al., 2021], comprising three separate sets of queries: the 6980 2 \fdev DL19 DL20 Index Size RR@10 R@1k QPS nDCG@10 R@1k nDCG@10 R@1k (GB) BM25 0.1840 0.8526 426.49 0.5058 0.7501 0.4796 0.7863 2.5 FW (Q = 10) 0.0045 0.0440 940.36 0.0063 0.0167 0.0241 0.0408 0.2 FW (Q = 20) 0.2937 0.9142 18.65 0.5795 0.7238 0.5912 0.7327 1.3 FW (Q = 30) 0.3498 0.9580 5.31 0.6488 0.7788 0.6483 0.7980 2.8 FW (Q = 40) 0.3605 0.9668 2.80 0.6857 0.7902 0.6666 0.8194 4.2 FW (Q = 50) 0.3627 0.9669 1.91 0.6930 0.7957 0.6724 0.8193 5.7 FW (Q = 60) 0.3681 0.9707 1.56 0.6849 0.8005 0.6832 0.8261 7.1 FW (Q = 70) 0.3657 0.9695 1.42 0.6933 0.7979 0.6823 0.8223 8.4 FW (Q = 80) 0.3642 0.9708 1.25 0.6833 0.8015 0.6706 0.8267 9.7 FW (Q = 90) 0.3668 0.9733 1.19 0.7013 0.8006 0.6750 0.8271 11 LexLSH (b = 100) 0.2284 0.8365 6.26 0.4233 0.5614 0.4880 0.6391 0.9 LexLSH (b = 200) 0.2959 0.9267 3.10 0.5810 0.6610 0.5863 0.7485 1.4 LexLSH (b = 300) 0.3180 0.9457 2.06 0.6167 0.7272 0.6265 0.7866 2.1 LexLSH (b = 400) 0.3309 0.9538 1.41 0.6398 0.7450 0.6505 0.7924 2.7 LexLSH (b = 500) 0.3397 0.9569 1.09 0.6443 0.7556 0.6548 0.7992 3.3 LexLSH (b = 600) 0.3457 0.9596 0.83 0.6716 0.7610 0.6569 0.8131 3.9 LexLSH (b = 700) 0.3474 0.9609 0.74 0.6843 0.7735 0.6558 0.8157 4.5 LexLSH (b = 800) 0.3496 0.9611 0.67 0.6778 0.7784 0.6669 0.8181 4.9 LexLSH (b = 900) 0.3496 0.9611 0.66 0.6778 0.7784 0.6669 0.8181 4.9 HNSW (default) 0.3881 0.9732 47.78 0.7159 0.8101 0.6967 0.8391 26 HNSW (optimized) 0.3885 0.9747 387.29 0.7250 0.8222 0.7025 0.8520 26 Table 1: Performance of our proposed fake words and lexical LSH techniques on the MS MARCO passage corpus. queries from the development (dev) set, as well as queries from the TREC 2019 and 2020 Deep Learning Tracks [Craswell et al., 2019, 2020]. Our experiments were performed with Anserini at commit e99c73d (11/25/2023) on a Mac Studio with an M1 Ultra processor containing 20 cores (16 performance and 4 efficiency) and 128 GB memory, running macOS Sonoma 14.1.1 and OpenJDK 11.0.13. Unless otherwise specified, all runs used 16 threads. Results are presented in Table 1, covering the aspects of performance that we are interested in: output quality as measured in standard IR effectiveness metrics, speed in terms of query throughput (measured in queries per second or QPS), and index size (measured with the du -h command). The rows capture results either with the fake words technique (FW), parameterized by Q, or the lexical LSH technique, parameterized by the number of buckets b (with d = 1). Query throughput is measured only on the dev set, which has a sufficient number of queries (6980) to obtain reliable measurements; we observe only small variations from run to run. We report the average of three trials. In all cases, experimental runs used pre-encoded queries\u2014that is, cached representations from neural inference applied to the queries. To better understand the overhead associated with query inference, we refer readers to evaluations in Chen et al. [2023]. 
For reference, evaluation of BM25 is presented in the top row, and evaluation of HNSW indexes for cosDPR-distil is presented in the final two rows. The \u201cdefault\u201d HNSW condition characterizes performance using Anserini \u201cout of the box\u201d with default parameters (M set to 16, efC set to 100, 16 indexing threads). The \u201coptimized\u201d index was constructed with efC set to 1000 (all other parameters being the same), but optimized down to a single index segment (which is a very time-consuming operation); this is the same exact index instance used in Chen et al. [2023]. This optimization greatly increases search performance, but unless the document collection is static, this step is unrealistic for real-world use. Note that since HNSW indexing is non-deterministic, different index instances (i.e., from running the indexer multiple times) may exhibit small effectiveness variations. We report scores from our specific index instances, which may differ slightly from the official Anserini reproducibility documentation. From Table 1, looking at the fake words technique, it appears that the sweet spot is around Q = 40. A bit higher effectiveness comes at a roughly 30% decrease in QPS, and the index size of 2.8 GB remains quite modest. For the lexical LSH technique, the sweet spot appears to be around b = 400, 3 \fwith a 2.7 GB index. At a high level, it appears that the fake words technique provides better tradeoffs than the lexical LSH technique. Overall, for both techniques we would characterize the effectiveness as \u201cacceptable\u201d, with compact inverted indexes that are much smaller than the HNSW indexes. However, search is impractically slow compared to retrieval based on HNSW indexes. While it would be possible to perform more exhaustive parameter tuning, better parameter selection alone is unlikely to close the performance gap between either technique and HNSW indexes. The much smaller index sizes offered by our techniques definitely present an advantage over HNSW indexes, but we do not see a compelling use case for either of these techniques. Thus, we would characterize the techniques presented here as \u201ctechnical curiosities\u201d, but impractical overall. Although we can imagine a number of further explorations, such as hybrid dense\u2013sparse models with inverted indexes, further pursuit of these avenues does not seem particularly promising at present. It is clear that more breakthroughs are needed, and perhaps this work will represent a step along the way. 4" + }, + { + "url": "http://arxiv.org/abs/2308.14963v1", + "title": "Vector Search with OpenAI Embeddings: Lucene Is All You Need", + "abstract": "We provide a reproducible, end-to-end demonstration of vector search with\nOpenAI embeddings using Lucene on the popular MS MARCO passage ranking test\ncollection. The main goal of our work is to challenge the prevailing narrative\nthat a dedicated vector store is necessary to take advantage of recent advances\nin deep neural networks as applied to search. Quite the contrary, we show that\nhierarchical navigable small-world network (HNSW) indexes in Lucene are\nadequate to provide vector search capabilities in a standard bi-encoder\narchitecture. 
This suggests that, from a simple cost-benefit analysis, there\ndoes not appear to be a compelling reason to introduce a dedicated vector store\ninto a modern \"AI stack\" for search, since such applications have already\nreceived substantial investments in existing, widely deployed infrastructure.", + "authors": "Jimmy Lin, Ronak Pradeep, Tommaso Teofili, Jasper Xian", + "published": "2023-08-29", + "updated": "2023-08-29", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "main_content": "Introduction Recent advances in the application of deep neural networks to search have focused on representation learning in the context of the so-called bi-encoder architecture, where content (queries, passages, and even images and other multimedia content) is represented by dense vectors (so-called \u201cembeddings\u201d). Dense retrieval models using this architecture form the foundation of retrieval augmentation in large language models (LLMs), a popular and productive approach to improving LLM capabilities in the broader context of generative AI (Mialon et al., 2023; Asai et al., 2023). The dominant narrative today is that since dense retrieval requires the management of a potentially large number of dense vectors, enterprises require a dedicated \u201cvector store\u201d or \u201cvector database\u201d as part of their \u201cAI stack\u201d. There is a cottage industry of startups that are pitching vector stores as novel, must-have components in a modern enterprise architecture; examples include Pinecone, Weaviate, Chroma, Milvus, Qdrant, just to name a few. Some have even argued that these vector databases will replace the venerable relational database.1 The goal of this paper is to provide a counterpoint to this narrative. Our arguments center around a simple cost\u2013benefit analysis: since search is a brownfield application, many organizations have already made substantial investments in these capabilities. Today, production infrastructure is dominated by the broad ecosystem centered around the open-source Lucene search library, most notably driven by platforms such as Elasticsearch, OpenSearch, and Solr. While the Lucene ecosystem has admittedly been slow to adapt to recent trends in representation learning, there are strong signals that serious investments are being made in this space. Thus, we see no compelling reason why separate, dedicated vector stores are necessary in a modern enterprise. In short, the benefits do not appear to justify the cost of additional architectural complexity. It is important to separate the need for capabilities from the need for distinct software components. While hierarchical navigable small-world network (HNSW) indexes (Malkov and Yashunin, 2020) 1https://twitter.com/andy_pavlo/status/1659740200266870787 arXiv:2308.14963v1 [cs.IR] 29 Aug 2023 \frepresent the state of the art today in approximate nearest neighbor search\u2014the most important operation for vector search using embeddings\u2014it is not clear that providing operations around HNSW indexes requires a separate and distinct vector store. Indeed, the most recent major release of Lucene (version 9, from December 2021) includes HNSW indexing and vector search, and these capabilities have steadily improved over time. The open-source nature of the Lucene ecosystem means that advances in the core library itself will be rapidly adopted and integrated into other software platforms within the broader ecosystem. 
The growing popularity of so-called embedding APIs (Kamalloo et al., 2023) further strengthens our arguments. These APIs encapsulate perhaps the most complex and resource-intensive aspect of vector search\u2014the generation of dense vectors from pieces of content. Embedding APIs hide model training, deployment, and inference behind the well-known benefits of service-based computing, much to the delight of practitioners. To support our arguments, we demonstrate vector search with OpenAI embeddings (Neelakantan et al., 2022) using the popular MS MARCO passage ranking test collection (Bajaj et al., 2018). Specifically, we have encoded the entire corpus and indexed the embedding vectors using Lucene. Evaluation on the MS MARCO development set queries and queries from the TREC Deep Learning Tracks (Craswell et al., 2019, 2020) show that OpenAI embeddings are able to achieve a respectable level of effectiveness. And as Devins et al. (2022) have shown, anything doable in Lucene is relatively straightforward to replicate in Elasticsearch (and any other platform built on Lucene). Thus, we expect the ideas behind our demonstration to become pervasive in the near future. We make available everything needed to reproduce the experiments described in this paper, starting with the actual OpenAI embeddings, which we make freely downloadable.2 At a high-level, our demonstration shows how easy it is to take advantage of state-of-the-art AI techniques today without any AI-specific implementations per se: embeddings can be computed with simple API calls, and indexing and searching dense vectors is conceptually identical to indexing and searching text with bag-of-words models that have been available for decades. 2 From Architecture to Implementation The central idea behind the bi-encoder architecture (see Figure 1) is to encode queries and passages into dense vectors\u2014commonly referred to as \u201cembeddings\u201d\u2014such that relevant query\u2013passage pairs receive high scores, computed as the dot product of their embeddings. In this manner, search can be reformulated as a nearest neighbor search problem in vector space: given the query embedding, the system\u2019s task is to rapidly retrieve the top-k passage embeddings with the largest dot products (Lin, 2021). Typically, \u201cencoders\u201d for generating the vector representations are implemented using transformers, which are usually fine-tuned in a supervised manner using a large dataset of relevant query\u2013passage pairs (Karpukhin et al., 2020; Xiong et al., 2021). This formulation of search, in terms of comparisons between dense vectors, differs from \u201ctraditional\u201d bag-of-words sparse representations that rely on inverted indexes for low-latency query evaluation. Instead, nearest neighbor search in vector space requires entirely different techniques: indexes based on hierarchical navigable small-world networks (HNSW) (Malkov and Yashunin, 2020) are commonly acknowledged as representing the state of the art. The Faiss library (Johnson et al., 2019) provides a popular implementation of HNSW indexes that is broadly adopted today and serves as a standard baseline. Despite conceptual similarities (Lin, 2021), it is clear that top-k retrieval on sparse vectors and dense vectors require quite different and distinct \u201csoftware stacks\u201d. 
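To make the bi-encoder formulation concrete, the following minimal sketch (plain NumPy, with random toy vectors standing in for real query and passage embeddings) shows how relevance ranking reduces to a top-k inner-product search; a production system would replace the brute-force scan with an HNSW index that approximates the same ranking.

import numpy as np

# Toy stand-ins for embeddings produced by the query and passage encoders.
rng = np.random.default_rng(0)
passage_embeddings = rng.normal(size=(1000, 8))   # |collection| x dim
query_embedding = rng.normal(size=(8,))           # a single query vector

# Relevance score = dot product between query and passage embeddings.
scores = passage_embeddings @ query_embedding

# Brute-force top-k retrieval; HNSW indexes approximate exactly this ranking.
k = 5
top_k = np.argsort(-scores)[:k]
for rank, idx in enumerate(top_k, start=1):
    print(rank, idx, float(scores[idx]))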
Since hybrid approaches that combine both dense and sparse representations have been shown to be more effective than either alone (Ma et al., 2022b; Lin and Lin, 2023), many modern systems combine separate retrieval components to achieve hybrid retrieval. For example, the Pyserini IR toolkit (Lin et al., 2021a) integrates Lucene and Faiss for sparse and dense retrieval, respectively. Recognizing the need for managing both sparse and dense retrieval models, the dominant narrative today is that the modern enterprise \u201cAI stack\u201d requires a dedicated vector store or vector database, alongside existing fixtures such as relational databases, NoSQL stores, event stores, etc. A vector store would handle, for example, standard CRUD (create, read, update, delete) operations as well as nearest neighbor search. Many startups today are built on this premise; examples include Pinecone, Weaviate, Chroma, Milvus, Qdrant, just to name a few. This is the narrative that our work challenges. 2https://github.com/castorini/anserini/blob/master/docs/experiments-msmarco-passage-openai-ada2.md 2 \fRanked List \u201cDocuments\u201d Query Doc Encoder Query Encoder Top-k Retrieval Figure 1: A standard bi-encoder architecture, where encoders generate dense vector representations (embeddings) from queries and documents (passages). Retrieval is framed as k-nearest neighbor search in vector space. Modern enterprise architectures are already exceedingly complex, and the addition of another software component (i.e., a distinct vector store) requires carefully weighing costs as well as benefits. The cost is obvious: increased complexity, not only from the introduction of a new component, but also from interactions with existing components. What about the benefits? While vector stores no doubt introduce new capabilities, the critical question is whether these capabilities can be provided via alternative means. Search is a brownfield application. Wikipedia defines this as \u201ca term commonly used in the information technology industry to describe problem spaces needing the development and deployment of new software systems in the immediate presence of existing (legacy) software applications/systems.\u201d Additionally, \u201cthis implies that any new software architecture must take into account and coexist with live software already in situ.\u201d Specifically, many organizations have already made substantial investments in search within the Lucene ecosystem. While most organizations do not directly use the open-source Lucene search library in production, the search application landscape is dominated by platforms that are built on top of Lucene such as Elasticsearch, OpenSearch, and Solr. For example, Elastic, the publicly traded company behind Elasticsearch, reports approximately 20,000 subscriptions to its cloud service as of Q4 FY2023.3 Similarly, in the category of search engines, Lucene dominates DB-Engines Ranking, a site that tracks the popularity of various database management systems.4 There\u2019s a paucity of concrete usage data, but it would not be an exaggeration to say that Lucene has an immense install base. The most recent major release of Lucene (version 9), dating back to December 2021, includes HNSW indexing and search capabilities, which have steadily improved over the past couple of years. This means that differences in capabilities between Lucene and dedicated vector stores are primarily in terms of performance, not the availability of must-have features. 
Thus, from a simple cost\u2013benefit calculus, it is not clear that vector search requires introducing a dedicated vector store into an already complex enterprise \u201cAI stack\u201d. Our thesis: Lucene is all you need. We empirically demonstrate our claims on the MS MARCO passage ranking test collection, a standard benchmark dataset used by researchers today. We have encoded the entire corpus using OpenAI\u2019s ada2 embedding endpoint, and then indexed the dense vectors with Lucene. Experimental results show that this combination achieves effectiveness comparable to the state of the art on the development queries as well as queries from the TREC 2019 and 2020 Deep Learning Tracks. 3https://ir.elastic.co/news-events/press-releases/press-releases-details/2023/ Elastic-Reports-Fourth-Quarter-and-Fiscal-2023-Financial-Results/default.aspx 4https://db-engines.com/en/ranking/search+engine 3 \fOur experiments are conducted with Anserini (Yang et al., 2018), a Lucene-based IR toolkit that aims to support reproducible information retrieval research. By building on Lucene, Anserini aims to bridge the gap between academic information retrieval research and the practice of building real-world search applications. Devins et al. (2022) showed that capabilities implemented by researchers in Anserini using Lucene can be straightforwardly translated into Elasticsearch (or any other platform in the Lucene ecosystem), thus simplifying the path from prototypes to production deployments. Our demonstration further shows the ease with which state-of-the-art vector search can be implemented by simply \u201cplugging together\u201d readily available components. In the context of the bi-encoder architecture, Lin (2021) identified the logical scoring model and the physical retrieval model as distinct conceptual components. In our experiments, the logical scoring model maps to the OpenAI embedding API\u2014whose operations are no different from any other API endpoint. What Lin calls the physical retrieval model focuses on the top-k retrieval capability, which is handled by Lucene. In Anserini, vector indexing and search is exposed in a manner that is analogous to indexing and retrieval using bag-of-words models such as BM25. Thus, the implementation of the state of the art in vector search using generative AI does not require any AI-specific implementations, which increases the accessibility of these technologies to a wider audience. 3 Experiments Experiments in this paper are relatively straightforward. We focused on the MS MARCO passage ranking test collection (Bajaj et al., 2018), which is built on a corpus comprising approximately 8.8 million passages extracted from the web. Note that since the embedding vectors are generated by OpenAI\u2019s API endpoint, no model training was performed. For evaluation, we used the standard development queries as well as queries from the TREC 2019 and TREC 2020 Deep Learning Tracks. In our experimental setup, we utilized the OpenAI ada2 model (Neelakantan et al., 2022) for generating both query and passage embeddings. This model is characterized by an input limit of 8191 tokens and an output embedding size of 1536 dimensions. However, to maintain consistency with the existing literature (Pradeep et al., 2021; Ma et al., 2022a), we truncated all passages in the corpus to 512 tokens. 
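A rough sketch of the truncation-plus-embedding step described above is given below. It assumes the pre-1.0 openai Python SDK style in use around the time of this work, the "text-embedding-ada-002" model name for ada2, and cl100k_base tokenization via tiktoken; batch sizes, error handling, and API key configuration are omitted.

import tiktoken
import openai  # assumes the pre-1.0 SDK; openai.api_key configured elsewhere

enc = tiktoken.get_encoding("cl100k_base")  # tokenizer family used by ada2-era models

def truncate(text, max_tokens=512):
    """Clip a passage to a fixed token budget before embedding."""
    tokens = enc.encode(text)
    return enc.decode(tokens[:max_tokens])

def embed(passages, model="text-embedding-ada-002"):
    """Embed a small batch of (already truncated) passages; returns 1536-d vectors."""
    response = openai.Embedding.create(model=model,
                                       input=[truncate(p) for p in passages])
    return [item["embedding"] for item in response["data"]]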
It is unknown whether OpenAI leveraged the MS MARCO passage corpus during model development, but in general, accounting for data leakage is extremely challenging for large models, especially those from OpenAI that lack transparency. Using tiktoken, OpenAI\u2019s official tokenizer, we computed the average token count per passage in our corpus to be 75.2, resulting in a total of approximately 660 million tokens. In order to generate the embeddings efficiently, we queried the API in parallel while respecting the rate limit of 3500 calls per minute. We had to incorporate logic for error handling in our code, given the high-volume nature of our API calls. Ultimately, we were able to encode both the corpus and the queries, the latter of which are negligible in comparison, in a span of two days. As previously mentioned, all our retrieval experiments were conducted with the Anserini IR toolkit (Yang et al., 2018). The primary advantage of Anserini is that it provides direct access to underlying Lucene features in a \u201cresearcher-friendly\u201d manner that better comports with modern evaluation workflows. Our experiments were based on Lucene 9.5.0, but indexing was a bit tricky because the HNSW implementation in Lucene restricts vectors to 1024 dimensions, which was not sufficient for OpenAI\u2019s 1536-dimensional embeddings.5 Although the resolution of this issue, which is to make vector dimensions configurable on a per codec basis, has been merged to the Lucene source trunk,6 this feature has not been folded into a Lucene release (yet) as of early August 2023. Thus, there is no public release of Lucene that can directly index OpenAI\u2019s ada2 embedding vectors. Fortunately, we were able to hack around this limitation in an incredibly janky way.7 Experimental results are shown in Table 1, where we report effectiveness in terms of standard metrics: reciprocal rank at 10 (RR@10), average precision (AP), nDCG at a rank cutoff of 10 (nDCG@10), and recall at a rank cutoff of 1000 (R@1k). The effectiveness of the ada2 embeddings is shown in the 5https://github.com/apache/lucene/issues/11507 6https://github.com/apache/lucene/pull/12436 7The sketch of the solution is as follows: We copy relevant source files from the Lucene source trunk directly into our source tree and patch the vector size settings directly. When we build our fatjar, the class files of our \u201clocal versions\u201d take precedence, and hence override the vector size limitations. 4 \fdev DL19 DL20 RR@10 R@1k AP nDCG@10 R@1k AP nDCG@10 R@1k Unsupervised Sparse Representations BM25 (Ma et al., 2022a)\u2217 0.184 0.853 0.301 0.506 0.750 0.286 0.480 0.786 BM25+RM3 (Ma et al., 2022a)\u2217 0.157 0.861 0.342 0.522 0.814 0.301 0.490 0.824 Learned Sparse Representations uniCOIL (Ma et al., 2022a)\u2217 0.352 0.958 0.461 0.702 0.829 0.443 0.675 0.843 SPLADE++ ED (Formal et al., 2022)\u2217 0.383 0.983 0.505 0.731 0.873 0.500 0.720 0.900 Learned Dense Representations TAS-B (Hofst\u00e4tter et al., 2021) 0.340 0.975 0.712 0.845 0.693 0.865 TCT-ColBERTv2 (Lin et al., 2021b)\u2217 0.358 0.970 0.447 0.720 0.826 0.475 0.688 0.843 ColBERT-v2 (Santhanam et al., 2022) 0.397 0.984 Aggretriever (Lin et al., 2023)\u2217 0.362 0.974 0.435 0.684 0.808 0.471 0.697 0.856 OpenAI ada2 0.343 0.984 0.479 0.704 0.863 0.477 0.676 0.871 Table 1: Effectiveness of OpenAI ada2 embeddings on the MS MARCO development set queries (dev) and queries from the TREC 2019/2020 Deep Learning Tracks (DL19/DL20), compared to a selection of other models. 
\u2217indicates results from Pyserini\u2019s two-click reproductions (Lin, 2022) available at https://castorini.github.io/pyserini/2cr/msmarco-v1-passage.html, which may differ slightly from the original papers. All other results are copied from their original papers. last row of the table. Note that due to the non-deterministic nature of HNSW indexing, effectiveness figures may vary slightly from run to run. For comparison, we present results from a few select points of reference, classified according to the taxonomy proposed by Lin (2021); OpenAI\u2019s embedding models belong in the class of learned dense representations. Notable omissions in the results table include the following: the original OpenAI paper that describes the embedding model (Neelakantan et al., 2022) does not report comparable results; neither does Izacard et al. (2021) for Contriever, another popular learned dense representation model. Recently, Kamalloo et al. (2023) also evaluated OpenAI\u2019s ada2 embeddings, but they did not examine any of the test collections we do here. Looking at the results table, our main point is that we can achieve effectiveness comparable to the state of the art using a production-grade, completely off-the-shelf embedding API coupled with Lucene for indexing and retrieval. To complete our experimental results, we provide performance figures on a server with two Intel Xeon Platinum 8160 processors (33M Cache, 2.10 GHz, 24 cores each) with 1 TB RAM, running Ubuntu 18.04 with ZFS. This particular processor was launched in Q3 of 2017 and is no longer commercially available; we can characterize this server as \u201chigh end\u201d, but dated. Indexing took around three hours with 16 threads, with the parameters M set to 16 and efC set to 100, without final segment optimization. Using 32-bit floats, the raw 1536-dimensional vectors should consume 54 GB on disk, but for convenience we used an inefficient JSON text-based representation. Therefore, our collection of vectors takes up 109 GB as compressed text files (using gzip). For vector search, using 16 threads, we were able to achieve 9.8 queries per second (QPS), fetching 1000 hits per query with the efSearch parameter set to 1000. These results were obtained on the MS MARCO development queries, averaged over four separate trials after a warmup run. 4 Discussion Our demonstration shows that it is possible today to build a vector search prototype using OpenAI embeddings directly with Lucene. Nevertheless, there are a number of issues worth discussing, which we cover below. Jank. We concede that getting our demonstration to work required a bit of janky implementation tricks. Even though all the required features have been merged to Lucene\u2019s source trunk, no official release has been cut that incorporates all the patches (at least at the time we performed our experiments in early August, 2023). Quite simply, the complete feature set necessary for production deployment is not, as they say, ready for prime time. However, to use another clich\u00e9, this is a small matter of programming (SMOP). We see no major roadblocks in the near future: the next official release of 5 \fLucene will incorporate the necessary features, and after that, all downstream consumers will begin to incorporate the capabilities that we demonstrate here. Nevertheless, Lucene has been a relative laggard in dense retrieval. Despite this, we believe that recent developments point to substantial and sustained investments in the Lucene ecosystem moving forward. 
For example, in its Q4 FY 2023 report, Elastic announced the Elasticsearch Relevance Engine, \u201cpowered by built-in vector search and transformer models, designed specifically to bring the power of AI innovation to proprietary enterprise data.\u201d A recent blog post8 from Amazon Web Services explained vector database capabilities in OpenSearch, providing many details and reference architectures. These are just two examples of commitments that help bolster the case for Lucene that we have articulated here. Overall, we are optimistic about the future of the ecosystem. Performance. Lucene still lags alternatives in terms of indexing speed, query latency and throughput, and related metrics. For example, Ma et al. (2023) recently benchmarked Lucene 9.5.0 against Faiss (Johnson et al., 2019). Experiments suggest that Lucene achieves only around half the query throughput of Faiss under comparable settings, but appears to scale better when using multiple threads. Although these results only capture a snapshot in time, it would be fair to characterize Lucene as unequivocally slower. However, Faiss is relatively mature and hence its headroom for performance improvements is rather limited. In contrast, we see many more opportunities for gains in Lucene. Coupled with signs of strong commitment (discussed above), we believe that the performance gap between Lucene and dedicated vector stores will decrease over time. Alternatives. We acknowledge a number of competing alternatives that deserve consideration. Note that the core argument we forward is about cost\u2013benefit tradeoffs: In our view, it is not clear that the benefits offered by a dedicated vector store outweigh the increased architectural complexity of introducing a new software component within an enterprise. From this perspective, we can identify two potentially appealing alternatives: \u2022 Fully managed services. One simple way to reduce architectural complexity is to make it someone else\u2019s problem. Vespa9 is perhaps the best example of this solution, providing both dense retrieval and sparse retrieval capabilities in a fully managed environment, eliminating the need for users to explicitly worry about implementation details involving inverted indexes, HNSW indexes, etc. Vepsa provides a query language that supports a combination of vector search, full-text search, as well as search over structured data. Our main question here concerns traction and adoption: as a brownfield application, we\u2019re not convinced that enterprises will make the (single, large) leap from an existing solution to a fully managed service. \u2022 Vector search capabilities in relational databases. In the same way that vector search grows naturally out of an already deployed and mature text search platform (e.g., Elasticsearch), we can see similar arguments being made from the perspective of relational databases. Despite numerous attempts (spanning decades) at toppling its lofty perch (Stonebraker and Hellerstein, 2005; Pavlo et al., 2009), relational databases remain a permanent fixture in enterprise \u201cdata stacks\u201d. This means that by building vector search capabilities into relational databases, enterprises gain entr\u00e9e into the world of dense retrieval (essentially) for free. A great example of this approach is pgvector,10 which provides open-source vector similarity search for Postgres. 
We find the case compelling: if your enterprise is already running Postgres, pgvector adds vector search capabilities with minimal additional complexity. It\u2019s basically a free lunch. 5" + }, + { + "url": "http://arxiv.org/abs/2304.01019v1", + "title": "Simple Yet Effective Neural Ranking and Reranking Baselines for Cross-Lingual Information Retrieval", + "abstract": "The advent of multilingual language models has generated a resurgence of\ninterest in cross-lingual information retrieval (CLIR), which is the task of\nsearching documents in one language with queries from another. However, the\nrapid pace of progress has led to a confusing panoply of methods and\nreproducibility has lagged behind the state of the art. In this context, our\nwork makes two important contributions: First, we provide a conceptual\nframework for organizing different approaches to cross-lingual retrieval using\nmulti-stage architectures for mono-lingual retrieval as a scaffold. Second, we\nimplement simple yet effective reproducible baselines in the Anserini and\nPyserini IR toolkits for test collections from the TREC 2022 NeuCLIR Track, in\nPersian, Russian, and Chinese. Our efforts are built on a collaboration of the\ntwo teams that submitted the most effective runs to the TREC evaluation. These\ncontributions provide a firm foundation for future advances.", + "authors": "Jimmy Lin, David Alfonso-Hermelo, Vitor Jeronymo, Ehsan Kamalloo, Carlos Lassance, Rodrigo Nogueira, Odunayo Ogundepo, Mehdi Rezagholizadeh, Nandan Thakur, Jheng-Hong Yang, Xinyu Zhang", + "published": "2023-04-03", + "updated": "2023-04-03", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.CL" + ], + "main_content": "INTRODUCTION Cross-lingual information retrieval (CLIR) is the task of searching documents in one language with queries from a different language\u2014 for example, retrieving Russian documents using English queries. Typically, a CLIR system exists as part of an overall pipeline involving machine translation, related human language technologies, and sometimes human experts, that together help users satisfy information needs with content in languages they may not be able to read. Research on cross-lingual information retrieval dates back many decades [11, 16, 31, 43], but there has been a recent revival of interest in this challenge [13, 49], primarily due to the advent of multilingual pretrained transformer models such as mBERT [10] and XLM-R [6]. A nexus of recent research activity for cross-lingual information retrieval is the TREC NeuCLIR Track, which ran for the first time at TREC 2022 but has plans for continuing in 2023 and perhaps beyond. The track provides a forum for a community-wide evaluation of CLIR systems in the context of modern collections and systems, dominated today by neural methods. NeuCLIR topics (i.e., information needs) are expressed in English, and systems are tasked with retrieving relevant documents from corpora in Chinese, Persian, and Russian. Perhaps as a side effect of the breakneck pace at which the field is advancing, we feel that there remains a lack of clarity in the IR community about the relationship between different retrieval methods (e.g., dense vs. sparse representations, \u201clearned\u201d vs. \u201cheuristic\u201d vs. \u201cunsupervised\u201d, etc.) and how they should be applied in different retrieval settings. 
Furthermore, the increasing sophistication of today\u2019s retrieval models and the growing complexity of modern software stacks create serious challenges for reproducibility efforts. This not only makes it difficult for researchers and practitioners to compare alternative approaches in a fair manner, but also creates barriers to entry for newcomers. These issues already exist for mono-lingual retrieval, where documents and queries are in the same language. With the added complexity of cross-lingual demands, the design choices multiply (choice of models, training regimes, application of translation systems, etc.), further muddling conceptual clarity and experimental reproducibility. Contributions. Our work tackles these challenges, specifically focused on helping both researchers and practitioners sort through the panoply of CLIR methods in the context of modern neural retrieval techniques dominated by deep learning. Our contributions can be divided into a \u201cconceptual\u201d and a \u201cpractical\u201d component: Conceptually, we provide a framework for organizing different approaches to cross-lingual retrieval based on the general design of multi-stage ranking for mono-lingual retrieval. These architectures comprise first-stage retrievers that directly perform top-\ud835\udc58retrieval over an arbitrarily large collection of documents, followed by one or more reranking stages that refine the rank order of candidates generated by the first stage. Recently, Lin [23] proposed that retrieval techniques can be characterized by the representations that they manipulate\u2014whether dense semantic vectors or sparse lexical vectors\u2014and how the weights are assigned\u2014whether heuristically, as in the case of BM25, or by a neural network that has been trained with labeled data. Translated into the cross-lingual case, this leads naturally to three main approaches to first-stage retrieval: document translation, query translation, and use of language-independent representations. While these approaches date back many decades, there are \u201cmodern twists\u201d based on learned representations that take advantage of powerful pretrained transformer models. arXiv:2304.01019v1 [cs.IR] 3 Apr 2023 \fResults Documents Query Doc Encoder Query Encoder Top-k Retrieval Results Documents f Query f Doc Encoder Query Encoder Top-k Retrieval Query e translation Results Documents e Query e Doc Encoder Query Encoder Top-k Retrieval Documents f translation Results Documents f Query e Doc Encoder Query Encoder Top-k Retrieval (a) (b) (c) (d) Figure 1: Different retrieval architectures: (a) a mono-lingual bi-encoder architecture that captures both dense and sparse retrieval methods; (b) bi-encoder adapted for document translation, where all documents are translated into \ud835\udc52and queries remain in \ud835\udc52; (c) bi-encoder adapted for query translation, where query \ud835\udc52is translated into \ud835\udc53and issued against documents in \ud835\udc53; (d) bi-encoder where the encoders can project content from multiple languages into the same representation space. For mono-lingual retrieval, a standard multi-stage architecture applies rerankers to the output of first-stage retrievers, like those discussed above. In a cross-lingual context, we describe how crosslingual rerankers can be designed and built using existing multilingual models. Results fusion forms the final component of our conceptual framework. 
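To make the three first-stage options in Figure 1(b)-(d) concrete before turning to fusion, here is a minimal sketch of how they differ only in where translation or multilingual encoding is applied. All of the helper names (translate_fe, translate_ef, encode_en, encode_f, encode_multi, search) are hypothetical placeholders for an MT system, mono-/multilingual encoders, and a top-k retriever; they do not come from the toolkits discussed in this paper.

def doc_translation_retrieve(query_e, corpus_f, translate_fe, encode_en, search):
    # Translate every document from f into e once, offline; retrieval is then mono-lingual in e.
    corpus_e = [translate_fe(d) for d in corpus_f]
    return search(encode_en(query_e), [encode_en(d) for d in corpus_e])

def query_translation_retrieve(query_e, corpus_f, translate_ef, encode_f, search):
    # Translate only the query from e into f (cheap, per query); retrieval is mono-lingual in f.
    query_f = translate_ef(query_e)
    return search(encode_f(query_f), [encode_f(d) for d in corpus_f])

def language_independent_retrieve(query_e, corpus_f, encode_multi, search):
    # A single multilingual encoder maps e-queries and f-documents into the
    # same vector space, so no explicit translation is needed.
    return search(encode_multi(query_e), [encode_multi(d) for d in corpus_f])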
Within a multi-stage architecture, there arises a natural question of when fusion should be performed: this manifests in the early vs. late fusion techniques that we examine. Practically, we provide a number of reproducible baselines in the context of the above conceptual framework for the TREC 2022 NeuCLIR test collection, including variants of the highest-scoring runs that were submitted to the evaluation. These reproducible baselines have been incorporated into the Anserini and Pyserini IR toolkits. Our efforts are built on a collaboration of the two teams that submitted the most effective runs to the TREC evaluation. We hope that this work provides a solid foundation for future work, both in terms of offering a conceptual framework and reference implementations that the community can further build on. 2 MONO-LINGUAL RETRIEVAL OVERVIEW Since mono-lingual retrieval architectures provide the starting point for cross-lingual retrieval, it makes sense to begin with an overview of modern mono-lingual methods. Here, we adopt the standard formulation of the (mono-lingual) retrieval task (also called ad hoc retrieval). From a finite but arbitrarily large collection of documents C = {\ud835\udc511,\ud835\udc512 . . . ,\ud835\udc51\ud835\udc5b}, the system\u2019s task, given query \ud835\udc5e, is to return a top-\ud835\udc58ranking of documents that maximizes some metric of quality such as nDCG or average precision. Rerankers. The earliest applications of neural networks to tackle ad hoc retrieval in a data-driven manner date back to the mid 2000s in the context of learning to rank [5]. Since then, search engine design has been dominated by multi-stage ranking architectures [30, 44], where a first-stage retriever (often, just BM25 retrieval) generates candidate documents that are then reranked by one or more stages, typically by machine-learned models. In the \u201ctransformer era\u201d, for example, BERT [32, 34] and T5 [33] can be used in exactly this manner. Use of pretrained transformers for reranking requires feeding the model both the query and the candidate text, and this style of model application is known as a cross-encoder. Bi-encoder architectures. An important recent innovation for passage retrieval was the introduction of so-called dense retrieval models that take advantage of a bi-encoder design (contrasted with the cross-encoder design discussed above): DPR [19] and ANCE [45] are two early examples. With sufficient labeled data, we can learn encoders (typically, transformer-based models) that project queries and documents into a dense (semantic) representation space (e.g., 768 dimensions) where relevance ranking can be recast as nearestneighbor search over representation vectors. After the introduction of dense retrieval models, researchers soon realized that transformer-based encoders could also be coaxed to generate sparse representations, where the vector basis, for example, spans the input vocabulary space. Another way to view these so-called sparse retrieval models is to contrast them with BM25: whereas BM25 term weights are assigned using a heuristic scoring function, sparse retrieval models assign term weights that are learned using pretrained transformers such as BERT. Examples of these learned sparse retrieval models include DeepImpact [29], uniCOIL [24, 53], SPLADE [12], as well as many others. 
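Since effectiveness in this paper is reported with metrics such as nDCG (nDCG@20 in the results), a small reference implementation of nDCG@k may help; this is a generic textbook formulation (exponential gains, log2 discount), not code taken from the evaluation tooling used in the track.

import math

def dcg(relevances, k):
    """Discounted cumulative gain over the top-k graded relevance labels."""
    return sum((2 ** rel - 1) / math.log2(i + 2)
               for i, rel in enumerate(relevances[:k]))

def ndcg(relevances, k):
    """nDCG@k: DCG of the ranking divided by DCG of the ideal ordering."""
    ideal = sorted(relevances, reverse=True)
    denom = dcg(ideal, k)
    return dcg(relevances, k) / denom if denom > 0 else 0.0

# Toy example: relevance grades of documents in the order a system ranked them.
print(round(ndcg([3, 2, 0, 1, 0], k=5), 4))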
Recently, Lin [23] made the observation that dense retrieval models, sparse retrieval models, and traditional bag-of-words models (e.g., BM25) are all parametric variations of a bi-encoder architecture, which is shown in Figure 1(a). In all three classes of models, \u201cencoders\u201d take queries or documents and generate vector representations. There are two major axes of differences, the first of which lies in the basis of the representation vector: dense retrieval models \fgenerate dense (semantic) representations whereas sparse retrieval models and bag-of-words model ground their representation vectors in lexical space. The other major axis of variation is whether these representations are learned: yes in the case of dense and sparse retrieval models, but no in the case of traditional bag-of-words models. The conceptual framework for mono-lingual retrieval provides us with a basis for organizing cross-lingual retrieval approaches, which we discuss next. 3 CROSS-LINGUAL RETRIEVAL METHODS The cross-lingual information retrieval task is formalized in a similar manner as the mono-lingual retrieval task. We assume a collection of documents C\ud835\udc53in language \ud835\udc53comprised of {\ud835\udc511,\ud835\udc512 . . . ,\ud835\udc51\ud835\udc5b}. The system is given a query \ud835\udc5ein language \ud835\udc52, which we denote \ud835\udc5e\ud835\udc52 for clarity, and its task is to return a top-\ud835\udc58ranking of documents from C\ud835\udc53that maximizes some metric of quality such as nDCG or average precision. Throughout this work, \ud835\udc52refers to English and \ud835\udc53 refers to some non-English language (e.g., Russian), but this need not be the case in general. Building from the design of the mono-lingual retrieval architecture presented in the previous section, our discussions begin with three possible designs for first-stage retrieval: document translation, query translation, and the use of language-independent representations. We then overview cross-encoders for reranking the output of first-stage retrievers and finally conclude with some thoughts about fusion techniques. To further ground cross-lingual retrieval techniques, we provide some details about the TREC 2022 NeuCLIR evaluation. Given English queries, participants are tasked with retrieving from three separate corpora comprising Persian, Russian, and Chinese newswire documents curated from the Common Crawl between August 1, 2016 and July 31, 2021. The corpora are modest in size, with 2.23 million documents in Persian, 4.63 million documents in Russian, and 3.18 million documents in Chinese. Information needs (i.e., topics, in TREC parlance) were developed following a standard process for building retrieval test collections [15, 42]. The organizers released 114 topics, originally developed in English, which were then translated into Persian, Russian, and Chinese\u2014both by humans and automatically by Google Translate. The topics comprise \u201ctitle\u201d and \u201cdescription\u201d fields, where the former are akin to keyword queries and the latter are roughly sentence-long articulations of the information need. By design, all topics are aligned, in the sense that for each topic, we have translations in all three languages. However, it was not the case that all topics were evaluated for all languages: In total, the organizers released relevance judgments for 46 topics in Persian, 45 topics in Russian, and 49 topics in Chinese. 
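Before turning to the specific approaches, the observation above that bag-of-words, learned sparse, and dense models are all parametric variations of a bi-encoder can be illustrated in a few lines: in every case the score is an inner product, and only the vector basis and the way weights are assigned differ. The toy weights below are hand-picked for illustration and do not come from any particular model.

def sparse_dot(query_vec, doc_vec):
    """Inner product over a lexical vocabulary (dict of term -> weight)."""
    return sum(w * doc_vec.get(term, 0.0) for term, w in query_vec.items())

def dense_dot(query_vec, doc_vec):
    """Inner product over a dense semantic space (list of floats)."""
    return sum(q * d for q, d in zip(query_vec, doc_vec))

# BM25-style or SPLADE-style representations live in term space...
print(sparse_dot({"black": 1.2, "bear": 2.0}, {"bear": 1.5, "attacks": 0.7}))
# ...while DPR-style representations live in a low-dimensional semantic space.
print(dense_dot([0.1, -0.4, 0.8], [0.2, 0.0, 0.9]))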
3.1 Document Translation A very simple approach to cross-lingual information retrieval is known as document translation: Given \ud835\udc5e\ud835\udc52in language \ud835\udc52and the corpus C\ud835\udc53in language \ud835\udc53, we can translate the entire corpus into language \ud835\udc52, i.e., {Translate(\ud835\udc51\ud835\udc56)}, and then perform mono-lingual retrieval in language \ud835\udc52. This design is shown in Figure 1(b), where the primary addition is a document translation phase that feeds into the document side of the bi-encoder architecture. While translating the entire corpus can be time-consuming, it only needs to be performed once and can be viewed as an expensive pre-processing step, like other computationally demanding document expansion techniques such as doc2query [35]. Any translation technique can be used, including off-the-shelf MT systems. Generally, since documents are comprised of well-formed sentences, automatic translation output can be quite fluent, depending on the quality of the underlying system. This stands in contrast to query translation (see below), where quality often suffers because queries are usually much shorter (hence lacking context) and systems are not usually trained on such inputs. Once C\ud835\udc53has been translated into C\ud835\udc52, we now have a monolingual retrieval task since queries are also in \ud835\udc52. In our case, the three corpora are in Persian, Russian, and Chinese, and we used the English translations provided by the NeuCLIR Track organizers, generated by the SockEye MT system. From the NeuCLIR topics, we extracted three types of English queries: only the \u201ctitle\u201d field, only the \u201cdescription\u201d field, and both. Our experiments used two retrieval models and pseudo-relevance feedback: BM25. Despite the advent of numerous neural ranking models, this traditional \u201cbag-of-words\u201d model remains a robust baseline. SPLADE. We chose SPLADE++ Ensemble Distil [12] due to its zero-shot capabilities. The SPLADE family of models is a sparse neural retrieval model that learns both document and query expansion controlled by a regularization term. Pseudo-relevance feedback (PRF). On top of results from both BM25 and SPLADE, we apply pseudo-relevance feedback. While RM3 is a popular choice and has been well studied in the context of neural methods [48], in this work we instead apply Rocchio feedback, for two reasons: First, Rocchio feedback has been demonstrated to be an effective pseudo-relevance feedback approach for dense vector representations, and applying Rocchio to lexical representations provides conceptual unity. In contrast, there is no equivalent RM3 variant for dense vectors, which makes comparing sparse and dense PRF more difficult. Second, previous work has shown that Rocchio is at least as effective as RM3 [26], so we gain simplicity and consistency without sacrificing effectiveness. 3.2 Query Translation The flip side of document translation is known as query translation: Given \ud835\udc5e\ud835\udc52in language \ud835\udc52and the corpus C\ud835\udc53in language \ud835\udc53, we can translate the query into language \ud835\udc53, i.e., Translate(\ud835\udc5e\ud835\udc52) = \ud835\udc5e\ud835\udc53, and then perform mono-lingual retrieval in language \ud835\udc53. This design is shown in Figure 1(c), where we add a query translation component that feeds the query side of the bi-encoder architecture. 
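The Rocchio pseudo-relevance feedback applied on top of both BM25 and SPLADE above has a compact form that carries over directly to dense vectors as well. The sketch below is a generic version over dense vectors (NumPy, with arbitrary alpha and beta and no negative-feedback term), not the exact parameterization used in these experiments.

import numpy as np

def rocchio(query_vec, feedback_doc_vecs, alpha=1.0, beta=0.75):
    """New query = alpha * original query + beta * centroid of the top-ranked docs."""
    centroid = np.mean(feedback_doc_vecs, axis=0)
    return alpha * np.asarray(query_vec) + beta * centroid

# Toy example: expand a query vector with the centroid of the top-3 results.
q = np.array([0.2, 0.0, 0.5])
top_docs = np.array([[0.1, 0.3, 0.4], [0.0, 0.2, 0.6], [0.3, 0.1, 0.5]])
print(rocchio(q, top_docs))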
Query translation is much more computationally efficient than document translation, but has the disadvantages already discussed\u2014 queries may be more difficult to translate given that they may not be well-formed sentences. However, this approach enables more rapid experimentation since the introduction of a new translation model does not require re-translation of the entire corpus. One challenge of query translation is that we need a good monolingual retrieval model in \ud835\udc53, which by definition is non-English. While BM25 can provide a baseline (in the bag-of-words space of language \ud835\udc53), effective learned retrieval models are more difficult \fto come by since less manually labeled data are available in nonEnglish languages. Our experiments consider both human and machine translations of the topics provided by the track organizers. From each type of translation, we can create three types of queries: \u201ctitle\u201d, \u201cdescription\u201d, and \u201cboth\u201d (similar to the document translation case above). Thus, we have a total of six variations: {human translation, machine translation} \u00d7 {title, description, both}. With these conditions, we experimented with two different retrieval models as well as pseudo-relevance feedback: BM25. Again, this traditional \u201cbag-of-words\u201d model remains a robust baseline. SPLADE. To build SPLADE models in non-English languages, we first need to start with a good pretrained language model for that language. Thus, the models used here are first trained from scratch with the MLM+FLOPS loss [20] using a corpus concatenation of (i) the NeuCLIR corpus of the target language, (ii) the MS MARCO translations [4] for the target language, and (iii) the Mr. TyDi [51] corpus of the target language (if available). Finally, we fine-tuned on the target language version of MS MARCO, expecting to have similar zero-shot properties as similar experiments in English. A separate model was created for each language.1 Pseudo-relevance feedback. As in the document translation case, we can apply pseudo-relevance feedback on top of either BM25 or SPLADE. For the same reasons discussed above, Rocchio was chosen as the feedback method. 3.3 Language-Independent Representations Starting from the bi-encoder design for mono-lingual retrieval shown in Figure 1(a), one might wonder if it were possible for the document and query encoders to generate some sort of languageindependent semantic representation that would support direct relevance matching across languages. With the advent of pretrained multilingual transformers, this is indeed possible. For example, we can apply the document encoder to documents in C\ud835\udc53(in language \ud835\udc53), and apply the query encoder to a query in \ud835\udc52, and directly conduct relevance ranking on the representations. Thus, we can perform cross-lingual retrieval without explicit query or document translation. This is shown in Figure 1(d). The most straightforward implementation of this approach is to train a DPR model [19], but starting from a multilingual transformer backbone such as mBERT. To our knowledge, Asai et al. [1] was the first to propose such an approach. More recently, Zhang et al. [52] built on this basic design and introduced different approaches to exploit cross-lingual transfer by \u201cpre\u2013fine-tuning\u201d on English data before further fine-tuning on the target languages using non-English data. Although Zhang et al. 
focused on monolingual retrieval in non-English languages, many of the lessons learned are applicable to the cross-lingual case as well. Specifically, for this work, we pre\u2013fine-tuned a multilingual DPR model initialized from an XLM-R [6] backbone,2 dubbed xDPR. The 1SPLADE and pretrained models are made available at https://huggingface.co/naver/ modelname with modelname = neuclir22-{pretrained,splade}-{fa,ru,zh} 2https://huggingface.co/xlm-roberta-large model was trained on the MS MARCO passage dataset [2], where both query and passage encoders share parameters. With this trained model, we separately encoded the corpora in Persian, Russian, and Chinese. It is perhaps worth emphasizing that the same model was used in all three cases. For query encoding, we have a number of design choices. Similar to document translation and query translation, we can use \u201ctitle\u201d, \u201cdescription\u201d, or \u201cboth\u201d. Furthermore, we can encode queries either in \ud835\udc52or \ud835\udc53. In the first case, we are asking the encoder to directly project \ud835\udc52queries into the semantic space occupied by the \ud835\udc53documents. In the second case, the query starts off in \ud835\udc53, so the model is encoding a sequence in \ud835\udc53 into the semantic space occupied by \ud835\udc53documents. Thus, for each language, we arrive at a total of nine variations: {original query, human translation, machine translation} \u00d7 {title, description, both}. Finally, on top of xDPR retrieved results, we can apply pseudorelevance feedback using Rocchio\u2019s method, following the work of Li et al. [21, 22]. Thus, combined with Liu [26], we are able to implement Rocchio feedback consistently across both dense and sparse retrieval models. 3.4 Reranking In a standard multi-stage ranking architecture, the first-stage retriever generates a ranked list of candidates that are then processed by one or more reranking stages that aim to improve the ranking. Reranking is also applicable in the cross-lingual case, but depending on the first-stage retriever, the candidate query/document pairs may either be in \ud835\udc52or \ud835\udc53. In cases where both the queries and documents are in \ud835\udc52, we can use a mono-lingual English reranker. For the first-stage retrievers based on document translation, our experiments used monoT5, which is based on T5 [37]. Reranking is performed in English with the following prompt: Query: {query_text} Document: {doc_text} Relevant: The model is asked to generate either the \u201ctrue\u201d or \u201cfalse\u201d token, from which we can extract the probability of relevance used to sort the candidates. When the monoT5 model is fine-tuned on the MS MARCO passage dataset, it achieves state-of-the-art results on the TREC Deep Learning Tracks [8, 9], as well as impressive zero-shot effectiveness on BEIR [17] and many other datasets [38\u201340, 50]. For reranking first-stage retrievers based on query translation, we used a variant based on the multilingual version of T5 called mT5, which was pretrained on the multilingual mC4 dataset [46]; otherwise, we use the same reranking approach. To fine-tune mT5 for reranking, we employed a similar strategy as Bonifacio et al. [4] using mMARCO, the multilingual version of the MS MARCO dataset. For our experiments, we used the XXL model with 13B parameters. 3.5 Fusion Researchers have known for many decades that fusion techniques, which combine evidence from multiple individual runs, can improve effectiveness [3, 41]. 
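Before the fusion details, the monoT5-style relevance scoring described in Section 3.4 can be sketched with the Hugging Face transformers library. The checkpoint name below is an assumption for illustration, and the token handling is simplified; in particular, the query-translation runs in these experiments fine-tune mT5 rather than the English monoT5 shown here.

import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

model_name = "castorini/monot5-base-msmarco"  # assumed checkpoint, for illustration
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name).eval()

def monot5_score(query, doc):
    """Probability of the 'true' token for the monoT5-style relevance prompt."""
    prompt = f"Query: {query} Document: {doc} Relevant:"
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True)
    decoder_input = torch.full((1, 1), model.config.decoder_start_token_id)
    with torch.no_grad():
        logits = model(**inputs, decoder_input_ids=decoder_input).logits[0, 0]
    true_id = tokenizer.encode("true")[0]
    false_id = tokenizer.encode("false")[0]
    probs = torch.softmax(logits[[true_id, false_id]], dim=0)
    return probs[0].item()  # candidates are sorted by this score, descending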
Fusion works particularly well when the individual runs are based on different underlying techniques, such as in the case of dense vs. sparse retrieval models [14, 27]. Given that our first-stage retrievers are all based on very different approaches, \fwe would expect fusion to yield substantial boosts in effectiveness, although this does not appear to be borne out experimentally. Within a multi-stage architecture, there arises a natural question of when fusion should be performed. One possible approach is to independently rerank the output of each first-stage retriever, and then fuse those results; we call this late fusion. Another possible approach is to first fuse the output of the first-stage retrievers, and then rerank the combined results; we call this early fusion. The effectiveness difference between the two approaches is an empirical question, but late fusion is more computationally intensive because it requires more reranking. 4 IMPLEMENTATION DETAILS All the first-stage and fusion retrieval conditions described in this paper are implemented in Anserini [47] and Pyserini [25]. Anserini is a Java-based toolkit built around the open-source Lucene search library to support reproducible information retrieval research. Pyserini provides a Python interface to Anserini and further augments its capabilities by including support for dense retrieval models. Together, the toolkits are widely adopted by researchers in the IR and NLP communities. For document translation using BM25, our implementation uses Lucene\u2019s default analyzer for English, which performs tokenization, stemming, etc. Retrieval is performed with Pyserini\u2019s default BM25 parameters (\ud835\udc581 = 0.9, \ud835\udc4f= 0.4). For query translation, note that since we are indexing non-English text, analyzers in \ud835\udc53are required. Fortunately, Lucene already has analyzers implemented for all three languages, which we used out of the box. The same BM25 parameters were used. All SPLADE models were implemented in Lucene using the standard \u201cfake documents\u201d trick [28]. Token weights were used to generate synthetic documents where the token was repeated a number of times equal to its weight (after quantizing into integers). For example, if \u201ccar\u201d receives a weight of ten from the encoder, we simply repeat the token ten times. These fake documents are then indexed with Anserini as usual, where the weight is stored in the term frequency position of the postings in the inverted index. Top-\ud835\udc58 retrieval is implemented by using a \u201csum of term frequency\u201d scoring function in Lucene, which produces exactly the same output as ranking by the inner product between query and document vectors. Anserini provides the appropriate abstractions that hide all these implementation details. Support for dense retrieval is provided in Pyserini with the Faiss toolkit [18]; all xDPR runs were conducted with flat indexes. For both BM25 and SPLADE models, Anserini exposes the appropriate bindings for performing retrieval in Python, and Pyserini provides appropriate interfaces that abstract over and unify retrieval using dense and sparse models (i.e., they are merely parametric variations in the command-line arguments). Pyserini additionally provides implementations of reciprocal rank fusion, and thus the entire infrastructure makes mixing-and-matching different experimental conditions quite easy. 5 RESULTS Our results are organized into following progression: first-stage retrievers, reranking, and fusion. 
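One implementation detail from Section 4 is worth making concrete before the results: the "fake documents" trick for indexing learned sparse weights with a standard inverted index, where a sum-of-term-frequencies scorer then reproduces the (quantized) inner product. The quantization scale below is an arbitrary illustration, not Anserini's exact procedure.

def to_fake_document(term_weights, scale=100):
    """Turn {term: float weight} into a token sequence whose term frequencies
    encode the quantized weights; 'sum of tf' scoring over such documents
    equals the quantized query-document inner product."""
    tokens = []
    for term, weight in term_weights.items():
        tf = int(round(weight * scale))
        tokens.extend([term] * tf)
    return " ".join(tokens)

# Toy SPLADE-style output for one passage.
print(to_fake_document({"car": 0.10, "vehicle": 0.04, "engine": 0.02}))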
We report retrieval effectiveness in terms of nDCG@20, the official metric of the NeuCLIR evaluation, and recall at a cutoff of 1000 hits (recall@1000), which quantifies the effectiveness upper bound of reranking. The organizers also measured mean average precision (MAP) as a supplemental metric; we followed this procedure as well. Overall, the findings from nDCG@20 and MAP were consistent, and so for brevity we omit the MAP results in our presentation. In Section 3, we describe a vast design space for first-stage variants that can feed many reranking and fusion approaches. It is not practical to exhaustively examine all possible combinations, and thus our experiments were guided by progressive culling of \u201cuninteresting\u201d settings, as we\u2019ll describe. Finally, a word on significance testing: We are of course cognizant of its importance, but we are equally aware of the dangers of multiple hypothesis testing. Due to the large number of conditions we examine, a standard technique such as the Bonferroni correction is likely too conservative to detect significant differences, especially given the relatively small topic size of NeuCLIR. For most of our experiments, we did not perform significance testing and instead focused on general trends that are apparent from our large numbers of experimental conditions. We applied significance testing more judiciously, to answer targeted research questions. To be clear, the results we report are the only tests we conducted\u2014that is, we did not cherry-pick the most promising results. In all cases, we used paired \ud835\udc61-tests (\ud835\udc5d\u22640.05) with the Bonferroni correction. 5.1 First-Stage Retrievers We begin by examining the output of individual first-stage retrievers. Tables 1 and 2 present results in terms of nDCG@20 and recall@1000, respectively. Each block of rows is organized by the general approach. The columns show metrics grouped by language, and within each block, we report the results of using queries comprised of the \u201ctitle\u201d field, the \u201cdescription\u201d field, and both. Document translation. Recall that in the document translation condition, we are indexing the machine-translated documents provided by the NeuCLIR organizers, which are in English. The BM25 conditions in rows (1ab) and the SPLADE conditions in rows (2ab) differ only in the retrieval model applied to the translated corpus. For BM25, we see that \u201ctitle\u201d and \u201cboth\u201d query conditions yield about the same effectiveness (both metrics) on Persian and Chinese, but \u201cboth\u201d is worse on Russian. For all languages, it appears that \u201cdescription\u201d queries perform worse. For SPLADE, interestingly, for Persian and Chinese, there does not appear to be much of an effectiveness gap between the three types of queries for both metrics. This is likely because the retrieval model includes query expansion, and so the benefits from having richer descriptions of the information need diminish. The comparisons between (a) vs. (b) rows highlight the impact of pseudo-relevance feedback. We see that, at best, PRF yields a small improvement for BM25 in terms of nDCG@20, and for SPLADE, PRF actually decreases effectiveness. However, looking at the recall figures in Table 2, it does appear that PRF boosts recall. This behavior is expected, as PRF is primarily a recall-enhancing device. Query translation. 
With BM25, shown in rows (3a)\u2013(3d), we see that \u201ctitle\u201d and \u201cboth\u201d conditions are generally on par for Russian \fnDCG@20 Persian Russian Chinese PRF title desc both title desc both title desc both document translation \u2014 BM25 (1a) official Sockeye translation \u2717 0.3665 0.2889 0.3670 0.3693 0.2060 0.3080 0.3705 0.3070 0.3723 (1b) official Sockeye translation \u2713 0.3532 0.3127 0.3720 0.3589 0.2627 0.3188 0.3802 0.3206 0.3806 document translation \u2014 SPLADE (2a) official Sockeye translation \u2717 0.4627 0.4618 0.4802 0.4865 0.4193 0.4573 0.4233 0.4299 0.4236 (2b) official Sockeye translation \u2713 0.4438 0.4675 0.4645 0.4836 0.4243 0.4604 0.4204 0.4142 0.4206 query translation \u2014 BM25 (3a) human translation (HT) \u2717 0.3428 0.2843 0.3429 0.3668 0.3138 0.3665 0.2478 0.2068 0.2572 (3b) machine translation (MT) \u2717 0.3331 0.2974 0.3700 0.3564 0.2972 0.3605 0.1830 0.1498 0.1754 (3c) human translation (HT) \u2713 0.3356 0.2885 0.3408 0.3572 0.3366 0.3630 0.2544 0.1985 0.2734 (3d) machine translation (MT) \u2713 0.3374 0.3300 0.3612 0.3426 0.3257 0.3764 0.1861 0.1464 0.1785 query translation \u2014 SPLADE (4a) human translation (HT) \u2717 0.4301 0.4413 0.4788 0.4594 0.3922 0.4214 0.3110 0.2935 0.3143 (4b) machine translation (MT) \u2717 0.4437 0.4300 0.4728 0.4452 0.3792 0.4156 0.2843 0.2527 0.2929 (4c) human translation (HT) \u2713 0.4348 0.4232 0.4146 0.4322 0.4133 0.4316 0.3198 0.2926 0.3077 (4d) machine translation (MT) \u2713 0.4193 0.4121 0.4444 0.4337 0.3965 0.4075 0.2920 0.2562 0.3029 language-independent representations \u2014 xDPR (5a) \u27e8d: original corpus, q: English\u27e9 \u2717 0.1522 0.1847 0.1804 0.2967 0.2913 0.2866 0.2200 0.2192 0.2185 (5b) \u27e8d: original corpus, q: HT\u27e9 \u2717 0.2776 0.2900 0.2953 0.3350 0.3276 0.3307 0.3197 0.3129 0.3035 (5c) \u27e8d: original corpus, q: MT\u27e9 \u2717 0.2721 0.2968 0.3055 0.3619 0.3348 0.3542 0.3025 0.2785 0.3013 (5d) \u27e8d: original corpus, q: English\u27e9 \u2713 0.1694 0.1996 0.1993 0.3116 0.3085 0.3045 0.2442 0.2343 0.2312 (5e) \u27e8d: original corpus, q: HT\u27e9 \u2713 0.3083 0.2988 0.3197 0.3349 0.3544 0.3578 0.3376 0.3463 0.3380 (5f) \u27e8d: original corpus, q: MT\u27e9 \u2713 0.3136 0.3012 0.3181 0.3727 0.3690 0.3793 0.3268 0.3041 0.3345 Table 1: Main results table reporting nDCG@20 for various first-stage retrievers. and Chinese for both metrics. For SPLADE, shown in rows (4a)\u2013 (4d), there does not appear to be a consistent finding: in some cases, \u201cboth\u201d beats \u201ctitle\u201d, and the opposite in other cases. However, it does appear that \u201cdescription\u201d alone is generally less effective in terms of nDCG@20. With query translation, there is a natural comparison between human translations and machine translations. In rows (3) and (4), these are the (a) and (c) conditions versus the (b) and (d) conditions. It does not appear that for Persian and Russian, machine-translated queries are consistently less effective than human translations, for both BM25 and SPLADE. In some cases, we actually observe machine-translated queries outperforming their human-translation counterparts. For BM25, note that since the queries are bags of words, the fluency of the translations is not important, so long as the correct content terms are present. For SPLADE, the model appears to be robust to possibly disfluent translations. 
In Chinese, however, there does seem to be a noticeable gap between human and machine translations, with the human translations generally yielding better results. Finally, consistent with the document translation case, pseudorelevance feedback does not appear to improve nDCG@20, but does improve recall. Once again, this is expected. Language-Independent Representations. The final blocks in Tables 1 and 2 show the effectiveness of xDPR. Recall our experimental design: on the document end, the original corpus in \ud835\udc53is encoded with the model. On the query end, there are three options: directly encode the English query, encode the human-translated (HT) query, or encode the machine-translated (MT) query. These are shown in rows (5a), (5b), and (5c), respectively. We see quite a big difference in effectiveness between row (5a) and row (5b), which indicates that there is a big loss in trying to encode queries in \ud835\udc52directly into the semantic space occupied by documents in \ud835\udc53, compared to encoding queries in \ud835\udc53. Clearly, the model is not able to adequately encode text with the same meaning in different languages (the query translations) into the same semantic space. Regardless of configuration, the dense retrieval models appear to be far less effective than the BM25 and SPLADE models, for both translation types, across both metrics. However, we see that pseudo-relevance feedback does appear to increase effectiveness, which is consistent with previous work [21, 22] on vector PRF. 5.2 Reranking In the previous section, we examined first-stage retrieval settings for 18 \u00d7 3 = 54 different conditions, for each language. It is impractical to report reranking results for every single condition, and thus we made a few choices to focus our attention: We considered only conditions that take advantage of both title and description fields, which appear to be more robust than title-only queries. 
Recall@1000                                   Persian                       Russian                       Chinese
                                        PRF   title    desc     both       title    desc     both        title    desc     both
document translation — BM25
(1a) official Sockeye translation        ✗    0.7335   0.6319   0.7652     0.7409   0.5780   0.7255      0.7567   0.6639   0.7567
(1b) official Sockeye translation        ✓    0.8111   0.7638   0.8248     0.7908   0.6780   0.7798      0.8129   0.7404   0.8011
document translation — SPLADE
(2a) official Sockeye translation        ✗    0.8478   0.8796   0.8860     0.8538   0.8376   0.8513      0.7997   0.7597   0.7922
(2b) official Sockeye translation        ✓    0.8592   0.8735   0.8703     0.8686   0.8238   0.8544      0.8038   0.7623   0.8067
query translation — BM25
(3a) human translation (HT)              ✗    0.7128   0.7027   0.7373     0.7125   0.6655   0.7421      0.4759   0.4577   0.4940
(3b) machine translation (MT)            ✗    0.7254   0.6815   0.7424     0.7332   0.6210   0.7373      0.3829   0.2989   0.4028
(3c) human translation (HT)              ✓    0.7691   0.7520   0.8092     0.7381   0.7276   0.7770      0.5230   0.5113   0.5327
(3d) machine translation (MT)            ✓    0.7672   0.7033   0.7829     0.7439   0.7136   0.7959      0.4361   0.3748   0.4341
query translation — SPLADE
(4a) human translation (HT)              ✗    0.7652   0.8173   0.8239     0.7739   0.7200   0.7612      0.6803   0.6602   0.6551
(4b) machine translation (MT)            ✗    0.8045   0.8172   0.8437     0.7725   0.7150   0.7669      0.6424   0.5919   0.6312
(4c) human translation (HT)              ✓    0.7897   0.8175   0.8245     0.7946   0.7209   0.7776      0.7100   0.7205   0.7029
(4d) machine translation (MT)            ✓    0.8099   0.8117   0.8350     0.7918   0.7090   0.7590      0.6861   0.6096   0.6535
language-independent representations — xDPR
(5a) ⟨d: original corpus, q: English⟩    ✗    0.4910   0.5445   0.5393     0.5704   0.5627   0.5834      0.4161   0.4359   0.4386
(5b) ⟨d: original corpus, q: HT⟩         ✗    0.6288   0.6780   0.7088     0.6196   0.5825   0.6368      0.5773   0.5841   0.6031
(5c) ⟨d: original corpus, q: MT⟩         ✗    0.6333   0.6453   0.6850     0.6285   0.5649   0.6300      0.5420   0.5382   0.5873
(5d) ⟨d: original corpus, q: English⟩    ✓    0.4702   0.4981   0.5347     0.6251   0.5971   0.6212      0.4330   0.4714   0.4593
(5e) ⟨d: original corpus, q: HT⟩         ✓    0.6409   0.6612   0.7212     0.6541   0.5915   0.6346      0.6088   0.5939   0.6310
(5f) ⟨d: original corpus, q: MT⟩         ✓    0.6686   0.6516   0.7071     0.6784   0.6032   0.6475      0.5744   0.5375   0.6109
Table 2: Main results table reporting recall@1000 for various first-stage retrievers.

We also focused on runs without PRF, since PRF represents additional computational costs (both latency and index size). For each language, this reduces the number of first-stage retrievers under consideration to nine. We applied reranking on these runs, including the title and description fields in the input template to the reranking models. We informally, but not exhaustively, examined other conditions, but they did not appear to alter our overall findings. For example, we tried reranking the first-stage retrieval results with pseudo-relevance feedback, but the results were not noticeably better (even though they exhibited higher recall). Reranking results are shown in Table 3. Under the effectiveness of the first-stage retriever ("1st" columns), we report (nDCG@20, recall@1000): the first quantifies candidate ranking quality and the second quantifies the upper bound effectiveness of a reranker. We see that reranking improves effectiveness by large margins, but this is expected as the effectiveness of cross-encoders in various settings is well known (see Section 3.4). One interesting observation, however, is that reranking reduces the effectiveness gap between the best and worst first-stage retrievers.
For example, starting with BM25, which is clearly less effective than SPLADE, the reranker is able to "make up" for the lower quality candidates, such that the end-to-end effectiveness is relatively close to reranking SPLADE results (at least in terms of nDCG). In fact, in some cases, reranking xDPR results yields scores that are even higher than reranking BM25 results. While "coupling effects" between the first-stage retriever and reranker have been previously noted in the literature [14, 36], this finding affirms the need for further explorations.

5.3 Fusion

With fusion, the design space of possible combinations is immense and impractical to exhaustively explore. To provide continuity, we focus only on the first-stage retrievers in the reranking experiments. In the space of fusion techniques, we settled on reciprocal rank fusion (RRF), which is a simple, effective, and robust approach [7]. With these considerations, we experimented with the following fusion conditions in Table 4: (6a) document translation combining BM25 and SPLADE; (6b) query translation combining BM25 and SPLADE; (6c) combining document and query translation with BM25; (6d) combining SPLADE document and query translation; (6e) combining all lexical approaches; (6f) combining both dense approaches; (6g) combining everything. The top block of Table 4 repeats the effectiveness of the first-stage retrievers for convenience. In the bottom block of the table, cases in which the fusion results are worse than the best input are shown in red. In these cases, fusion provides no value over just selecting the best individual run.

nDCG@20                                    Persian                     Russian                     Chinese
                                           1st               rerank    1st               rerank    1st               rerank
document translation — BM25
(1a) official Sockeye translation          (0.3670, 0.7652)  0.5350    (0.3080, 0.7255)  0.5662    (0.3723, 0.7567)  0.4955
document translation — SPLADE
(2a) official Sockeye translation          (0.4802, 0.8860)  0.5545    (0.4573, 0.8513)  0.5714    (0.4236, 0.7922)  0.5026
query translation — BM25
(3a) human translation (HT)                (0.3429, 0.7373)  0.5346    (0.3665, 0.7421)  0.5745    (0.2572, 0.4940)  0.4300
(3b) machine translation (MT)              (0.3700, 0.7424)  0.5551    (0.3605, 0.7373)  0.5742    (0.1754, 0.4028)  0.3831
query translation — SPLADE
(4a) human translation (HT)                (0.4788, 0.8239)  0.5722    (0.4214, 0.7612)  0.5823    (0.3143, 0.6551)  0.4980
(4b) machine translation (MT)              (0.4728, 0.8437)  0.5932    (0.4156, 0.7669)  0.5767    (0.2929, 0.6312)  0.5132
language-independent representations — xDPR
(5a) ⟨d: original corpus, q: English⟩      (0.1804, 0.5393)  0.4630    (0.2866, 0.5834)  0.5305    (0.2185, 0.4386)  0.4440
(5b) ⟨d: original corpus, q: HT⟩           (0.2953, 0.7088)  0.5614    (0.3307, 0.6368)  0.5617    (0.3035, 0.6031)  0.5008
(5c) ⟨d: original corpus, q: MT⟩           (0.3055, 0.6850)  0.5644    (0.3542, 0.6300)  0.5337    (0.3013, 0.5873)  0.5087
Table 3: Results of reranking various first-stage retrievers (nDCG@20). Under the column "1st" we repeat the (nDCG@20, Recall@1000) metrics from the first-stage retriever for convenience. In all cases we used both titles and descriptions as queries in first-stage retrieval (with no pseudo-relevance feedback) and reranking.
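To make the reranking stage concrete, the following is a minimal sketch of cross-encoder reranking over a first-stage candidate list. It is illustrative only: the model name is a publicly available English MS MARCO cross-encoder chosen for the example (a CLIR setting such as this one would need a multilingual or translated-input model), and the candidate format is an assumption, so this is not the exact reranker or configuration behind Table 3.

# Minimal sketch of cross-encoder reranking (illustrative; not the exact
# reranker used in these experiments). Candidates are (doc_id, text) pairs
# produced by a first-stage retriever such as BM25 or SPLADE.
from sentence_transformers import CrossEncoder

# Hypothetical model choice, for illustration only.
model = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2", max_length=512)

def rerank(query, candidates, top_k=20):
    # Score each (query, passage) pair with the cross-encoder and re-sort.
    scores = model.predict([(query, text) for _, text in candidates])
    order = sorted(zip(candidates, scores), key=lambda pair: pair[1], reverse=True)
    return [(doc_id, float(score)) for (doc_id, _), score in order[:top_k]]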
                                                           nDCG@20                      Recall@1000
                                                 Persian  Russian  Chinese     Persian  Russian  Chinese
(1a) DT–BM25                                     0.3670   0.3080   0.3723      0.7652   0.7255   0.7567
(2a) DT–SPLADE                                   0.4802   0.4573   0.4236      0.8860   0.8513   0.7922
(3b) QT–BM25                                     0.3700   0.3605   0.1754      0.7424   0.7373   0.4028
(4b) QT–SPLADE                                   0.4728   0.4156   0.2929      0.8437   0.7669   0.6312
(5a) dense–e                                     0.1804   0.2866   0.2185      0.5393   0.5834   0.4386
(5c) dense–f                                     0.3055   0.3542   0.3013      0.6850   0.6300   0.5873
(6a) RRF(1a, 2a): DT–BM25, DT–SPLADE             0.4462   0.4180   0.4189      0.8936   0.8670   0.8536
(6b) RRF(3b, 4b): QT–BM25, QT–SPLADE             0.4610   0.4598   0.2981      0.8703   0.8368   0.6692
(6c) RRF(1a, 3b): DT–BM25, QT–BM25               0.3795   0.3635   0.2736      0.7901   0.7686   0.7366
(6d) RRF(2a, 4b): DT–SPLADE, QT–SPLADE           0.5165   0.4921   0.4178      0.9009   0.8508   0.7938
(6e) RRF(1a, 2a, 3b, 4b): DT, QT                 0.4897   0.4857   0.4397      0.9285†  0.8880   0.8637†
(6f) RRF(5a, 5c): dense                          0.2640   0.3469   0.2731      0.6814   0.6493   0.5693
(6g) RRF(1a, 2a, 3b, 4b, 5a, 5c): DT, QT, dense  0.4926   0.5142†  0.4541      0.9291†  0.8818   0.8704†
Table 4: Results of different fusion combinations. Scores of individual first-stage retrievers are repeated for convenience. In all cases we used both titles and descriptions as queries, with no pseudo-relevance feedback. Red shows cases where fusion performed worse than the best single input run. † represents a significant improvement over (2a).

From these results, it appears that for Persian and Russian, the best effectiveness can be achieved by fusing both document translation and query translation SPLADE models, row (6d), although for Chinese, the same fusion is a bit worse than just document translation SPLADE. Fusing all the lexical runs, row (6e), is a bit worse than fusing just SPLADE runs in Persian and Russian, but it improves Chinese. Finally, incorporating evidence from the language-independent dense retrieval techniques appears to provide value over simply fusing the lexical results, as we see comparing (6g) and (6e). This is surprising given that by themselves, the dense retrieval runs are quite poor. Overall, we were somewhat surprised by the finding that fusion did not improve effectiveness as robustly as we had hoped. In Table 4, the figures in red represent all the cases in which fusion actually hurt effectiveness, i.e., fusion performed worse than the best single input run. We attribute this finding to the large differences in effectiveness between the runs, in that RRF does not work as well if one of the fusion inputs is much better than the others.

                                                           Persian                     Russian                     Chinese
                                                 1st      early    late     1st      early    late     1st      early    late
(4a) QT–SPLADE = best single                     0.4728   0.5932            0.4156   0.5767            0.2929   0.5132
(6c) RRF(1a, 3b): DT–BM25, QT–BM25               0.3795   0.5869   0.5723   0.3635   0.5788   0.5890   0.2736   0.5257†  0.4150
(6d) RRF(2a, 4b): DT–SPLADE, QT–SPLADE           0.5165   0.5823   0.6122   0.4921   0.5729   0.5915   0.4178   0.5379   0.5272
(6e) RRF(1a, 2a, 3b, 4b): DT, QT                 0.4897   0.5901   0.5911   0.4857   0.5728   0.5853   0.4397   0.5394   0.5058
(6f) RRF(5a, 5c): dense                          0.2640   0.5621†  0.4573   0.3469   0.5438   0.5162   0.2731   0.5077†  0.4470
(6g) RRF(1a, 2a, 3b, 4b, 5a, 5c): DT, QT, dense  0.4926   0.5893   0.5626   0.5142   0.5676   0.5840   0.4541   0.5340   0.5295
Table 5: Comparisons between early and late fusion. † represents a significant improvement over late fusion.
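Since reciprocal rank fusion appears throughout Tables 4 and 5, a minimal sketch of the technique is included below as a point of reference. The constant k = 60 is the value commonly used for RRF in the literature, not necessarily the setting used in these experiments, and the run format is an assumption for the sketch.

# Minimal sketch of reciprocal rank fusion (RRF), illustrative only.
# Each run maps a query id to a list of doc ids ordered best-first.
from collections import defaultdict

def reciprocal_rank_fusion(runs, k=60, depth=1000):
    # score(d) = sum over runs of 1 / (k + rank of d in that run)
    fused = {}
    all_qids = set().union(*(run.keys() for run in runs))
    for qid in all_qids:
        scores = defaultdict(float)
        for run in runs:
            for rank, doc_id in enumerate(run.get(qid, [])[:depth], start=1):
                scores[doc_id] += 1.0 / (k + rank)
        fused[qid] = sorted(scores, key=scores.get, reverse=True)[:depth]
    return fused

# For example, a combination analogous to condition (6d) would fuse the
# document translation and query translation SPLADE runs (assuming both
# have been loaded into dicts of the form described above):
# fused = reciprocal_rank_fusion([dt_splade_run, qt_splade_run])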
To more rigorously test this observation, we performed significance testing comparing the document translation SPLADE model, row (2a) in Table 4, against fusion of SPLADE models, row (6d), fusion of all lexical models, row (6e), and fusion of all lexical and dense models, row (6g). These comparisons answer the following questions, starting from the single best first-stage retriever: Does SPLADE fusion provide any additional value? What about BM25? Dense retrieval? The conclusion, reported in Table 4 with the symbol \u2020, is that most of the fusion combinations are not statistically significantly better than document translation with SPLADE, the single best first-stage retriever. For nDCG@20, the largest ensemble is significantly better than DT\u2013SPLADE only on Russian; for recall@1000 we see more significant improvements, but only on Persian and Chinese. Notably, combining evidence from both document and query translation with SPLADE, row (6d), is not significantly better than DT\u2013SPLADE alone. In our final set of experiments, we compared the effectiveness between early and late fusion for a subset of the conditions in Table 4. These results are reported in Table 5. In this case, we use QT\u2013SPLADE as the point of comparison, which appears to provide the best single-stage retriever and reranking combination. For Persian, late fusion appears to be either about the same or slightly better, with the exception of (6f); this appears to be the case for Russian also, although the late fusion margin of improvement seems to be smaller. Chinese results are a bit more mixed, with early beating late in some cases. To more rigorously compare early vs. late fusion, we performed significance tests comparing all pairs. Only a few of these differences are significant, and they only happen for cases where early fusion is better than late fusion. Two of the three cases, however, occurred for the dense models, which are less effective to begin with. Overall, these experiments are inconclusive with respect to the question of which fusion strategy is better. To provide additional context, the best runs from the NeuCLIR 2022 evaluation were from members of our group, but were generated under the time pressure of deadlines and thus it was not possible to carefully consider all configurations as we did in Table 5. The best runs were (nDCG@20 scores): (i) Persian: p2.fa.rerank, 0.588; (ii) Russian: p3.ru.mono, 0.567; (iii) Chinese: p2.zh.rerank, 0.516. Comparing those runs to the best conditions reported here, we verify that just by carefully studying the various effects of different system components, improvements are possible across all languages, achieving new state-of-the-art effectiveness with (i) Persian: 6d late-fusion 0.612 (+0.024); (ii) Russian: 6d late-fusion 0.592 (+0.025); (iii) Chinese: 6e early-fusion 0.539 (+0.023). 6" + }, + { + "url": "http://arxiv.org/abs/2212.13534v1", + "title": "Building a Culture of Reproducibility in Academic Research", + "abstract": "Reproducibility is an ideal that no researcher would dispute \"in the\nabstract\", but when aspirations meet the cold hard reality of the academic\ngrind, reproducibility often \"loses out\". In this essay, I share some personal\nexperiences grappling with how to operationalize reproducibility while\nbalancing its demands against other priorities. 
My research group has had some\nsuccess building a \"culture of reproducibility\" over the past few years, which\nI attempt to distill into lessons learned and actionable advice, organized\naround answering three questions: why, what, and how. I believe that\nreproducibility efforts should yield easy-to-use, well-packaged, and\nself-contained software artifacts that allow others to reproduce and generalize\nresearch findings. At the core, my approach centers on self interest: I argue\nthat the primary beneficiaries of reproducibility efforts are, in fact, those\nmaking the investments. I believe that (unashamedly) appealing to self\ninterest, augmented with expectations of reciprocity, increases the chances of\nsuccess. Building from repeatability, social processes and standardized tools\ncomprise the two important additional ingredients that help achieve\naspirational ideals. The dogfood principle nicely ties these ideas together.", + "authors": "Jimmy Lin", + "published": "2022-12-27", + "updated": "2022-12-27", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.CY" + ], + "main_content": "Introduction I am passionate about making research reproducible by building and sharing, together with members of my research group and our collaborators around the world, software artifacts that others can use to recreate both our own work and the work of other researchers. In my experience, reproducibility is an ideal that no researcher would dispute \u201cin the abstract\u201d: like puppies, kittens, and rainbows, who could argue against it? So why aren\u2019t we \u201cdoing better\u201d when it comes to reproducibility? To be fair, the state of affairs has much improved, compared to, say, a decade ago. Today, it\u2019s fairly standard practice (and even expected?) in arti\ufb01cial intelligence and deep learning research that each paper is accompanied by a code repository that could (in theory) be used to reproduce the results. Many researchers make model checkpoints publicly available (e.g., on the Huggingface model hub). Yet, we are still falling short. For example, Voorhees et al. (2016) examined 79 so-called \u201cOpen Runs\u201d to TREC 2015\u2014de\ufb01ned as a TREC submission self-identi\ufb01ed as being backed by a software repository that can be used to recreate the exact run\u2014and found that none of them were reproducible. With improved tooling, one might expect the situation to be better today. For example, computational notebooks are often touted as a solution to reproducibility because of their shareable and self-documenting nature. However, in a large-scale study encompassing over 800k executions of publicly accessible valid Python notebooks on GitHub, Pimentel et al. (2019) found that only 24% executed without errors and only 4% produced the same results. The truth is that while reproducibility is a noble goal, it is aspirational, not obligatory. Unless, for example, publication venues enforce reproducibility (which would be very dif\ufb01cult to operationalize) competing priorities will take precedence. Work on a new paper or create a reproduction package arXiv:2212.13534v1 [cs.IR] 27 Dec 2022 \ffor an already published paper? The choice is usually clear for most. When aspirations meet the cold hard reality of the academic grind, it\u2019s almost inevitable that they lose. Hence the (sad) state of reproducibility today. The goal of this essay is to offer a possible path forward towards building a culture of reproducibility. 
My approach can be summarized in these two high-level bits of advice: \u2022 Appeal to self interest instead of altruism. \u2022 Engineer social processes to promote virtuous cycles and build standardized tools to reduce technical barriers. The \ufb01rst point tackles the incentive structure of \u201cwhy should I do this?\u201d Once that\u2019s been addressed, efforts should focus on promoting virtuous cycles and reducing technical barriers to make it easier to put reproducibility into practice. In a way, both these points can be subsumed under the software development principle of \u201ceat your own dog food\u201d. Before proceeding any further, it is necessary to circumscribe the scope (and possible limitations) of the advice I\u2019m offering. At a high level, my research has been driven by the quest to develop techniques and build tools that connect users to relevant information. Most of my focus has been on text, and so, cast into academic silos, my work lies at the intersection of natural language processing (NLP) and information retrieval (IR). Nearly all the work from my research group can be characterized as applied and empirical in nature. My latest interests lie in using pretrained transformer models to tackle text ranking and related challenges (Lin et al., 2021b), particularly from the perspective of representation learning (Lin, 2021). Despite differences in research agendas, I do believe that much of my advice is applicable to empirical research across many sub-disciplines in computer science beyond NLP and IR, for example, computer vision and data mining. Finally, it is worth noting that this essay represents an academic perspective: speci\ufb01cally, one of my most important roles is to mentor students. Although much of what I write doesn\u2019t really apply to employees in a corporate context, some lessons can still be adapted. This essay is organized around three main questions: \u2022 Why? (Section 2) Why should I care and why should I do it?1 \u2022 What? (Section 3) Okay, you\u2019ve convinced me. But what does it actually mean to make research reproducible? \u2022 How? (Section 4) Okay, you\u2019ve shown me the end goal. But how do I get there? Before concluding, Section 5 discusses a sm\u00f6rg\u00e5sbord of related issues. One \ufb01nal preamble before getting underway. To be precise, I use the term reproducibility in the sense articulated by the ACM in its Artifact Review and Badging Policy,2 characterized as \u201cdifferent team, same experimental setup\u201d. Speci\ufb01cally, \u201cthis means that an independent group can obtain the same result using the author\u2019s own artifacts.\u201d In contrast, replicability can be characterized as \u201cdifferent team, different experimental setup\u201d, and operationally, \u201cthis means that an independent group can obtain the same result using artifacts which they develop completely independently.\u201d Finally for completeness, repeatability can be characterized as \u201csame team, same experimental setup\u201d, i.e., \u201ca researcher can reliably repeat her own computation.\u201d I use these terms in the way prescribed by this ACM policy.3 2 Why? Broadly speaking, there are two main categories of arguments for why reproducibility is important and a worthwhile goal: the \ufb01rst is \u201cgood science\u201d and the second is \u201cgood citizenship\u201d. Both appeal to one\u2019s sense of altruism. 1The \u201cI\u201d in this case mostly refers to students. 
The real challenge is: How does an advisor convince students to actually do these things? 2https://www.acm.org/publications/policies/artifact-review-and-badging-current 3A confusing detail: A previous version of the same ACM policy swapped the meaning of reproducibility and replicability. 2 \fThe \ufb01rst broad category of arguments, \u201cgood science\u201d, goes something like this: Science represents a systematic attempt to accumulate and organize knowledge about the world in the form of testable explanations and predictions (paraphrased from Wikipedia). In the computational sciences, reproducibility and replicability are the mechanisms by which researchers can build on each other\u2019s results to accumulate knowledge. Reproducible and replicable \ufb01ndings increase the veracity of the underlying scienti\ufb01c claims. Conversely, the inability to reproduce or replicate a \ufb01nding casts doubts over whether it can be reliably built on or extended. If science is metaphorically the process of standing on the shoulder of giants in order to see further, reproducibility and replicability are the processes by which we test the stability of those shoulders we\u2019re attempting to climb on. This is the general sentiment expressed in Sonnenburg et al. (2007), but see Drummond (2009) for counterpoints. The second broad category of arguments, \u201cgood citizenship\u201d, goes something like this: A large part of the funding for research is provided by various governments via tax dollars, and thus, it behooves researchers to share their results in the broadest way possible. This usually entails making publications and associated research data publicly accessible\u2014because, after all, it belongs to the people who ultimately supported the research (i.e., taxpayers). In Canada, this is enshrined in policy: the tri-agencies, which are federal granting agencies that promote and support research, hold the of\ufb01cial position \u201cthat research data collected through the use of public funds should be responsibly and securely managed and be, where ethical, legal and commercial obligations allow, available for reuse by others.\u201d4 For research data management, the agencies support the guiding principles commonly known as FAIR, which stands for Findable, Accessible, Interoperable, and Reusable (Wilkinson et al., 2016). The U.S. National Science Foundation5 and the European Commission6 hold similar positions. While the original policies were formulated speci\ufb01cally with research data in mind, increasingly, software artifacts are given similar treatments (Lamprecht et al., 2020; Barker et al., 2022), and hence there is an increasing emphasis by funders (and by extension the researchers they support) on reproducibility within the broad umbrella of data management and stewardship. While both categories of arguments are undeniably persuasive, the unfortunate downside is that they appeal primarily to the researcher\u2019s altruism. Even in the relatively narrow cases where there are clear mandates and directives (e.g., from funding agencies), unless there is alignment with self interest, researchers will tend to do the minimum required. When \u201cnoble aspirations\u201d come into competition with the daily grind of the academic existence with its numerous other priorities, the (sad) truth is that reproducibility often takes a back seat. 
When faced with the choice of working on a new paper or cleaning up a paper that\u2019s already been published for reproducibility purposes, what do you think a student would do? To better help researchers prioritize competing demands, I offer another motive for reproducibility that instead appeals to self interest: \u2022 I say to a student: you\u2019re not doing reproducibility for others; you\u2019re doing it for yourself. \u2022 I say to myself: effort invested in reproducibility will help my research group iterate more rapidly and thus become more productive overall. To be precise, according to the ACM Artifact Review and Badging Policy, \u201cself-reproducibility\u201d is more accurately called repeatability, so I will use this term in the discussions below. In my nearly two decades as a faculty, I have had the following scenario happen countless times: A student is no longer able to recreate the experimental results obtained a short while ago.7 That is, the results are not repeatable. Perhaps there was a bug in the original implementation? A parameter setting that \u201cleaked\u201d the test data? An \u201coracle\u201d assumption that was later removed? There are numerous reasons why this could be so. If the results have not been made public (i.e., part of ongoing work that has not yet been published), then the student can (and should) \u201cstart over\u201d. A common case is a rejected paper where reviewers suggested new experimental conditions (e.g., ablation study). If the old experiments are not repeatable, then the implementation should be checked again carefully and all experiments should be run anew. 4https://www.science.gc.ca/eic/site/063.nsf/eng/h_97610.html 5https://www.nsf.gov/bfa/dias/policy/dmp.jsp 6https://open-research-europe.ec.europa.eu/for-authors/data-guidelines 7Here, I\u2019m not talking about \u201cnoise\u201d that can be attributed to, for example, non-determinism in GPU execution. These are cases where something clearly \u201cworked\u201d, but now doesn\u2019t. 3 \fAnd if the previously observed gains have now disappeared, it just means that the innovation was never real to begin with. However, more vexing is the situation where the results have already been reported in a published paper. That is, a student cannot repeat an experiment that has already been \u201censhrined\u201d in the literature. What to do then? A common scenario is that the student is trying to repeat the previously published experiment in order to perform follow-up work or to use those results as the basis of new research. Another common scenario is when the student is integrating several threads of already published work into a thesis, and the additional experiments are critical to knitting the otherwise disparate threads together. Whatever the case, proceeding without attempting to rectify the repeatability failure is scienti\ufb01cally dubious. In empirical research, experimental results always need points of reference for meaningful comparison (e.g., baselines, ablations, etc.), and referencing results that cannot be recreated is just bad science. At this point, I typically urge in the strongest possible way that the student \ufb01rst resolve the repeatability failure, which can, unfortunately, involve quite a bit of effort. 
The original experiments may have been conducted months ago, and in the frantic dash to a paper deadline, the student may not have been meticulous taking notes in a lab notebook.8 So, the starting point of the repeatability effort may be a directory containing \ufb01les with names like config3-bugfix-trial2.yaml, or worse yet, just a bunch of poorly named result \ufb01les (created by complex command-line invocations that weren\u2019t properly recorded). Thus, I explain to students the importance of repeatability today to prevent future frustration: the consequences can range from a missed opportunity for another paper to a graduation roadblock. As the saying goes, \u201cfuture you will thank you!\u201d Note that, critically, the motivation for the student is self interest, not altruism. Repeatability is a good \ufb01rst step towards reproducibility. In fact, I would characterize the combination along these lines: repeatability + social processes + standardized tools = reproducibility (1) Given repeatability as a starting point, we can \u201cget to\u201d reproducibility by engineering social processes to promote virtuous cycles and building standardized tools to reduce technical barriers. These I argue are the key ingredients to building a culture of reproducibility. Section 4 explains these two elements in much more detail. 3 What? Having covered the \u201cwhy\u201d, I\u2019ll move on to cover the \u201cwhat\u201d. Speci\ufb01cally, I\u2019ll try to answer the question: What does \u201cgood reproducibility\u201d look like? That is, I\u2019ll start at the end by describing my \u201caspirational ideal\u201d of what the end result of a reproducibility effort should be. The broader context of this discussion is what information retrieval researchers call the \u201ccore\u201d ranking problem (also called ad hoc retrieval). I\u2019ll just wholesale lift the de\ufb01nition from Lin (2021): Given an information need expressed as a query q, the task is to return a ranked list of k documents9 {d1, d2 . . . dk} from an arbitrarily large but \ufb01nite collection of documents D = {di} that maximizes a metric of interest, for example, nDCG, AP, etc. These metrics vary, but they all aim to quantify the \u201cgoodness\u201d of the results with respect to the information need. The retrieval task is also called top-k retrieval (or ranking), where k is the length of the ranked list. Generically, a retrieval model is a software artifact that addresses the core ranking problem, and one main focus of many researchers is to build better such models. This is primarily an empirical endeavor, as the dominant way of demonstrating (i.e., in academic publications) that one model is better than another is based on measurements using test collections. In the context of retrieval (or ranking) models, I believe that reproducibility efforts should yield easy-to-use, well-packaged, and self-contained software artifacts with clearly de\ufb01ned scope that allow the broadest possible audience to reproduce research \ufb01ndings by recreating experimental results. Ideally, the artifact should support what I call \u201ctwo-click reproductions\u201d, where a user can reproduce a reported result (for example, from a paper) with only two clicks: one click to copy a command-line 8To students: You do have a lab notebook, right? 
9Consistent with parlance in information retrieval, I use \u201cdocument\u201d in a generic sense to refer to the unit of retrieved text, even though in truth it may be a passage, a web page, a PDF, or some arbitrary span of text. 4 \finvocation from a source (for example, a documentation page) and another click to paste the command into a shell. How to ensure that the two-click reproductions \u201cwork as advertised\u201d will be discussed in Section 4. I believe that my research group\u2014with the help of external collaborators\u2014has achieved this \u201caspirational ideal\u201d in many parts of our two IR toolkits, Anserini (Yang et al., 2017, 2018) and Pyserini (Lin et al., 2021a). Anserini is built around the open-source Lucene search library, the most widely adopted solution for tackling search problems in deployed real-world applications (typically, via platforms such as Elasticsearch). Our goal in building a research toolkit around Lucene is to facilitate a two-way exchange between academia and industry (Devins et al., 2022). Pyserini provides Python bindings to the capabilities offered in Anserini and integration with neural retrieval models built on industry-standard packages such as Huggingface Transformers, PyTorch, and Faiss. Pyserini includes many dense and sparse retrieval models built on transformer-based encoders as well as traditional \u201cbag-of-words\u201d models such as BM25 and relevance feedback techniques. To provide a speci\ufb01c example with Pyserini, experiments applying our uniCOIL model (Lin and Ma, 2021) to the development queries of the MS MARCO passage ranking task (Bajaj et al., 2018) can be accomplished with the following command: python -m pyserini.search.lucene \\ --index msmarco-passage-unicoil-d2q \\ --topics msmarco-passage-dev-subset \\ --encoder castorini/unicoil-msmarco-passage \\ --output runs/run.msmarco-passage.unicoil.tsv \\ --output-format msmarco \\ --batch 36 --threads 12 \\ --hits 1000 \\ --impact In terms of ease of use, a researcher who wishes to reproduce the results reported in our paper can do so with a single command, via two clicks: copy and paste (i.e., \u201ctwo-click reproduction\u201d). The above command calls the main driver program pyserini.search.lucene in a package that is published in the Python Package Index (PyPI),10 thus making the software artifact well-packaged, since it can be installed with standard tools such as pip. The two-click reproduction described above is self-contained because it has no other dependencies\u2014 many details are handled \u201cbehind the scenes\u201d. For example: \u2022 The option --index speci\ufb01es an inverted index of a commonly used corpus in IR research (the MS MARCO passage corpus) that Pyserini already \u201cknows about\u201d, along with dozens of other common corpora. On \ufb01rst invocation, Pyserini downloads a copy of the index from a known location (servers at the University of Waterloo) and caches it locally. \u2022 The option --topics speci\ufb01es a standard set of queries that is already included in Pyserini, so the user doesn\u2019t need to visit a separate website to download them. \u2022 The option --encoder refers to a transformer model for encoding the queries, which is hosted on the Huggingface model hub; Pyserini downloads and caches the model locally. 
The execution of the above command yields a run \ufb01le in the MS MARCO document format that can then be fed into the of\ufb01cial MS MARCO scoring script to arrive at the of\ufb01cial evaluation metric, reciprocal rank at cutoff 10. This scoring script is also conveniently packaged in Pyserini: python -m pyserini.eval.msmarco_passage_eval \\ msmarco-passage-dev-subset \\ runs/run.msmarco-passage.unicoil.tsv The output should be the \ufb01gure that appears in Table 2 of Lin and Ma (2021). Very small differences are sometimes observed due to the inherent non-determinism associated with neural inference (e.g., CPU vs. GPU inference, and even across different GPUs). Let me try to further unpack this ideal of \u201ctwo-click reproduction\u201d. The high-level goal is to reduce friction for users who wish to reproduce a particular result, for example, a \ufb01gure that is reported in 10https://pypi.org/project/pyserini/ 5 \fthe results table of a paper. Even if code associated with the paper is available, there\u2019s no easy way to separate different phases of the experiment, e.g., training the model from scratch vs. inference using a publicly shared model checkpoint. Even focusing on inference (i.e., ranking), the complexity of modern IR evaluation methodology means that there are a gazillion tiny details to keep track: Where do I get the index? Where do I get the topics? Where do I get the model itself? What versions of each, exactly? Is this the right version that goes with that? Sorting through these details is not intellectually challenging, but can be confusing for a novice (e.g., a student just getting into IR) or even a seasoned IR researcher who\u2019s never worked with this speci\ufb01c collection before. As a simple example, there are often different versions of a particular set of queries: what everyone calls the 6,980 queries in the MS MARCO passage ranking development set is actually only a subset of the \u201creal\u201d full development set. My other favorite example is TREC-COVID (Roberts et al., 2020), which has no less than a dozen different sets of relevance judgments. All of them are useful, but for answering different research questions. Which one do you use? Quite simply, the goal of \u201ctwo-click reproduction\u201d is to relieve the user of all these burdens via a simple, self-contained command that can be copied and pasted into a shell to reproduce an experimental result of interest. Providing a fully self-contained mechanism with a well-packaged artifact reduces friction; this is about as \u201ceasy to use\u201d as you can get. We even provide two-click reproductions for work by others. For example, the following command reproduces the results of DPR (Karpukhin et al., 2020) on the Natural Questions (NQ) dataset: python -m pyserini.search.faiss \\ --topics dpr-nq-test \\ --index wikipedia-dpr-multi-bf \\ --encoded-queries dpr_multi-nq-test \\ --output runs/run.dpr.nq-test.multi.bf.trec \\ --batch-size 36 --threads 12 See Ma et al. (2022b) for additional explorations. Perhaps the best testament to our efforts is that Pyserini is referenced by Karpukhin et al. (2020) in their of\ufb01cial repo11 as the preferred implementation to replicate their work. The software artifact should have a clearly de\ufb01ned scope, in terms of what it does and, just as importantly, what it doesn\u2019t do. In this case, Pyserini allows a researcher to reproduce a run on the MS MARCO passage corpus using a speci\ufb01c ranking model. 
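For reference, the same self-contained behavior is also available through Pyserini's Python API. The snippet below is only a rough sketch: the prebuilt index label and the query are illustrative assumptions, and it runs plain BM25 rather than the uniCOIL condition shown above.

# Minimal sketch of programmatic use of Pyserini (illustrative only).
from pyserini.search.lucene import LuceneSearcher

# Downloads and caches a prebuilt index on first use; the label below is an
# assumption for this sketch and may differ across Pyserini versions.
searcher = LuceneSearcher.from_prebuilt_index("msmarco-v1-passage")
hits = searcher.search("what is a lobster roll", k=10)
for hit in hits:
    print(hit.docid, f"{hit.score:.4f}")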
Retrieval is performed using a speci\ufb01c model checkpoint: training a model from scratch is out of scope (although we\u2019ve shared other code to enable reproducible model training). Retrieval uses a pre-built index: building an index from scratch or searching another corpus is also out of scope in this speci\ufb01c instance (although Pyserini does provide tools for indexing and searching arbitrary corpora). However, this reproduction command does provide generalizability to different queries, since the encoder model can be applied to arbitrary queries for retrieval. The scope of a reproducibility effort can often be de\ufb01ned in terms of abstractions encoded in software artifacts. The application of neural networks can be divided into the training of the retrieval models and inference using those models. Quite explicitly, Pyserini does not provide any code for training neural models; it is focused on neural inference at search time. Related to the issue of scope is the explicit acknowledgement in this two-click reproduction ideal that the software artifact and the reproduction commands comprise an abstraction barrier. The contract is simply that, \u201cif you run this command, you\u2019ll be able to reproduce these results.\u201d No promises are made about the quality of the code behind the scenes, which may be a pile of spaghetti. This, I believe, is a feature, not a bug. Internally, it would be desirable that, once the cover is lifted, the internals of the artifact are beautifully engineered, but this should not be a barrier to reproducibility. I can\u2019t count the number of times I\u2019ve heard something along the lines of \u201cthe code is really messy, I want to clean it up \ufb01rst before I open source it.\u201d Mentally, that translates in my mind into \u201cit\u2019ll never happen\u201d, and usually I\u2019m right. It is dif\ufb01cult to tell if a researcher is using this line as an excuse or if it\u2019s uttered in good faith. In the latter case, other priorities usually intervene, and the net effect is the same. Code never sees the light of day. 11https://github.com/facebookresearch/DPR 6 \fThe high-level point is that messy code should not be an impediment to reproducibility, as long as the right abstractions are established\u2014in this case, a PyPI artifact. Of course, clean internal implementations will make the packaging easier, but janky code that generates the correct behavior is much preferred to elegant code that doesn\u2019t work or not having any open-source code at all. With a messy but functional implementation, there exists a starting point for refactoring down the road if so desired. There are, literally, entire tomes written about best practices for doing so in a sane manner; for example, I recommend the recent book by Riccomini and Ryaboy (2021) for practical advice and an entry point into this vast literature. 4 How? Having covered \u201cwhy\u201d and \u201cwhat\u201d, I move on to cover \u201chow\u201d. Repeating from the introduction, my approach can be summarized in two high-level bits of advice: (1) motivate the importance of reproducibility by appealing to self interest instead of altruism, and (2) engineer social processes to promote virtuous cycles and build standardized tools to reduce technical barriers. The implementation of these two points should be guided by the dogfood principle, or the directive of \u201ceat your own dog food\u201d, which refers to the colloquialism of using one\u2019s own \u201cproduct\u201d. 
In the context of a research group, it means that members of the group should be actively using the software artifacts developed by the group. Quite simply, software artifacts that are used tend to become re\ufb01ned over time, or at the very least, bugs get \ufb01xed, because otherwise research would grind to a standstill. My group uses Pyserini, Anserini, and a few other packages we\u2019ve developed as the foundation for ongoing work. Many new research ideas build on, hook into, or otherwise depend on Pyserini. In turn, improved capabilities in Pyserini spur further advances. To provide an example, the two-click reproductions described in the previous section solve a number of problems for the community, including ourselves (invoking self interest again). Speci\ufb01cally, they provide competitive baselines for comparisons and a solid foundation for \ufb01rst-stage retrieval in a multi-stage ranking architecture. For example, students focused on building better rerankers need not waste time worrying about the proper setup of the \ufb01rst-stage ranker. They simply follow the prescriptions in our two-click reproductions as the starting point. Across the research group, this ensures consistency and reduces the possibilities of bugs: We can be con\ufb01dent that every reranker implementation is consuming exactly the same set of candidates and thus the comparisons are fair. More generally, Pyserini makes it easy to run experiments on standard IR test collections; the toolkit handles much of the boilerplate, such as bindings to query sets, relevance judgments, and evaluation scripts. Many capabilities come for \u201cfree\u201d, for example, general techniques such as rank fusion and pseudo-relevance feedback can be applied with little effort. This means that (a) the student writes less code (appealing to self interest again) and (b) the veracity of results increases due to greater consistency in experimental design. Thus, with the dogfood principle, it\u2019s clear to see that reproducibility is driven by self interest. It allows my students and collaborators to more easily build on each other\u2019s results and iterate more rapidly, enhancing their productivity and thus leading to more publications. We are the primarily bene\ufb01ciaries, and the community bene\ufb01ts as a nice side effect. Here\u2019s a sketch of how all these ideas \u201ctie together\u201d, with repeatability as a starting point. I\u2019ll \ufb01rst describe the social processes that I\u2019ve engineered to promote virtuous cycles and then move on to discuss infrastructure that reduces the technical barriers to reproducibility. 4.1 Social Processes: From Repeatability to Reproducibility As a starting point, a student is motivated to make experiments repeatable, for all the reasons already discussed in Section 2. This involves documenting all the steps necessary to produce experimental results, including con\ufb01guration settings, command-line invocations, etc. The documentation that describes how to repeat an experiment (written by the student who initially ran the experiment), when shared, becomes what I call a reproducibility guide (or a \u201crepro guide\u201d for short). In many cases, these guides are just markdown \ufb01les in the docs/ directory of the GitHub repository that contains the code. They contain, at a minimum, the sequence of command-line invocations that are necessary to reproduce a particular set of experimental results, with accompanying descriptions in prose. 
The goal 7 \fis that copying and pasting commands from the guide into a shell should succeed in reproducing the same experimental results (modulo issues like non-determinism in GPU execution). The \ufb01nal step is to actually get another person to \u201ctry out\u201d the guide, i.e., follow exactly the prescribed steps and make sure that they work as expected. Who does this and why would they?12 I have two tools: again, appealing to self interest, but augmented this time with reciprocity, and a new trick\u2014providing an onboarding path to new students. For students who are already actively contributing to shared code, there are multiple incentives. Assisting with a reproduction gives the student \ufb01rst access to a new feature, one that could potentially serve as the basis of follow-up work. Additionally, the social pressures of reciprocity can be an effective motivation: students are the bene\ufb01ciaries of previous group members who \u201cpaved the way\u201d and thus it behooves them to write good documentation to support future students. Self interest and reciprocity intersect as well, because students know that at some future point in time, I will demand a check on their reproduction guide. It\u2019s nice to be able to say, \u201cPlease help me out here, since I did the same for you the last time.\u201d The other appeal is that reproduction guides provide onboarding paths. For prospective students who wish to become involved in our group\u2019s research, performing reproductions offers exposure to our work and an introduction to our codebase. These exercises are particularly suitable for undergraduates as their \ufb01rst step in learning about research. Students are quite motivated (out of self interest) and the group bene\ufb01ts from more people checking the quality of our work. Sometimes, I try out the reproduction guides myself, and this, too, is motivated by self interest. The typical scenario is documentation written by a graduating student as a \ufb01nal \u201cwrap up\u201d of the work. From experience, I know that if I ever want another student to build on this work, I had better make sure it works, because once a student graduates, broken code becomes much harder to \ufb01x. I can\u2019t emphasize how important it is to actually have someone try out and follow the reproducibility guide, as opposed to just passively reading the document. A very common scenario: A: \u201cThis doesn\u2019t work, I get the following error...\u201d B: \u201cOh, sorry about that, I forget to check in this \ufb01le.\u201d There is simply no substitute for a hands-on attempt to catch bugs, spot missing details, and uncover hidden assumptions. Many of these reproduction guides are associated with a \u201creproduction log\u201d at the bottom of the page, which contains a record of individuals who have successfully reproduced the results and the commit id of the code version used. With these reproduction logs, if some functionality breaks, it becomes much easier to debug, by rewinding the code commits back to the previous point where it last worked. The social processes that I have described promote and sustain a virtuous cycle, and here we have made the jump from repeatability to reproducibility. Students begin by recognizing the value of \u201cpackaging\u201d up their research so that they can repeat their own experiments. These reproduction guides are made publicly accessible, and are independently con\ufb01rmed to be functional. 
Code that works provides a solid foundation to build on\u2014by both the original authors as well as others. This in turn accelerates experimental iterations and facilitates rapid explorations of novel ideas built using existing components, ultimately leading to greater productivity. Students see the payoff of reproducibility efforts and are inclined to sustain their contributions. And around and around we go. 4.2 Standardized Tools: From Reproducibility to Two-Click Reproductions At a high-level, we can divide reproducibility into the social and technical aspects. I believe the \ufb01rst is much more important, because any tool to support reproducibility will either be ignored or circumvented unless there are social processes to promote their usage. The previous section focused on exactly these, and that gets us from repeatability to reproducibility. Here, I discuss tooling to further lower barriers. If social processes stimulate \u201cthe will\u201d, standardized tools provide \u201cthe way\u201d. The core idea is to make investments in tooling to automate the process of \u201cgoing through\u201d the reproduction guides. In Anserini and Pyserini, we have built extensive regression tests with elaborate 12Faculty can offer carrots or wave sticks; generally, the former is far more effective. 8 \ftest harnesses (sometimes called jigs). These regressions are tightly integrated with the two-click reproductions discussed in the previous section: in fact, in many cases, the regression tests simply wrap the execution of the two-click reproduction commands and verify that the outputs match stored speci\ufb01cations. The periodic execution of these regressions ensures that the two-click reproductions continue to \u201cwork as advertised\u201d. In Anserini, we have implemented a test harness called run_regression.py that takes as input a YAML con\ufb01guration \ufb01le for a set of experimental conditions on a standard IR test collection, for example, the MS MARCO V2 passage test collection (Craswell et al., 2021; Ma et al., 2022a).13 Here\u2019s a sample invocation: python src/main/python/run_regression.py \\ --index --verify --search --regression msmarco-v2-passage The script runs through the following steps: It builds the index from scratch (i.e., the raw corpus), veri\ufb01es index statistics (e.g., number of documents and terms processed), performs retrieval runs using different retrieval models (e.g., BM25 with Rocchio feedback), evaluates the outputs (e.g., with trec_eval), and checks effectiveness \ufb01gures against expected results (stored in the con\ufb01guration). That is, the execution of this script veri\ufb01es the reproducibility of a set of experimental conditions in a fully automatic manner. Upon each successful execution, the regression script generates a documentation page from an associated template, populating the results (e.g., average precision) from the trial.14 All of this happens without any human intervention. The script depends on having access to the raw corpus, which is stored on our group\u2019s servers at known \ufb01le system locations. However, the corpus path is a con\ufb01gurable parameter, so anyone can run the same regression test if they have a copy of the corpus. There are currently around three hundred such tests, which take several days to run end to end (orchestrated by yet another script). The largest of these builds a 10 TB index on all 733 million pages of the ClueWeb12 collection. 
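Stripped of the engineering detail, the pattern these regressions follow is simple: run a documented command, score the output, and compare against figures stored alongside the code. The following is a minimal sketch of that pattern in Python; it is not the actual run_regression.py, and the configuration schema, commands, and tolerance are placeholders.

# Minimal sketch of an automated effectiveness regression (illustrative only;
# not the actual Anserini/Pyserini harness). Expected scores live in a small
# config file; the check fails loudly if a documented command stops
# reproducing them.
import math
import subprocess
import yaml  # assumes PyYAML is available

def run_regression(config_path):
    with open(config_path) as f:
        config = yaml.safe_load(f)  # placeholder schema: list of conditions
    for condition in config["conditions"]:
        # 1) Produce the run with the documented command.
        subprocess.run(condition["command"], shell=True, check=True)
        # 2) Score the run (e.g., with an evaluation script) and parse the metric.
        result = subprocess.run(condition["eval_command"], shell=True, check=True,
                                capture_output=True, text=True)
        score = float(result.stdout.strip().split()[-1])
        # 3) Compare against the enshrined figure and fail on drift.
        expected = condition["expected_score"]
        if not math.isclose(score, expected, abs_tol=1e-4):
            raise AssertionError(f"{condition['name']}: got {score:.4f}, expected {expected:.4f}")
        print(f"{condition['name']}: OK ({score:.4f})")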
Although it is not practical to run these regression tests for each code change, we try to run them as often as possible, resources permitting, to catch new commits that break existing functionalities early so they are easier to debug. These regression tests are always run before a formal release of the toolkit, prior to publishing an artifact on Maven central, to ensure that released jars produce the expected experimental results. On top of the regression framework in Anserini, further tests in Pyserini compare its output against Anserini\u2019s output to verify that the Python interface does not introduce any bugs. These are written as Python unit tests and, for example, check different parameter settings from the command line, ensure that single-threaded and multi-threaded execution yield identical results, that pre-built indexes can be successfully downloaded. In Pyserini, experimental conditions are gathered together and organized into what we call a reproduction matrix, an example of which is shown in Figure 1 for the MS MARCO V2 passage corpus. Each row illustrates a particular experimental condition, while the columns show evaluation metrics with respect to different sets of queries. A row can be expanded to reveal the commands necessary to generate those evaluation \ufb01gures. These are exactly the two-click reproductions described in Section 3, organized in an easy-to-consume format. Finally, the reproduction matrix is backed by another script that programmatically iterates through all rows (experimental conditions), performs retrieval with the speci\ufb01ed invocations, and veri\ufb01es that the evaluation scores are as expected (i.e., checks each cell in the table). In this case, the command is: python scripts/repro_matrix/run_all_msmarco.py --collection v2-passage In fact, the reproduction matrix webpage is automatically generated by the above script upon successful completion. Once again, these regression experiments are quite computationally expensive, and collectively they take several days to run. Nevertheless, these checks are performed before every artifact release on the Python Package Index. Thus, we are con\ufb01dent of the reproducibility of various retrieval models implemented in Pyserini, to the extent that they are covered by these tests. 13https://github.com/castorini/anserini/blob/master/src/main/resources/regression/ msmarco-v2-passage.yaml 14https://github.com/castorini/anserini/blob/master/docs/regressions-msmarco-v2-passage. md 9 \fFigure 1: The reproduction matrix for the MS MARCO V2 passage corpus, which is available at https://castorini.github.io/pyserini/2cr/msmarco-v2-passage.html. Each row represents an experimental condition and is associated with two-click reproduction commands. As I\u2019ve previously argued, reproducibility is a continual process, not a \u201cone and done\u201d deal (Lin and Zhang, 2020). The testing infrastructure described here ensures that our two-click reproductions continue to work even as the entire codebase evolves and gains new features. What I\u2019ve described here can be characterized as a custom continuous integration/continuous delivery (CI/CD) framework, adapted to the unique characteristics of research. This might all sound like a lot of work to set up initially, and indeed it was. However, all the upfront engineering costs have already been \u201cpaid for\u201d. 
For a student building a test case for a new experimental condition, the effort is relatively modest, and the process consists mainly of writing con\ufb01guration \ufb01les and hooking into the test infrastructure. Once connected, the student can be con\ufb01dent that the code will continue to generate the expected retrieval results. Down the road, when it is time to write up the thesis, there\u2019s no need to \u201cdust off\u201d the code to make sure it \u201cstill works\u201d. The regressions tests ensure that it never stopped working. In summary, reproducibility has become ingrained as a shared norm in our group, operationalized in social processes and facilitated by technical infrastructure. I think that this has allowed us to nicely balance the demands of reproducibility with the ability to iterate rapidly. 5 Other Considerations As promised in the introduction, this section discusses a sm\u00f6rg\u00e5sbord of issues that don\u2019t \ufb01t neatly into the \u201cwhat\u201d, \u201cwhy\u201d, and \u201chow\u201d narrative. 5.1 Scoping and Timing of Reproducibility Efforts In truth, written reproduction guides and automated regression testing lie along a spectrum of \u201creproduction rigor\u201d, with different cost/bene\ufb01t tradeoffs. Although we aim to have reproduction guides for every paper, only a relatively small fraction of our group\u2019s research becomes \u201censhrined\u201d in automated regressions that are maintained over time. 10 \fWe currently do not have clear-cut criteria as to which retrieval models or experimental results receive the regression treatment, but as a rough heuristic, we use the following question as a guide: Is this work we\u2019d like to extend further? If so, then we would go about building an appropriate regression. Similarly, if we \ufb01nd a paper by others that we\u2019d like to build on (as in the case of DPR), we would make the investment to replicate the work and to build regressions into our codebase. As already mentioned in Section 3, scoping the effort is an important part of the reproducibility discussion. Consider the common case of a modeling advance that is described in a paper, i.e., we proposed a novel retrieval model that appears to be better than previous work, and the contribution represents a fruitful line of inquiry that the group hopes to push further. In this case, building an appropriate regression makes sense. However, to balance cost and reward, we do not construct regression tests for every experimental result reported in the paper. Instead, we are guided by the question: In a follow-up paper, which of the existing experimental conditions from this paper would serve as the baseline for comparison? That becomes the target for integration into our regression framework. We \ufb01nd that building tests for ineffective contrastive settings or ablation conditions provides little value relative to the amount of effort required. Part of the scoping exercise is to determine what aspects of the proposed model should be included in which codebase. If the original experiments were performed with Pyserini to begin with, then the answer is straightforward: Model checkpoints are made public (e.g., on the Huggingface model hub) and two-click reproductions are directly integrated into Pyserini. However, since our toolkit (by design) does not include code for model training, reproduction guides for that aspect of the work must go elsewhere, typically in another code repository. 
In some cases, a novel retrieval model does not neatly \ufb01t into the design of Pyserini. These model implementations (both training and inference) usually begin their lives in a separate repository, but as part of the reproducibility planning exercise, we debate whether it is worthwhile to import the model inference code into Pyserini so that end-to-end retrieval experiments can be conducted alongside all the other available models in a seamless manner (as part of a reproduction matrix, see Section 4.2). These decisions are made on a case-by-case basis. It is worth explicitly noting that any inclusion to the constantly growing test suites in Pyserini and Anserini represents an open-ended maintenance commitment for the life of the project. Any addition, in essence, incurs a permanent liability on the group (and as I\u2019ll discuss in Section 5.3, this burden usually falls on me). There\u2019s no point in adding a model to the regression framework unless there\u2019s the intention of keeping the code functional in the long term. Once added, we almost never abandon a regression test, except in very rare circumstances, for example, a failure due to changes in underlying code that we depend on but have no control over. Operationally, continual expansion of test suites means that the complete set of regressions takes longer and longer to run, which has the practical effect of slowing down release iterations (e.g., on PyPI). However, I don\u2019t think this has impacted the iteration speed of individual students since components in the codebase are largely decoupled. Nevertheless, servers continue to get faster and more powerful, so I think our current operations are sustainable. I\u2019ve found that the best time to make investments in long-term reproducibility is the window between the acceptance noti\ufb01cation of a paper and the \ufb01nal camera-ready deadline. This provides an opportunity to perform a \u201c\ufb01nal check\u201d on the results and to plan for the long-term maintenance of the model. Work during this time window also ensures that the evaluation results reported in the \ufb01nal paper version match the \ufb01gures that can be recreated with our two-click reproductions. In some cases, the journey from reproduction guides to automated tests is circuitous. For example, we might not have found a particular thread of work suf\ufb01ciently promising to have integrated it into our regression framework, but subsequent developments changed our minds. It is never too late, but I have encountered cases of \u201creproducibility debt\u201d, much like the notion of \u201ctechnical debt\u201d in software engineering. The complexities of modern software stacks create hidden dependencies that often break retrieval models in subtle ways as code evolves. Especially if someone has not tried out a reproduction guide in a while, it might be discovered later that the results have changed. Repeatability is a \ufb01ckle beast. 11 \f5.2 Bootstrapping Reproducibility What about the cold start process? The foregoing discussions describe the operational aspects of reproducibility in my group in \u201csteady state\u201d, where virtuous cycles have already been established. Existing software artifacts are already functional and the bene\ufb01ts of using them are evident. Processes and shared norms are in place, and tools to simplify the routine have been built. 
Once again, motivated by self interest, the value that can be extracted by participating in, for example, our Pyserini reproducibility ecosystem is greater than the costs. What if this isn\u2019t the case? How can a research group start the \ufb02ywheel spinning from a standstill?15 Before addressing this point, it is important to recognize that the reproducibility narrative I\u2019ve articulated here does not work for everyone. Only certain \u201cstyles\u201d of systems-oriented research organized around software artifacts are conducive to the treatment described in this essay. However, to anyone who wishes to replicate a similar culture of reproducibility: I admit that getting the \ufb02ywheel spinning is hard, and the truthful answer is: I don\u2019t really know how, at least in a replicable manner. I began my academic career as an assistant professor in 2004 and have started countless research projects that involve building and sharing software artifacts. Only recently have I successfully pulled together the elements that sustain a culture of reproducibility. Of course, I could construct a story of our success, but it would merely be a post-hoc narrative. The casual factors are too complex and the training examples are too few to build an explanatory model. Nevertheless, I will share some ideas that are independently worthwhile, regardless of their actual contributions to reproducibility. First, adopt software engineering best practices. A research environment is of course different from a production shop geared towards delivering products and services to customers, but this doesn\u2019t mean that research groups should ignore mature and well-established processes. Pyserini and Anserini both adopt standard best practices in open-source software development. The code is available on GitHub, issues are used to describe proposed feature enhancements and bugs, and code changes are mediated via pull requests that are code reviewed. Second, look for opportunities where a long-term research agenda aligns with the construction of software artifacts that can be generalized into a toolkit, library, or platform. Reproducibility efforts have substantial upfront costs that need to be amortized over several years to achieve a net gain. The (planned) software toolkit should ideally provide the basis for several related research projects that yield multiple publications. Without this long-term vision and commitment to a shared codebase, the group might never reap the rewards of the initial investment. With a plan in place, it is possible to make progress incrementally. For example, the multi-layered regression frameworks in Anserini and Pyserini evolved over many years. However, the commitment to build a toolkit for tackling the core ranking problem in information retrieval was made on day one. How does one identify these opportunities? For junior faculty, their own research statements provide the inspiration! As an integral part of the application process for academic positions, the research statement should contain a coherent, multi-year research agenda. Look there for possible alignment, as the vision should have already been articulated clearly. Third, the richness of the modern software ecosystem means that opportunities for contributing software artifacts can happen at many different layers in the stack, and specialized niches abound. For example, Pyserini relies on PyTorch and Anserini relies on Lucene. 
It\u2019d make little sense for our group to try and build the equivalent of either PyTorch or Lucene from scratch. Similarly, it might not make sense for another research group to build an independent IR toolkit from scratch. Instead, join us and build on top of our toolkits to handle a niche that is not currently well served. We\u2019d welcome your contributions! Finally, leadership is critical and deserves a dedicated section. I turn to this next. 5.3 The Critical Role of Leadership I am the overall architect of Pyserini and Anserini. I am the person who usually runs the regression tests, shepherds the release of software artifacts, and generally keeps tabs on everything that is going on. In software development terms, I am not only the engineering manager but also the tech lead. 15To use an analogy attributed to Jeff Bezos: Virtuous cycles are like \ufb02ywheels; they hold a lot of energy and are dif\ufb01cult to slow down, but they\u2019re even harder to spin up initially. 12 \fThis makes sense from multiple perspectives, as I am in the best position to serve as the long-term institutional memory of the research group. Students come and go, but my presence (hopefully) remains constant. Many of the retrieval models in Pyserini were built before the arrival of my most recent cohort of students, and some models will continue to be re\ufb01ned even after they graduate. I have the most complete view of what everyone is working on, and this allows me to coordinate multiple overlapping research projects. For example, I regularly introduce students to existing features in Anserini and Pyserini that can expedite their research. In many cases, showing students how to use a model is as simple as pointing them to the corresponding reproduction guide and asking them to go through it, or even better, directing them to the documentation that provides the two-click reproduction commands. Another common pattern is that I arrange \u201chandoffs\u201d from a graduating student to a new student who wishes to continue pursing a related line of work. If the practices described here are faithfully executed, this is a relatively seamless process. It is important for the group leader to assume the roles described above\u2014in simple terms, serving as both the engineering manager and the tech lead. Having this mindset in my opinion is one key to sustaining a culture of reproducibility. For example, I have written most of the test harness code in Pyserini and Anserini, and in many cases, I end up writing unit tests for students. This can be characterized as a \u201cservant leadership\u201d style: writing testing frameworks certainly isn\u2019t glamorous, but it\u2019s critically important. Working on these bits of code is the best use of my time from the perspective of bene\ufb01ting the entire group\u2014as investments in reproducibility pay dividends for everyone using the codebase\u2014and ful\ufb01lls my personal desire to stay technically engaged with students. Starting an academic research group has been analogized to running a startup, with the faculty member as the CEO. In the context of the North American academic system, this analogy is apt, as faculty members generally lead their own groups. In the beginning, they must do everything, including assuming the roles of an engineering manager (e.g., hiring and mentoring students) as well as the tech lead (e.g., guiding students\u2019 technical progress and examining their implementations). 
However, as a faculty rises through the academic ranks and grows a research group, a common organizational pattern is to cede the role of the tech lead to a senior graduate student. While certainly workable, this comes with associated risks. Students (eventually) graduate (hopefully!) and the next stage of their careers may no longer have room for the role. Arranging for succession \u201chandoffs\u201d become more dif\ufb01cult if the group leader is not intimately involved. This is not unlike what happens in a corporate environment: When a tech lead departs, management needs to \ufb01nd a replacement, typically elevating another member from the same project. In the academic context, this is likely another senior graduate student working on similar research problems, provided one exists. Needless to say, sustaining research momentum is much easier if there is continuity, as in the case where the group leader also serves as the tech lead. 6" + }, + { + "url": "http://arxiv.org/abs/2211.00734v1", + "title": "On the Interaction Between Differential Privacy and Gradient Compression in Deep Learning", + "abstract": "While differential privacy and gradient compression are separately\nwell-researched topics in machine learning, the study of interaction between\nthese two topics is still relatively new. We perform a detailed empirical study\non how the Gaussian mechanism for differential privacy and gradient compression\njointly impact test accuracy in deep learning. The existing literature in\ngradient compression mostly evaluates compression in the absence of\ndifferential privacy guarantees, and demonstrate that sufficiently high\ncompression rates reduce accuracy. Similarly, existing literature in\ndifferential privacy evaluates privacy mechanisms in the absence of\ncompression, and demonstrates that sufficiently strong privacy guarantees\nreduce accuracy. In this work, we observe while gradient compression generally\nhas a negative impact on test accuracy in non-private training, it can\nsometimes improve test accuracy in differentially private training.\nSpecifically, we observe that when employing aggressive sparsification or rank\nreduction to the gradients, test accuracy is less affected by the Gaussian\nnoise added for differential privacy. These observations are explained through\nan analysis how differential privacy and compression effects the bias and\nvariance in estimating the average gradient. We follow this study with a\nrecommendation on how to improve test accuracy under the context of\ndifferentially private deep learning and gradient compression. We evaluate this\nproposal and find that it can reduce the negative impact of noise added by\ndifferential privacy mechanisms on test accuracy by up to 24.6%, and reduce the\nnegative impact of gradient sparsification on test accuracy by up to 15.1%.", + "authors": "Jimmy Lin", + "published": "2022-11-01", + "updated": "2022-11-01", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CR" + ], + "main_content": "INTRODUCTION 2 e\ufb00ect is more pronounced in image-classi\ufb01cation models than text classi\ufb01cation models. \u2022 In the presence of noise, gradient compression in smaller models can sometimes recover some of the accuracy lost to noise. \u2022 The use of aggressive gradient compression when training smaller models can result in a reduced sensitivity to gradient noise. We explain these observations by analyzing the error which DPSGD and compression introduces to the average gradient estimation. 
In particular, we observe the following: \u2022 Reduced accuracy in training can be explained through mean-squared error estimating the average gradient. \u2022 The error is mostly made up of variance, and a small amount of bias. \u2022 The variance component of the error is mostly introduced by the noise addition in the Gaussian mechanism as part of implementing di\ufb00erential privacy guarantees. \u2022 Both sparsi\ufb01cation and rank reduction leads to a large reduction of variance in exchange for a small amount of bias, but leading to an overall decrease in mean-squared error, hence it can lessen the reduction in test accuracy in private training. 1.2 Background In this section, we describe the de\ufb01nition of di\ufb00erential privacy mechanisms and gradient compression algorithms used in our study. 1.2.1 Di\ufb00erential Privacy To test the e\ufb00ects of di\ufb00erential privacy, we adopt the Opacus [25] library. It implements a combination of the Gaussian Mechanism [15] for (\u03f5, \u03b4)-di\ufb00erentially private queries, DPSGD [26] for di\ufb00erentially private deep learning, and Renyi di\ufb00erential privacy accountant [11] for privacy accounting. (\u03f5, \u03b4)-di\ufb00erential privacy [15] is de\ufb01ned so: Let function M : X \u2192Y be a mapping from domain X to range Y. De\ufb01ne the adjacency of to any two sets of data d, d\u2032 \u2208X to mean that max(|d \u2212d\u2032|, |d\u2032 \u2212d|) \u22641 (i.e. d and d\u2032 di\ufb00er by at most a single sample.) M is de\ufb01ned to be (\u03f5, \u03b4)-di\ufb00erential privacy if it follows the following statement \u2200d, d\u2032 \u2208X s.t. max(|d \u2212d\u2032|, |d\u2032 \u2212d|) \u22641 \u2200S \u2286Y Pr[M(d) \u2208S] \u2264e\u03f5Pr[M(d\u2032) \u2208S] + \u03b4 (1.1) An intuitive understanding of this de\ufb01nition is: any observation made about the output M(d) \u2208S can be alternatively explained by M(d\u2032) \u2208S with a lower-bounded probability. Giving d (and by symmetry, d\u2032) plausible deniability when an output in S is observed. The likelihood of any alternative \fCHAPTER 1. INTRODUCTION 3 explanation Pr[M(d\u2032) \u2208S] is lower-bounded relative to original explanation Pr[M(d) \u2208S] the bounded by a factor e\u03f5 and constant \u03b4. As \u03f5 and \u03b4 approaches 0, the likelihood of alternative explanations become just as likely the original explanation and perfect anonymity is guaranteed. To guarantee that any arbitrary mapping M satis\ufb01es such an inequality, the most widely accepted implementation is to replace M with a new function M\u2032 that applies the following post-processing steps to the outputs of M M\u2032(x) = projectC(M(x)) + N(0, \u03c3C) (1.2) The \ufb01rst step is to project all outputs of M into a spherical region centered at the origin with a bounded radius C \u22650. All projections are then perturbed by adding independently sampled noise from a Gaussian distribution N(0, \u03c3C) with a standard deviation \u03c3C proportional to C. The proportionality factor \u03c3 controls the level of privacy guarantee available per observation, with larger values guaranteeing higher privacy. The resulting M\u2032 satis\ufb01es an in\ufb01nite set of (\u03f5, \u03b4)-di\ufb00erential privacy where \u03b4 \u22654 5e \u2212(\u03c3\u03f5)2 2 and \u03f5 < 1. This particular post-processing is referred to as a Gaussian Mechanism in the di\ufb00erential privacy literature. 
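To make this concrete, the following is a minimal sketch of the clip-then-noise post-processing of Eq. 1.2 applied to a batch of per-sample gradients, in the spirit of DPSGD (discussed below). It is an illustration only; in the experiments reported here this machinery is provided by the Opacus library, and the tensor shapes and hyperparameter values are assumptions.

import torch

def gaussian_mechanism(per_sample_grads, clip_radius, noise_multiplier):
    # per_sample_grads: tensor of shape (B, m), one flattened gradient per training sample.
    B, m = per_sample_grads.shape
    # Step 1: project each per-sample gradient into the L2 ball of radius C (clipping).
    norms = per_sample_grads.norm(dim=1, keepdim=True)
    clipped = per_sample_grads * torch.clamp(clip_radius / (norms + 1e-12), max=1.0)
    # Step 2: sum the clipped gradients and add Gaussian noise with standard deviation sigma * C.
    noisy_sum = clipped.sum(dim=0) + torch.normal(0.0, noise_multiplier * clip_radius, size=(m,))
    # Average over the batch to obtain the differentially private gradient estimate.
    return noisy_sum / B

# Example: a batch of 32 per-sample gradients over 10 parameters.
private_grad = gaussian_mechanism(torch.randn(32, 10), clip_radius=3.0, noise_multiplier=0.4)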
The choice of C doesn\u2019t impact the privacy guarantee, however it does e\ufb00ect the accuracy of training. In our work we tried a few di\ufb00erent values for each task and picked the one that gives the best accuracy after a single epoch. More advanced strategies exist such as adaptive clipping [14] work which proposes the an adaptive clipping radius C dynamically set to a di\ufb00erentially private estimate of a \ufb01xed quantile of the gradient norms. Traditionally, the Gaussian mechanism is applied to queries on a data base. DPSGD [26] introduced this method to deep learning by applying it to each gradient computation with respect to each individual data sample in the training data set. This guarantees a quanti\ufb01able amount of privacy between participating data samples and, by extension, the individuals supplying those data samples. To account for the accumulating privacy cost during training, they implement a privacy accountant to keep track of the increasing values of \u03f5. The de\ufb01nition of di\ufb00erential privacy can be changed in many ways. For example, de\ufb01ning the set of data belonging to an individual user as a units to anonymize [21], as opposed to individual samples being treated as units. 1.2.2 Gradient Compression In this work we focus on two di\ufb00erent approaches to gradient compression: Deep Gradient Compression [3] and PowerSGD [7] Deep Gradient Compression is an algorithm that produces a layer-wise sparse representation of the gradient vector by representing only the elements with relatively larger magnitudes in each layer. The unrepresented elements are not communicated, with the receiver interpreting them as zeros. This algorithm further compresses all represented elements by removing the low-order bits in their \ufb02oating point representation. The receiver also interprets the removed bits to be zero. This algorithm works under the assumption that most elements of a gradient vector are close to zero, and that the loss function is smooth. Under this assumption, approximating many near-zero values as zero produces a permissibly small change to the model update which produces a permissibly small change in loss for a smooth loss function. There exists other variants of sparsi\ufb01cation such as one using an entropy-based criteria [24] for selecting coordinates to approximate as zero. However, \fCHAPTER 1. INTRODUCTION 4 we focus on using Deep Gradient Compression as an example of sparsi\ufb01cation. PowerSGD is an algorithm that reshapes the layer-wise gradient vector into a square matrices and learns a low-rank factorization of these matrices. The resulting factors are communicated in lieu of the original matrices when it would result in a lower bandwidth usage. This algorithm works under the assumption that the rows of each square matrix are coordinates that span a much smaller set of dimensions than the number of columns in the square matrix. Under this assumption, an approximate low-rank factorization of the square matrix can be produced and used to reconstruct the square matrix without large amounts of error. Notably, for this assumption to be true, there would have to be high correlation between the coordinates of the original gradient vector, which implies an approximate low-rank representation for all gradient vectors. This is similar to the assumption of near-zero values in gradient vectors, however the assumption is generalized to assume near-zero projections along a large set of directions in the parameter space. 
This reveals that rank reduction can be viewed as a non-axis-aligned generalization of sparsi\ufb01cation. Due to this connection between sparsi\ufb01cation and rank reduction, both sparsi\ufb01cation and rank reduction tend to bias the gradients towards the origin and reduce their variance.. Coordinate-wise quantization is another approach to gradient compression however, it is usually not used in isolation since its compression rate is upper-bounded by 64 (\ufb02oating point values are generally represented in 64 bits, and the minimum coordinate-wise representation is 1 bit.) For this reason, coordinate-wise quantization is not the most competitive approach in literature. While it is possible quantization may interact with di\ufb00erential privacy di\ufb00erently than sparsi\ufb01cation and rank reduction, we leave this direction as an area for future study. 1.3 Related Work In this section, we discuss related works which involve both di\ufb00erential privacy and gradient compression. There exists a growing body of work that is focused on improving the e\ufb03ciency of di\ufb00erential privacy and compression. These include testing the e\ufb00ectiveness of various compression algorithms in di\ufb00erentially private training. DP-SCAFFOLD [4] applies the work of SCAFFOLD [20] to DPSGD, and \ufb01nd that the control variates designed to reduce the impact of non-IID data partitions can also reduce the variance introduced by di\ufb00erential privacy mechanisms. Q-DPSGD [12] explores the e\ufb00ectiveness of gradient quantization applied before and after the Gaussian mechanism and benchmarks it to be computationally faster than SDM-DSGD [27] which applies a randomized unbiased sparsi\ufb01cation after the Gaussian mechanism. FL-CS-DP [2] explores the use of compressive sensing, where they view the gradient vector as a time series that can be transformed into frequency space, keeping only the low-frequency values. They propose a novel formulation of the compression optimization to improve upon traditional DCT (Discrete Cosine Transform) compression. The works above attempt to \ufb01nd combinations of di\ufb00erential privacy and compression mechanisms that achieve the greatest resource e\ufb03ciency, with the resource being time, bandwidth, or privacy budget. Our work\u2019s main focus is to o\ufb00er insight on the relationship between compression, di\ufb00erential privacy, and accuracy. We hope that these insights inspire novel ideas that result in greater resource e\ufb03ciency. \fCHAPTER 1. INTRODUCTION 5 There also exists a number of works that explore compression mechanisms which already introduce noise. These mechanisms can be modi\ufb01ed to provide di\ufb00erential privacy guarantees on top of the pre-existing compression capabilities. Count-Sketch [17, 8] is one such mechanism that inherently introduces randomness through random hash functions. Dithered quantization [5] is another approach which adds noise before quantization. MVU [13] adds this noise after quantization by sampling from discrete distribution. They also formulation an optimization that minimizes the distribution variance while satisfying di\ufb00erential privacy guarantees. These work explore the potential of re-purposing pre-existing randomness in compression algorithms towards di\ufb00erential privacy. Similarly, their goal is to prove the e\ufb00ectiveness of this approach against a baseline, less so to provide a deep analysis of how their compression interacts with di\ufb00erential privacy mechanisms. 
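To ground the two compression schemes introduced in Section 1.2.2, the sketches below show magnitude-based top-k sparsification (in the spirit of Deep Gradient Compression) and a rank-1 power-iteration approximation (in the spirit of PowerSGD). Both omit important details of the original algorithms, such as momentum correction, error feedback, warm-up, and low-order-bit truncation, and the parameter values are illustrative.

import torch

def topk_sparsify(grad, compression_rate):
    # Keep only the 1/compression_rate fraction of coordinates with the largest magnitudes;
    # unrepresented coordinates are interpreted as zeros by the receiver.
    k = max(1, grad.numel() // compression_rate)
    _, indices = torch.topk(grad.abs().flatten(), k)
    sparse = torch.zeros_like(grad).flatten()
    sparse[indices] = grad.flatten()[indices]
    return sparse.view_as(grad)

def rank1_approximation(grad_matrix):
    # One round of power iteration on the "square-ified" layer gradient; only the two
    # factor vectors p and q would be communicated.
    q = torch.randn(grad_matrix.shape[1])
    q = q / q.norm()
    p = grad_matrix @ q
    q = grad_matrix.t() @ p
    q = q / (q.norm() + 1e-12)
    p = grad_matrix @ q
    return torch.outer(p, q)  # low-rank reconstruction of the gradient matrix

# Example on a 64x64 layer gradient.
g = torch.randn(64, 64)
print((topk_sparsify(g, compression_rate=16) != 0).sum())  # roughly 1/16 of the entries survive
print((rank1_approximation(g) - g).norm() / g.norm())      # relative approximation error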
Additionally, there is research in di\ufb00erential privacy and compression in contexts other than deep learning such as data base queries [23]. While the same privacy and compression mechanisms can often used across many contexts, their interaction can be dependent on the type of information being protected and compressed. The assumptions one can make regarding gradient vectors in deep learning can\u2019t be generally made about arbitrary data base queries. We study the speci\ufb01c context of deep learning in hopes of \ufb01nding unique insights that would otherwise be hidden in a more general context. \fChapter 2 Methods In this section, we describe the tasks, models, and hyperparameter settings used to conduct our experiments. We also de\ufb01ne some metrics used in our results. Refer to the following repository for an exmaple of how to run these experiments: https://github.com/Jimmy-Lin/privacy-ml-systems 2.1 Tasks and Models To evaluate the generality of our insight across di\ufb00erent tasks in deep learning, we train 4 models on 4 di\ufb00erent tasks: Surnames, CIFAR-10, SNLI, CIFAR-100 2.1.1 Surnames Task The goal of this task is to classify the language associated with an alphabetic surname, given a choice of 18 di\ufb00erent languages. We train with a learning rate of 2.0 and a batch size of 32 for 100 epochs. Refer to the following URL to \ufb01nd a copy of the data set https://github.com/spro/ practical-pytorch/tree/master/data/names. The model we train is a 256-character-set LSTM model with a single LSTM layer of 64 embedding dimensions and 128 output dimensions, followed by a fully connected layer of 18 classes and a softmax activation. Refer to [16] for details on the LSTM cell architecture. 2.1.2 CIFAR-10 Task The goal of this task is to classify the object at the centre of a 32x32 coloured image, given a choice of 10 di\ufb00erent object classes. We train with a learning rate of 0.1 and a batch size of 128 for 100 epochs. Refer to [9] for more information on this data set. The model we train is a 3-block CNN, each block containing a biased convolution layer of kernel size 3 and stride 1. Each convolution is followed by an instance normalization with a momentum value of 0.1, a ReLU activation, an average pooling with a pool size of 2, and a spatial dropout with probability 0.1. The 3 blocks di\ufb00er only by their number of output \ufb01lters: 32, 64, and 128. After the 3 blocks, we follow with 2 biased hidden layers of 256 and 512 units respectively, each using ReLU activation and dropout with probability 0.25. Lastly, the model \ufb01nishes with a fully connected layer into 10 classes and a softmax activation. 6 \fCHAPTER 2. METHODS 7 2.1.3 SNLI Task The goal of this task is to classify the logical relation between a pair of English sentences, the relation can be either \u201dentailment\u201d, \u201dcontradiction\u201d, or \u201dneutral\u201d. We train with a learning rate of 0.05 and a batch size of 32 for 1 epoch. Refer to [10] for more information on this data set. We \ufb01ne-tune a pre-trained \u201dbert-based-case\u201d model which can be found at https://huggingface. co/bert-base-cased. We freeze all parameters except for the classi\ufb01er, pooling, and \ufb01nal layer of the encoder. Refer to [22] for details on the BERT architecture. 2.1.4 CIFAR-100 The goal of this task is to classify the object at the centre of a 32x32 coloured image, given a choice of 100 di\ufb00erent object classes. 
We train with a learning rate of 1.0 and a batch size of 64 for 100 epochs. Refer to [9] for more information on this data set. We train a modi\ufb01ed version of ResNet-18. Speci\ufb01cally, we set the global average pooling that follows the stack of residual blocks to output 2x2 channels instead of 1x1. We \ufb01nd this modi\ufb01cation results in much better accuracy on this data set. Refer to [19] for details on the ResNet architecture. 2.2 Di\ufb00erential Privacy and Compression Hyperparameter Settings In this section, we describe how we con\ufb01gure the di\ufb00erential privacy mechanism and compression algorithms. 2.2.1 Di\ufb00erential Privacy Mechanism Settings For the Surnames task, we use a clipping radius of 3.0 and a \u03b4 value of 0.00008. For the CIFAR-10 task, we use a clipping radius of 5.0 and a \u03b4 value of 0.00001. For the SNLI task, we use a clipping radius of 21.0 and a \u03b4 value of 1 549361. For the CIFAR-100 task, we use a clipping radius of 1000.0 and a \u03b4 value of 0.00001. To vary the privacy level, we use noise multiplier values of 0.0, 0.4, and 0.8. Note that we clip the gradients even in the non-private training so that the changes in privacy guarantee is solely attributed to the noise addition. Gradient clipping in non-private training is common practice for the purpose of limiting the impact of exploding gradients. Clipping radius is selected based on an approximate median of the gradient norm at the \ufb01rst iteration. This is based on the observation of adaptive clipping [14] that clipping approximately 50% of the gradients in a batch appear to work well. However, we keep a \ufb01xed value instead of adjusting it over the course of training. While this selection may be unlikely in practice, it serves as a good way to standardize across tasks that exhibit di\ufb00erent gradient norms. We select \u03b4 values by picking a value roughly on the order of 1 n where n is the number of training samples in the training data set. This is the recommended upper bound on \u03b4 [15] for (\u03f5, \u03b4)-di\ufb00erential privacy in literature. 2.2.2 Compression Algorithm Settings To vary the compression rate of Deep Gradient Compression (DGC) we con\ufb01gure the DGC algorithm to compression rates of 1, 16, and 256. To vary the compression rate of PowerSGD, we con\ufb01gure \fCHAPTER 2. METHODS 8 the PowerSGD algorithm to use approximation ranks of 1 and 16. In practice, PowerSGD doesn\u2019t o\ufb00er low compression rates as approximation ranks above 16 tend to incur severely large compute overhead. Due to this computational overhead and the large size of layers in ResNet, we set the approximation ranks to 1 only for the CIFAR-100 tasks instead of 1 and 16. 2.3 Metric De\ufb01nitions 2.3.1 Accuracy We measure the test accuracy of a model at the end of every epoch and use a average of the last 10 epochs to represent the \ufb01nal model accuracy. In the case of tasks which train for only 1 epoch, we simply take the single measurement of test accuracy. 2.3.2 Bandwidth Usage Since upstream network capacity is generally far more scarce in wide area networks than downstream network capacity, we measure only the upstream network usage which consists of mainly the gradient vectors uploaded to parameter servers per client. We assume all vectors are transmitted in COO format when estimating the number of bytes sent. 
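As a small illustration of the bandwidth estimate under the COO assumption above, the sketch below counts the bytes needed to ship a sparsified gradient; the 4-byte index and value widths are assumptions for illustration rather than values taken from this work.

def coo_bytes(num_nonzero, index_bytes=4, value_bytes=4):
    # Each represented element carries one coordinate index and one value.
    return num_nonzero * (index_bytes + value_bytes)

def dense_bytes(num_elements, value_bytes=4):
    return num_elements * value_bytes

# Example: a 1M-parameter gradient sparsified at a compression rate of 256.
m = 1_000_000
print(dense_bytes(m), coo_bytes(m // 256))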
2.3.3 Privacy Bound For measurements of privacy bound, we defer to the Opacus library\u2019s implementation of Renyi di\ufb00erential privacy accountant to track the \u03f5 value. We \ufb01x \u03b4 as a hyperparameter and quantify di\ufb00erences in privacy guarantee solely through \u03f5. Smaller values of \u03f5 indicate a stronger di\ufb00erential privacy guarantee. \fChapter 3 Results In this section, we discuss the results of training each task at di\ufb00erent combinations of di\ufb00erential privacy guarantees, compression algorithms, and compression rates. We acknowledge that higher test accuracy may be achievable through state-of-the-art architectural designs. Since the goal of this study is to characterize the relationship between di\ufb00erent con\ufb01gurations and well-studied architectures, it is unnecessary to \ufb01nd the most optimal architecture and con\ufb01guration for each data set. 3.1 E\ufb00ects on Test Accuracy In this section, we focus on the e\ufb00ects we observe on test accuracy. 3.1.1 Surnames Task Figure 3.1 (top) shows the accuracy measurements grouped by their di\ufb00erential privacy bound (\u03f5). While most groups experience a small decrease in accuracy of a few percent, we observe that the group with \u03f5 = 50.92 experiences a slight increase in accuracy of 5.1%. Figure 3.1 (bottom) shows the accuracy measurements grouped by their upstream network usage (Gb). Within each group we see a large decrease in accuracy when the noise multiplier is increased. The uncompressed group which used 44.22Gb experiences an accuracy drop of 40.4%. The compressed group experiences an accuracy drop of 34.0%. In \ufb01gure 3.2, we observe very similar patterns to \ufb01gure 3.1. Remarkably, the accuracy increase in the group with \u03f5 = 50.92 is even larger at a 10.4% increase. Overall we observe that this task is relatively robust to gradient compression, losing only a few percent in accuracy. Surprisingly, an increase is accuracy is sometimes observed with increasing compression rate. This observation was more noticeable when using the PowerSGD compression 3.1.2 CIFAR-10 Task Figure 3.3 (top) shows the accuracy measurements grouped by their di\ufb00erential privacy bound (\u03f5). We observe that this task is noticeably more sensitive to compression than the Surnames task. Compression can decrease the accuracy by as much as 25.0%. Once again, we observe an increase in 9 \fCHAPTER 3. RESULTS 10 Figure 3.1: Test accuracy averaged over last 10 epochs after 100 epochs of training the Surnames task using DGC. (Top) grouped by di\ufb00erential privacy bound (\u03f5). (Bottom) grouped by upstream network usage (Gb). accuracy with higher compression rate, but this time it occurs within the group with \u03f5 = 4.86 and the increase in accuracy is 11.6%. Figure 3.3 (bottom) shows the accuracy measurements grouped by their upstream network usage (Gb). We notice that the in-group range of accuracy decreases as the amount of bandwidth used decreases. This starts at a range of 54.5% (bandwidth = 105.25Gb) to a range of 17.9% (bandwidth = 1.22Gb). In \ufb01gure 3.4, we observe very similar patterns to \ufb01gure 3.3. Overall we observe that this task is relatively more sensitive to compression than the Surnames task. The the increase in accuracy in private training when compression rate is increased is observed again, similar to what we observe in the Surnames task. 
This time the increase is similar between compression algorithms 3.1.3 SNLI Task Figure A.1 (top) shows the accuracy measurements grouped by their di\ufb00erential privacy bound (\u03f5). We observe that this task, in the non-private \u03f5 = \u221ecase, is very robust to gradient compression. It is inconclusive whether this can be said about the private training cases, since the models produced are of similar accuracy to a random prediction. This is due to this task being very sensitive to the noise added by the di\ufb00erential privacy mechanism. We do see a very slight increase in accuracy of 1.9% when \u03f5is0.89, however this is not a very signi\ufb01cant amount. Figure A.1 (bottom) shows the accuracy measurements grouped by their upstream network usage (Gb). In every group, the accuracy lost due to noise is the dominant factor in changes to accuracy. We do see a slight decrease \fCHAPTER 3. RESULTS 11 Figure 3.2: Test accuracy averaged over last 10 epochs after 100 epochs of training the Surnames task using PowerSGD. (Top) grouped by di\ufb00erential privacy bound (\u03f5). (Bottom) grouped by upstream network usage (Gb). in sensitivity to noise from 48.9% to 44.7% but this amount is not very conclusive. In \ufb01gure A.2, we observe very similar patterns to \ufb01gure A.1. We observe that this task is very sensitive to noise, with it being the dominant factor in observable loss in accuracy. We do observe the increase in accuracy correlated with compression and a reduction in noise sensitivity when compression is added. However, the amount is much smaller this time and it is hard to use this as conclusive evidence. We attribute this to the fact that noise is so dominant in it\u2019s e\ufb00ect on accuracy for this task. 3.1.4 CIFAR-100 Task Figure A.3 (top) shows the accuracy measurements grouped by their di\ufb00erential privacy bound (\u03f5). We observe that this task is very sensitive to noise and compression. Similar to the SNLI task, the model\u2019s accuracy is no better than random prediction when noise is added by the di\ufb00erential privacy mechanism. In the non-private case, we see a 22.2% decrease in accuracy after compression. Figure A.3 (bottom) shows the accuracy measurements grouped by their upstream network usage (Gb). We observe that noise dominates the decrease in accuracy in the non-compressed group with bandwidth = 417.5Gb. However, compression also plays a role in decreasing the accuracy in non-private training. In \ufb01gure A.4, we observe very similar patterns to \ufb01gure A.3. We observe that this task is very sensitive to noise, but also compression. No interesting pattern can be observed from the experiments ran on this task, due to most trials resulting in minimal \fCHAPTER 3. RESULTS 12 Figure 3.3: Test accuracy averaged over last 10 epochs after 100 epochs of training the CIFAR-10 task using DGC. (Top) grouped by di\ufb00erential privacy bound (\u03f5). (Bottom) grouped by upstream network usage (Gb). accuracy. 3.1.5 General Observations We observe that the larger models used in the SNLI and CIFAR-100 tasks are more sensitive to noise than the smaller models. While smaller models such as the LSTM and CNN do experience loss of accuracy due to noise, they aren\u2019t immediately rendered par with random predictions. Additionally, the image classi\ufb01cation tasks are noticeably more sensitive to compression than the text classi\ufb01cation tasks. 
This could be attributed to the data type being classi\ufb01ed, or potentially common architectural components in image classi\ufb01cation vs text classi\ufb01cation (eg. convolution, normalization, pooling vs embedding, LSTM, self-attention). We observe that in the tasks involving smaller models (Surnames and CIFAR-10), at some levels of di\ufb00erential privacy guarantees (\ufb01nite \u03f5 value), increasing the compression rate can increase the model accuracy. We also observe that compressing the gradient appears to reduce the sensitivity of accuracy to noise. We hypothesize that compression has a way of reducing the negative impact of noise. In the tasks involving larger models (SNLI, CIFAR-100), we observe either a very weak form of the trend or no such trend at all. We attribute this to their relatively higher sensitivity to noise. \fCHAPTER 3. RESULTS 13 Figure 3.4: Test accuracy averaged over last 10 epochs after 100 epochs of training the CIFAR10 task using PowerSGD. (Top) grouped by di\ufb00erential privacy bound (\u03f5). (Bottom) grouped by upstream network usage (Gb). 3.2 Convergence Analysis In this section, we analyze the changes in test accuracy over the course of training which is measured after ever epoch. We omit the SNLI task from this analysis since it is trained for only 1 epoch, and thus has no further information to show when visualized as a time series. 3.2.1 Surnames Task Figure B.1 shows the progression of test accuracy over the course of training the Surnames task. We observe that all trials actually reach their plateau within the \ufb01rst 10 epochs. The accuracy of private training trials exhibit very large variability over time, but their compressed counterparts appear to reduce this variability. Finally, the non-compressed, non-private training trial loses test accuracy after an initial peak within the \ufb01rst 10 epochs. It\u2019s compressed counterpart doesn\u2019t exhibit this behaviour but it also doesn\u2019t exceed it in \ufb01nal accuracy. Figure B.2 provides a smoothed view of the time series for better comparison of the private training trials. To achieve this smoothing, we use a mean convolution over the time axis with width \fCHAPTER 3. RESULTS 14 20 and no padding at end points. The same observations can be made in B.4 and B.3 when using the PowerSGD compression algorithm. 3.2.2 CIFAR-10 Figure 3.5: Test Accuracy over 100 Epochs of the CIFAR-10 task. Figure 3.5 shows the progression of test accuracy over the course of training the CIFAR-10 task. We observe that the variability in test accuracy over time is consistently small for all trials. This time non-compressed trials in private training are the ones that decrease in test accuracy after an initial peak within the \ufb01rst 10 epochs. Furthermore, we \ufb01nd that this drop in accuracy appears to explain why the non-compressed trials show lower accuracy than their compressed accuracy in \ufb01gure 3.3. When we look at the accuracy in the \ufb01rst 10 epochs, the accuracy is higher when compression is lower. When we look at the last 10 epochs, the accuracy is higher when the compression is higher. We see a similar e\ufb00ect when applying the PowerSGD compression algorithm in \ufb01gures B.5 and 3.4. 3.2.3 CIFAR-100 Figures B.6 and B.7 show the progression of test accuracy over the course of training the CIFAR-100 task. We observe that the variability over time is very small for this task as well, and accuracy is mostly increasing steadily over the course of training (if increasing at all). 
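For reference, the smoothing applied to the curves in this section is a simple moving average; the sketch below reproduces it, assuming the per-epoch test accuracies are available as a plain array.

import numpy as np

def smooth(accuracies, width=20):
    # Mean convolution over the time axis; mode="valid" applies no padding at the end points,
    # so the smoothed series is shorter than the input by width - 1.
    kernel = np.ones(width) / width
    return np.convolve(np.asarray(accuracies), kernel, mode="valid")

smoothed = smooth(np.random.rand(100))  # e.g., 100 epochs of test accuracy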
\fCHAPTER 3. RESULTS 15 3.2.4 General Observations We observe that the Surnames task show much higher variability in accuracy over time when noise is added. Additionally, trials not using compression sometimes experience a peak early in training, followed by gradual loss of accuracy. This e\ufb00ect is lessened by compression. In some instances, this leads to the non-compressed trial \ufb01nishing with lower accuracy than the compressed trial. 3.3 Gradient Error In this section, we analyze the correlation between gradient error and test accuracy and break down the error to better understand what contributes to our observed decrease in accuracy. We de\ufb01ne gradient error as the mean squared error between a gradient vector average prior to an application of the di\ufb00erential privacy mechanism and gradient compression and it\u2019s counterpart after the di\ufb00erential privacy mechanism and gradient compression. Speci\ufb01cally, we measure the gradient error at the beginning of training. We target empirical average gradient as follows: Let {Xi, yi}B i=1 denote a set of B input-output pairs (Xi, yi) randomly sampled from the training data samples. Let L denote a loss function we wish to optimize with respect to \u03b8, an m-dimensional vector of parameters. We de\ufb01ne the mdimensional vector g as the empirical average gradient over B training data samples as a target we wish to estimate through possibly noisy and/or biased samples. g = 1 B B X i=1 \u2207\u03b8L(Xi, yi) (3.1) We de\ufb01ne a mechanism F : Rb,m \u21d2Rm as a function that takes as input the set of B gradient samples and outputs an estimate of \u02c6 g. This mechanism is allowed to be stochastic. We view the composition of our di\ufb00erential privacy mechanism and gradient compression as one such mechanism. \u02c6 g = F({\u2207\u03b8L(Xi, yi)}B i=1) (3.2) Since F can be stochastic, it may produce a di\ufb00erent estimate each time. For this reason, we measure the mean-squared di\ufb00erence between an estimate \u02c6 g and the target g for n independent instances of \u02c6 g. We measure mean-squared error of the average gradient estimate as follows: MSE(F, g) = 1 n n X i=1 ||\u02c6 gi \u2212g||2 2 (3.3) In the results that follow we use n = 100 as the sample size for estimating the gradient error. 3.3.1 Correlation between Gradient Error and Test Accuracy In this section, we show scatter plots between test accuracy and gradient error for each task and compression algorithm. In \ufb01gure 3.6, we observe a trend that decrease in test accuracy coincides with increase in gradient error. The gradient error has been plotted on a log scale to better illustrate this. This correlation exists in all other tasks which we demonstrate in \ufb01gures C.1 to C.7. \fCHAPTER 3. RESULTS 16 Figure 3.6: Test accuracy vs gradient error (log base-10 scale) for the CIFAR-10 task with DGC. 3.3.2 E\ufb00ects on Gradient Error In this section, we analyze how gradient error relates to the level of noise added by di\ufb00erential privacy mechanisms and gradient compression algorithms. In \ufb01gure 3.7, we observe that the gradient error is mostly contributed by the di\ufb00erential privacy mechanism\u2019s addition of noise (up to 353.0 units). While the compression algorithm can contribute to gradient error (up to 0.2 units), it often reduces the gradient error already contributed by the di\ufb00erential privacy mechanism (from 353.0 units down to 14.0 units). 
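A minimal sketch of this measurement is shown below: it draws n estimates from a (possibly stochastic) mechanism F, computes the empirical MSE of Eq. 3.3 against the average gradient of Eq. 3.1, and also returns the bias/variance split used in Section 3.3.3. The mechanism argument is any callable over the per-sample gradients, for example the clip-and-noise sketch shown earlier; the shapes and values are illustrative.

import torch

def gradient_error(per_sample_grads, mechanism, n=100):
    # per_sample_grads: (B, m) tensor of per-sample gradients; mechanism: a stochastic F.
    g = per_sample_grads.mean(dim=0)                    # target: empirical average gradient
    estimates = torch.stack([mechanism(per_sample_grads) for _ in range(n)])
    mse = ((estimates - g) ** 2).sum(dim=1).mean()      # (1/n) * sum_i ||g_hat_i - g||^2
    mean_estimate = estimates.mean(dim=0)
    bias = ((mean_estimate - g) ** 2).sum()             # ||E[g_hat] - g||^2
    variance = ((estimates - mean_estimate) ** 2).sum(dim=1).mean()
    return mse.item(), bias.item(), variance.item()     # mse is (approximately) bias + variance

# Example with the Gaussian-mechanism sketch from Section 1.2.1 standing in for F:
# mse, bias, var = gradient_error(torch.randn(32, 10),
#                                 lambda grads: gaussian_mechanism(grads, clip_radius=3.0, noise_multiplier=0.4))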
We believe this to be related to the correlation between compression rate and test accuracy in private training of small models. This can also be observed in other tasks in \ufb01gures D.1 to D.7. Speci\ufb01cally, if compression reduces gradient error, and lower gradient error correlates with higher test accuracy, then it isn\u2019t unreasonable that compression correlates with higher test accuracy through the lowering of gradient error. We do see the same error reduction in large models, but no increase in test accuracy. It\u2019s possible that the error reduction is simply not strong enough to overcome the e\ufb00ect of noise. 3.3.3 Gradient Error Breakdown: Bias vs Variance In this section, we analyze a breakdown of the gradient error into bias and variance. It is well known that the expected squared error of an estimator, in this case the expected gradient, can be decomposed into a bias component and a variance component. With su\ufb03ciently large n, this is also true of the empirical average of squared error. E \u0002 || \u02c6 g \u2212g ||2 2 \u0003 = || E \u0002\u02c6 g \u0003 \u2212g ||2 2 + E \u0002 || \u02c6 g \u2212E \u0002\u02c6 g \u0003 ||2 2 \u0003 (3.4) The bias component is simply the deviation caused by the mechanism F, and the variance is the variability across di\ufb00erent instances of estimates due to the stochasticity of F. \fCHAPTER 3. RESULTS 17 Figure 3.7: Gradient Error vs Privacy Bound (\u03f5) and Bandwidth (Gb) for the CIFAR-10 task with DGC. Here we observe through \ufb01gures 3.8, 3.9 the following: Clipping (left bar in each subplot) at the current con\ufb01guration introduces relatively minimal gradient error, and when it does it tends to introduce bias (orange) not variance (blue). Noising (middle bar in each subplot) introduces a very signi\ufb01cant amount of error and when it does it is overwhelmingly variance (blue) and not bias (orange). Compression (right bar in each subplot) introduces some amount of bias (orange) but not variance (blue) and it is relatively small compared to the variance introduced by noising. Furthermore, compression reduces the variance introduced by noising. This is also supported in other tasks shown in \ufb01gures E.1 to E.6. A high-level way of interpreting this is that the two compression algorithms we tested introduce a bias towards the origin, but in doing so they reduce the variance of our average gradient estimation. In the context of di\ufb00erentially private training where noise contributes a large amount of variance, reducing a large amount of variance in exchange for a small amount of bias can reduce the overall error when estimating the average gradient. It can be said that the compression has a regularizing e\ufb00ect on our estimation of the average gradient. \fCHAPTER 3. RESULTS 18 Figure 3.8: Gradient Error vs Privacy Bound (\u03f5) and Bandwidth (Gb) for the Surnames task with DGC. 3.4 Optimizing the Bias-Variance Trade-O\ufb00 In this section, we show the suboptimality of selecting clipping values based on the 50th percentile of gradient norms. We demonstrate why it makes sense to drastically reduce the clipping value for larger models, and that the clipping value has an optimal value related to the shrinkage coe\ufb03cient of the James-Stein estimator [18]. 3.4.1 Minimal-Error Clipping In this section, we empirically test the clipping value that minimizes gradient error as well as provide an approximate theoretical model for guessing the optimal clipping value. 
Figures 3.10 and 3.11 demonstrate that the strategy of setting the clipping radius to be the median of gradient norms [14] does not minimize the gradient error. We propose the following theoretical model of the gradient error. More examples of this are shown in \ufb01gures F.1 and F.2. Approximate Error = max(0, (||g||2 \u2212C)2) + mC2\u03c32 (3.5) We further simplify this model into a convex di\ufb00erential form as follows to allow us to directly \fCHAPTER 3. RESULTS 19 Figure 3.9: Breakdown of gradient error into bias and variance at di\ufb00erent stages (after clipping, after noising, and after compression) for the SNLI task with DGC. solve for the minimum by di\ufb00erentiating with respect to C. Di\ufb00erentiable Approximate Error = (||g||2 \u2212C)2 + mC2\u03c32 (3.6) C\u2217= argminC \u0000(||g||2 \u2212C)2 + mC2\u03c32\u0001 = ||g|| 1 + m\u03c32 (3.7) Shown in \ufb01gures 3.10, and 3.11, the minimum for this model generally leads to at least a couple orders of magnitude of decrease in gradient error. Of course, the quality of such a model depends on the knowledge of the median gradient norm ||g||2. The advantage this model provides is a better utilization of the knowledge of ||g||2, should it be available either exactly or approximately with di\ufb00erential privacy guarantees in the case of [14]. It is a better strategy than simply setting C = ||g||2, as it takes into account the e\ufb00ect of dimensionality m and the noise multiplier \u03c3. More examples of this are shown in \ufb01gures F.1 and F.2. Worth noting in \ufb01gure 3.11 (left), is that the assumed knowledge of the median gradient norm ||g||2 appears to be produce a bad theoretical model. This results in the theoretical minimum being much larger than the empirical model. We believe this is due to a large di\ufb00erence between the norm \fCHAPTER 3. RESULTS 20 Figure 3.10: Relationship between gradient error and clipping value for SNLI task. of the average gradient and the median of the gradient norms, and that perhaps the norm of the average gradient would be better at informing the theoretical model. Another use for this model is providing e\ufb03cient clipping values that are well-tuned across di\ufb00erent levels of di\ufb00erential privacy. With the use of this model, it can be shown that our methodology of keeping the same clipping threshold for di\ufb00erent levels of di\ufb00erential privacy is actually suboptimal. However, to our best knowledge, there\u2019s no work in the literature of di\ufb00erential privacy that currently suggests the optimal clipping threshold depends on the dimensionality and privacy level. While it is possible to manually tune the clipping parameter with many repeated trials, this can be infeasible in the context of di\ufb00erentially private training. This is because training a model to convergence, even if it uses a suboptimal clipping value, can incur a large privacy cost. It is much more e\ufb03cient to perform a di\ufb00erentially private query of the average gradient norm, and compute a reasonable clipping radius directly. Figure 3.12 shows that the there is some bene\ufb01t to optimizing the clipping value in most cases with gains of up to 6.5% accuracy. However, not all tasks bene\ufb01t from this. The CIFAR-10 task experienced worse accuracy after the change in clipping. 
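The closed-form minimizer of Eq. 3.7 is cheap to compute once an estimate of the gradient norm is available (exactly, or via a differentially private query). The sketch below evaluates the approximate error of Eq. 3.5, interpreting the max as zeroing out the clipping bias whenever C is at least ||g||, and compares the median-norm heuristic C = ||g|| against C*; the example numbers are illustrative.

def optimal_clipping_radius(grad_norm, num_params, noise_multiplier):
    # Eq. 3.7: C* = ||g|| / (1 + m * sigma^2)
    return grad_norm / (1.0 + num_params * noise_multiplier ** 2)

def approximate_error(grad_norm, clip_radius, num_params, noise_multiplier):
    # Eq. 3.5: clipping bias (zero once C >= ||g||) plus the noise variance term m * C^2 * sigma^2.
    bias = max(0.0, grad_norm - clip_radius) ** 2
    variance = num_params * clip_radius ** 2 * noise_multiplier ** 2
    return bias + variance

# Example: ||g|| = 5, one million parameters, noise multiplier 0.4.
g_norm, m, sigma = 5.0, 1_000_000, 0.4
c_star = optimal_clipping_radius(g_norm, m, sigma)
print(c_star, approximate_error(g_norm, g_norm, m, sigma), approximate_error(g_norm, c_star, m, sigma))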
3.4.2 Bias-Variance Optimization through Intermediate Processing In this section, we investigate the e\ufb00ectiveness reducing the error by processing the di\ufb00erentially private gradient estimates after the privacy mechanism as opposed to directly adjusting the privacy mechanism. Our proposed mechanism requires no prior knowledge about the gradients before the privacy mechanism. So there is no need to perform di\ufb00erentially private queries for information such \fCHAPTER 3. RESULTS 21 Figure 3.11: Relationship between gradient error and clipping value for CIFAR-100 task. Figure 3.12: Result of optimizing clipping based on theoretical model as the median of gradient norms. We propose this new algorithm which functions reduce the error introduced by noise addition and compression: Denoise. In this algorithm, both the sender and receiver keep track of a velocity term which is the an exponential average of the past average gradients. At each iteration, the sender updates it\u2019s velocity using the new average gradient. We de\ufb01ne this change in velocity as acceleration. The sender then performs a top-k sparsi\ufb01cation of both the acceleration and the velocity vector, and compares the norm of the error produced by sparsi\ufb01cation in both cases. The sender sends either the sparse velocity or the sparse acceleration to the receiver along, whichever results in the least compression error, along with a single-bit \ufb02ag to signal whether the message contains a velocity vector or acceleration vector. The sender accumulates a residual term to feed back into the next message, but the residual is decayed by some factor. The receiver updates it\u2019s velocity by replacing it with a new velocity vector or adding the acceleration vector. The key to our compression algorithm \fCHAPTER 3. RESULTS 22 is the use of temporal averaging to reduce noise and the option to choose between either velocity or acceleration, one of which may incur less compression error at any given round. Details of this algorithm are written in 1 Figure 3.13: E\ufb00ectiveness of introducing our algorithm to variance combinations of di\ufb00erential privacy guarantees and gradient compression (sparsi\ufb01cation). Figures 3.13 to 3.16 show the impact of di\ufb00erentially private training with sparsi\ufb01cation (DGC) compared to di\ufb00erentially private training with model-wise sparsi\ufb01cation and denoise applied in between. In the Surnames, CIFAR-10, and SNLI tasks, the addition of denoise improves accuracy for every combination of privacy and compression except for CIFAR-10 with \u03f5 = 62.09 and bandwidth = 1.23Gb. In the non-private cases, we see that allowing the compression algorithm to choose between two di\ufb00erent messages allows us to compress more aggressively with less reduction in accuracy (3.5% less in Surnames, 15.1% in CIFAR-10, 1.4% in SNLI, and 12.4% in CIFAR-100). In the non-compressed cases, we see that the second clipping reduces the variance enough to enable stronger di\ufb00erential \fCHAPTER 3. RESULTS 23 Figure 3.14: E\ufb00ectiveness of introducing our algorithm to variance combinations of di\ufb00erential privacy guarantees and gradient compression (sparsi\ufb01cation). privacy guarantees without as much reduction in accuracy (23.2% in Surnames, 24.6% in CIFAR10, 14.6% in SNLI, and 0.0% in CIFAR-100). Unfortunately, this e\ufb00ect was not observable in the CIFAR-100, due to it being particularly sensitive to noise. 
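A compact sender-side sketch of this algorithm is given below, under one plausible reading of the description above: the velocity is an exponential average of the incoming average gradients, the decayed residual is folded into the next update, and whichever of the sparsified velocity or acceleration loses less to compression is transmitted along with a one-bit flag. The momentum, residual-decay, and k values are illustrative, and details such as the second clipping are omitted.

import torch

class DenoiseSender:
    def __init__(self, num_params, momentum=0.9, residual_decay=0.5, k=1000):
        self.velocity = torch.zeros(num_params)
        self.residual = torch.zeros(num_params)
        self.momentum = momentum
        self.residual_decay = residual_decay
        self.k = k

    def _topk(self, vec):
        # Magnitude-based top-k sparsification of a candidate message.
        sparse = torch.zeros_like(vec)
        _, idx = torch.topk(vec.abs(), self.k)
        sparse[idx] = vec[idx]
        return sparse

    def step(self, avg_gradient):
        # Update the velocity (an exponential average of past noisy average gradients),
        # folding back in the decayed residual from the previous round.
        new_velocity = (self.momentum * self.velocity
                        + (1.0 - self.momentum) * avg_gradient
                        + self.residual_decay * self.residual)
        acceleration = new_velocity - self.velocity
        # Sparsify both candidates and send whichever loses less to compression.
        sparse_vel, sparse_acc = self._topk(new_velocity), self._topk(acceleration)
        if (acceleration - sparse_acc).norm() <= (new_velocity - sparse_vel).norm():
            message, send_velocity, self.residual = sparse_acc, False, acceleration - sparse_acc
        else:
            message, send_velocity, self.residual = sparse_vel, True, new_velocity - sparse_vel
        self.velocity = new_velocity
        # The receiver replaces its velocity with `message` if send_velocity is True,
        # otherwise it adds `message` to its current velocity.
        return message, send_velocity

# Example round with a 10,000-parameter gradient.
sender = DenoiseSender(num_params=10_000, k=100)
message, send_velocity = sender.step(torch.randn(10_000))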
In the CIFAR-100 task, we observe greater compressibility with denoise, but the noise sensitivity did not see observable di\ufb00erences. We show results of running with a lower clipping radius of 70.0 in \ufb01gure 3.16 (suggested as the empirical optimum from testing minimal-error clipping) but saw the same results. The function of our algorithm is orthogonal to di\ufb00erential privacy and compression. For this reason, it can be used in conjunction with any gradient noising mechanism and any gradient compression algorithm, and is not in direct competition with the prior in either area. 3.5" + }, + { + "url": "http://arxiv.org/abs/2110.01529v2", + "title": "A Proposed Conceptual Framework for a Representational Approach to Information Retrieval", + "abstract": "This paper outlines a conceptual framework for understanding recent\ndevelopments in information retrieval and natural language processing that\nattempts to integrate dense and sparse retrieval methods. I propose a\nrepresentational approach that breaks the core text retrieval problem into a\nlogical scoring model and a physical retrieval model. The scoring model is\ndefined in terms of encoders, which map queries and documents into a\nrepresentational space, and a comparison function that computes query-document\nscores. The physical retrieval model defines how a system produces the top-$k$\nscoring documents from an arbitrarily large corpus with respect to a query. The\nscoring model can be further analyzed along two dimensions: dense vs. sparse\nrepresentations and supervised (learned) vs. unsupervised approaches. I show\nthat many recently proposed retrieval methods, including multi-stage ranking\ndesigns, can be seen as different parameterizations in this framework, and that\na unified view suggests a number of open research questions, providing a\nroadmap for future work. As a bonus, this conceptual framework establishes\nconnections to sentence similarity tasks in natural language processing and\ninformation access \"technologies\" prior to the dawn of computing.", + "authors": "Jimmy Lin", + "published": "2021-10-04", + "updated": "2021-12-28", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.CL" + ], + "main_content": "Introduction For the past half a century, information retrieval has been dominated by bag-of-words exact-match scoring models such as BM25 executed at scale using inverted indexes and ef\ufb01cient query-at-atime retrieval algorithms. Even in the context of feature-based learning to rank and, more recently, neural models, these bag-of-words models remain of fundamental importance because they provide potentially relevant texts for downstream reranking in the context of multi-stage pipelines. This role is usually referred to as \ufb01rst-stage retrieval or candidate generation. Multi-stage ranking architectures have been studied extensively by academic researchers [Matveeva et al., 2006, Cambazoglu et al., 2010, Wang et al., 2011, Tonellotto et al., 2013, Asadi and Lin, 2013, Capannini et al., 2016, Clarke et al., 2016, Chen et al., 2017, Mackenzie et al., 2018] and there is substantial documentation that many commercial applications are designed in this manner [Pedersen, 2010, Liu et al., 2017, Huang et al., 2020, Zou et al., 2021]. There has, of late, been much interest and excitement surrounding so-called \u201cdense retrieval\u201d techniques, or ranking with learned dense representations. 
This general approach, often called a bi-encoder design [Humeau et al., 2020], is perhaps best exemplified by DPR [Karpukhin et al., 2020] and ANCE [Xiong et al., 2021], but other examples abound [Gao et al., 2021b, Hofst\u00e4tter et al., 2020, Qu et al., 2021, Hofst\u00e4tter et al., 2021, Qu et al., 2021, Zhan et al., 2021, Lin et al., 2021c]. Dense retrieval is formulated as a representational learning problem where the task is to learn (nowadays, transformer-based) encoders that map queries and documents into dense fixed-width vectors (768 dimensions is typical). The goal is to maximize inner products between queries and relevant documents and to minimize inner products between queries and non-relevant documents. This is framed as a supervised machine learning problem, with relevance signals coming from a large dataset such as the MS MARCO passage ranking test collection [Bajaj et al., 2018]. Lin et al. [2021b] provide a recent survey of this general approach within the broader context of text ranking using BERT and other pretrained transformer-based language models. Experiments have shown that dense retrieval methods outperform \u201csparse retrieval\u201d methods, usually referring to bag-of-words exact-match methods such as BM25. This appears to be a robust and widely replicated finding, and dense retrieval models are known to have been deployed in real-world search applications, for example, by Bing [Xiong et al., 2021] and Facebook [Huang et al., 2020]. Scaling such methods requires infrastructure that is very different from sparse retrieval: instead of relying on inverted indexes for query evaluation, as BM25 does, dense retrieval typically relies on approximate nearest neighbor (ANN) search; one standard technique exploits hierarchical navigable small world graphs (HNSW) [Malkov and Yashunin, 2020]. Thus, recent literature appears to have established a contrast between dense retrieval and sparse retrieval. The standard portrayal is that they represent fundamentally different approaches, requiring different problem formulations, different models, and different software infrastructures for efficient execution at scale. I argue, however, that this is not the case. Aspects of the ideas and observations presented here were originally captured in two previous papers [Lin and Ma, 2021, Lin et al., 2021a]. I build on both, with additional analysis and synthesis. The goal of this paper is to provide a conceptual framework that unites dense and sparse retrieval by demonstrating that they, in fact, have the same functional form, just with different parameterizations. This framework adopts a representational approach and breaks the core text retrieval problem into a logical scoring model and a physical retrieval model, allowing a researcher to separate how document relevance scores are computed from how retrieval is performed at scale. In terms of scoring models, dense and sparse retrieval can be characterized along two dimensions: the contrast between dense vs. sparse vector representations, and the contrast between supervised (learned) vs. unsupervised approaches. The main contribution of this conceptual framework is that it provides abstractions to help researchers make sense of the panoply of recently proposed retrieval models that, at first glance, defy orderly categorization.
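The claim that sparse and dense retrieval share one functional form (encode the query, encode the document, compare with an inner product) can be sketched in a few lines of Python. The BM25 encoder below uses common default parameters and a standard idf form, and the dense encoders are only indicated by a hypothetical encode call; this is an illustration of the shared interface under those assumptions, not a reference implementation of either model.

import math
from collections import Counter

def dot(u, v):
    # Shared comparison function: an inner product, regardless of representation.
    if isinstance(u, dict):                                   # sparse: {term: weight}
        return sum(w * v.get(t, 0.0) for t, w in u.items())
    return sum(a * b for a, b in zip(u, v))                   # dense: fixed-width vector

def bm25_doc_encoder(doc_tokens, df, n_docs, avg_len, k1=0.9, b=0.4):
    # Unsupervised sparse document encoder: one dimension per vocabulary term,
    # weighted by a BM25-style function (df maps term -> document frequency).
    tf = Counter(doc_tokens)
    return {t: (f * (k1 + 1)) / (f + k1 * (1 - b + b * len(doc_tokens) / avg_len))
               * math.log((n_docs - df[t] + 0.5) / (df[t] + 0.5) + 1.0)
            for t, f in tf.items()}

def bm25_query_encoder(query_tokens):
    # Multi-hot query representation: 1 if the term occurs in the query, 0 otherwise.
    return {t: 1.0 for t in set(query_tokens)}

# A learned dense bi-encoder fills the same two slots with trained models, e.g.
#   query_rep = dense_query_model.encode(query)    # hypothetical API
#   doc_rep   = dense_doc_model.encode(document)
# and the score is again dot(query_rep, doc_rep).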
The proposed framework suggests a number of open research questions, providing a roadmap for future research, potentially tying together multiple sub-\ufb01elds within information retrieval. As a bonus, this conceptual framework establishes interesting connections to sentence similarity tasks in natural language processing and information access \u201ctechnologies\u201d prior to the dawn of computing. 2 A Conceptual Framework The formulation of text retrieval (alternatively, text ranking)\u2014what information retrieval researchers more precisely call ad hoc retrieval\u2014is typically de\ufb01ned as follows: Given an information need expressed as a query q, the text retrieval task is to return a ranked list of k documents2 {d1, d2 . . . dk} from an arbitrarily large but \ufb01nite collection of documents D = {di} that maximizes a metric of interest, for example, nDCG, AP, etc. These metrics vary, but they all aim to quantify the \u201cgoodness\u201d of the results with respect to the information need; in some cases, metrics can be understood more formally in terms of the utility that a user would derive from consuming the results. The retrieval task is also called top-k retrieval (or ranking), where k is the length of the ranked list (also known as the retrieval or ranking depth). We can break the text retrieval problem down into two distinct components, as follows: Logical Scoring Model Let us de\ufb01ne \u03b7q(q) and \u03b7d(d) as two arbitrary functions that take a query and a document (both sequences of terms), respectively, and map each into a \ufb01xed-width vector representation. As will become clear below, I will call these two functions \u201cencoders\u201d. 1Referring to bag-of-words exact-match methods as \u201csparse retrieval\u201d is a relatively new invention, primarily to establish contrast with dense retrieval methods. Nevertheless, I will use this terminology throughout the paper. 2Consistent with parlance in information retrieval, I use \u201cdocument\u201d throughout this paper in a generic sense to refer to the unit of retrieved text, even though in truth it may be a passage, a web page, a PDF, or some arbitrary span of text. 2 \fLet us further de\ufb01ne a comparison function \u03c6 that takes these \ufb01xed-width vector representations and computes a score. We have: s(q, d) \u2206 = \u03c6(\u03b7q(q), \u03b7d(d)) (1) We can interpret the score s as quantifying the degree to which d is relevant to query q, i.e., the basis for ranking a set of documents with respect to a query. For example, we desire to maximize scores for queries and their relevant documents and minimize scores for queries and non-relevant documents (note how this statement can be straightforwardly operationalized into a loss function). For dense retrieval methods, this design is commonly called a bi-encoder [Humeau et al., 2020]. More intuitively, we can understand the score s as capturing the probability of relevance: P(Relevant = 1|d, q) \u2206 = s(q, d). (2) Note that the domain of \u03b7q comprises arbitrary sequences of terms, including sequences that have never been encountered before. In contrast, the domain of \u03b7d is typically D, since we are retrieving from a given collection of documents (i.e., the corpus). The logical scoring model, as de\ufb01ned in Eq. 
(1), nicely captures why I characterize this proposed conceptual framework as a \u201crepresentational approach\u201d, since it focuses on matching representations derived from queries (information needs) and documents (texts to be searched). In the context of bag-of-words representations, this formulation puts the vocabulary mismatch problem [Furnas et al., 1987]\u2014overcoming the fact that information seekers and authors use different words to express the same concepts\u2014front and center in the design of retrieval models. As I will discuss in detail later, neural models are simply the source of (better) representations\u2014the structure of the ad hoc retrieval problem remains the same. In fact, across many diverse formulations of retrieval models, \u03c6 is defined as the inner product. Physical Retrieval Model Given the setup above, top-k retrieval can be defined as: $\underset{d \in D}{\arg\,\text{top-}k}\; \phi(\eta_q(q), \eta_d(d))$ (3) That is, given q, we wish to identify from D the k documents $d_1 \ldots d_k$ that have the highest scores $s_1 \ldots s_k$. These $\{(d_i, s_i)\}_{i=1}^{k}$ pairs are usually referred to as the ranked list of results (sometimes called the \u201chits\u201d). If s is interpreted as a probability of relevance, as per Eq. (2), then the physical retrieval model represents a direct realization of the Probability Ranking Principle [Robertson, 1977], which states that documents should be ranked in decreasing order of the estimated probability of relevance with respect to the query. We might think of the logical scoring model and the physical retrieval model as providing what I argue to be the \u201cright\u201d abstractions for the text retrieval problem. So far, however, nothing in the presentation above captures information that isn\u2019t already common knowledge. I have simply adopted notation that may seem slightly peculiar, compared to how the text retrieval problem is usually presented (for example, in standard textbooks). Nevertheless, I will attempt to convince the reader that this isn\u2019t a pointless symbol manipulation exercise, but rather this framing of the problem provides a conceptual framework that bridges dense and sparse retrieval methods. 2.1 Applications to Dense and Sparse Retrieval Let us consider DPR [Karpukhin et al., 2020], a popular representative dense retrieval model, and see how it can be understood within this conceptual framework. DPR uses separate transformer-based encoders for queries and documents, \u03b7q and \u03b7d, respectively. Both encoders take the [CLS] representation from BERT [Devlin et al., 2019] as their output representation. In other words, the DPR encoders project queries and documents into fixed-width vector representations in some latent semantic space (by default, 768 dimensions). Relevance between query representations and document representations\u2014the comparison function \u03c6\u2014is defined in terms of inner products: $\phi(\eta_q(q), \eta_d(d)) = \eta_q(q)^{\intercal}\, \eta_d(d)$ (4) The model is trained as follows: let $R = \{\langle q_i, d_i^{+}, d_{i,1}^{-}, d_{i,2}^{-}, \ldots, d_{i,n}^{-}\rangle\}_{i=1}^{m}$ be the training set comprising m instances. Each instance contains a query q, a relevant passage $d^{+}$, and n non-relevant passages $d_1^{-}, d_2^{-}, \ldots, d_n^{-}$. DPR is trained with the following loss function: $L(q, d^{+}, d_1^{-}, d_2^{-}, \ldots, d_n^{-}) = -\log \frac{\exp[\phi(\eta_q(q), \eta_d(d^{+}))]}{\exp[\phi(\eta_q(q), \eta_d(d^{+}))] + \sum_{j=1}^{n} \exp[\phi(\eta_q(q), \eta_d(d_j^{-}))]}$.
(5) Non-relevant passages for a query are selected via in-batch negative sampling [Henderson et al., 2017], from examples associated with other queries in the same training batch. However, this is a technical detail and other models select negative examples in different ways. For example, ANCE [Xiong et al., 2021] searches for \u201chard negatives\u201d based on an earlier version of the document encoder itself. I have just described DPR in terms of the proposed conceptual framework outlined above. Now let\u2019s try to recast BM25 [Robertson et al., 1995] in the same framework. In fact, the mapping is pretty straightforward: The query encoder \u03b7q and the document encoder \u03b7d both generate sparse bag-of-words vector representations of dimension |V |, where V is the vocabulary of the corpus. For the output of the document encoder \u03b7d, as with any bag-of-words representation, each dimension corresponds to a term in the vocabulary, and each term is assigned a weight according to the BM25 scoring function. The query encoder \u03b7q uses a multi-hot representation, with a weight of one if the term is present in the query, and zero otherwise.3 The comparison function \u03c6 is, like DPR, de\ufb01ned in terms of the inner product. Viewed in this manner, we can clearly see that BM25 and DPR have the same functional form, parameterized by \u03b7q, \u03b7d, and \u03c6, and in fact, \u03c6 is the inner product in both cases. Explained in terms of abstractions such as interfaces in programming languages, by analogy the logical scoring model de\ufb01nes the abstract methods (\u03b7q, \u03b7d, and \u03c6) that speci\ufb01c retrieval models override with custom implementations, and here I have demonstrated that the abstraction covers both BM25 and DPR. This framework can be applied to the recent panoply of proposed dense retrieval methods in the literature, as well as nearly all families of bag-of-words exact-match models beyond BM25\u2019s probabilistic formulation, e.g., tf\u2013idf, query likelihood, divergence from randomness, etc. This conceptual framework allows us to draw a direct connection between dense retrieval and sparse retrieval as parametric variations of the same underlying logical scoring model. Finally, what about cross-encoders? Typical of this design is the monoBERT model [Nogueira and Cho, 2019, Lin et al., 2021b], where a query and a document are fed into a pretrained transformer as part of an input template, and the contextual representation of the [CLS] token is used for relevance classi\ufb01cation. Here, we can say that the comparison function \u03c6 is de\ufb01ned in terms of the transformer, and thus cross-encoders are still captured by the logical scoring model de\ufb01ned in Eq. (1). \u201cHiding\u201d transformer inference in the comparison function \u03c6 might seem like a sleight of hand, but the PreTTR reranking model proposed by MacAvaney et al. [2020] connects a \u201cfull\u201d crossencoder like monoBERT on the one hand to \u03c6-as-inner-product methods like DPR on the other hand. MacAvaney et al. began with the simple observation that query\u2013document attention prevents document representations from being computed of\ufb02ine; recall that in DPR, \u03b7d(\u00b7) does not depend on the query. Yet, it is precisely query\u2013document attention that allows cross-encoders to obtain high levels of effectiveness. PreTTR was designed with this insight: What if we limited query\u2013document attention to only the upper layers of the transformer? 
In such a design, document representations in the lower layers could be precomputed (and hence cached to accelerate inference). At one extreme end of the PreTTR design space, if all query\u2013document attention is eliminated, then we have essentially \u201ccleaved\u201d monoBERT into two disconnected networks, and the result looks quite similar to DPR, where each of the disconnected networks serves as an encoder (and all document representations can be precomputed and indexed for low-latency retrieval). At the other extreme, if no query\u2013document attention is eliminated, we have monoBERT. Thus, PreTTR provides the conceptual linkage that allows us to understand bi-encoders and cross-encoders as the two extreme cases of a single underlying design: it\u2019s all in the de\ufb01nition of the comparison function \u03c6. 3This is a slight simpli\ufb01cation; the original formulation of BM25 [Robertson et al., 1995] included a query weighting component, but this term is usually omitted in modern implementations [Kamphuis et al., 2020]. 4 \fDense Sparse Supervised DPR, ANCE DeepImpact, uniCOIL Unsupervised LSI, LDA BM25, tf\u2013idf Table 1: A taxonomy of logical scoring models. 2.2 Generalization of Logical Scoring Models Dense retrieval models such as DPR are often compared against sparse retrieval models such as BM25 in experimental evaluations, as Karpukhin et al. [2020] did in their paper. Not surprisingly, results show that dense retrieval models obtain higher effectiveness. This, however, is not a fair comparison. Dense retrieval methods represent an instance of representational learning\u2014the key here is learning. The output of the encoders are learned representations that bene\ufb01t from (large amounts of) training data under a standard supervised machine learning paradigm. In contrast, BM25 is unsupervised.4 Comparing a supervised method to an unsupervised method is fundamentally an apples-to-oranges juxtaposition; it should not be surprising that a supervised technique is more effective. As previously argued in Lin and Ma [2021], the encoders \u03b7\u00b7 should be organized along two distinct dimensions or properties: The \ufb01rst dimension contrasts dense vs. sparse vector representations for queries and documents. The second dimension distinguishes between supervised (learned) and unsupervised representations. Table 1 illustrates this taxonomy. DPR (along with nearly all dense retrieval methods today) are instances of learned dense representations. BM25 is an instance of an unsupervised sparse representation. This taxonomy immediately points to the existence of two other classes of logical scoring models. In fact, they correspond to models described in the literature that we can now categorize and unify in a single conceptual framework: Learned sparse representations The existence of learned dense representations such as DPR and unsupervised sparse representations such as BM25 suggests that there should exist a class of learned sparse representations. Learning sparse representations is by no means a new idea. If we \ufb01x the dimensions of the output representation to be the vocabulary (i.e., retaining a bag-of-words assumption), models for learned sparse representations become term weighting models\u2014that is, a supervised machine learning approach to learning term weights. 
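As a concrete (and deliberately simplified) illustration of such a term weighting model, the sketch below predicts a non-negative scalar weight per document token from its contextual embedding and pools the predictions into a sparse bag of term weights, roughly in the spirit of the models discussed next. The single linear projection, the ReLU, and the max pooling are assumptions rather than any published configuration, and the parameters w and b would be learned from relevance data such as MS MARCO.

import numpy as np

def term_weights(token_embeddings, w, b):
    # Predict a non-negative scalar weight per token from its contextual embedding.
    # token_embeddings: [num_tokens, hidden]; w: [hidden]; b: scalar (learned parameters).
    return np.maximum(token_embeddings @ w + b, 0.0)

def sparse_doc_vector(tokens, weights):
    # Pool per-token weights into a {term: weight} bag, max-pooling repeated terms.
    vec = {}
    for tok, wt in zip(tokens, weights.tolist()):
        vec[tok] = max(vec.get(tok, 0.0), float(wt))
    return vec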
The earliest example I am aware of is Gordon [1988], who applied (what we might today call) representational learning on boolean vectors of descriptors using genetic algorithms, based on a small set of relevance judgments. These experiments might today be characterized as \u201ctoy\u201d, but all the key elements of learned sparse retrieval models (quite amazingly!) are present. Another example along these lines is the work of Wilbur [2001], who attempted to learn global term weights using TREC data. A bit later, Trotman [2005] used genetic programming to discover better BM25-like scoring functions. Quite simply, there is plenty of evidence that learned sparse representations aren\u2019t new. The \ufb01rst example of learned sparse representations in the \u201cBERT era\u201d is DeepCT [Dai and Callan, 2019], which uses a transformer to learn term weights based on a regression model, with the supervision signal coming from the MS MARCO passage ranking test collection. DeepCT has an interesting \u201cquirk\u201d: in truth, it only learns the term frequency (tf) component of term weights, but still relies on the remaining parts of the BM25 scoring function via the generation of pseudodocuments. The method also has a weakness: it only assigns weights to terms that are already present in the document, which limits retrieval to exact match. More generally, if we retain a bag-of-words assumption, term weighting models cannot address the vocabulary mismatch problem (more below). Note that dense representations do not have this issue since the dimensions of the vector representation capture some latent semantic space, not speci\ufb01c terms in the corpus vocabulary, and thus are able to capture what researchers call \u201csemantic matching\u201d. The exact-match weakness of DeepCT discussed above was resolved by the DeepImpact model [Mallia et al., 2021], which brought together two key ideas: the use of document expansion to 4Leaving aside simple tuning of parameters such as k1 and b. 5 \fidentify dimensions in the sparse bag-of-words representation that should have non-zero weights and a term weighting model based on a pairwise loss between relevant and non-relevant documents with respect to a query. Expansion terms are identi\ufb01ed by doc2query\u2013T5 [Nogueira and Lin, 2019], a sequence-to-sequence model for document expansion that predicts queries for which a text would be relevant. Since DeepImpact directly predicts term weights that are then quantized, it would be more accurate to call these weights learned impacts, since query\u2013document scores are simply the sum of weights of document terms that are found in the query. Furthermore, calling these impact scores draws an explicit connection to a thread of research in information retrieval dating back two decades [Anh et al., 2001]. Many other retrieval models can also be understood as instances of learned sparse representations, which allow for different parameterizations. Lin and Ma [2021] argued that another recent model called COIL [Gao et al., 2021a] is an instance of learned sparse representations, where the scoring model assigns each term a vector \u201cweight\u201d, stored in standard inverted lists. Lin and Ma demonstrated this connection by introducing a degenerate version of COIL called uniCOIL, where the weight vectors are collapsed down into a single dimension, thus yielding scalar weights. 
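Under stated assumptions, the contrast between COIL and its degenerate variant uniCOIL can be sketched as follows: COIL stores a small vector per document token and scores matching terms with a maximum inner product, while uniCOIL collapses those vectors to scalars so that scoring reduces to summing matched term weights. Query-side weights, which uniCOIL also learns, are treated as 1.0 here for brevity.

import numpy as np

def coil_score(query_toks, query_vecs, doc_index):
    # COIL-style scoring: for each query term that also appears in the document,
    # take the best (max) inner product over that term's occurrences, then sum.
    # doc_index: {term: [np.array of shape (d,), ...]}, e.g. d = 32 for COIL-tok.
    score = 0.0
    for tok, q_vec in zip(query_toks, query_vecs):
        if tok in doc_index:
            score += max(float(np.dot(q_vec, d_vec)) for d_vec in doc_index[tok])
    return score

def unicoil_score(query_toks, doc_weights):
    # uniCOIL collapses each per-term vector to a single scalar weight, so scoring
    # reduces to an impact-style sum over document terms found in the query.
    return sum(doc_weights.get(tok, 0.0) for tok in set(query_toks))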
In this proposed conceptual framework, we might implement document expansion differently: uniCOIL originally used doc2query\u2013T5 for document expansion, but this was replaced by Zhuang and Zuccon [2021a] with an alternative model based on TILDE [Zhuang and Zuccon, 2021b]. They demonstrated that expansion using TILDE achieves comparable effectiveness on the MS MARCO passage ranking task, but with substantially lower inference costs. As another interesting variation, note that the query and document encoders need not be based on transformers (e.g., Zamani et al. [2018]), or even neural networks at all! For example, the retrieval model of Boytsov and Nyberg [2020], which exploits translation probabilities learned from query\u2013passage pairs, can be considered a (non-neural) learned sparse model. Synthesizing recent literature, there are three important observations about retrieval using learned sparse representations, which were originally noted by Lin and Ma [2021]: \u2022 Choice of basis. When contrasting learned dense representations with learned sparse representations, we see that nearly all recent proposals take advantage of transformers (Boytsov and Nyberg [2020] being a notable exception), so that aspect of the design is not a salient distinction. The critical difference is the basis of the vector representations: In nearly all current sparse approaches, the basis of the vector space remains \ufb01xed to the corpus vocabulary, i.e., they retain the bag-of-words assumption, even though in principle one could imagine sparse representations that abandon this assumption. In dense approaches, the model is given the freedom to \u201cchoose\u201d a new basis derived from transformer representations. This change in basis allows the encoder to represent the \u201cmeaning\u201d of texts in relatively small \ufb01xed-width vectors (say, 768 dimensions, compared to sparse vectors that may have millions of dimensions). This leads us to the next important observation: \u2022 Expansions for sparse representations. Without some form of expansion, learned sparse representations remain limited to (better) exact matching between queries and documents. The nature of sparse representations means that it is computationally impractical to consider non-zero weights for all elements in the vector (i.e., the vocabulary space). Thus, document expansion serves the critical role of proposing a set of candidate terms that should receive non-zero weights; since the number of candidate terms is small compared to the vocabulary size, the resulting vector remains sparse. Without some form of expansion, learned sparse representations cannot address the vocabulary mismatch problem [Furnas et al., 1987], because document terms not present in the query cannot contribute any score. This leads us to the third important observation: \u2022 Expansion and Term Weighting. The upshot of the above analysis is that retrieval methods based on learned sparse representations can be decomposed into an expansion and a term weighting component. For example, DeepCT performs no expansion and uses a regression-based scoring model. DeepImpact performs document expansion with doc2query\u2013T5, and as discussed above, the doc2query\u2013T5 model can be replaced with the TILDE document expansion model [Zhuang and Zuccon, 2021a]. Although many learned sparse models today have distinct expansion and weighting components, one can certainly imagine an integrated end-to-end model that jointly performs both. 
Nevertheless, such models will still need to tackle these distinct challenges: overcoming vocabulary mismatch and predicting term importance. 6 \fI will examine the impact of different design decisions for learned sparse representations in Section 3, drawing on recent experimental results from the literature. Unsupervised dense representations. The juxtaposition of DPR and BM25 suggests the existence of learned sparse representations. Establishing dense vs. sparse and supervised (learned) vs. unsupervised as the relevant dimensions of contrast suggests a class of unsupervised dense methods. While there is little work in this space of late, this label does describe techniques such as LSI [Deerwester et al., 1990, Atreya and Elkan, 2010] and LDA [Wei and Croft, 2006], which have been previously explored. I don\u2019t have much to say here, except that perhaps this gap might highlight a research direction worth renewed investigation. Based on this discussion, we see that all quadrants in the taxonomy of logical scoring models shown in Table 1 are populated with known examples from the literature. Furthermore, I demonstrate (hopefully, in a convincing manner) that all of these methods can be viewed as different \u03b7q, \u03b7d, and \u03c6 parameterizations of the logical scoring model captured in Eq. (1). 2.3 Logical/Physical Separation The logical scoring model in Eq. (1) describes how query\u2013document scores are to be computed with respect to an arbitrary (query, document) pair. The text retrieval problem, however, requires a system to produce a top-k ranking from an arbitrarily large collection of documents; this is the goal of what I\u2019ve called the physical retrieval model, Eq. (3). In other words, the end-to-end problem requires the execution of the logical scoring model at scale. The simplest physical retrieval model is to brute-force compute, given a query, the query\u2013document score for every document in the collection. In fact, for research experiments, this remains a common approach for dense retrieval methods, for example, using so-called \u201c\ufb02at\u201d indexes in Facebook\u2019s Faiss library [Johnson et al., 2021]. For sparse retrieval, in the early days of information retrieval prior to the development of inverted indexes and associated query evaluation algorithms (see Perry and Willett [1983]), this was also a common approach. Obviously, a brute-force scan of sizeable collections is impractical for low-latency querying, with the exception of a few specialized cases [Lempel et al., 2007, Wang and Lin, 2015]. For dense vector representations, the top-k retrieval problem is often called nearest neighbor (NN) search, and for a small set of \u03c6 comparison functions (inner products, L1 distance, and a few others), there exist ef\ufb01cient, scalable solutions. This problem has been studied for over two decades, with early solutions relying on locality-sensitive hashing [Indyk and Motwani, 1998, Gionis et al., 1999]. Recently, approaches based on hierarchical navigable small-world graphs (HNSW) [Malkov and Yashunin, 2020] have emerged as the preferred solution, and are implemented in a variety of open-source libraries. Note that these techniques solve the approximate nearest neighbor (NN) search problem, which means that the top-k they generate are not exact; see, for example, Indyk and Motwani [1998] for how this approximation is typically formalized. 
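For reference, the brute-force physical retrieval model mentioned above amounts to a few lines of NumPy over a flat (exhaustive) index; approximate techniques such as HNSW exist precisely to avoid this linear scan, at the cost of exactness. This is a minimal sketch, not a production implementation.

import numpy as np

def brute_force_top_k(query_vec, doc_matrix, k):
    # Score every document in the collection with the inner product and keep the k best.
    # doc_matrix: [num_docs, dim]; exact, but query time is linear in collection size.
    scores = doc_matrix @ query_vec                      # [num_docs]
    top = np.argpartition(-scores, k - 1)[:k]            # unordered top-k
    top = top[np.argsort(-scores[top])]                  # order the k hits by score
    return [(int(i), float(scores[i])) for i in top]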
For sparse retrieval, nearly all models adopt the inner product as the comparison function \u03c6, and the top-k retrieval problem is solved using ef\ufb01cient query evaluation algorithms (mostly document-ata-time techniques) operating over inverted indexes. There has, literally, been decades of work on ef\ufb01cient implementations; see Tonellotto et al. [2018] for a survey. With respect to the design of physical retrieval models, there are two important points worth explicitly discussing: \u2022 De\ufb01ning \u03c6 as inner products. Although the comparison function \u03c6 can be arbitrarily de\ufb01ned in the logical scoring model, for both dense and sparse representations, de\ufb01ning \u03c6 in terms of inner products (and a small number of other functions) leads to ef\ufb01cient scalable solutions for the top-k retrieval problem. That is, an inner product formulation of \u03c6 is privileged or \u201cspecial\u201d. If a researcher \ufb01xes \u03c6 to be the inner product and only rede\ufb01nes \u03b7q and \u03b7d to create a new logical scoring model, then existing software infrastructure for ef\ufb01cient top-k retrieval (implemented in various software libraries) can be reused. In the sparse retrieval space, the development of different scoring models such as tf\u2013idf, BM25, query-likelihood, divergence from randomness, etc., can be characterized as such, as well as most recent work in the dense retrieval space. In other words, ef\ufb01cient physical retrieval comes \u201cfor free\u201d. 7 \f\u2022 Tight logical/physical coupling. The current state of affairs can be characterized as follows: for sparse representations, top-k retrieval is almost always performed using inverted indexes, typically with document-at-a-time scoring. For dense representations, the same role is usually \ufb01lled by HNSW, implemented in Faiss or some other toolkit. In other words, we observe tight coupling between the logical scoring model and the physical retrieval model. Thus, dense and sparse representations use completely different \u201csoftware stacks\u201d. The separation of the physical retrieval model from the logical scoring model espoused in this paper represents an explicit attempt to move away from the tight coupling discussed above. Why can\u2019t we perform nearest neighbor search using inverted indexes? Similarly, why can\u2019t we perform BM25 retrieval using HNSW? There is no reason why not, and in fact, both have already been tried! Teo\ufb01li and Lin [2019] evaluated a number of \u201ctricks\u201d for performing top-k ranking on dense vectors with inverted indexes using the open-source Lucene search library. Tu et al. [2020] and Lin et al. [2021a] explored using HNSW for BM25 ranking. As it turns out, dense retrieval using inverted indexes doesn\u2019t work very well, and sparse retrieval using HNSW appears to be attractive only in limited settings. In terms of both ef\ufb01ciency and effectiveness, using the \u201cother\u201d physical technique to execute the logical scoring model is worse than its \u201cnatural\u201d counterpart. Thus, it might be fair to say that sparse representations have an af\ufb01nity with inverted indexes and dense representations with HNSW. While possible in principle, there doesn\u2019t seem to be a compelling case at present to adopt a decoupled approach. So what\u2019s the point? 
At a high level, tight coupling presents optimizations opportunities, while loose coupling promotes \ufb02exibility\u2014and I argue that this is exactly what\u2019s happened here. Over the course of many decades, researchers have devised numerous optimizations speci\ufb01cally targeted at ef\ufb01cient query evaluation using inverted indexes for sparse retrieval models [Tonellotto et al., 2018]. Thus, it is entirely believable (and perhaps even expected) that HNSW\u2014a much newer technique that has received far less attention\u2014cannot compete. However, it is also plausible that as HNSW receives more attention for different use cases and hence more optimization efforts over time, the performance gap closes. Explicitly promoting logical/physical separation in a loosely-coupled approach, I argue, increases the range of usage scenarios in which HNSW (and future techniques) may be applied, and thus might hasten these developments. Even more interesting to consider are representations that are not really dense, but not sparse either. For such a design, the ability to \u201cmix and match\u201d logical scoring models and physical retrieval models presents an interesting future direction. I come back to discuss this point in more detail in Section 4. The other major bene\ufb01t of the logical/physical separation is that it allows us to understand multistage ranking as practical physical realizations of expensive logical scoring models. For example, in Section 2.1, I argued that cross-encoders like monoBERT are covered by the functional form presented in Eq. (1), where the comparison function \u03c6 is de\ufb01ned in terms of transformers. Due to query\u2013document attention, the monoBERT logical scoring model can only be faithfully realized by computing the scores of all (q, d) pairs, \u2200d \u2208D. This is obviously impractical, and thus one solution to the physical retrieval problem is to adopt a multi-stage design with a \u201ccheap\u201d \ufb01rst-stage retrieval.5 It seems a bit silly to phrase as follows, given the obviousness and triviality of the observation, but de\ufb01ning \u03c6 in terms of transformers does not admit an ef\ufb01cient top-k retrieval solution over large corpora. The transformer is not one of those privileged functional forms of \u03c6 discussed above. Supporting evidence for this view comes from an experimental result presented in Lin et al. [2021b] (Section 3.2.2), who began with a standard BM25 + monoBERT reranking design [Nogueira and Cho, 2019] and successively increased the reranking depth. They performed experiments that applied monoBERT to rerank increasingly larger candidate sets from \ufb01rst-stage retrieval on the MS MARCO passage corpus. On the associated passage ranking task, Lin et al. discovered that effectiveness increases (and then plateaus) as the reranking depth increases, out to 50k hits per query. Given the resource requirements of such an experiment, the authors did not increase reranking depth any further. These results can be interpreted as follows: As the reranking depth increases, the \ufb01nal ranking becomes increasingly closer to a brute-force scan over the entire collection (and, critically, in this method, the \ufb01nal ranking score does not take into account the BM25 retrieval score). This interpretation is consistent with the arguments I made above. 
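A minimal sketch of the retrieve-then-rerank design just discussed, with both scorers left abstract: the cheap first stage only prunes candidates, the expensive scorer alone determines the final order, and growing the reranking depth approaches a brute-force scan. The function and parameter names are illustrative.

def retrieve_then_rerank(query, corpus, cheap_score, expensive_score, depth):
    # Stage 1: cheap scoring (e.g., bag-of-words) to select `depth` candidates.
    candidates = sorted(corpus, key=lambda d: cheap_score(query, d), reverse=True)[:depth]
    # Stage 2: expensive scoring (e.g., a cross-encoder); the first-stage score is discarded.
    return sorted(candidates, key=lambda d: expensive_score(query, d), reverse=True)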
To be more precise, multi-stage ranking is an approximate physical retrieval realization of the monoBERT logical scoring model, since empirically, 5Using bag-of-words (unsupervised) sparse retrieval, with \u03c6 de\ufb01ned in terms of the inner product, no less! 8 \fa smaller k in \ufb01rst-stage top-k retrieval degrades effectiveness. In the limit, if k = |D|, then we\u2019re back to a brute-force computation of query\u2013document scores for all documents in the collection. So, in summary, decoupling the logical scoring model from the physical retrieval model offers two conceptual advances: unifying retrieval with dense and sparse representations, and providing a new perspective for understanding multi-stage ranking. 2.4 Connections to Natural Language Processing Lin et al. [2021b] argued that relevance, semantic equivalence, paraphrase, entailment, and a host of other \u201csentence similarity\u201d tasks are all closely related, even though the \ufb01rst is considered an IR problem and the remainder are considered to be problems in NLP. What\u2019s the connection? Cast in terms of the conceptual framework proposed in this paper, I argue that these problems all share in the formalization of the logical scoring model, but NLP researchers usually don\u2019t care about the physical retrieval model. For example, supervised paraphrase detection is typically formalized as a \u201cpointwise\u201d estimation task of the \u201cparaphrase relation\u201d: P(Paraphrase = 1|s1, s2) \u2206 = r(s1, s2). (6) That is, the task is to induce some scoring function based on training data that provides an estimate of the likelihood that two texts (sentences in most cases) are paraphrases of each other. In the popular transformer-based Sentence-BERT model [Reimers and Gurevych, 2019], the solution is formulated in a bi-encoder design: r(s1, s2) \u2206 = \u03c6(\u03b7(s1), \u03b7(s2)), (7) which has exactly the same functional form as the logical scoring model in Eq. (1)! The main difference, I argue, is that paraphrase detection for the most part does not care where the texts come from. In other words, there isn\u2019t an explicitly de\ufb01ned physical retrieval model. In fact, comparing Sentence-BERT with DPR, we can see that although the former focuses on sentence similarity tasks and the latter on passage retrieval, the functional forms of the solutions are identical. Both are captured by the logical scoring model in Eq. (1); the de\ufb01nitions of the encoders are also quite similar, both based on BERT, but they extract the \ufb01nal representations in slightly different ways. Of course, since DPR was designed for a question answering task, the complete solution requires de\ufb01ning a physical retrieval model, which is not explicitly present in Sentence-BERT. Pursuing these connections further, note that there are usage scenarios in which a logical scoring model for paraphrase detection might require a physical retrieval model. Consider a community question answering application [Srba and Bielikova, 2016], where the task is to retrieve from a knowledge base of (question, answer) pairs the top-k questions that are the closest paraphrases of a user\u2019s question. Here, there would be few substantive differences between a solution based on Sentence-BERT and DPR, just slightly different de\ufb01nitions of the encoders. One immediate objection to this treatment is that relevance differs from semantic equivalence, paraphrase, entailment, and other sentence similarity tasks in fundamental ways. 
For example, the relations captured by sentence similarity tasks are often symmetric (with entailment being an obvious exception), i.e., r(s1, s2) = r(s2, s1), while relevance clearly is not. Furthermore, queries are typically much shorter than their relevant documents (and may not be well-formed natural language sentences), whereas for sentence similarity tasks, the inputs are usually of comparable length and represent well-formed natural language. I argue that these differences are primarily features of the annotation process for the training data and are captured in parametric variations of the logical scoring model de\ufb01ned in Eq. (1). In practical terms, these task distinctions affect implementation design choices. Is the relation we\u2019re trying to model symmetric? In that case, let\u2019s just use the same encoder for both inputs. Otherwise, having separate encoders makes more sense. Interestingly, results from the dense retrieval model ANCE [Xiong et al., 2021], which uses the same encoder for both queries and documents (despite obvious differences between the inputs), has been shown to work well empirically. Maybe these design choices aren\u2019t so important anyway? The goal of this discussion is to illustrate that the conceptual framework proposed in this paper establishes connections between information retrieval and natural language processing, with the hope 9 \fthat these connections can lead to further synergies in the future. Lin et al. [2021b] (Chapter 5) argued that until relatively recently, solutions to the text retrieval problem and sentence similarity tasks have developed in relative isolation in the IR and NLP communities, respectively, despite the wealth of connections. In fact, both communities have converged on similar solutions in terms of neural architectures (in the pre-BERT days). The proposed conceptual framework here makes these connections explicit, hopefully facilitating a two-way dialogue between the communities that will bene\ufb01t both. 2.5 Historical Connections Civilizations have grappled with the challenges of accessing stored information shortly after the invention of writing, when humankind\u2019s collective knowledge outgrew the memory of its elders. We can imagine some ancient scribe, perhaps somewhere in Mesopotamia, scrawling on clay tablets, wondering where he6 put those records from last month. Libraries and archives, of course, have existed for millennia, created precisely to tackle this challenge. In contrast, our conceptualization of information retrieval using computers is less than a century old. Although the technologies have evolved over millennia, from clay tablets to scrolls to books, and now digital information, the underlying goals have changed little. Interestingly, it is possible to apply the conceptual framework proposed in this paper to describe information retrieval in the eras that pre-dated computers. For centuries, human librarians have been assigning content descriptors to information objects (books, scienti\ufb01c articles, etc.). These descriptors (also known as \u201cindex terms\u201d) were usually selected by human subject matter experts and drawn from thesauri, \u201csubject headings\u201d, or \u201ccontrolled vocabularies\u201d\u2014that is, a prede\ufb01ned vocabulary. 
This process was known as \u201cindexing\u201d or \u201cabstracting\u201d; the original sense of the activity involved humans, and thus, an indexer was a human who performed indexing, not unlike the earliest uses of computers to refer to humans who performed computations by hand! In other words, a human indexer served the role of the document encoder \u03b7d, and the output can be viewed as a multi-hot vector where each of the dimensions represents a content descriptor. Searching required the assistance of librarians who \u201cinterviewed\u201d the information seeker to understand the parameters of the request, to translate the information need into the same representational space of these content descriptors. Thus, librarians served the role of the query encoder \u03b7q. What about \u03c6? Since the query and document representations are best characterized as multi-hot vectors, representation matching occurs in a boolean fashion. In fact, the logical/physical separation applies to this human-mediated approach as well! To \u201cexecute\u201d retrieval in the simplest case of one-hot representations of content descriptors, the librarian consults a guide that maps these content descriptors into physical shelf locations, and then walks with the information seeker directly over to that location. More sophisticated physical retrieval models include the use of card catalogues.7 In the early days of computing, \u03c6 was implemented via the processing of punch cards,8 each of which encoded the representation of an information object (i.e., the output of the document encoder \u03b7d). Thus, as a bonus, the conceptual framework proposed in this paper can help us understand information retrieval through the ages, even prior to the advent of computing. 3 Experimental Results We can apply the conceptual framework proposed in this paper to organize various dense and sparse retrieval methods that have been proposed in the literature. This structure can facilitate comparisons across different classes of methods, and analyzing models in a common framework can perhaps help us better draw generalizations. Table 2 shows the effectiveness of various models on the development queries of the MS MARCO passage ranking test collection [Bajaj et al., 2018], which has emerged in recent years as the most prominent dataset for training and benchmarking retrieval models. As a baseline, row (1) shows the effectiveness of BM25, which can be characterized as an unsupervised sparse retrieval method. Learned sparse retrieval methods are shown in the second main block of Table 2, from row (2) to row (8c): per the discussion in Section 2.3, I break out term weighting and 6As yes, very likely a male. 7Millennials and even younger readers ask, \u201cWhat are those?\u201d 8Anyone other than boomers asks, \u201cWhat are those?\u201d 10 \fUnsupervised Sparse Representations MRR@10 Source (1) BM25 0.184 Nogueira and Lin [2019] Learned Sparse Representations MRR@10 Source Term Weighting Expansion (2) BM25 doc2query\u2013T5 0.277 Nogueira and Lin [2019] (3) DeepCT None 0.243 Dai and Callan [2019] (4) SparTerm MLM-based 0.279 Bai et al. [2020] (5) DeepImpact doc2query\u2013T5 0.326 Mallia et al. [2021] (6a) COIL-tok (d = 32) None 0.341 Gao et al. [2021a] (6b) COIL-tok (d = 32) doc2query\u2013T5 0.361 Lin and Ma [2021] (7a) uniCOIL None 0.315 Lin and Ma [2021] (7b) uniCOIL doc2query\u2013T5 0.352 Lin and Ma [2021] (7c) uniCOIL TILDE 0.349 Zhuang and Zuccon [2021b] (8a) SparTerm/SPLADE none 0.290 Formal et al. 
[2021b] (8b) SPLADE MLM-based 0.322 Formal et al. [2021b] (8c) DistilSPLADE-max MLM-based 0.368 Formal et al. [2021a] Learned Dense Representations MRR@10 Source (9) ColBERT 0.360 Khattab and Zaharia [2020] (10) ANCE 0.330 Xiong et al. [2021] (11) DistillBERT 0.323 Hofst\u00e4tter et al. [2020] (12) RocketQA 0.370 Qu et al. [2021] (13) TAS-B 0.347 Hofst\u00e4tter et al. [2021] (14) ADORE + STAR 0.347 Zhan et al. [2021] (15) TCT-ColBERTv2 0.359 Lin et al. [2021c] Dense\u2013Sparse Hybrids MRR@10 Source (16) CLEAR 0.338 Gao et al. [2021b] (17) COIL-full 0.355 Gao et al. [2021a] (18a) TCT-ColBERTv2 (15) + BM25 (1) 0.369 Lin et al. [2021c] (18b) TCT-ColBERTv2 (15) + doc2query\u2013T5 (2) 0.375 Lin et al. [2021c] (18c) TCT-ColBERTv2 (15) + DeepImpact (5) 0.378 Lin and Ma [2021] (18d) TCT-ColBERTv2 (15) + uniCOIL (7b) 0.378 Lin and Ma [2021] Table 2: Results on the development queries of the MS MARCO passage ranking task. document expansion components. BM25 with doc2query\u2013T5 document expansions [Nogueira and Lin, 2019], row (2), can be understood as using a neural sequence-to-sequence model for expansion, but retaining the BM25 weighting scheme; thus, learning is only applied in the expansion component. DeepCT [Dai and Callan, 2019], row (3), uses a regression-based term weighting model without any expansion. SparTerm [Bai et al., 2020], row (4), uses the masked language model (MLM) layer of BERT to generate expansion terms on which term weights are learned. DeepImpact [Mallia et al., 2021], row (5), combines the use of doc2query\u2013T5 for expansion with a term weighting model trained using pairwise loss. Rows (6a) and (6b) present a contrastive condition comparing the same term weighting model\u2014COIL [Gao et al., 2021a]\u2014with and without an expansion model; adding document expansion yields a two-point gain in effectiveness. With uniCOIL [Lin and Ma, 2021], which builds on COIL, the literature reports three contrastive conditions: without expansion, row (7a), and with two different expansion methods, doc2query\u2013T5 in row (7b) and TILDE [Zhuang and Zuccon, 2021b] in row (7c). These results af\ufb01rm the importance of document expansion, but suggest that the exact choice of the model might not matter so much, at least in the uniCOIL design, since the expansion model simply provides a candidate list of terms for the term weighting model to consider during training. Finally, row group (8) reports the effectiveness of a family of models called SPLADE, v1 [Formal et al., 2021b] and v2 [Formal et al., 2021a], both of which build on SparTerm [Bai et al., 2020]. These results corroborate the importance of term expansions in learned sparse representations. In the third main block of Table 2, I summarize the effectiveness of a number of learned dense retrieval models on the development queries of the MS MARCO passage ranking test collection. 11 \fNote that ColBERT [Khattab and Zaharia, 2020] uses the more expressive MaxSim operator to compare query and document representations (more discussion in Section 4); all other models use inner products. Comparing dense vs. sparse learned representations, there does not appear to be any discernible pattern that can be identi\ufb01ed. While earlier proposals for learned sparse models under-perform learned dense models, it is likely because researchers have been investigating learned dense representations for a longer period of time. From the perspective of effectiveness, the latest dense and sparse methods appear to be on par with each other. 
The \ufb01nal block of Table 2 shows the results of dense\u2013sparse hybrids. In particular, rows (18a\u2013d) present results of the TCT-ColBERTv2 dense retrieval model [Lin et al., 2021c] with different learned sparse retrieval models using a simple linear combination of scores. The only point I wish to make here is that dense and sparse representations appear to offer complementary relevance signals, such that combining evidence from both sources yields further increases in effectiveness compared to ranking with each individually. However, it appears that hybrid fusion is less sensitive to the effectiveness of the individual models\u2014for example, DeepImpact is less effective than uniCOIL, but both achieve the same effectiveness in a fusion context, as shown in row (18c) vs. row (18d). Furthermore, fusion with doc2query\u2013T5 achieves nearly the same level of effectiveness, shown in row (18b), even though the method alone is far less effective. Overall, I believe that dense\u2013sparse hybrids represent the state of the art in single-stage retrieval models today (i.e., what can be achieved without reranking). 4 Discussion The conceptual framework described in this paper clari\ufb01es the relationship between recently proposed dense and sparse retrieval methods, and experimental results presented in the previous section begin to help us understand the impact of different design choices. Furthermore, this proposed framework suggests a number of open research questions, which provide a roadmap for future work. I discuss these below: Out-of-distribution inference In the logical scoring model, explicitly establishing a contrast between supervised (learned) vs. unsupervised representations makes it obvious why DPR is more effective than BM25. However, in a supervised machine-learning paradigm, we are immediately led to the obvious follow-up question: What happens if the trained models are applied to out-of-distribution data? Phrased differently, what is the effectiveness of learned representations in a zero-shot setting? Cast into the same parlance for comparison purposes, BM25 is always applied in a \u201czero-shot\u201d manner (although admittedly, such a statement sounds odd). In the information retrieval context, since training data typically comprise (query, relevant document) pairs, out of distribution could mean a number of different things: (1) the document encoder is fed text from a different domain, genre, register, etc. than the training documents, (2) the query encoder is fed queries that are different from the training queries, (3) the relationship between input query\u2013document pairs at inference time differs from the relationship captured in the training data (e.g., task variations), or (4) a combination of all of the above. In fact, we already know the answer, at least in part: learned representations often perform terribly in out-of-distribution settings when applied in a zero-shot manner. Evidence comes from the BEIR benchmark [Thakur et al., 2021], which aims to evaluate the effectiveness of dense retrieval models across diverse domains. Results show that, in many cases, directly applying a dense retrieval model trained on one dataset to another dataset sometimes yields effectiveness that is worse than BM25. Complementary evidence comes from Li et al. [2021], who found that for passage retrieval in question answering, training DPR on one dataset and testing on another can lead to poor results. 
In their experiments, the corpus was \ufb01xed (Wikipedia articles), but the questions are generated in different ways; the end result is that the trained encoders often generalize poorly across datasets. In contrast to BM25, which \u201cjust works\u201d regardless of the corpus and queries in a \u201czero-shot\u201d manner, learned representations may perform poorly in out-of-distribution settings. This immediately suggests one important research direction, to better cope with these issues. For example, Li et al. [2021] proposed model uncertainty fusion as a solution. The BEIR benchmark [Thakur et al., 2021] provides a resource to evaluate progress, and the latest results show that learned sparse representations are 12 \fable to outperform BM25 [Formal et al., 2021a]. At a high level, there are at least three intertwined research questions: 1. What are the different ways in which models can be applied in an out-of-distribution manner and what is the impact of each? The four ways I\u2019ve sketched above provide a starting point, but could be further re\ufb01ned with experimental support. For example, is effectiveness degradation more severe with out-of-distribution documents or queries? Can we more formally characterize \u201cout-of-distribution\u201d\u2013ness? 2. Given the answers to the above questions, how do we then detect when an input instance is out of distribution? 3. And once we identify a potentially \u201cproblematic\u201d instance, what mitigation techniques can we bring to bear? In other words, we must understand the scope of the problem, identify when the problem occurs, and then \ufb01nally mitigate the problem. Without addressing these challenges, the real-world deployment of learned representations will be hampered by their inability to generalize to arbitrary information retrieval scenarios, in the way that BM25 isn\u2019t.9 I am heartened to see that the community has already begun to explore these interesting and important research questions, but there remains much more work to be done. Quality\u2013Space\u2013Time\u2013Cost tradeoffs By situating dense and sparse retrieval models in a uni\ufb01ed conceptual framework, comparisons between different methods become more meaningful. There are four dimensions along which different retrieval models should be compared: quality (e.g., retrieval effectiveness), space (e.g., index size), time (e.g., query latency), and cost (e.g., dollars per query). Naturally, most papers today focus on output quality, but the space requirements of dense vector representations have drawn interest from researchers as well. Retrieval models that depend on dense vector representations consume a large amount of space, which often translates into large memory requirements since many approximate nearest neighbor search libraries require memory-resident index structures for ef\ufb01cient querying. For example, a minimal Lucene index in Anserini [Yang et al., 2018], suf\ufb01cient to support bag-of-words querying on the MS MARCO passage corpus (8.8M passages), takes up only around 660 MB. A comparable HNSW index with 768-dimensional vectors in Faiss occupies 42 GB (with typical parameter settings), which is many times larger. As another example, Ma et al. [2021a] reported that the size of the original DPR (\ufb02at) vector index on the Wikipedia corpus is about 61 GB,10 compared to 2.4 GB for a comparable Lucene inverted index. 
This 25\u00d7 increase in space only yields an average gain of 2.5% in top-100 accuracy across \ufb01ve datasets [Ma et al., 2021b]. While researchers have begun to explore different techniques for reducing the space requirements for dense representations, for example, via dimensionality reduction or quantization [Izacard et al., 2020, Yamada et al., 2021, Ma et al., 2021a], there is much more work to be done. I am optimistic that the community will make headway here because, as already mentioned above, the comparisons to sparse representations are \u201cnot fair\u201d because inverted indexes have bene\ufb01ted from many decades of optimizations, particularly in the coding of sparse integer sequences, whereas researchers have only begun to tackle the impractically large space requirements associated with dense retrieval models. Finally, speed (more generally, performance characterized in terms of query latency, throughput, etc.) and cost (of hardware, power consumption, amount of CO2 generated, etc.) are issues that have received comparatively little attention, but are obviously important in real-world applications. I mention these considerations in tandem because there are many examples where, holding everything else \ufb01xed, speed and cost can be traded off for each other. A simple example is GPU vs. CPU inference for retrieval models that require neural inference on queries, which must be performed at search time. Since queries are usually short, CPU inference, even with transformer models, can be tolerable, but obviously, GPU inference can reduce query latency but incur additional hardware costs. As another example, in many real-world search applications, query latency can be controlled 9Another way to say this: Suppose we\u2019re faced with a completely new retrieval task in a highly specialized and obscure domain. I think most researchers and practitioners would unequivocally suggest using BM25 as the baseline, and would be con\ufb01dent of obtaining \u201creasonable\u201d results. I don\u2019t think we have that same con\ufb01dence with any learned representations at present. 10An HNSW index suitable for low-latency querying would be even larger. 13 \fin partitioned architectures by adjusting the size of each partition (also called a shard): the smaller each partition, the lower the query latency, but at the cost of needing more hardware (and hence cost) for a given corpus size. While there have been some discussions of these issues in blog posts11 and on social media, these considerations have not attracted much attention from researchers. Moving forward, I believe that an accurate characterization of dense and sparse retrieval methods requires clearly evaluating quality\u2013space\u2013time\u2013cost tradeoffs. This to me is exciting because it provides an opportunity for collaborations between \u201cmodeling\u2013minded\u201d, \u201calgorithm\u2013minded\u201d, and \u201cef\ufb01ciency\u2013minded\u201d researchers.12 \u201cMixing and matching\u201d logical scoring models and physical retrieval models Dense and sparse representations are not discrete categories, but rather lie on a continuum with many variations. Currently, the size (in terms of the number of dimension) of (most) sparse representations equals the vocabulary size of the corpus, and dense representations typically have hundreds of dimensions (768 being a common setting). 
What if we "densify" sparse representations and "sparsify" dense representations, to yield, say, vectors that are on the order of a few thousand dimensions? We might characterize these vectors as "not really dense, but not sparse either". For such a logical scoring model, what physical retrieval model makes the most sense in terms of the different tradeoffs?

In Section 2.3, I advocated for the separation of the logical scoring model from the physical retrieval model. A loosely coupled approach provides flexibility and the ability to make progress independently on different aspects of the overall problem. Currently, there is an affinity between sparse representations and query evaluation using inverted indexes on the one hand, and dense representations and HNSW on the other. But what happens when the representations move out of their respective "sweet spots"? As we "densify" sparse representations, the performance of inverted indexes is expected to degrade. As we "sparsify" dense representations, the performance of HNSW is expected to degrade. Should we thus expect some crossover point in the middle? Perhaps for vectors that are "not really dense, but not sparse either", neither approach will work well. This suggests a need to build index structures, coupled with algorithmic innovations, for top-k retrieval on such vector representations.

I believe that this is where a clean abstraction and the ability to "mix and match" different logical scoring models with physical retrieval models will really become beneficial. We can imagine the development of different data structures and algorithms targeted to different types of representations, beyond the (basically, two) limited options we have today. Depending on the characteristics of the vector representations, for example the number of dimensions, the entropy of the values, the degree of isotropy, etc., different physical retrieval models might be appropriate. This is taking a page out of the playbook of database researchers: it is precisely the logical/physical abstraction that has enabled the development of very different types of database engines, such as row stores and column stores, for different application scenarios [Stonebraker et al., 2005]. And who knows, maybe we can even learn physical retrieval models [Idreos et al., 2019]!

Alternative comparison functions

For both sparse and dense representations, the inner product holds a privileged position as the comparison function φ because efficient solutions already exist for the top-k retrieval problem. As I already explained in Section 2.3, fixing φ to be the inner product allows a researcher to focus on the logical scoring model in isolation (notwithstanding the issues discussed above). This is a good compromise because limiting φ to be the inner product still leaves open the entire space of neural architectures for designing the encoders, and indeed, most dense retrieval research operates under this constraint. The framework does not, however, preclude alternative definitions of φ; rather, it just means that a "custom" comparison function may need its own dedicated physical retrieval model (unless, that is, we solve the challenges discussed above).
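Stated as code, the division of labor looks something like the sketch below (illustrative only; the vectors and comparison functions are stand-ins): the logical scoring model is well defined for any φ, while only privileged choices of φ, such as the inner product, come with efficient physical retrieval models.

```python
import numpy as np
from typing import Callable

def top_k(query_vec: np.ndarray,
          doc_vecs: list[np.ndarray],
          phi: Callable[[np.ndarray, np.ndarray], float],
          k: int = 10) -> list[tuple[int, float]]:
    """Brute-force top-k retrieval under an arbitrary comparison function phi.

    Correct for any phi, but costs one phi evaluation per document; fast physical
    retrieval models (inverted indexes, HNSW, ...) exist only for privileged
    choices of phi such as the inner product.
    """
    scores = [phi(query_vec, d) for d in doc_vecs]
    order = sorted(range(len(scores)), key=scores.__getitem__, reverse=True)
    return [(i, scores[i]) for i in order[:k]]

# The privileged case: phi as the inner product.
inner_product = lambda q, d: float(q @ d)

# Tiny demonstration with random vectors; any other phi (learned or hand-crafted)
# still yields a well-defined logical scoring model -- it just may need its own
# dedicated physical retrieval model before it is usable at scale.
rng = np.random.default_rng(0)
docs = [rng.standard_normal(8).astype(np.float32) for _ in range(1000)]
query = rng.standard_normal(8).astype(np.float32)
print(top_k(query, docs, inner_product, k=3))
```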
A good example is ColBERT [Khattab and Zaharia, 2020], which introduced a comparison function called "MaxSim" that computes query–document similarity as the sum of the maximum cosine similarities between each query term and the "best" matching document term; cf. Kusner et al. [2015]. To efficiently compute top-k rankings in terms of MaxSim, the authors first built an index for approximate nearest neighbor search over all tokens in the document collection, where each token retains a pointer back to its source document. Retrieval is performed by first fetching candidate documents using this index (by following the pointers) and then computing MaxSim for all query–document candidates. In other words, the authors presented a two-stage physical retrieval model specifically for their novel comparison function.

In fact, ColBERT offers a good example where many of the discussion threads above come together. Khattab and Zaharia described a design where the logical scoring model and the physical retrieval model are tightly coupled. Separating the two might accelerate future advances by enabling independent progress. On the one hand, researchers could rely on MaxSim as φ and explore different query or document encoders without worrying about retrieval efficiency. On the other hand, another group of researchers could focus on optimizing MaxSim calculations over large document collections without worrying about whether such optimizations would be useful. In this way, MaxSim might gain a "privileged" status, alongside the inner product, in the selection of the comparison function φ for retrieval model design.

In addition, ColBERT provides an illustrative case study for the need to characterize quality–space–time–cost tradeoffs in order to compare retrieval models in a "fair" manner. Khattab and Zaharia presented their innovation as a model that is just as effective as a retrieve-then-rerank approach using BERT-based cross-encoders [Nogueira and Cho, 2019], but substantially faster. This, however, comes at the cost of huge index sizes: 154 GB for the MS MARCO passage corpus (compared to 660 MB for an inverted index). While the authors did discuss this limitation, when all four dimensions of evaluation are considered (quality, space, time, and cost), it is difficult to see ColBERT as a practical solution for real-world problems.

Multi-stage ranking as physical optimizations

In Section 2.3, I argued that multi-stage ranking architectures are simply practical implementations of expensive logical scoring models (based on brute-force scans). Here, I elaborate on this observation, which also bolsters the case for logical/physical separation. Any multi-stage ranking pipeline where the scores from each stage are additive can be converted into the functional form of Eq. (1) by "composing" the models at each stage (including first-stage retrieval). In a ranking pipeline where the later stages do not incorporate evidence from the earlier stages (that is, where the stages are used only to reduce the candidates under consideration), such as BM25 + monoBERT [Nogueira and Cho, 2019], the score of the final reranking stage is the logical scoring model.
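A sketch of this "composition" view follows; the stage scorers and weights are placeholders rather than any particular system, and the brute-force ranker is included only as the reference semantics.

```python
# "Composing" an additive multi-stage pipeline into a single logical scoring model.
# stage_scorers and weights are placeholders (e.g., a lexical score and a neural
# reranker score); they do not correspond to any specific system.

def composed_score(query, doc, stage_scorers, weights):
    """Logical scoring model: a weighted sum of per-stage scores."""
    return sum(w * scorer(query, doc) for scorer, w in zip(stage_scorers, weights))

def rank_brute_force(query, collection, stage_scorers, weights, k=10):
    """Reference semantics: score every document in the collection, take the top k."""
    scored = [(doc_id, composed_score(query, doc, stage_scorers, weights))
              for doc_id, doc in collection.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:k]

# A candidate-generation-plus-rerank pipeline (say, a lexical first stage followed
# by a neural reranker) is then a physical optimization of rank_brute_force: the
# cheap first stage prunes the collection so that expensive stages score only a
# small candidate set, ideally without changing the top-k result.
```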
In either case, top-k retrieval can be performed using a brute-force scan through the entire document collection based on the logical scoring model. Thus, multi-stage pipelines can be viewed as hand-crafted optimizations in the physical retrieval model.

In other words, with a clean logical/physical separation, researchers and practitioners can focus on developing the logical scoring model, leaving the realization of the physical retrieval model as a separate exercise. In the tightly coupled architectures of today, the logical scoring model and the physical retrieval model must be co-designed to produce the "right" multi-stage pipeline. This is inelegant, as designers are mixing elements from different levels of abstraction: what to compute with how to compute it. However, this conceptual tangle need not be the only approach. For example, we might build automated processes that "compile" the specification of the logical scoring model into a physical realization, subject to declaratively specified constraints. These hypothetical logical-to-physical compilers could even be machine learned! The work of Wang et al. [2011] provides an example of how this could be accomplished in the context of feature-based learning to rank; perhaps these ideas from a decade ago could be dusted off for a fresh take?

Unsupervised dense representations

The conceptual framework proposed in this paper characterizes logical scoring models along two dimensions. The four-quadrant taxonomy illustrated in Table 1 highlights a space that has not received much attention of late. I don't have much to say here, except that perhaps this gap might suggest a research direction worth renewed investigation.

Other odds and ends

If the logical scoring model and the physical retrieval model represent abstractions that are helpful in advancing IR research, what other such abstractions might exist? And a related question: so far, the conceptual framework proposed here has been applied primarily to deepen our understanding of ad hoc retrieval. What, if any, implications does this framework hold for other areas of information seeking beyond the design of retrieval models?

Addressing the first question: an important abstraction that immediately comes to mind, although hardly novel, is that of a token stream as the input to an inverted indexer (and correspondingly, to a query processor prior to retrieval). That is, an inverted indexer merely requires a stream of discrete tokens on which to operate, and is agnostic with respect to how the tokens are generated from arbitrary natural language text. In the canonical case, these tokens correspond to "words" in the language (however defined) after some amount of analysis (e.g., stemming), but researchers have discovered that, at least for some languages, character n-grams (which have no basis in linguistic reality) work well [Foo and Li, 2004, McNamee and Mayfield, 2004]. Much along the same lines, Xue et al. [2021] recently explored pretrained neural sequence-to-sequence models based on byte sequences and showed that such models are competitive with token-based models, but more robust to noisy inputs. Perhaps it is worth reconsidering the information retrieval tokenization pipeline in light of these latest results?
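To illustrate how little the indexer needs to know about tokenization, here is a minimal sketch of an inverted indexer that accepts any token-stream generator; the tokenizers are deliberately naive and purely illustrative.

```python
from collections import defaultdict
from typing import Callable, Iterable

# The inverted indexer only needs a stream of discrete tokens per document; it is
# agnostic to whether those tokens are words, stems, character n-grams, or bytes.

def build_inverted_index(docs: dict[str, str],
                         tokenize: Callable[[str], Iterable[str]]) -> dict[str, set[str]]:
    index: dict[str, set[str]] = defaultdict(set)
    for doc_id, text in docs.items():
        for token in tokenize(text):
            index[token].add(doc_id)
    return index

# Two interchangeable token-stream generators (deliberately naive):
def word_tokens(text: str) -> list[str]:
    return text.lower().split()

def char_ngrams(text: str, n: int = 4) -> list[str]:
    s = text.lower()
    return [s[i:i + n] for i in range(max(len(s) - n + 1, 1))]

docs = {"d1": "neural retrieval models", "d2": "retrieval with inverted indexes"}
word_index = build_inverted_index(docs, word_tokens)
ngram_index = build_inverted_index(docs, char_ngrams)
```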
Addressing the second question, on whether the conceptual framework presented in this paper has anything meaningful to say about other areas of information retrieval and information seeking more broadly: I can think of two answers.

First, it has long been observed that information filtering and ad hoc retrieval are intimately related, what Belkin and Croft [1992] have called "two sides of the same coin". At a high level, ad hoc retrieval is concerned with a stream of queries posed against a (relatively) static collection of documents, whereas information filtering is concerned with a stream of documents posed against a (relatively) static collection of queries. Filtering has a long history that dates back to the 1960s [Housman and Kaskela, 1970], which evolved into the TREC Filtering Tracks [Lewis, 1995] in the late 1990s and the general research program known as Topic Detection and Tracking (TDT) [Allan, 2002] in the early 2000s. The most recent incarnations of filtering include the TREC Incident Streams Tracks [Buntain et al., 2020], which aim to automatically process social media streams during emergency situations to triage information and aid requests for emergency service operators. This evaluation series has its roots in the TREC Real-Time Summarization Tracks [Lin et al., 2016], where systems automatically monitor streams of social media posts to keep users up to date on topics of interest.

I believe that a more succinct way to convey the connections between filtering and ad hoc retrieval (cf. Belkin and Croft [1992]) is to say that they share logical scoring models (at least in terms of Eq. (1), even though the relevance criteria are often different) but may require different physical retrieval models. Although information filtering can, in fact, be physically implemented via inverted indexes, such a realization can be somewhat awkward (a side effect of the "tight coupling" approach). A clean separation between the logical and the physical can help researchers focus on representations and scoring models without artificial constraints on execution. More clearly defined sub-problems, I believe, will lead to accelerated progress in the field, with all the advantages I've already discussed above.

Second, I believe that the conceptual framework proposed here can capture relevance feedback (pseudo- or based on human judgments), and more generally, interactive retrieval. The logical scoring model as currently defined computes the query representation from the query itself, i.e., η_q(q). However, this formalism can be extended to take into account previous queries in a session, e.g., η_q(q_i; q