Dataset Viewer
Auto-converted to Parquet

Column        Type    Range
url           string  lengths 33 to 33
published     date    2024-04-25 to 2024-05-09
title         string  lengths 16 to 144
abstract      string  lengths 450 to 2.9k
gt            string  lengths 450 to 2.9k
primary_cat   string  48 classes
paper_cat     string  20 classes
updated       date    2024-04-25 to 2024-05-09
main_content  string  lengths 5.61k to 156k
authors       string  lengths 8 to 563
label         string  1 class
cats          list    lengths 1 to 6
http://arxiv.org/abs/2404.16260v1
2024-04-25 00:00:00
OmniSearchSage: Multi-Task Multi-Entity Embeddings for Pinterest Search
In this paper, we present OmniSearchSage, a versatile and scalable system for understanding search queries, pins, and products for Pinterest search. We jointly learn a unified query embedding coupled with pin and product embeddings, leading to an improvement of $>8\%$ relevance, $>7\%$ engagement, and $>5\%$ ads CTR in Pinterest's production search system. The main contributors to these gains are improved content understanding, better multi-task learning, and real-time serving. We enrich our entity representations using diverse text derived from image captions from a generative LLM, historical engagement, and user-curated boards. Our multitask learning setup produces a single search query embedding in the same space as pin and product embeddings and compatible with pre-existing pin and product embeddings. We show the value of each feature through ablation studies, and show the effectiveness of a unified model compared to standalone counterparts. Finally, we share how these embeddings have been deployed across the Pinterest search stack, from retrieval to ranking, scaling to serve $300k$ requests per second at low latency. Our implementation of this work is available at https://github.com/pinterest/atg-research/tree/main/omnisearchsage.
cs.IR
LLM Fairness
2024-04-25 00:00:00
INTRODUCTION

Pinterest's mission is to bring everyone the inspiration to create a life they love. Search is one of the key surfaces on Pinterest where users seek inspiration spanning a wide range of interests, such as decorating their homes, planning weddings, or keeping up with the latest trends in beauty and fashion. In order to enhance the search experience, modern search systems aim to incorporate various types of content such as web documents, news, shopping items, videos, and more. Similarly, Pinterest's search feed encompasses a diverse range of content, including pins, shopping items, video pins, and related queries. To construct an inspiring feed for each of the more than 6 billion searches per month on Pinterest, we must uncover relevant content from billions of pins and products. We must also find relevant queries to help users refine their queries and navigate their search journey. As an additional challenge, Pinterest search is global and multilingual, with searchers using more than 45 languages to find inspirational content.

Embeddings are useful building blocks in recommendation systems, especially search, where natural language understanding is key [11, 23, 24]. Embeddings can power retrieval use cases via approximate nearest neighbor (ANN) search [14, 22], enable detailed content and query understanding in ranking models without the overhead of processing raw data, and serve as a strong base to learn in low-data use cases [31]. Despite their utility, embeddings come with their own challenges: if we learn a separate embedding for every use case, there is an explosion of potentially expensive models that must be inferred on every request and used in downstream models. This may also lead to suboptimal recommendation quality, since some use cases may not have enough labels to learn an optimal representation. In practice, it can also entail additional maintenance costs and technical debt when upgrading to new versions of embeddings in certain applications, as some data may have been collected over the course of months or years.

Through rigorous offline experimentation, we show the impact of our key decisions in building embeddings for web-scale search at Pinterest:
• Pin and product representations can be substantially enriched using diverse text derived from image captions from a generative LLM, historical engagement, and user-curated boards.
• A single query embedding can be used to retrieve queries, products, and pins with nearly the same effectiveness as task-specific embeddings.
• A single query embedding can learn compatibility with multiple pre-existing embeddings and learned entity embeddings, and perform well when compared across tasks.

OmniSearchSage has been deployed at Pinterest and is an integral component of the search stack. It powers embedding-based retrieval for standard and product pins, queries, and ads. It is also one of the most important features in multi-stage ranking models and various query classification models. These gains all arise despite the existence of other features enabling pin and product understanding, which highlights the importance of optimizing embeddings end-to-end for search.

2 RELATED WORK

Our work to build multi-task, multi-entity embeddings for search draws upon broad areas of work. Our representation of pins and products extends existing work on multi-modal learning and two-tower models for search retrieval.
Two-tower models have been extensively applied in search and recommendation systems as an efficient way to retrieve results whose relevance to the search query goes beyond pure text matching. In OmniSearchSage, we demonstrate that the embeddings generated by these models can also serve as features in ranking and relevance models. Additionally, we offer a brief examination of specific embeddings within the Pinterest ecosystem.

2.1 Model-based Search Retrieval

Historically, search systems have been powered by two stages: token-based matching, or candidate generation, and then scoring with a complex model. These have drawbacks, especially when users make complex queries or content is not primarily textual. This has led to the exploration of two-tower models, which encode a query into a single embedding or a small set of embeddings, and then use those to retrieve relevant documents with approximate or exact nearest neighbor search [5, 11, 18, 20, 21, 24, 40]. Two natural topics in learning embeddings for search are document representation and query representation. Depending on the learning objective, the query representation could be personalized, or it could be a pure text embedding model. Many architectures for query embeddings in industry have been proposed, based on simple CNNs [12], bag-of-words models [11, 23], transformers [19], and more, but they share a basic structure involving query understanding and sometimes context understanding. Document representation is also a major challenge. The text associated directly with an item is popular as a key feature, but, depending on the task, other sources have been found to provide great value, including queries through which other users have engaged with a given item [5, 24, 25] and image content embeddings [19].

2.2 Multi-task, multi-modal, and multi-entity embeddings

Learning embeddings is not exclusive to the realm of recommendation systems and has been studied extensively [4, 6, 29, 30]. Multi-task learning is a technique commonly utilized in ranking models to optimize for multiple objectives concurrently, aiming for enhanced performance or more efficient information sharing [33, 41]. A less frequently encountered approach involves the joint learning of embeddings for more than two entities. Though this methodology is sometimes implemented in graph learning scenarios, it can also be viewed as an extension of multi-task learning [39]. Multi-modal embeddings are of substantial interest in industry since the majority of web content is multi-modal, typically including both text and images [18, 19, 38]. One can take embeddings or raw data from each modality as inputs and merge them at any stage of the model. Early-stage fusion can pose computational hurdles; therefore, where performance is comparable, using embeddings instead of raw data is generally preferred [38].

2.3 Embeddings at Pinterest

PinSage [37] is a scalable GNN-based embedding representing pins. It is based on the GraphSage GCN algorithm [10], sampling neighborhoods with personalized PageRank to augment pin understanding, instead of simple heuristics like n-hop neighbors. It aggregates some basic visual [2] and text information into a single dense representation, and is a critical feature in many models.
To represent products, we have an embedding, ItemSage [1], which aggregates raw data about products, including metadata from product pages and potentially many images of the product. ItemSage is trained for compatibility with PinSage and with the search query embedding preceding OmniSearchSage, meaning that the distance between ItemSage and these two embeddings can be used for retrieving or ranking content [27].

3 METHOD

3.1 Problem Formulation

In order to enhance the search experience, modern search systems aim to incorporate various types of content such as web documents, news, shopping items, videos, and more. Similarly, Pinterest's search feed encompasses a diverse range of content, including pins, shopping items, video pins, and related queries. Training separate query embedding models for each content type and its representation proves to be resource-intensive and inefficient. To address this issue, we introduce OmniSearchSage, which offers a unified query embedding model that jointly trains query embeddings for query-query, query-pin, and query-product retrieval and ranking. Another requirement in production systems is compatibility with existing embeddings, which is essential for purposes such as cost-efficiency and simplified migration. Hence, we also train the query embeddings to be compatible with the corresponding pre-existing embeddings for the entities. As a side effect, we also get compatibility with some embeddings due to the triangle inequality property inherent to cosine similarity.

3.2 Enriching Entity Representations

On Pinterest, each pin or product is associated with an image and a title, along with an optional text (known as the description) and a link. Beyond these typical attributes, products may carry additional metadata, such as brand information, color description, and more. Document expansion techniques have been empirically demonstrated to significantly enhance the performance of not just token-based but also embedding-based search retrieval systems [8, 25, 26, 28, 34]. Hence, in OmniSearchSage, we enrich our entity representations using diverse text derived from image captions from a generative LLM, historical engagement, and user-curated boards, as described below. In the dataset, 71% of pins and products feature a title or description, 91% include non-empty board titles, and 65% contain non-empty engaged queries. Synthetic GenAI captions are generated for all pins and products, ensuring full coverage. Section 4.3.2 discusses the importance of each of these enrichments.

3.2.1 Synthetic GenAI Captions. On our platform, a substantial volume of pins (about 30%) lack associated titles or descriptions, or have noisy and/or irrelevant ones. We address this issue by employing an off-the-shelf image captioning model, BLIP [17], to generate synthetic descriptions for these images. To assess the quality of these synthetically generated descriptions, we enlisted human evaluators to judge their relevance and quality. For a robust assessment, three distinct ratings were collected for each image within a sample of 10k images, curated uniformly across various broad pin categories. The results indicated that 87.84% of the generated descriptions were both relevant and of high quality, while only 1.16% were deemed irrelevant and of poor quality.
These synthetically generated descriptions serve as an added feature in our model, enriching the diversity of data associated with each entity. Despite not being directly visible to users, their addition contributes significantly to a deeper understanding of the pins' content.

3.2.2 Board Titles. On Pinterest, users explore and save pins to their personal collections, referred to as boards. Each board carries an associated title, reflecting the topic or theme of the collection. Most often, these user-crafted boards are meticulously organized, each focusing on a distinct theme or purpose. A user might, for instance, create separate boards for "Social Media Marketing" and "Graphic Design". Consequently, these board titles provide valuable, user-generated descriptors for the pins within the respective boards. We exploit this user-curated information by accumulating the titles of all boards each pin has been saved to, and we limit our selection to a maximum of 10 unique board titles for each pin/product, systematically eliminating potentially noisy or redundant titles as described next. First, each title is assigned a score influenced by two factors: its frequency of occurrence and the prevalence of its comprising words. Titles are then ranked based on a hierarchy of their score (ascending), word count (descending), and character length (descending). The resulting top 10 board titles are subsequently incorporated as a feature in our model.

Figure 1: Diagrammatic representation of OmniSearchSage's multi-entity, multi-task architecture, combining the losses L(query, query), L(query, pin), L(query, pin_c), L(query, product), and L(query, product_c) over pretrained-and-frozen and trained-from-scratch encoders.

3.2.3 Engaged Queries. When multiple users interact with a specific pin or product for a certain query within a search feed, it signifies that pin's relevance to that query. We can use these queries to expand our understanding of the pin/product. For every pin, we generate a list of queries that have attracted user engagements, along with the counts and types of such engagements. This list of queries is then sorted using a function based on the count for each type of engagement. We use the top 20 queries from these sorted lists as a feature in our model. Through experimentation with diverse time windows of query logs for feature creation, we discovered that larger windows yield superior performance. Consequently, we have opted for a two-year window for feature calculation. However, the complexity of computing this from scratch every time presents a challenge. To mitigate this, we deploy an incremental approach: every n days, we examine new query logs, create a list of queries for every pin, and then blend it with the previously existing top 20 queries, thereby updating the latest value of the feature.
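The board-title and engaged-query enrichments above amount to two small ranking heuristics. The following Python sketch illustrates them under stated assumptions: the scoring weights, the per-engagement-type weights, and the helper names (top_board_titles, top_engaged_queries) are illustrative and not the production implementation.

```python
from collections import Counter

def top_board_titles(board_titles, k=10):
    """Select up to k board titles per pin (Section 3.2.2).

    Titles are scored by their frequency and the prevalence of their words
    (so generic, redundant titles score high), then ranked by score
    (ascending), word count (descending), and character length (descending).
    The exact weighting of the two score factors is an assumption.
    """
    title_freq = Counter(board_titles)
    word_freq = Counter(w for t in board_titles for w in t.lower().split())

    def score(title):
        words = title.lower().split()
        return title_freq[title] + sum(word_freq[w] for w in words) / max(len(words), 1)

    ranked = sorted(set(board_titles), key=lambda t: (score(t), -len(t.split()), -len(t)))
    return ranked[:k]

def top_engaged_queries(engagements, type_weights, k=20):
    """Keep the top-k engaged queries per pin (Section 3.2.3).

    `engagements` maps query -> {engagement_type: count}; `type_weights`
    assigns a weight per engagement type (assumed, e.g. repin vs. longclick).
    """
    scored = {
        q: sum(type_weights.get(t, 0.0) * c for t, c in counts.items())
        for q, counts in engagements.items()
    }
    return [q for q, _ in sorted(scored.items(), key=lambda kv: -kv[1])[:k]]
```

In the incremental setup described above, top_engaged_queries would be re-run every n days on the union of the newly observed queries and the previously stored top 20 queries.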
3.3 Entity Features

The features we incorporate include PinSage [37] and unified image embeddings [2] to capture the essence of each pin. Additionally, for product pins, we use ItemSage [1], given its capability in effectively representing product-related pins. Text-based features such as the title and description of each pin are also integral to our feature set. Furthermore, we augment the text associated with each pin with synthetic captions, board titles, and engaged queries, as outlined earlier. By integrating all these features, we attain a comprehensive and multi-dimensional representation of each pin, facilitating enhanced representation learning.

3.4 Encoders

In our work, we consider three entity types: pin, product, and query. Our model consists of an encoder for queries, a unified learned encoder for both pins and products, and dedicated compatibility encoders for pins and products, respectively.

3.4.1 Query Encoder. The query encoder in our model (depicted in Figure 2) is based on a multilingual version of DistilBERT (distilbert-base-multilingual-cased, https://huggingface.co/distilbert-base-multilingual-cased) [32]. This choice facilitates efficient handling of queries across a variety of languages. The encoder utilizes the output from the last layer corresponding to the CLS token and thereafter projects it to a 256-dimensional vector space. Post projection, we apply L2 normalization on the 256-dimensional vectors to obtain the final embedding. This normalization greatly simplifies the calculation of cosine distance in downstream applications, allowing for a straightforward dot product operation.

Figure 2: Overview of the query encoder architecture. The encoder takes the output from the last layer associated with the CLS token, projects it onto a 256-dimensional vector space, and finally L2-normalizes the output to generate the final embedding.
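A minimal PyTorch sketch of the query encoder just described: multilingual DistilBERT, the CLS output, a 256-dimensional projection, and L2 normalization. The class name and the maximum sequence length are illustrative assumptions; everything else follows the description above.

```python
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

class QueryEncoder(torch.nn.Module):
    """Multilingual DistilBERT -> CLS token -> 256-d projection -> L2 norm."""

    def __init__(self, model_name="distilbert-base-multilingual-cased", dim=256):
        super().__init__()
        self.tokenizer = AutoTokenizer.from_pretrained(model_name)
        self.backbone = AutoModel.from_pretrained(model_name)
        self.proj = torch.nn.Linear(self.backbone.config.hidden_size, dim)

    def forward(self, queries):
        batch = self.tokenizer(
            queries, padding=True, truncation=True, max_length=64,
            return_tensors="pt",
        )
        hidden = self.backbone(**batch).last_hidden_state   # (B, T, hidden)
        cls = hidden[:, 0]                                   # CLS token
        return F.normalize(self.proj(cls), dim=-1)           # unit-norm 256-d
```

Because the output is unit-normalized, cosine similarity with any entity embedding reduces to a plain dot product, which is the property the text relies on for downstream serving.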
3.4.2 Unified Pin and Product Encoder. In our model, we utilize a single unified encoder for both pins and products (depicted in Figure 3), and this encoder is jointly trained with the query embeddings. Designed to process both textual features and continuous features, it plays a crucial role in learning the respective embeddings of pins and products. In cases where certain features are defined for one entity but not the other, we substitute them with zero, ensuring a consistent data input. As detailed in Section 3.5, we utilize in-batch negatives to train our model. Prior research [9, 15, 16, 29] has empirically demonstrated that larger batches with a substantial number of negatives help in learning better representations. Therefore, to accommodate a larger batch size in GPU memory, we employ a simple pin encoder model. The following encoder design was determined through numerous ablation studies, which allowed us to select the most effective configuration for each of the components while considering both training and serving efficiency.

The encoder uses three distinct tokenizers to process the textual features associated with a pin [1, 13, 23]: (i) a word unigram tokenizer with a vocabulary of the 200k most frequent word unigrams, (ii) a word bigram tokenizer with a vocabulary of the 1M most frequent word bigrams, and (iii) a character trigram tokenizer with a vocabulary of 64k character trigrams. The tokens are mapped to their respective IDs in the vocabulary V, which combines all three tokenizers; any token that falls outside this combined vocabulary is discarded. The use of these combined tokenizers effectively helps in capturing the semantics of the various texts associated with a pin/product. For token embedding learning, we use a 2-hash hash embedding table of size 100,000 [1, 35]. Each token's ID $i$ is hashed into two places within the embedding table using hash functions $h_1(i)$ and $h_2(i)$. The ultimate embedding of a token with ID $i$ is a weighted interpolation of the two locations, $W_{1i} h_1(i) + W_{2i} h_2(i)$, where $W_1$ and $W_2$ are learned weight vectors of size $|V|$ each. The sum of all token embeddings and the continuous embedding features are concatenated and fed into a 3-layer MLP with layer sizes 1024, 1024, and 256. Following this, the output of the MLP undergoes L2 normalization, just like the query embedding.

Figure 3: Schematic of the unified encoder model for pins and products, illustrating the use of three different tokenizers (word unigram, word bigram, character trigram) over the pin text, board titles, engaged queries, and synthetic GenAI captions, a hash embedding table, and an MLP layer for combining text embeddings with other continuous features (image encoder, PinSage, ItemSage).

3.4.3 Compatibility Encoders. In our model, we employ two discrete compatibility encoders, individually dedicated to pins and products. These encoders leverage the pre-existing pin and product embeddings, represented by PinSage for pins and ItemSage for products. This allows the model to learn query embeddings that align effectively with the PinSage and ItemSage embeddings.

3.5 Multi-Task Sampled Softmax Loss

Taking inspiration from ItemSage [1], the problem of learning query and entity embeddings is treated as an extreme classification problem, with the aim of predicting entities relevant to a given query [7]. We employ the sampled softmax loss with logQ correction [36] to train our model. We use multitasking to jointly train entity embeddings and to train the query embeddings to be compatible with existing entity embeddings. Formally, we define a task $T \in \mathcal{T}$ as a tuple of a dataset of query-entity pairs $D = \{(x, y)_i\}$ and an entity encoder $E$, i.e., $T \triangleq \{D, E\}$. For a batch of data $B = \{(x, y)_i\} \subset D$ for task $T \in \mathcal{T}$, the aim is to learn a query embedding $q_{x_i}$ and an entity embedding $p_{y_i} = E(y_i)$ such that the cosine similarity of the embeddings, $q_{x_i} \cdot p_{y_i}$, is maximized. This is achieved by minimizing the softmax loss

$L_T = -\frac{1}{|B|} \sum_{i=1}^{|B|} \log \frac{\exp(q_{x_i} \cdot p_{y_i})}{\sum_{y \in C} \exp(q_{x_i} \cdot p_y)},$  (1)

where $C$ is the catalog of all entities of the same type as $y_i$. To ensure tractability, the normalization term in the denominator is approximated using a sample of the catalog $C$: we use (i) the positives in the batch, $B_p = \{y_i \mid (x_i, y_i) \in B\}$, and (ii) a random sample of the catalog, $C'$. To rectify any bias introduced through sampling, we utilize the logQ correction technique.
This method operates by deducting the sampling probability of a negative, $\log Q(y \mid x_i)$, from the corresponding logit. This is crucial to ensure that popular entities are not disproportionately penalized. The per-task loss combines an in-batch-negatives term and a random-negatives term:

$L_T = L_T^{S_{bn}} + L_T^{S_{rn}}$  (2)

$L_T^{S_{bn}} = -\frac{1}{|B|} \sum_{i=1}^{|B|} \log \frac{\exp(q_{x_i} \cdot p_{y_i} - \log Q(y_i \mid x_i))}{\sum_{z \in B_p} \exp(q_{x_i} \cdot p_z - \log Q(z \mid x_i))}$  (3)

$L_T^{S_{rn}} = -\frac{1}{|B|} \sum_{i=1}^{|B|} \log \frac{\exp(q_{x_i} \cdot p_{y_i} - \log Q(y_i \mid x_i))}{\sum_{y \in C'} \exp(q_{x_i} \cdot p_y - \log Q(y \mid x_i))}$  (4)

$= -\frac{1}{|B|} \sum_{i=1}^{|B|} \log \frac{\exp(q_{x_i} \cdot p_{y_i} - \log Q(y_i \mid x_i))}{\sum_{y \in C'} \exp(q_{x_i} \cdot p_y - \log Q_n(y))},$  (5)

since $y$ is sampled independently of $x_i$. The total loss is defined as the sum of all individual task losses,

$L = \sum_{T \in \mathcal{T}} L_T.$  (6)

We mix different tasks together in one batch and control the influence of each task on the model through this composition. To increase training efficiency, we share the pairs in the batch across all tasks with the same dataset.
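For reference, a minimal PyTorch sketch of this per-task loss (Equations 2-5) under stated assumptions: the function name is illustrative, the sampling log-probabilities are assumed to be provided per entity, and the sketch omits the multi-task batch mixing described above.

```python
import torch

def sampled_softmax_with_logq(q, p_pos, p_rand, logq_batch, logq_rand):
    """One task's sampled softmax loss with logQ correction (Eqs. 2-5).

    q:          (B, D) L2-normalized query embeddings
    p_pos:      (B, D) embeddings of the engaged (positive) entities; the other
                positives in the batch act as in-batch negatives (B_p)
    p_rand:     (N, D) embeddings of a random catalog sample C'
    logq_batch: (B,)   log sampling probability of each in-batch entity
    logq_rand:  (N,)   log sampling probability of each random negative
    """
    # In-batch term (Eq. 3): softmax over the batch's positives, logQ-corrected.
    logits_bn = q @ p_pos.t() - logq_batch.unsqueeze(0)              # (B, B)
    loss_bn = -(logits_bn.diagonal() - torch.logsumexp(logits_bn, dim=1)).mean()

    # Random-negatives term (Eqs. 4-5): positive vs. sampled catalog entities.
    pos_logit = (q * p_pos).sum(-1) - logq_batch                     # (B,)
    neg_logits = q @ p_rand.t() - logq_rand.unsqueeze(0)             # (B, N)
    loss_rn = -(pos_logit - torch.logsumexp(neg_logits, dim=1)).mean()

    return loss_bn + loss_rn                                         # Eq. 2
```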
3.6 Model Serving

OmniSearchSage query embeddings are integral to numerous applications in the search stack, which requires us to maintain a strict latency budget. For real-time inference with minimal latency, our query encoder is served on GPUs by our in-house C++-based machine learning model server, the Scorpion Model Server (SMS). Since the query distribution follows Zipf's law, we use a cache-based system to curb costs and shorten response times: the query embedding server first checks whether a query is cached and falls back to the query inference server only when it is absent from the cache. After testing various cache time-to-live (TTL) periods, a TTL of 30 days was established as optimal. The system handles 300k requests per second while maintaining a median (p50) latency of just 3 ms and a 90th-percentile (p90) latency of 20 ms. The cache-based system reduces the load on the inference server to approximately 500 QPS, leading to substantial cost and latency reductions. The pin and product embeddings are derived offline on a daily basis through batch inference on GPUs and are subsequently published to our signal store for consumption.

Table 1: Summary of the different training datasets.
  Pair           Source        Actions                Size
  Query-Pin      Query logs    repin, longclick       1.5B
  Query-Product  Query logs    repin, longclick       136M
  Query-Product  Offsite logs  add-to-cart, checkout  2.5M
  Query-Query    Query logs    click                  195M

4 EXPERIMENTS

4.1 Dataset

Our dataset is primarily constructed by extracting unique query-entity pairs from one year of search query logs. We consider various forms of engagement on the platform when extracting these pairs, including 'saves' (when a user saves a pin to a board) and 'long clicks' (instances where users browse the linked page for more than 10 seconds before returning to Pinterest). For products, we enrich our dataset by incorporating offsite actions as well; thus, we also include anonymized pairs tied to significant actions like 'add to cart' and 'checkout'. A common challenge in recommendation systems is popularity bias, where certain pins are overrepresented due to their high appeal. To counteract this bias, we impose a limit on the number of times the same pin can be paired: the limit is 50 pairs for pins and 200 pairs for products (since products have lower volume and engagement). By adopting this strategy, we ensure our dataset is robust and truly representative of user activity on the platform. Our model training is further extended to encompass query-query pairs. On Pinterest, users are presented with similar query suggestions, and engagements with these recommendations are recorded in the search logs. We leverage these records, extracting such pairs from an entire year's logs, thus enriching our training dataset. A detailed breakdown of the positive labels in the dataset is provided in Table 1.

4.2 Offline Evaluation Metrics

Our evaluation of the model encompasses both user engagement data and human-labeled relevance data. Relevance is measured using human-labeled pairs of queries and pins, sampled from production traffic from four distinct countries: US, UK, France, and Germany. This strategy serves to assess the model's performance in handling multiple languages and cultural contexts. Evaluation of user engagement considers a selected 7-day period. We ensure no data leakage (possible due to the inclusion of engagement features such as engaged queries) by maintaining a 15-day separation between the end of the training dataset and the beginning of the evaluation phase. We sample 80k pairs from the defined evaluation duration to represent repins and long clicks for both pins and products. Another 80k pairs, corresponding to clicks for queries, are also included for comprehensive performance evaluation.

Table 2: Comparative analysis of OmniSearchSage and the baseline SearchSage across the Pin, Product, and Query tasks.
  Metric                   SearchSage  OmniSearchSage  Gain
  Pin      Save            0.39        0.65            +67%
           Long-click      0.45        0.73            +62%
           Relevance (US)  0.25        0.45            +80%
           Relevance (UK)  0.29        0.51            +76%
           Relevance (FR)  0.23        0.43            +87%
           Relevance (DE)  0.28        0.46            +64%
  Product  Save            0.57        0.73            +28%
           Long-click      0.58        0.73            +26%
  Query    Click           0.54        0.78            +44%

The primary metric we use for evaluation is Recall@10, the likelihood that the engaged entity occurs within the top 10 entities when entities are sorted in descending order of their similarity to the query. Consider a dataset $D = \{(q_i, e_i)\}_{i=1}^{n}$, where each $(q_i, e_i)$ denotes a query-engaged-entity pair, and a random corpus $C$ with $m$ entities. Recall@10 is then the average over all queries of an indicator that equals 1 if the engaged entity $e_i$ is amongst the top 10 entities in $C$ when ranked by their dot product with the query $q_i$:

$\mathrm{Recall@10} = \frac{1}{|D|} \sum_{i=1}^{|D|} \mathbf{1}\Big[\sum_{y \in C} \mathbf{1}[q_i \cdot y > q_i \cdot e_i] < 10\Big].$

For every pin, query, and product, we employ a uniformly distributed random sample of $m = 1.5M$ entities from our corpus.
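A compact sketch of this Recall@10 computation, assuming the query, engaged-entity, and corpus embeddings are already materialized as matrices; the function name and chunk size are illustrative.

```python
import torch

def recall_at_k(queries, engaged, corpus, k=10, chunk=65536):
    """Fraction of pairs whose engaged entity ranks in the corpus top k.

    queries: (n, d) query embeddings q_i
    engaged: (n, d) embeddings of the engaged entities e_i
    corpus:  (m, d) embeddings of the random corpus C (here m = 1.5M)
    """
    pos_scores = (queries * engaged).sum(-1, keepdim=True)      # (n, 1)
    higher = torch.zeros(queries.size(0), dtype=torch.long)
    for start in range(0, corpus.size(0), chunk):                # chunk the corpus
        scores = queries @ corpus[start:start + chunk].t()       # (n, c)
        # Count corpus entities scoring strictly higher than the engaged one.
        higher += (scores > pos_scores).sum(-1)
    return (higher < k).float().mean().item()
```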
4.3 Offline Results

In this section, we compare our proposed model, OmniSearchSage, with the existing baselines to showcase its performance improvements. We then examine key aspects of the model: the significance of the text enrichments, the pros and cons of the multitasking approach, and the effectiveness of the compatibility encoders.

4.3.1 Comparison with Baselines. In this study, the existing version of SearchSage [27] serves as our comparison baseline. It operates using fixed PinSage and ItemSage embeddings for pins and products, respectively. For OmniSearchSage, we utilize the query encoder to derive query embeddings and the unified pin and product encoder to generate pin and product embeddings. In Table 2, comparisons are drawn between OmniSearchSage and SearchSage, with both models trained and evaluated on the same dataset. It is important to highlight that the baseline model, SearchSage, does not use query-query pairs for training. On the pin dataset, OmniSearchSage shows a significant gain, between 60% and 90%, over SearchSage across all metrics. Recall is relatively consistent across different countries, reflecting the multilingual robustness of OmniSearchSage. Analysis of the product dataset reveals that OmniSearchSage outperforms the baseline model by about 27% in predicting product engagement. This increment is less prominent than on the pins dataset, mainly because ItemSage, upon which this comparison is based, has already undergone training on search tasks. Nevertheless, the observed improvement shows the positive impact of incorporating new features as well as the benefit of multi-tasking. Interestingly, SearchSage is able to predict related query clicks substantially better than random despite not being trained on this task. However, when we directly optimize for this objective in OmniSearchSage, we see a substantial +44% improvement. In Section 4.3.3, we show this improvement can be attributed to both training on related queries and multi-task learning.

4.3.2 Importance of content enrichment. In this section, we analyze the importance of the various text enrichments described in Section 3.2. For brevity, the evaluation focuses solely on the metrics related to the query-pin task. Our first direction of investigation centers on the impact of integrating synthetic captions for pins that lack both a title and a description. For this purpose, we extracted pairs from the evaluation dataset in which the engaged pin was missing a title or a description, resulting in a narrowed evaluation dataset of 24k pairs. The model's performance, initially based solely on continuous features and native text, was then compared to a model additionally enriched with captions. Table 3 presents the results of this comparison. When synthetic captions were added, both the 'save' and 'long-click' metrics saw substantial improvements of approximately +30% and +26%, respectively, while the relevance metric remained unchanged. This suggests that adding synthetic captions can significantly enhance the model's performance for certain metrics when representing pins that lack a title and description.

Table 3: Influence of synthetic GenAI captions on pins lacking titles and descriptions.
                 save     long-click  relevance
  No captions    0.51     0.60        0.36
  With captions  0.66     0.76        0.36
  Improvement    +30.43%  +25.58%     0%

Table 4 illustrates the impact of adding different text enrichments on the model's performance. Each percentage increase is relative to the previous row, displaying the additional improvement from each added feature. Our baseline model utilizes only continuous features for training, and its performance values are reflected in the first row.
Upon adding 'Title', 'Description', and 'Synthetic GenAI Captions' to the baseline model, we notice a robust improvement across all metrics.

Table 4: Impact of adding different text enrichments on the model's performance. Each percentage increase is relative to the previous row, displaying the additional improvement from each added feature.
                                                            save         long-click   relevance
  Continuous Features Only                                  0.43         0.53         0.30
  Adding Title, Description and Synthetic GenAI Captions    0.52 (+21%)  0.63 (+19%)  0.39 (+30%)
  Adding Board Titles                                       0.61 (+17%)  0.68 (+8%)   0.44 (+13%)
  Adding Engaged Queries                                    0.65 (+7%)   0.73 (+7%)   0.46 (+5%)

There is a 20% improvement in the engagement datasets, while the relevance metric improves by a notable 30%, demonstrating the substantial impact of these text features. The model enhancement continues with the addition of board titles to the feature set, leading to a further increase of 8-15% across metrics. This affirms the relevance of board titles in improving predictive accuracy. Finally, we incorporated the engaged queries feature into the model, resulting in a consistent, albeit smaller, growth across all three metrics. Although the incremental relative gain appears smaller, it still constitutes a significant improvement when compared to the baseline model. In summary, each text enrichment feature contributes significantly to improving model performance, as seen by the increase in metrics over the immediately preceding state.

4.3.3 Effect of multi-tasking. In Table 5, we present a comparative analysis between models trained independently for each task (pin, product, and query) and our consolidated multitask model. For this comparison, both the independent and multitask models were trained under equivalent conditions with matching batch sizes, computational power, and iterations. The datasets used for training and evaluation were also identical, with the sole difference that the individual models were trained on their respective subsets of pairs from the dataset. This systematic approach ensures a fair and accurate assessment of the performance of the multitask model relative to the independent task models.

Table 5: Comparative analysis illustrating the contrasts between our unified multi-task model and models trained individually for each task (pin, product, and query).
  Dataset                 Pin Only  Product Only  Query Only  OmniSearchSage
  pin      save           0.68      -             -           0.65
           long-click     0.75      -             -           0.73
           avg relevance  0.45      -             -           0.46
  product  save           -         0.73          -           0.73
           long-click     -         0.73          -           0.73
  query    click          -         -             0.73        0.78

On the pin task, we see a slight degradation in quality from multitask learning, but on the product and query tasks, results are neutral to positive. This aligns with general notions about multi-task learning: low-data tasks are unlikely to see regressions from multi-task learning, while the pin task, with its 1.5B pairs, sees a very slight drop in performance. Despite this drop, the simplification benefits of multi-task learning outweigh the metric loss.

4.3.4 Effect of compatibility encoders. We examine the influence of incorporating compatibility encoders on the effectiveness of the learned pin/product embeddings. We train a model that comprises only the query encoder and the unified pin and product encoder, and compare it with a model that incorporates all the encoders.
Interestingly, there is almost no noticeable degradation in the metrics of the learned encoder; we thus achieve compatibility of the query embedding with the pre-existing embeddings at essentially no cost. Furthermore, as demonstrated in Table 6, the performance of the compatibility encoders in the OmniSearchSage model is either on par with or surpasses that of the SearchSage model, which is trained using only compatibility encoders.

Table 6: Comparison of co-trained compatibility encoders (OmniSearchSage) with independently trained compatibility encoders (SearchSage).
  Dataset                 SearchSage  OmniSearchSage
  pin      save           0.39        0.39
           long-click     0.45        0.43
           avg relevance  0.26        0.26
  product  save           0.57        0.57
           long-click     0.58        0.57

5 APPLICATIONS IN PINTEREST SEARCH

OmniSearchSage embeddings find wide application throughout the Pinterest search stack, primarily in retrieval and ranking tasks. Figure 4 presents a simplified depiction of the search retrieval and ranking stack at Pinterest and highlights the integration points for OmniSearchSage embeddings. These embeddings are employed to power the retrieval of pins and products using HNSW [22]. They are also instrumental in the L1 scoring model, where they enhance the efficiency of token-based retrieval sources. Moreover, OmniSearchSage embeddings are among the most critical features in the L2 scoring and relevance models.

Figure 4: A simplified depiction of the search retrieval and ranking stack at Pinterest (embedding-based and inverted-token indices for pins, products, and ads; the query embedding server; and the L1 and L2 scoring models), highlighting the integration points for OmniSearchSage embeddings.

In this section, we present the results of the A/B tests we conducted, in which production SearchSage embeddings were replaced with OmniSearchSage embeddings, resulting in improved performance for both organic and promoted content (Ads) in search. Additionally, we provide results from a human relevance assessment conducted on actual production-sampled traffic. This evaluation further confirms the improved performance derived from the use of OmniSearchSage embeddings. Finally, we demonstrate how the query embeddings also enhance performance in other tasks, such as classification, particularly in situations where data availability is limited. This highlights the ability of the OmniSearchSage model to generalize to tasks different from its original training objectives.

5.1 Human Relevance Evaluation

To understand the advantages of OmniSearchSage, we enlisted human evaluators to assess the relevance of candidates retrieved via two methods: OmniSearchSage embedding-based pin retrieval and token-based pin retrieval. For this evaluation, we selected a set of 300 queries, deliberately stratified across both head and tail queries. The top 8 candidate pins were then retrieved from each system for these queries, and human evaluators determined the relevance of the pins to the corresponding query.
Every query-pin pair received three judgements, with an inter-annotator agreement rate of 0.89. The evaluation revealed a noticeable improvement with OmniSearchSage, showing a 10% increase in relevance compared to the token-based system. Figure 5 offers a direct comparison of the pins retrieved for the query 'antique copper bathroom sink' by the token-based system and the OmniSearchSage-based system. The token-based retrieval system often fetches pins related to only part of the query and fails to consistently return relevant results. In striking contrast, nearly all pins retrieved by the OmniSearchSage-based system are highly relevant to the specified query, underlining the efficacy of the OmniSearchSage model in understanding the query and aligning similar pins and queries in the same space.

Figure 5: Comparative display of pins retrieved in response to the query 'antique copper bathroom sink' from (a) the token-based system and (b) the OmniSearchSage-based system. Pins deemed relevant are outlined in green, while those considered irrelevant are outlined in red.

5.2 Organic Search

In this section, we outline the results of the A/B testing conducted to substitute the existing production SearchSage query and entity embeddings with OmniSearchSage embeddings for organic content within Pinterest search. In search experiments at Pinterest, our attention is largely concentrated on two key metrics: the search fulfillment rate and relevance. The search fulfillment rate is defined as the proportion of searches that result in a meaningful user engagement action. Relevance is calculated as the weighted average relevance of the top eight pins for each query, assessed across different query segments and measured through human evaluation. The impact on these two metrics of replacing SearchSage with OmniSearchSage is presented in Table 7. The table provides data drawn from experiments for three distinct use cases: (i) retrieval of pins and products, (ii) the L1 scoring model, and (iii) the L2 scoring and relevance models.

Table 7: Online A/B experiment results of OmniSearchSage in organic Search.
                                  Search Fulfillment Rate  Relevance
  Pin and Product Retrieval       +4.1%                    +0.5%
  L1 Scoring                      +0.5%                    +0.0%
  L2 Scoring and Relevance Model  +2.8%                    +3.0%

5.3 Ads in Search

The OmniSearchSage embeddings have also successfully replaced the SearchSage embeddings in various applications within Ads on the Search surface. We present the results for three use cases: the search engagement model, the search relevance model, and product ads retrieval. Uniformly, we noted substantial improvements in engagement and relevance within Ads across all use cases. These increments, specifically in the long clickthrough rate (gCTR), are outlined in Table 8. Furthermore, OmniSearchSage led to a noteworthy 4.95% increase in Ads relevance within the Search Ads relevance model. These gains highlight the positive impact of transitioning to OmniSearchSage embeddings for Ads on Search.

Table 8: Online A/B experiment results of OmniSearchSage for Ads in Search.
                               gCTR
  Product Ads Retrieval        +5.27%
  Ads Search Engagement Model  +2.96%
  Ads Search Relevance Model   +1.55%

5.4 Classification

One of the primary advantages of developing a robust query representation such as OmniSearchSage is its utility in powering downstream applications, particularly when there is a lack of labels for learning large models. One example of this at Pinterest is interest classification, where we classify queries into a hierarchical taxonomy. Using OmniSearchSage query embeddings for query representation, we were able to increase performance when compared to the baseline FastText [3] model.
Precision increased by 30% on average across levels, with the larger gains coming from more granular levels.
Prabhat Agarwal, Minhazul Islam Sk, Nikil Pancha, Kurchi Subhra Hazra, Jiajing Xu, Chuck Rosenberg
Original Paper
[ "cs.IR", "cs.AI", "cs.LG", "H.3.3" ]
http://arxiv.org/abs/2404.16277v1
2024-04-25 00:00:00
Causally Inspired Regularization Enables Domain General Representations
Given a causal graph representing the data-generating process shared across different domains/distributions, enforcing sufficient graph-implied conditional independencies can identify domain-general (non-spurious) feature representations. For the standard input-output predictive setting, we categorize the set of graphs considered in the literature into two distinct groups: (i) those in which the empirical risk minimizer across training domains gives domain-general representations and (ii) those where it does not. For the latter case (ii), we propose a novel framework with regularizations, which we demonstrate are sufficient for identifying domain-general feature representations without a priori knowledge (or proxies) of the spurious features. Empirically, our proposed method is effective for both (semi) synthetic and real-world data, outperforming other state-of-the-art methods in average and worst-domain transfer accuracy.
cs.LG
Knowledge AND Graph
2024-04-25 00:00:00
Introduction

A key feature of machine learning is its capacity to generalize across new domains. When these domains present different data distributions, the algorithm must leverage shared structural concepts to achieve out-of-distribution (OOD) or out-of-domain generalization. This capability is vital in numerous important real-world machine learning applications. For example, in safety-critical settings such as autonomous driving, a lack of resilience to unfamiliar distributions could lead to human casualties. Likewise, in the healthcare sector, where ethical considerations are critical, an inability to adjust to shifts in data distribution can result in unfair biases, manifesting as inconsistent performance across different demographic groups.

An influential approach to domain generalization is Invariant Causal Prediction (ICP; [Peters et al., 2016]). ICP posits that although some aspects of data distributions (like spurious or non-causal mechanisms [Pearl, 2010]) may change across domains, certain causal mechanisms remain constant, and it suggests focusing on these invariant mechanisms for prediction. However, the estimation method for these invariant mechanisms suggested by Peters et al. [2016] struggles with scalability in high-dimensional feature spaces. To overcome this, Arjovsky et al. [2019] introduced Invariant Risk Minimization (IRM), designed to identify these invariant mechanisms by minimizing a single objective. However, IRM requires strong assumptions to identify the desired domain-general solutions [Ahuja et al., 2021, Rosenfeld et al., 2022]; for instance, it requires observing a number of domains proportional to the dimension of the spurious features, which poses a significant challenge in high-dimensional settings.

Subsequent variants of IRM have been developed with improved capabilities for identifying domain-general solutions [Ahuja et al., 2020, Krueger et al., 2021, Robey et al., 2021, Wang et al., 2022, Ahuja et al., 2021]. Additionally, regularizers for Distributionally Robust Optimization with subgroup shift have been proposed (GroupDRO) [Sagawa et al., 2019]. However, despite their solid theoretical motivation, empirical evidence suggests that these methods may not consistently deliver domain-general solutions in practice [Gulrajani and Lopez-Paz, 2020, Kaur et al., 2022, Rosenfeld et al., 2022].

Kaur et al. [2022] demonstrated that regularizing directly for conditional independencies implied by the generative process can give domain-general solutions, including conditional independencies beyond those considered by IRM. However, their experimental approach involves regularization terms that require direct observation of the spurious features, a condition not always feasible in real-world applications. Our proposed methodology also leverages regularizers inspired by the conditional independencies indicated by causal graphs but, crucially, it does so without requiring prior knowledge (or proxies) of the spurious features.

1.1 Contributions

In this work,
• we outline sufficient properties to uniquely identify domain-general predictors for a general set of generative processes that include domain-correlated spurious features,
• we propose regularizers to implement these constraints without independent observations of the spurious features, and
• finally, we show that the proposed framework outperforms the state-of-the-art on semi-synthetic and real-world data.
The code for our proposed method is provided at https://github.com/olawalesalaudeen/tcri.

Notation: Capital letters denote bounded random variables, and corresponding lowercase letters denote their values. Unless otherwise stated, we represent latent domain-general features as $Z_{dg} \in \mathcal{Z}_{dg} \equiv \mathbb{R}^m$ and spurious latent features as $Z_{spu} \in \mathcal{Z}_{spu} \equiv \mathbb{R}^o$. Let $X \in \mathcal{X} \equiv \mathbb{R}^d$ be the observed features, where $\mathcal{X}$ is the output space of an invertible function $\Gamma: \mathcal{Z}_{dg} \times \mathcal{Z}_{spu} \mapsto \mathcal{X}$, and let $Y \in \mathcal{Y} \equiv \{0, 1, \ldots, K-1\}$ be the observed label for a $K$-class classification task. We then define feature extractors aimed at identifying the latent features, $\Phi_{dg}: \mathcal{X} \mapsto \mathbb{R}^m$ and $\Phi_{spu}: \mathcal{X} \mapsto \mathbb{R}^o$, so that $\Phi: \mathcal{X} \mapsto \mathbb{R}^{m+o}$ with $\Phi(x) = [\Phi_{dg}(x); \Phi_{spu}(x)]$ for all $x \in \mathcal{X}$. We define $e$ as a discrete random variable denoting domains and $\mathcal{E} = \{P^e(Z_{dg}, Z_{spu}, X, Y): e = 1, 2, \ldots\}$ to be the set of possible domains. $\mathcal{E}_{tr} \subset \mathcal{E}$ is the set of observed domains available during training.

2 Related Work

The source of a distribution shift can be isolated to components of the joint distribution. One special case of distribution shift is covariate shift [Shimodaira, 2000, Zadrozny, 2004, Huang et al., 2006, Gretton et al., 2009, Sugiyama et al., 2007, Bickel et al., 2009, Chen et al., 2016, Schneider et al., 2020], where only the covariate distribution P(X) changes across domains. Ben-David et al. [2009] give upper bounds on target error based on the H-divergence between the source and target covariate distributions, which motivates domain alignment methods like Domain Adversarial Neural Networks [Ganin et al., 2016] and others [Long et al., 2015, Blanchard et al., 2017]. Others have followed up on this work with other notions of covariate distance for domain adaptation, such as maximum mean discrepancy (MMD) [Long et al., 2016] and Wasserstein distance [Courty et al., 2017]. However, Kpotufe and Martinet [2018] show that these divergence metrics fail to capture many important properties of transferability, such as asymmetry and non-overlapping support. Furthermore, Zhao et al. [2019] show that even with the alignment of covariates, large distances between label distributions can inhibit transfer; they propose a label-conditional importance weighting adjustment to address this limitation. Other works have also proposed conditional covariate alignment [des Combes et al., 2020, Li et al., 2018c,b].

Another form of distribution shift is label shift, where only the label distribution changes across domains; Lipton et al. [2018] propose a method to address this scenario. Schrouff et al. [2022] illustrate that many real-world problems exhibit more complex 'compound' shifts than covariate or label shifts alone.

One can leverage domain adaptation to address distribution shifts; however, these methods are contingent on having access to unlabeled or partially labeled samples from the target domain during training. When such samples are available, more sophisticated domain adaptation strategies aim to leverage and adapt spurious feature information to enhance performance [Liu et al., 2021, Zhang et al., 2021, Kirichenko et al., 2022]. However, domain generalization, as a problem, does not assume access to such samples [Muandet et al., 2013]. To address the domain generalization problem, Invariant Causal Predictors (ICP) leverage shared causal structure to learn domain-general predictors [Peters et al., 2016].
Previous works, enumerated in the introduction (Section 1), have proposed various algorithms to identify domain-general predictors. Arjovsky et al. [2019] proposed Invariant Risk Minimization (IRM) and its variants, motivated by domain invariance:

$\min_{w, \Phi} \frac{1}{|\mathcal{E}_{tr}|} \sum_{e \in \mathcal{E}_{tr}} R^e(w \circ \Phi) \quad \text{s.t.} \quad w \in \arg\min_{\tilde{w}} R^e(\tilde{w} \circ \Phi) \;\; \forall e \in \mathcal{E}_{tr},$

where $R^e(w \circ \Phi) = \mathbb{E}\left[\ell(y, w \cdot \Phi(x))\right]$, with loss function $\ell$, feature extractor $\Phi$, and linear predictor $w$. This objective aims to learn a representation $\Phi$ such that the predictor $w$ that minimizes empirical risk on average across all domains also minimizes the within-domain empirical risk for every domain. However, Rosenfeld et al. [2020] and Ahuja et al. [2020] showed that this objective places unreasonable requirements on the number of domains observed at training time, e.g., observing a number of distinct domains on the order of the rank of the spurious features. Follow-up works have attempted to address these limitations with stronger constraints on the problem, as enumerated in the introduction. Our method falls under domain generalization; however, unlike the domain-general solutions previously discussed, it leverages conditions other than domain invariance directly, which we show may be better suited to learning domain-general representations.

3 Causality and Domain Generalization

We often represent causal relationships with a causal graph. A causal graph is a directed acyclic graph (DAG), G = (V, E), with nodes V representing random variables and directed edges E representing causal relationships, i.e., parents are causes and children are effects. A structural equation model (SEM) provides a mathematical representation of the causal relationships in its corresponding DAG: each variable $Y \in V$ is given by $Y = f_Y(X) + \varepsilon_Y$, where $X$ denotes the parents of $Y$ in G, $f_Y$ is a deterministic function, and $\varepsilon_Y$ is an error capturing exogenous influences on $Y$. The main property we need here is that $f_Y$ is invariant to interventions on $V \setminus \{Y\}$ and is consequently invariant to changes in $P(V)$ induced by these interventions; interventions refer to changes to $f_Z$, $Z \in V \setminus \{Y\}$.

In this work, we focus on domain-general predictors $g_{dg}$ that are linear functions of features with domain-general mechanisms, denoted as $g_{dg} := w \circ \Phi_{dg}$, where $w$ is a linear predictor and $\Phi_{dg}$ identifies features with domain-general mechanisms. We use 'domain-general' rather than 'domain-invariant' since domain invariance is strongly tied to the property $Y \perp e \mid Z_{dg}$ [Arjovsky et al., 2019]. As shown in the subsequent sections, this work leverages other properties of appropriate causal graphs to obtain domain-general features. This distinction is crucial given the challenges associated with learning domain-general features through domain-invariance methods [Rosenfeld et al., 2020].

Given the presence of a distribution shift, it is essential to identify some common structure across domains that can be utilized for out-of-distribution (OOD) generalization. For example, Shimodaira [2000] assumes P(Y|X) is shared across all domains in the covariate shift problem. In this work, we consider a setting where each domain is composed of observed features and labels, $X \in \mathcal{X}$, $Y \in \mathcal{Y}$, where $X$ is given by an invertible function $\Gamma$ of two latent random variables: domain-general $Z_{dg} \in \mathcal{Z}_{dg}$ and spurious $Z_{spu} \in \mathcal{Z}_{spu}$.
By construction, the conditional expectation of the label Y given the domain-general features Zdg is the same across domains, i.e.,

$$\mathbb{E}_{e_i}[Y \mid Z_{dg} = z_{dg}] = \mathbb{E}_{e_j}[Y \mid Z_{dg} = z_{dg}] \qquad (1)$$

for all $z_{dg} \in \mathcal{Z}_{dg}$ and all $e_i \neq e_j \in E$. Conversely, this robustness to e does not necessarily extend to spurious features Zspu; in other words, Zspu may assume values that could lead a predictor relying on it to experience arbitrarily high error rates. Then, a sound strategy for learning a domain-general predictor, one that is robust to distribution shifts, is to identify the latent domain-general Zdg from the observed features X.

Figure 1: Partial ancestral graph over e, Zdg, Zspu, Y, and X, representing all non-trivial and valid generative processes (DAGs); dashed edges indicate that an edge may or may not exist.

The approach we take to do this is motivated by the Reichenbach Common Cause Principle, which claims that if two events are correlated, there is either a causal connection between the correlated events that is responsible for the correlation, or there is a third event, a so-called (Reichenbachian) common cause, which brings about the correlation [Hitchcock and Rédei, 2021, Rédei, 2002]. This principle allows us to posit the class of generative processes or causal mechanisms that give rise to the correlated observed features and labels, where the observed features are a function of domain-general and spurious features. We represent these generative processes as causal graphs. Importantly, the mapping from a node's causal parents to itself is preserved in all distributions generated by the causal graph (Equation 1), and distributions can vary arbitrarily so long as they preserve the conditional independencies implied by the DAG (Markov property [Pearl, 2010]). We now enumerate the DAGs that yield observed features with spurious correlations with the label.

Valid DAGs. We consider generative processes where both latent features, Zspu and Zdg, and the observed X are correlated with Y, and the observed X is a function of only Zdg and Zspu (Figure 1). Given this setup, there is an enumerable set of valid generative processes. Such processes are (i) without cycles, (ii) feature complete, i.e., including edges from Zdg and Zspu to X (Zdg → X ← Zspu), and (iii) such that the observed features mediate domain influence, i.e., there is no direct domain influence on the label (no edge e → Y). We discuss this enumeration in detail in Appendix B. The result of our analysis is a representative set of DAGs that describe valid generative processes; these DAGs come from orienting the partial ancestral graph (PAG) in Figure 1. We compare the conditional independencies implied by the DAGs defined by Figure 1 as illustrated in Figure 2, resulting in three canonical DAGs in the literature (see Appendix B for further discussion). Other DAGs that induce spurious correlations are outside the scope of this work.

Figure 2: Generative Processes. Graphical models depicting the structure of possible data-generating processes; shaded nodes indicate observed variables. (a) Causal [Arjovsky et al., 2019]; (b) Anticausal [Rosenfeld et al., 2020]; (c) Fully Informative Causal [Ahuja et al., 2021]. X represents the observed features, Y represents observed targets, and e represents domain influences (domain indexes in practice). There is an explicit separation of domain-general Zdg and domain-specific Zspu features; they are combined to generate observed X.
Dashed edges indicate the possibility of an edge.

The conditional independencies implied by the identified DAGs (Figure 2) are summarized below.

Table 1: Generative Processes and Sufficient Conditions for Domain-Generality
                                 Graphs in Figure 2:  (a)  (b)  (c)
Zdg ⊥⊥ Zspu | {Y, e}                                   ✓    ✓    ✗
Identifying Zdg is necessary                           ✓    ✓    ✗

Fig. 2a: Zdg ⊥⊥ Zspu | {Y, e}; Y ⊥⊥ e | Zdg. This causal graphical model implies that the mapping from Zdg to its causal child Y is preserved, and consequently Equation 1 holds [Pearl, 2010, Peters et al., 2016]. As an example, consider the task of predicting the spread of a disease. Features may include causes (vaccination rate and public health policies) and effects (coughing). e is the time of month; the distribution of coughing changes depending on the season.

Fig. 2b: Zdg ⊥⊥ Zspu | {Y, e}; Zdg ⊥⊥ Zspu | Y; Y ⊥⊥ e | Zdg; Zdg ⊥⊥ e. This causal graphical model does not directly imply that Zdg → Y is preserved across domains. However, in this work, it represents the setting where the inverse of the causal direction is preserved (inverse: Zdg → Y), and thus Equation 1 holds. A context where this setting is relevant is healthcare, where medical conditions (Y) cause symptoms (Zdg), but the prediction task is often predicting conditions from symptoms, and this mapping Zdg → Y, opposite to the causal direction, is preserved across distributions. Again, we may consider e as the time of month; the distribution of coughing changes depending on the season.

Fig. 2c: Y ⊥⊥ e | Zdg; Zdg ⊥⊥ e. Similar to Figure 2a, this causal graphical model implies that the mapping from Zdg to its causal child Y is preserved, so Equation 1 holds [Pearl, 2010, Peters et al., 2016]. This setting is especially interesting because it represents a Fully Informative Invariant Features setting, that is, Zspu ⊥⊥ Y | Zdg [Ahuja et al., 2021]. Said differently, Zspu does not induce a backdoor path from e to Y that Zdg does not block. As an example of this, we can consider the task of predicting hospital readmission rates. Features may include the severity of illness, which is a direct cause of readmission rates, and also the length of stay, which is also caused by the severity of illness. However, length of stay may not be a cause of readmission; the correlation between the two would be a result of the confounding effect of a common cause, illness severity. e is an indicator for distinct hospitals.

We call the condition Y ⊥⊥ e | Zdg the domain invariance property. This condition is common to all the DAGs in Figure 2. We call the condition Zdg ⊥⊥ Zspu | {Y, e} the target conditioned representation independence (TCRI) property. This condition is common to the DAGs in Figures 2a and 2b. In the settings considered in this work, the TCRI property is equivalently Zdg ⊥⊥ Zspu | Y for all e ∈ E, since e simply indexes the set of empirical distributions available at training.

Domain generalization with conditional independencies. Kaur et al. [2022] showed that sufficiently regularizing for the correct conditional independencies described by the appropriate DAGs can give domain-general solutions, i.e., identifies Zdg. However, in practice, one does not (partially) observe the latent features independently and so cannot regularize on them directly. Other works have also highlighted the need to consider generative processes when designing algorithms that are robust to distribution shifts [Veitch et al., 2021, Makar et al., 2022].
However, previous work has largely focused on regularizing for the domain invariance property, ignoring the conditional independence property Zdg ⊥⊥ Zspu | {Y, e}.

Sufficiency of ERM under Fully Informative Invariant Features. Despite the known challenges of learning domain-general features from the domain-invariance property in practice, this approach persists, likely because it is the only property shared across all DAGs. We alleviate this constraint by observing that the graph in Fig. 2c falls under what Ahuja et al. [2021] refer to as the fully informative invariant features setting, meaning that Zspu is redundant, having only information about Y that is already in Zdg. Ahuja et al. [2021] show that the empirical risk minimizer is domain-general for bounded features.

Easy vs. hard DAGs imply the generality of TCRI. Consequently, we categorize the generative processes into easy and hard cases (Table 1): (i) easy, meaning that minimizing average risk gives domain-general solutions, i.e., ERM is sufficient (Fig. 2c), and (ii) hard, meaning that one needs to identify Zdg to obtain domain-general solutions (Figs. 2a-2b). We show empirically that regularizing for Zdg ⊥⊥ Zspu | Y for all e ∈ E also gives a domain-general solution in the easy case. The generality of TCRI follows from its sufficiency for identifying the domain-general Zdg in the hard cases while still giving domain-general solutions empirically in the easy case.

4 Proposed Learning Framework

We have now clarified that the hard DAGs (i.e., those not solved by ERM) share the TCRI property. The challenge is that Zdg and Zspu are not independently observed; otherwise, one could regularize directly. Existing work such as Kaur et al. [2022] empirically studies semi-synthetic datasets where Zspu is (partially) observed and directly learns Zdg by regularizing that Φ(X) ⊥⊥ Zspu | {Y, e} for feature extractor Φ. To our knowledge, we are the first to leverage the TCRI property without requiring observation of Zspu. Next, we set up our approach with some key assumptions. The first is that the observed distributions are Markov to an appropriate DAG.

Assumption 4.1. All distributions, sources and targets, are generated by one of the following structural causal models (SCMs):

Causal:
$$\mathrm{SCM}^{(e)} := \begin{cases} Z^{(e)}_{dg} \sim P^{(e)}_{Z_{dg}}, \\ Y^{(e)} \leftarrow \langle w^*_{dg}, Z^{(e)}_{dg} \rangle + \eta_Y, \\ Z^{(e)}_{spu} \leftarrow \langle w^*_{spu}, Y \rangle + \eta^{(e)}_{Z_{spu}}, \\ X \leftarrow \Gamma(Z_{dg}, Z_{spu}), \end{cases} \qquad (2)$$

Anticausal:
$$\mathrm{SCM}^{(e)} := \begin{cases} Y^{(e)} \sim P_Y, \\ Z^{(e)}_{dg} \leftarrow \langle \tilde{w}_{dg}, Y \rangle + \eta^{(e)}_{Z_{dg}}, \\ Z^{(e)}_{spu} \leftarrow \langle w^*_{spu}, Y \rangle + \eta^{(e)}_{Z_{spu}}, \\ X \leftarrow \Gamma(Z_{dg}, Z_{spu}), \end{cases} \qquad (3)$$

FIIF:
$$\mathrm{SCM}^{(e)} := \begin{cases} Z^{(e)}_{dg} \sim P^{(e)}_{Z_{dg}}, \\ Y^{(e)} \leftarrow \langle w^*_{dg}, Z^{(e)}_{dg} \rangle + \eta_Y, \\ Z^{(e)}_{spu} \leftarrow \langle w^*_{spu}, Z_{dg} \rangle + \eta^{(e)}_{Z_{spu}}, \\ X \leftarrow \Gamma(Z_{dg}, Z_{spu}), \end{cases} \qquad (4)$$

where $P_{Z_{dg}}$ is the causal covariate distribution, the w's are linear generative mechanisms, the η's are exogenous independent noise variables, and $\Gamma : \mathcal{Z}_{dg} \times \mathcal{Z}_{spu} \to \mathcal{X}$ is an invertible function.

It follows from having causal mechanisms that we can learn a predictor w*_dg for Zdg that is domain-general (Equations 2-4); w*_dg inverts the mapping w̃_dg in the anticausal case. These structural causal models (Equations 2-4) correspond to the causal graphs in Figures 2a-2c, respectively.

Assumption 4.2 (Structural). Causal graphs and their distributions are Markov and faithful [Pearl, 2010].
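As an illustration of Assumption 4.1, the following is a small toy sketch (our own, not code from the paper's repository; the constants are arbitrary) that samples two domains from a scalar instance of the causal SCM in Equation 2 and checks numerically that the Zdg-to-Y mechanism is stable across domains while the relationship between Y and Zspu shifts with the domain noise:

```python
import numpy as np

rng = np.random.default_rng(0)
w_dg, w_spu = 1.0, 1.0            # true generative mechanisms (scalars for simplicity)

def sample_domain(sigma_e, n=50_000):
    """Sample (Z_dg, Z_spu, Y) from a scalar causal SCM as in Equation 2."""
    z_dg = rng.normal(0.0, 1.0, n)                    # Z_dg ~ P_Zdg (kept fixed here)
    y = w_dg * z_dg + rng.normal(0.0, 0.25, n)        # Y <- <w*_dg, Z_dg> + eta_Y
    z_spu = w_spu * y + rng.normal(0.0, sigma_e, n)   # Z_spu <- <w*_spu, Y> + eta^(e)
    return z_dg, z_spu, y

def ols_slope(a, b):
    """Least-squares slope of regressing b on a."""
    return np.polyfit(a, b, 1)[0]

for sigma_e in (0.1, 2.0):                            # two domains, differing only in spurious noise
    z_dg, z_spu, y = sample_domain(sigma_e)
    print(f"sigma_e={sigma_e}: slope(Y ~ Z_dg) = {ols_slope(z_dg, y):.2f}, "
          f"slope(Y ~ Z_spu) = {ols_slope(z_spu, y):.2f}")
# The Y ~ Z_dg slope stays near w*_dg in both domains, while the Y ~ Z_spu slope
# changes with sigma_e, i.e., the spurious relationship is not domain-general.
```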
Given Assumption 4.2, we aim to leverage the TCRI property (Zdg ⊥⊥ Zspu | Y for all e ∈ Etr) to learn the latent Zdg without observing Zspu directly. We do this by learning two feature extractors that, together, recover Zdg and Zspu and satisfy TCRI (Figure 3). We formally define these properties as follows.

Definition 4.3 (Total Information Criterion (TIC)). Φ = Φdg ⊕ Φspu satisfies TIC with respect to random variables X, Y, e if, for Φ(X^e) = [Φdg(X^e); Φspu(X^e)], there exists a linear operator T such that T(Φ(X^e)) = [Z^e_dg; Z^e_spu] for all e ∈ Etr.

Figure 3: Modeling approach. During training, both representations, Φdg and Φspu, generate domain-general and domain-specific predictions, respectively. However, only the domain-invariant representations/predictions are used during testing (indicated by the solid red arrows in the figure).

In other words, a feature extractor that satisfies the total information criterion recovers the complete latent feature sets Zdg and Zspu. This allows us to define the proposed implementation of the TCRI property non-trivially; the conditional independence of only subsets of the latents may not have the same implications for domain generalization. We note that X ⊥⊥ Y | Zdg, Zspu, so X has no information about Y that is not in Zdg, Zspu.

Definition 4.4 (Target Conditioned Representation Independence). Φ = Φdg ⊕ Φspu satisfies TCRI with respect to random variables X, Y, e if Φdg(X) ⊥⊥ Φspu(X) | Y for all e ∈ E.

Proposition 4.5. Assume that Φdg(X) and Φspu(X) are correlated with Y. Given Assumptions 4.1-4.2 and a representation Φ = Φdg ⊕ Φspu that satisfies TIC, Φdg(X) = Zdg if and only if Φ satisfies TCRI (see Appendix C for proof).

Proposition 4.5 shows that TCRI is necessary and sufficient to identify Zdg from a set of training domains. We note that we can verify whether Φdg(X) and Φspu(X) are correlated with Y by checking whether the learned predictors perform better than chance. Next, we describe our proposed algorithm to implement the conditions needed to learn such a feature map. Figure 3 illustrates the learning framework.

Learning Objective: The first term in our proposed objective is L_Φdg = R^e(θ_c ∘ Φdg), where Φdg : 𝒳 → R^m is a feature extractor, θ_c : R^m → 𝒴 is a linear predictor, and R^e(θ_c ∘ Φdg) = E[ℓ(y, θ_c · Φdg(x))] is the empirical risk achieved by the feature extractor and predictor pair on samples from domain e. Φdg and θ_c are designed to capture the domain-general portion of the framework. Next, to implement the total information criterion, we use another feature extractor Φspu : 𝒳 → R^o, designed to capture the domain-specific information in X that is not captured by Φdg. Together, we have Φ = Φdg ⊕ Φspu, where Φ has domain-specific predictors θ_e : R^{m+o} → 𝒴 for each training domain, allowing the feature extractor to utilize domain-specific information to learn distinct optimal domain-specific (non-general) predictors: L_Φ = R^e(θ_e ∘ Φ). L_Φ aims to ensure that Φdg and Φspu capture all of the information about Y in X (the total information criterion). Since we do not know o and m, we select them to be the same size in our experiments; o and m could be treated as hyperparameters, though we do not treat them as such. Finally, we implement the TCRI property (Definition 4.4). We denote by L_TCRI a conditional independence penalty for Φdg and Φspu. We utilize the Hilbert-Schmidt Independence Criterion (HSIC) [Gretton et al., 2007] as L_TCRI.
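The following is a minimal PyTorch-style sketch, under our own naming and simplifications (a biased HSIC estimator with a fixed RBF bandwidth), of the class-conditional HSIC penalty and the three-term objective described above; it is an illustration rather than the authors' implementation, and the exact estimator used in the paper is given next.

```python
import torch

def rbf_gram(z, sigma=1.0):
    """RBF kernel Gram matrix: K_ij = exp(-||z_i - z_j||^2 / (2 sigma^2))."""
    d2 = torch.cdist(z, z).pow(2)
    return torch.exp(-d2 / (2 * sigma ** 2))

def hsic_biased(a, b):
    """Biased empirical HSIC between representations a and b (n x d tensors)."""
    n = a.shape[0]
    K, L = rbf_gram(a), rbf_gram(b)
    H = torch.eye(n, device=a.device) - torch.ones(n, n, device=a.device) / n
    return torch.trace(K @ H @ L @ H) / (n ** 2)

def tcri_penalty(z_dg, z_spu, y):
    """Class-conditional HSIC: average HSIC(Phi_dg(X), Phi_spu(X)) within each label."""
    vals = [hsic_biased(z_dg[y == k], z_spu[y == k])
            for k in torch.unique(y) if (y == k).sum() > 1]
    return torch.stack(vals).mean() if vals else torch.zeros((), device=z_dg.device)

def tcri_loss(phi_dg, phi_spu, theta_c, theta_e_list, batches, beta=1.0):
    """One step of the objective: domain-general risk + domain-specific risk + beta * L_TCRI.

    `batches` is a list of (x, y) pairs, one per training domain; `theta_e_list[e]`
    is that domain's head on the concatenated features [Phi_dg(x); Phi_spu(x)].
    """
    ce = torch.nn.functional.cross_entropy
    total = 0.0
    for e, (x, y) in enumerate(batches):
        z_dg, z_spu = phi_dg(x), phi_spu(x)
        total = total + ce(theta_c(z_dg), y)                                      # R^e(theta_c o Phi_dg)
        total = total + ce(theta_e_list[e](torch.cat([z_dg, z_spu], dim=1)), y)   # R^e(theta_e o Phi)
        total = total + beta * tcri_penalty(z_dg, z_spu, y)                       # beta * L_TCRI
    return total / len(batches)
```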
However, in principle, any conditional independence penalty can be used in its place. The class-conditional HSIC penalty is

$$L_{TCRI}(\Phi_{dg}, \Phi_{spu}) = \frac{1}{2} \sum_{k \in \{0,1\}} \widehat{\mathrm{HSIC}}\big(\Phi_{dg}(X), \Phi_{spu}(X)\big)\Big|_{y=k} = \frac{1}{2} \sum_{k \in \{0,1\}} \frac{1}{n_k^2} \mathrm{tr}\big(K_{\Phi_{dg}} H_{n_k} K_{\Phi_{spu}} H_{n_k}\big)\Big|_{y=k},$$

where k indicates which class the examples in the estimate correspond to, $K_{\Phi_{dg}} \in \mathbb{R}^{n_k \times n_k}$ and $K_{\Phi_{spu}} \in \mathbb{R}^{n_k \times n_k}$ are Gram matrices with $K^{i,j}_{\Phi_{dg}} = \kappa(\Phi_{dg}(X)_i, \Phi_{dg}(X)_j)$ and $K^{i,j}_{\Phi_{spu}} = \omega(\Phi_{spu}(X)_i, \Phi_{spu}(X)_j)$, the kernels κ, ω are radial basis functions, $H_{n_k} = I_{n_k} - \frac{1}{n_k} \mathbf{1}\mathbf{1}^\top$ is a centering matrix, $I_{n_k}$ is the $n_k \times n_k$ identity matrix, $\mathbf{1}_{n_k}$ is the $n_k$-dimensional vector whose elements are all 1, and ⊤ denotes the transpose. We condition on the label by taking only the examples of each label, computing the empirical HSIC, and then taking the average. Taken together, the full objective to be minimized is as follows:

$$L = \frac{1}{|E_{tr}|} \sum_{e \in E_{tr}} \Big[ R^e(\theta_c \circ \Phi_{dg}) + R^e(\theta_e \circ \Phi) + \beta\, L_{TCRI}(\Phi_{dg}, \Phi_{spu}) \Big],$$

where β > 0 is a hyperparameter and |Etr| is the number of training domains. Figure 3 shows the full framework. We note that when β = 0, this loss reduces to ERM. Note that while we minimize this objective with respect to Φ, θ_c, θ_1, ..., θ_{Etr}, only the domain-general representation and its predictor, θ_c ∘ Φdg, are used for inference.

5 Experiments

We begin by evaluating with simulated data, i.e., with known ground-truth mechanisms; we use Equation 5 to generate our simulated data, with domain parameter σ_{e_i}; code is provided in the supplemental materials.

$$\mathrm{SCM}(e_i) := \begin{cases} Z^{(e_i)}_{dg} \sim \mathcal{N}(0, \sigma^2_{e_i}), \\ y^{(e_i)} = Z^{(e_i)}_{dg} + \mathcal{N}(0, \sigma^2_y), \\ Z^{(e_i)}_{spu} = Y^{(e_i)} + \mathcal{N}(0, \sigma^2_{e_i}). \end{cases} \qquad (5)$$

Table 2: Continuous simulated results. Feature extractor with a dummy predictor θ_c = 1, i.e., ŷ = x · Φdg · w, where x ∈ R^{N×2}, Φdg, Φspu ∈ R^{2×1}, w ∈ R. Oracle indicates the coefficients achieved by regressing y on z_c directly.

Algorithm   (Φdg)_0 (i.e., Zdg weight)   (Φdg)_1 (i.e., Zspu weight)
ERM         0.29                          0.71
IRM         0.28                          0.71
TCRI        1.01                          0.06
Oracle      1.04                          0.00

We observe 2 domains with parameters σ_{e=0} = 0.1 and σ_{e=1} = 0.2, with σ_y = 0.25, 5000 samples, and linear feature extractors and predictors. We use partial covariance as our conditional independence penalty L_TCRI. Table 2 shows the learned value of Φdg, where 'Oracle' indicates the true coefficients obtained by regressing Y on the domain-general Zdg directly. The ideal Φdg recovers Zdg and puts zero weight on Zspu. Now, we evaluate the efficacy of our proposed objective on non-simulated datasets.

5.1 Semisynthetic and Real-World Datasets

Algorithms: We compare our method to baselines corresponding to DAG properties: Empirical Risk Minimization (ERM [Vapnik, 1991]), Invariant Risk Minimization (IRM [Arjovsky et al., 2019]), Variance Risk Extrapolation (V-REx [Krueger et al., 2021]), Group Distributionally Robust Optimization (GroupDRO [Sagawa et al., 2019]), and Information Bottleneck methods (IB_ERM/IB_IRM [Ahuja et al., 2021]). Additional baseline methods are provided in Appendix A. We evaluate our proposed method on the semisynthetic ColoredMNIST [Arjovsky et al., 2019] and the real-world Terra Incognita dataset [Beery et al., 2018]. Given observed domains Etr = {e : 1, 2, ..., Etr}, we train on Etr \ e_i and evaluate the model on the unseen domain e_i, for each e ∈ Etr.

ColoredMNIST: The ColoredMNIST dataset [Arjovsky et al., 2019] is composed of 7000 (2 × 28 × 28, 1) images of a hand-written digit and binary-label pairs.
There are three domains with di๏ฌ€erent correlations between image color and label, i.e., the image color is spuriously related to the label by assigning a color to 8 each of the two classes (0: digits 0-4, 1: digits 5-9). The color is then ๏ฌ‚ipped with probabilities {0.1, 0.2, 0.9} to create three domains, making the color-label relationship domain-speci๏ฌc because it changes across domains. There is also label ๏ฌ‚ip noise of 0.25, so we expect that the best accuracy a domain-general model can achieve is 75%, while a non-domain general model can achieve higher. In this dataset, Zdg corresponds to the original image, Zspu the color, e the label-color correlation, Y the image label, and X the observed colored image. This DAG follows the generative process of Figure 2a [Arjovsky et al., 2019]. Spurrious PACS: Variables. X: images, Y : non-urban (elephant, gira๏ฌ€e, horse) vs. urban (dog, guitar, house, person). Domains. {{cartoon, art painting}, {art painting, cartoon}, {photo}} [Li et al., 2017]. The photo domain is the same as in the original dataset. In the {cartoon, art painting} domain, urban examples are selected from the original cartoon domain, while non-urban examples are selected from the original art painting domain. In the {art painting, cartoon} domain, urban examples are selected from the original art painting domain, while non-urban examples are selected from the original cartoon domain. This sampling encourages the model to use spurious correlations (domain-related information) to predict the labels; however, since these relationships are ๏ฌ‚ipped between domains {{cartoon, art painting} and {art painting, cartoon}, these predictions will be wrong when generalized to other domains. Terra Incognita: The Terra Incognita dataset contains subsets of the Caltech Camera Traps dataset [Beery et al., 2018] de๏ฌned by [Gulrajani and Lopez-Paz, 2020]. There are four domains representing di๏ฌ€erent locations {L100, L38, L43, L46} of cameras in the American Southwest. There are 9 species of wild animals {bird, bobcat, cat, coyote, dog, empty, opossum, rabbit, raccoon, squirrel} and a โ€˜no-animalโ€™ class to be predicted. Like Ahuja et al. [2021], we classify this dataset as following the generative process in Figure 2c, the Fully Informative Invariant Features (FIIF) setting. Additional details on model architecture, training, and hyperparameters are detailed in Appendix 5. Model Selection. The standard approach for model selection is a training-domain hold-out validation set accuracy. We ๏ฌnd that model selection across hyperparameters using this held-out training domain validation accuracy often returns non-domain-general models in the โ€˜hardโ€™ cases. One advantage of our model is that we can do model selection based on the TCRI condition (conditional independence between the two representations) on held-out training domain validation examples to mitigate this challenge. In the easy case, we expect the empirical risk minimizer to be domain-general, so selecting the best-performing trainingdomain model is sound โ€“ we additionally do this for all baselines (see Appendix A.1 for further discussion). We ๏ฌnd that, empirically, this heuristic works in the examples we study in this work. Nevertheless, model selection under distribution shift remains a signi๏ฌcant bottleneck for domain generalization. 5.2 Results and Discussion Table 3: E\etest โ†’etest (model selection on held-out source domains validation set). 
The 'average' column indicates the mean generalization accuracy over all three held-out domains, treating each in turn as etest; the 'worst-case' column indicates the worst generalization accuracy.

            ColoredMNIST             Spurious PACS            Terra Incognita
Algorithm   average     worst-case   average     worst-case   average     worst-case
ERM         51.6 ± 0.1  10.0 ± 0.1   57.2 ± 0.7  31.2 ± 1.3   44.2 ± 1.8  35.1 ± 2.8
IRM         51.7 ± 0.1   9.9 ± 0.1   54.7 ± 0.8  30.3 ± 0.3   38.9 ± 3.7  32.6 ± 4.7
GroupDRO    52.0 ± 0.1   9.9 ± 0.1   58.5 ± 0.4  37.7 ± 0.7   47.8 ± 0.9  39.9 ± 0.7
VREx        51.7 ± 0.2  10.2 ± 0.0   58.8 ± 0.4  37.5 ± 1.1   45.1 ± 0.4  38.1 ± 1.3
IB_ERM      51.5 ± 0.2  10.0 ± 0.1   56.3 ± 1.1  35.5 ± 0.4   46.0 ± 1.4  39.3 ± 1.1
IB_IRM      51.7 ± 0.0   9.9 ± 0.0   55.9 ± 1.2  33.8 ± 2.2   37.0 ± 2.8  29.6 ± 4.1
TCRI_HSIC   59.6 ± 1.8  45.1 ± 6.7   63.4 ± 0.2  62.3 ± 0.2   49.2 ± 0.3  40.4 ± 1.6

Table 4: Total Information Criterion: Domain-General (DG) and Domain-Specific (DS) Accuracies. The DG classifier is shared across all training domains, and the DS classifiers are trained on each domain. The first header row indicates the domain from which the held-out examples are sampled, and the second indicates which domain-specific predictor is used. {+90%, +80%, -90%} indicate domains with {0.1, 0.2, 0.9} digit-label and color correlation, respectively. Column groups, as in the original layout: DG Classifier (no DS classifier), DS Classifier on +90, DS Classifier on +80, DS Classifier on -90, with sub-columns over the DS predictors {+90%, +80%, -90%}.

Test domain +90%:  68.7  69.0  68.5  90.1   9.8  79.9  20.1  10.4  89.9
Test domain +80%:  63.1  62.4  64.4  76.3  24.3  70.0  30.4  24.5  76.3
Test domain -90%:  65.6  63.4  44.1  75.3  75.3  69.2  69.5  29.3  26.0

Table 5: TIC ablation for ColoredMNIST.
Algorithm            average      worst-case
TCRI_HSIC (No TIC)   51.8 ± 5.9   27.7 ± 8.9
TCRI_HSIC            59.6 ± 1.8   45.1 ± 6.7

Worst-domain Accuracy. A critical implication of domain generality is stability: robustness in worst-domain performance, up to domain difficulty. While average accuracy across domains provides some insight into an algorithm's ability to generalize to new domains, the average hides the variance of performance across domains. Average improvement can increase while the worst-domain accuracy stays the same or decreases, leading to incorrect conclusions about domain generalization. Additionally, in real-world challenges such as algorithmic fairness where worst-group performance is considered, some metrics of fairness are analogous to achieving domain generalization [Creager et al., 2021].

Results. TCRI achieves the highest average and worst-case accuracy across all baselines (Table 3). We find that no method recovers the exact domain-general model's accuracy of 75%. However, TCRI achieves over a 7% increase in both average accuracy and worst-case accuracy. Appendix A.2 shows transfer accuracies with cross-validation on held-out test domain examples (oracle), and TCRI again outperforms all baselines, achieving an average accuracy of 70.0% ± 0.4% and a worst-case accuracy of 65.7% ± 1.5%, showing that regularizing for TCRI gives very close to optimal domain-general solutions. Similarly, for the Spurious-PACS dataset, we observe that TCRI outperforms the baselines. TCRI achieves the highest average accuracy of 63.4% ± 0.2 and worst-case accuracy of 62.3% ± 0.1, with the next best, VREx, achieving 58.8 ± 1.0 and 33.8 ± 0.0, respectively. Additionally, for the Terra Incognita dataset, TCRI achieves the highest average and worst-case accuracies of 49.2% ± 0.3% and 40.4% ± 1.6%, with the next best, GroupDRO, achieving 47.8 ± 0.9 and 39.9 ± 0.7, respectively.
Appendix A.2 shows transfer accuracies with cross-validation held-out target domain examples (oracle) where we observe that TCRI also obtains the highest average and worst-case accuracy for Spurrious-PACS and Terra Incognita. Overall, regularizing for TCRI gives the most domain-general solutions compared to our baselines, achieving the highest worst-case accuracy on all benchmarks. Additionally, TCRI achieves the highest average accuracy on ColoredMNIST and Spurious-PAC and the second highest on Terra Incognita, where we expect the empirical risk minimizer to be domain-general. Additional results are provided in the Appendix A. The E๏ฌ€ect of the Total Information Criterion. Without the TIC loss term, our proposed method is less e๏ฌ€ective. Table 5 shows that for Colored MNIST, the hardest โ€˜hardโ€™ case we encounter, removing the TIC criteria, performs worse in average and worst case accuracy, dropping over 8% and 18, respectively. Separation of Domain General and Domain Speci๏ฌc Features . In the case of Colored MNIST, we can reason about the extent of feature disentanglement from the accuracies achieved by the domain-general and domain-speci๏ฌc predictors. Table 4 shows how much each component of ฮฆ, ฮฆdg and ฮฆspu, behaves as 10 expected. For each domain, we observe that the domain-speci๏ฌc predictorsโ€™ accuracies follow the same trend as the color-label correlation, indicating that they capture the color-label relationship. The domain-general predictor, however, does not follow such a trend, indicating that it is not using color as the predictor. For example, when evaluating the domain-speci๏ฌc predictors from the +90% test domain experiment (row +90%) on held-out examples from the +80% training domain (column "DS Classi๏ฌer on +80%"), we ๏ฌnd that the +80% domain-speci๏ฌc predictor achieves an accuracy of nearly 79.9% โ€“ exactly what one would expect from a predictor that uses a color correlation with the same direction โ€˜+โ€™. Conversely, the -90% predictor achieves an accuracy of 20.1%, exactly what one would expect from a predictor that uses a color correlation with the opposite direction โ€˜-โ€™. The -90% domain has the opposite label-color pairing, so a color-based classi๏ฌer will give the opposite label in any โ€˜+โ€™ domain. Another advantage of this method, exempli๏ฌed by Table 4, is that if one believes a particular domain is close to one of the training domains, one can opt to use the close domainโ€™s domain-speci๏ฌc predictor and leverage spurious information to improve performance. On Benchmarking Domain Generalization. Previous work on benchmarking domain generalization showed that across standard benchmarks, the domain-unaware empirical risk minimizer outperforms or achieves equivalent performance to the state-of-the-art domain generalization methods [Gulrajani and Lopez-Paz, 2020]. Additionally, Rosenfeld et al. [2022] gives results that show weak conditions that de๏ฌne regimes where the empirical risk minimizer across domains is optimal in both average and worst-case accuracy. Consequently, to accurately evaluate our work and baselines, we focus on settings where it is clear that (i) the empirical risk minimizer fails, (ii) spurious features, as we have de๏ฌned them, do not generalize across the observed domains, and (iii) there is room for improvement via better domain-general predictions. We discuss this point further in the Appendix A.1. Oracle Transfer Accuracies. 
While model selection is an integral part of the machine learning development cycle, it remains a non-trivial challenge when there is a distribution shift. While we have proposed a selection process tailored to our method that can be generalized to other methods with an assumed causal graph, we acknowledge that model selection under distribution shift is still an important open problem. Consequently, we disentangle this challenge from the learning problem and evaluate an algorithm's capacity to give domain-general solutions independently of model selection. We report experimental results using held-out test-set examples for model selection in Appendix A, Table 6. We find that our method, TCRI_HSIC, also outperforms baselines in this setting.
Olawale Salaudeen, Sanmi Koyejo
Original Paper
[ "cs.LG", "stat.ML" ]
http://arxiv.org/abs/2404.16283v1
2024-04-25 00:00:00
Andes: Defining and Enhancing Quality-of-Experience in LLM-Based Text Streaming Services
The advent of large language models (LLMs) has transformed text-based services, enabling capabilities ranging from real-time translation to AI-driven chatbots. However, existing serving systems primarily focus on optimizing server-side aggregate metrics like token generation throughput, ignoring individual user experience with streamed text. As a result, under high and/or bursty load, a significant number of users can receive unfavorable service quality or poor Quality-of-Experience (QoE). In this paper, we first formally define QoE of text streaming services, where text is delivered incrementally and interactively to users, by considering the end-to-end token delivery process throughout the entire interaction with the user. Thereafter, we propose Andes, a QoE-aware serving system that enhances user experience for LLM-enabled text streaming services. At its core, Andes strategically allocates contended GPU resources among multiple requests over time to optimize their QoE. Our evaluations demonstrate that, compared to the state-of-the-art LLM serving systems like vLLM, Andes improves the average QoE by up to 3.2$\times$ under high request rate, or alternatively, it attains up to 1.6$\times$ higher request rate while preserving high QoE.
The advent of large language models (LLMs) has transformed text-based services, enabling capabilities ranging from real-time translation to AI-driven chatbots. However, existing serving systems primarily focus on optimizing server-side aggregate metrics like token generation throughput, ignoring individual user experience with streamed text. As a result, under high and/or bursty load, a significant number of users can receive unfavorable service quality or poor Quality-of-Experience (QoE). In this paper, we first formally define QoE of text streaming services, where text is delivered incrementally and interactively to users, by considering the end-to-end token delivery process throughout the entire interaction with the user. Thereafter, we propose Andes, a QoE-aware serving system that enhances user experience for LLM-enabled text streaming services. At its core, Andes strategically allocates contended GPU resources among multiple requests over time to optimize their QoE. Our evaluations demonstrate that, compared to the state-of-the-art LLM serving systems like vLLM, Andes improves the average QoE by up to 3.2$\times$ under high request rate, or alternatively, it attains up to 1.6$\times$ higher request rate while preserving high QoE.
cs.DC
LLM Fairness
2024-04-25 00:00:00
1 Introduction

Large Language Models (LLMs) [4, 9, 21, 46, 51] have revolutionized natural language processing. By generating contextually relevant responses, they power a wide range of applications, more than 60% of which are centered around conversational interactions like chatbots, virtual assistants, language translation, and customer support systems [15]. In particular, the meteoric rise of ChatGPT [35] spearheaded the growth of conversational AI services by attracting over 100 million users in just two months after its launch [29].

Conversational AI services, by nature, provide interactive conversations between the user and an AI agent. At its core, an LLM generates tokens one by one¹ and streams them back to the user to be digested, be it as written text or speech. (¹LLMs process and generate text in units of tokens. For instance, the word "streaming" may be broken down into two tokens: "stream" and "ing.")

Figure 1. Server-side token generation timeline and user-side response digestion progress. Even if the server generates tokens very fast, users cannot digest them at such a pace. (a) Existing LLM serving systems are oblivious of QoE: User 2 experiences a long wait time (TTFT) and therefore lower QoE. (b) A QoE-aware LLM serving system can schedule token generation over time to enhance QoE: User 2's TTFT is drastically improved without affecting User 1's token delivery timeline.

As this token-by-token streaming nature is akin to the frame-by-frame streaming nature of video streaming services, we dub such services text streaming services.

In this paper, we seek to characterize and enhance the Quality-of-Experience (QoE) of text streaming services (§2.2). We realize that user interaction with LLM responses happens at moments when each new token is delivered (e.g., displayed or spoken) to the user over time. Thus, we define the token delivery timeline (TDT), a series of timestamps at which each token was delivered to a user, to capture the user's interaction with the service for a single request. The ideal TDT a user expects from a text streaming service can vary significantly based on the type of the service and user demographics. For instance, a chat service that uses a text-to-speech model to read out the LLM's response to users (e.g., voice chat in ChatGPT, real-time speech translation) could be less stringent in terms of its minimum token delivery speed (TDS) compared to a chat service in raw text, because a user's speaking speed is often slower than their reading speed, but it may require a smaller time to first token (TTFT) to better resemble real-life
Unfortunately, existing LLM serving systems [20, 25, 30, 50] are designed to optimize aggregated server-side performance metrics such as token generation throughput [25, 50], which are not necessarily aligned with optimizing the QoE of text streaming services (ยง2.3). More importantly, by realigning the objectives of LLM serving systems towards QoE optimization, a QoE-aware serving system can utilize the same resources more effectively to manage a greater number of concurrent requests while ensuring high QoE, thus reducing the cost per request. To illustrate, we compare existing serving systems with a QoE-aware one, each with a serving capacity of 1, in Figure 1. In Figure 1a, due to the commonly adopted first-come-first-serve (FCFS) scheduling policy [25, 50, 52], User 2 experiences a long initial waiting time (TTFT). In contrast, in Figure 1b, a QoE-aware serving system schedules token generation in a manner that is aware of each userโ€™s reading speed, leading to a shorter wait time for User 2 without affecting User 1โ€™s interaction with the service. Although the average server-side token generation throughput or latency are the same for the two systems, overall user experience is improved in the QoE-aware system. We attribute this to the naรฏve FCFS scheduling policy in existing serving systems, which fails to account for the QoE requirements of individual requests and cannot efficiently utilize resources (ยง2.4). Consequently, some users may experience extended waiting time during their interaction with the service, especially when the system is under higher request rate or is serving requests with longer context lengths. To preserve good user experience, the service provider must provision more compute resources proportional to the excess request load, leading to higher operational costs. Designing a QoE-aware LLM serving system, however, is challenging from both conceptual and practical perspectives. Defining the QoE metric to capture the user experience in text streaming services is non-trivial. It should encapsulate the continuous interaction process over time, accounting for factors like TTFT and TDS. Designing a QoE-aware serving system faces several systems challenges as well: (a) Dynamic and unpredictable resource demand: Requests arrive dynamically with varying expected TDT and prompt length and the number of output tokens is not known a priori, making it challenging to implement a one-size-fits-all scheduling strategy such as round-robin. (b) Constrained resource supply: The system has limited GPU memory and computation resources, restricting the number of concurrent in-flight requests. To meet the QoE requirements of individual requests, the system needs to make runtime decisions to allocate resources among requests, which may incur non-negligible overhead. To this end, we first propose a mathematical definition of QoE for text streaming services (ยง3.1). Our QoE metric Age Group Reading Speed 18-24 (28.0%) 236 WPM 25-44 (51.9%) 200 WPM 45-54 (11.2%) 192 WPM 55-64 (5.6%) 185 WPM 65+ (3.3%) 175 WPM Table 1. Reading speed (Word Per Minute) by age group [10, 29]. Language Speaking Speed English (79.3%) 150 WPM Chinese (7.0%) 158 WPM Korean (6.9%) 150 WPM French (3.6%) 195 WPM Spanish (3.2%) 218 WPM Table 2. Speaking speed (Word Per Minute) by language [8, 29, 36]. compares the actual TDT of a request with its expected TDT, reflecting the userโ€™s experience throughout their entire interaction with the service. 
Then, we propose Andes, an LLM serving system that optimizes the overall QoE of text streaming services (§4). Andes employs a dynamic priority-based preemptive scheduler that operates at the granularity of tokens. Andes strategically allocates system resources to more urgent requests and preempts requests that have already received sufficient service, all to enhance QoE. By satisfying more requests with high QoE using the same amount of resource, Andes eliminates the need for additional resource provisioning, thus reducing LLM serving cost. Andes also codesigns a client-side token buffer that temporarily withholds excess tokens and displays them to the user at their expected pace (§5). This design ensures users experience smooth token delivery, oblivious to the intricacies of server-side scheduling or network fluctuations. We evaluate Andes using the OPT [51] family of models, ranging from 13B to 175B parameters (§6). Compared to vLLM [25], we find that Andes can manage 1.6× higher request rate with high QoE, or alternatively, improve the average QoE by 3.2× given the same amount of resource.

Overall, we make the following contributions in this paper:
1. We identify an emerging category of LLM-based applications (text streaming services) and define a QoE metric for them.
2. We propose Andes, a QoE-aware LLM serving system designed to optimize QoE for text streaming services.
3. We evaluate Andes under different workloads and setups and show that Andes significantly improves QoE with negligible system overhead.

2 Background and Motivation

In this section, we introduce the unique characteristics of LLM serving systems (§2.1) and the user experience of text streaming services (§2.2). We then discuss the opportunities for improving user experience (§2.3) and the limitations of existing solutions (§2.4).

2.1 LLM Serving Systems

LLM text generation using Transformer-based [47] models is characterized by autoregressive token generation and significant memory usage. First, the LLM generates tokens sequentially, where the next token is conditioned on the previous tokens. Second, the LLM requires a large amount of memory to store intermediate data for each token in its input prompt and output response, known as KV cache [47]. As the number of tokens generated increases, so does the KV cache size. For instance, GPT-3 175B [9] requires 7 GB of GPU memory for a 1000-token request, limiting the number of requests that can be handled concurrently.

Figure 2. Four requests arrive at t = 0. Requests 1 and 2 are equally satisfying. Requests 3 and 4 are frustrating, with request 4 being more so as it delivers fewer tokens earlier on, despite having the same TTFT and average token latency.

2.2 User Experience of Text Streaming Services

Compared to traditional services that generate entire responses at once, text streaming services allow the user to start digesting the response as early as possible. The user experience includes two phases:

Wait Phase. Users wait for the first token to arrive, known as the time-to-first-token (TTFT). For web applications, studies indicate that users expect an initial response to arrive within one second, with a significant 32% dropout rate if the response takes longer than three seconds [6].

Digest Phase.
Following the first token, users enter the digest phase, which may last for tens of seconds or more [50]. Hence, it is a common practice to stream tokens to the user on the fly so that they can start digesting the response as early as possible. The expected rate of token delivery, i.e., the token delivery speed (TDS), depends on factors such as application type and user demographics. For example, reading speeds, measured in words per minute (WPM), differ across age groups (Table 1), while speaking speeds vary among languages (Table 2). By translating words to tokens using the average word-to-token ratio [38], we can estimate the average reading speed at 4.8 tokens/s and the average speaking speed at 3.3 tokens/s.

Intuition Behind QoE of Text Streaming Services. The expected TTFT and the expected TDS together define the expected token delivery timeline (TDT), represented by the black line in Figure 2. Similar to QoE in video streaming, a desired QoE metric should capture the gap between the actual TDT and the expected TDT. Intuitively, users are satisfied when the actual TDT is above the expected TDT; otherwise, they prefer to receive more tokens earlier on, as illustrated in Figure 2.

Figure 3. System performance under different request rates: (a) the 90th-percentile TTFT increases dramatically as the request rate surpasses the server's capacity; (b) token generation speed is much faster than the user-expected speed.

Therefore, the QoE should comprehensively measure the token delivery timeline throughout the entire user interaction, going beyond an aggregated number like TTFT or average token latency. We formally define such a QoE metric in Section 3.1.

2.3 Problems and Opportunities

Existing LLM serving systems have primarily focused on optimizing aggregated server-side metrics, and often employ a first-come-first-serve (FCFS) scheduling approach without considering the user experience. In our experiment with ShareGPT [45] on OPT 66B [51] with 4 A100 GPUs, we notice that especially under high request rate, two issues arise: (1) certain users may encounter extended TTFT; (2) conversely, other users might receive tokens at a pace surpassing their digestion ability.

Prolonged TTFT. As depicted in Figure 3a, the 90th-percentile TTFT increases dramatically as the server faces more bursty request rates, resulting in a longer queuing delay and degraded user experience. To accommodate such bursty request volumes, service providers often have to over-provision resources, such as by adding more GPUs, which significantly increases operational costs.

Excessively High Token Generation Speed. Conversely, as shown in Figure 3b, we report the token generation speed under different request rates. The observed server-side token generation speed (≥ 6.6 tokens/s) is much faster than the user-expected speed (3.3 or 4.8 tokens/s), as referenced in Table 1 and Table 2. This discrepancy indicates that the server often generates tokens faster than the user can consume them. While this might seem efficient from the server's perspective, it may overwhelm one user while starving others.

Opportunities. We observe that there is an opportunity to optimize user experience by balancing prolonged TTFT and excessively fast token generation speed.
By temporarily pausing the response generation for requests that already have sufficient tokens generated, we can spare the limited GPU resources for other pending requests. The ratio between the expected token generation speed TDS_expected and the actual token generation speed TDS_actual determines the slack for which a request can be preempted, allowing the system to accommodate more concurrent requests. Thus, with appropriate request preemption and restarting, we can serve (TDS_actual / TDS_expected) × more concurrent requests than without request preemption, significantly improving user experience. In the example of text-based and voice-based chat services in Figure 3b, we could have increased the serving capacity by 6.6/4.8 ≈ 1.38× and 6.6/3.3 = 2×, respectively. Our evaluation shows that Andes can nearly achieve this theoretical improvement in practice.

Figure 4. Suboptimal user experience from QoE-unaware scheduling policies. In this illustrative toy example, we consider a server that can serve at most 200 tokens simultaneously due to memory constraints. We consider four requests with different prompt lengths, response lengths, and expected TTFT and TDS values, all arriving at time 0. Request spec: Request 1 (prompt length 90, response length 10, expected TTFT 1 s, expected TDS 1.25 tokens/s); Request 2 (90, 10, 1 s, 1.25 tokens/s); Request 3 (180, 10, 2 s, 5 tokens/s); Request 4 (90, 20, 2 s, 5 tokens/s). The figure shows the serving order under FCFS, Round Robin, and QoE-aware scheduling (first row) and the cumulative tokens delivered over time for each request (second and third rows). Colored lines represent actual TDT, while the black line indicates the expected TDT. An optimal QoE is achieved when the actual token delivery curve is completely left of and/or above the expected token delivery curve.

2.4 Limitation of Existing Solutions

Let us consider a toy example in Figure 4 to illustrate the limitations of existing QoE-unaware scheduling (FCFS used by vLLM [25], and Round Robin). Under FCFS scheduling, while requests 1, 2, and 3 are served immediately, request 4 suffers from longer TTFT due to queuing delays. Round Robin partially mitigates queuing delay using fair-sharing but still fails to align the token delivery in the later stage of the interaction, leading to suboptimal QoE. In contrast, the QoE-aware policy manages to meet the QoE requirements for all requests by prioritizing requests based on their QoE requirements and resource demand. It prioritizes requests with stringent TTFT requirements. Meanwhile, it monitors the resource demand of each request to prevent small requests from being starved of necessary resources. As the served requests accumulate enough tokens for the user to digest, the system upgrades the priority of request 3, which then requires more urgent servicing, and serves it. Finally, the system brings back requests 1, 2, and 4 to continue supplying tokens. In sum, when the server load is below its capacity, all requests can be served promptly and achieve perfect QoE without smart request scheduling. However, when the server is operating at capacity due to unpredictable higher request loads, QoE-aware scheduling can significantly improve the user experience without over-provisioning resources.
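As a back-of-the-envelope illustration of the capacity argument above (our own sketch, using only the numbers already quoted in this section):

```python
# If each request only needs tokens at the user's expected speed (TDS_expected)
# instead of the raw server speed (TDS_actual), roughly TDS_actual / TDS_expected
# times more requests can be interleaved on the same hardware.
server_tps = 6.6                                   # observed generation speed (Figure 3b)
expected_tps = {"text chat (reading)": 4.8,        # tokens/s, from Section 2.2
                "voice chat (speaking)": 3.3}

for service, tds_expected in expected_tps.items():
    print(f"{service}: ~{server_tps / tds_expected:.2f}x serving capacity")
# -> ~1.38x for text chat and ~2.00x for voice chat, matching the figures above.
```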
3 Overview

In this section, we first introduce a formal definition of Quality-of-Experience (QoE) for text streaming services (§3.1). Then, we provide an overview of Andes, an LLM serving system that optimizes QoE of text streaming services (§3.2).

3.1 Quality-of-Experience (QoE) in Text Streaming

Text streaming services allow the developer to specify the expected token delivery timeline (TDT) in a request. We derive the QoE of a request by comparing its actual TDT with the expected TDT, considering the entire token delivery process. Informed by the distinctions between superior and inferior service depicted in Figure 2, the formulation of our QoE metric is guided by a set of principles that reflect user expectations and experiences throughout their interaction:

1. Perfect Satisfaction: Users are satisfied when the actual token delivery perfectly aligns with or exceeds the expected delivery, resulting in maximum QoE (QoE = 1). We normalize QoE ∈ [0, 1] for generality across applications.

2. Excess Token Delivery: At any given time, delivering tokens faster than the user's digest speed does not add value to the user experience, as the user cannot digest all tokens at once. So the QoE remains unchanged.

3. Early Token Delivery: Users prefer receiving more tokens earlier to start processing the response sooner. In scenarios where perfect satisfaction is not achieved, the QoE is higher for scenarios where more tokens are delivered earlier. For example, the QoE is worse for a longer TTFT with the same TDS, and similarly, the QoE is worse for a slower TDS with the same TTFT.

Figure 5. QoE example: (a) TTFT missed, (b) TDS missed, (c) perfect QoE, (d) pause in the middle. The slope of the actual token delivery curve on the user side is capped by the expected TDS.

Following these principles, we formalize the QoE metric by comparing two curves: (a) The expected token delivery curve T(t), defined by the expected TTFT and TDS. Specifically, T(t) = TDS_expected · (t − TTFT_expected) represents the ideal timeline at which tokens should be delivered to the user (black lines in Figure 5). (b) The actual token delivery curve A(t) reflects the timeline of how tokens are digested by the user over time (black dotted lines in Figure 5), with its slope at any time capped by the expected TDS. To quantify the QoE of a request with response length l, we measure the area under both curves up to the actual time to the last token (TTLT). We then define QoE as the ratio of the actual and expected areas, as shown in Figure 5:

$$QoE = \frac{S_{actual}}{S_{expected}} = \frac{\int_0^{TTLT} A(t)\,dt}{\int_0^{TTLT} \min(T(t), l)\,dt} \qquad (1)$$

This formulation focuses on the relative QoE relationship between services, but Andes allows the service provider to prioritize specific aspects. For example, to stress a shorter TTFT, the provider can add a penalizing term on the defined QoE as $\alpha^{TTFT_{actual} - TTFT_{expected}} \cdot \frac{S_{actual}}{S_{expected}}$, where α ∈ [0, 1]. In this paper, we will use the QoE definition in Equation 1 by default.
Figure 6. Andes Overview.

3.2 Andes Overview

The workflow of Andes is shown in Figure 6. (1) The interaction begins with the user submitting a request to the server. The request comes with its QoE requirement, which is pre-specified by the application developer. (2) Upon receiving the request, the QoE tracker assigns a scheduling priority and puts it in the waiting queue. (3) At each scheduling iteration, the QoE tracker refreshes the priorities of all requests, both in the waiting and running queues. Then Andes reschedules the requests based on their priorities by admitting high-priority waiting requests to GPU workers and evicting low-priority running requests back to the server. For these evicted requests, their states (e.g., KV cache) are stored in the request metadata store on CPU RAM for future retrieval. (4) During each inference iteration, each running request generates one token, which is then sent to the client. (5) As tokens are delivered to the client, a token buffer is responsible for storing excess tokens and displaying them at the expected speed, ensuring smooth token delivery.

4 QoE-Aware Scheduling

In this section, we describe how Andes schedules token generation across multiple requests to maximize the total QoE. Section 4.1 formulates the scheduling problem as a Knapsack variant, and Section 4.2 introduces an efficient solution.

4.1 Problem Formulation

The core of Andes is an online preemptive scheduling algorithm for token generation, which requires designing three elements: (1) how often to make scheduling decisions (time quantum), (2) which requests to serve (scheduling objective), and (3) how many requests to serve at a time (batch size).

Time Quantum. At the beginning of each time quantum, the scheduler inspects both queued and running requests, and determines which ones to admit and preempt. Following the continuous batching used in existing systems [25, 50], Andes invokes its scheduler at the beginning of each iteration.

Scheduling Objective. Just like any other online serving system, it is impractical to perfectly plan execution into the future. Therefore, Andes serves the set of requests that maximizes the scheduling objective in the upcoming time frame of length Δt. The parameter Δt cannot be too short, as scheduling decisions would become shortsighted, or too long, as the actual system state would deviate too far from estimations. We find that setting it as the average request completion time is reasonable, and show in Section 6.5 that Andes is not sensitive to the setting of Δt. Andes supports various scheduling objectives, including max average QoE and max-min QoE, by designing its scheduling objective function appropriately. For the sake of presentation, we will focus on maximizing average QoE here (see Appendix A for alternative objectives). The objective function for request i is defined as:

$$Q_{serve,i} - Q_{wait,i} \qquad (2)$$

where Q_serve,i and Q_wait,i are the QoE of request i after Δt if it is served and not served, respectively.
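To make Equations 1 and 2 concrete, here is a small self-contained sketch (our own simplification: it approximates the areas with a discrete time grid and uses the time the user digests the last token as TTLT, which is not necessarily how the Andes implementation computes it):

```python
import numpy as np

def qoe(arrival_times, expected_ttft, expected_tds, dt=0.01):
    """Approximate the QoE of Equation 1 for one request.

    arrival_times[i] is when token i reaches the client. The user digests at most
    expected_tds tokens/s, so the actual curve A(t) is built from per-token
    digestion times whose slope is capped by the expected TDS.
    """
    l = len(arrival_times)
    digest = np.empty(l)
    for i, arrive in enumerate(arrival_times):
        ready = digest[i - 1] + 1.0 / expected_tds if i else 0.0
        digest[i] = max(arrive, ready)            # can't digest before arrival or earlier tokens
    ttlt = digest[-1]
    t = np.arange(0.0, ttlt, dt)
    actual = np.searchsorted(digest, t, side="right")                 # A(t)
    expected = np.clip(expected_tds * (t - expected_ttft), 0.0, l)    # min(T(t), l)
    return min(1.0, actual.sum() / expected.sum())                    # clamped to [0, 1]

# Example: 20 tokens, expected TTFT 1 s and expected TDS 5 tokens/s.
on_time = [1.0 + 0.1 * i for i in range(20)]   # keeps up with expectations -> QoE ~ 1
late = [3.0 + 0.1 * i for i in range(20)]      # 2 s of extra TTFT -> QoE well below 1
print(round(qoe(on_time, 1.0, 5.0), 2), round(qoe(late, 1.0, 5.0), 2))

# The scheduling objective in Equation 2 is then just the difference between the QoE a
# request would reach after the next time frame if it is served versus left waiting.
```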
In simple terms, Equation 2 is the amount of QoE gain when we decide to serve request i compared to when it is not served, and we naturally want to serve more of the requests that give us large QoE gains when served.

Batch Size. The number of requests picked to run in the upcoming quantum, or batch size, is limited by two factors. First, each token in a request's context (prompt plus all generated tokens) consumes one entry in the LLM serving system's KV cache [9], whose size is bounded by GPU memory. Thus, we have the following constraint:

$$\sum_{i=1}^{N} l_i x_i \le M \qquad (3)$$

where there are N requests in total (queued or running), l_i is request i's context length, x_i is an indicator variable that is 1 if request i is served and 0 otherwise, and M is the total number of tokens that can fit in GPU memory. Furthermore, Andes must take into account the latency to generate one token. That is, while a large batch size may increase server-side token generation throughput, the increase in the amount of compute will inflate the latency to generate one token from the perspective of each request, potentially hurting their QoE by delaying TTFT or failing to meet the expected TDS. On the other hand, a small batch size will be able to deliver tokens faster to each running request, but in turn more requests will not be served at all, again potentially hurting their QoE. Thus, the right intermediate batch size will have to be chosen in order to maximize average QoE.

Knapsack Formulation. Putting these together, we observe that the problem setting resembles that of the classic knapsack problem [23]. The goal is to select items (requests) to put in a knapsack (GPU) so that total item value (QoE gain) is maximized and total weight (l_i) does not exceed the knapsack's capacity (M). However, our problem setting deviates from that of the classical knapsack because the value of each item depends on how many items there are in the knapsack. This is because, as noted above, the number of requests in the knapsack (batch size) affects token generation latency, which in turn means that Q_serve,i is actually a function of batch size B.²

Figure 7. Visualization of Q_serve,i(B) and Q_wait,i. The former depends on batch size B, whereas the latter is a constant. With batch size 50, request i no longer has perfect QoE.

Figure 7 visualizes this. When B is just 10 or 30, the request maintains perfect QoE by always running ahead. However, when B is 50, the computation time of one iteration becomes longer and slows down token generation, degrading the request's QoE by failing to meet its TDS expectation. On the other hand, Q_wait,i does not depend on the batch size because the request simply sits in the queue, waiting to be served. Thus, for a specific batch size B, we would like to solve:

$$\begin{aligned} \max_{x} \quad & \sum_{i=1}^{N} \left( Q_{serve,i}(B) - Q_{wait,i} \right) \cdot x_i \\ \text{s.t.} \quad & x_i \in \{0, 1\}, \; i \in \{1, \dots, N\} \\ & \sum_{i=1}^{N} x_i = B \\ & \sum_{i=1}^{N} l_i x_i \le M \end{aligned} \qquad (4)$$

where the optimization variable x is a length-N array of x_i's. The second constraint ensures that exactly B many requests are chosen, whereas the final constraint ensures that the GPU memory capacity is not exceeded.
Equation 4 should be solved for each possible batch size $B$, and the solution that yields the best objective value should be selected.

4.2 Solution Design

In this section, we discuss the hardness of the problem formulated in the previous section in terms of algorithmic hardness and systems overhead. Then, we propose efficiency optimizations and a greedy algorithm that gives an approximate solution with low systems overhead.

Algorithmic Hardness. As Andes must solve its optimization problem repetitively online to determine the set of requests to serve, an efficient algorithm is needed. However, Equation 4 is a variant of the knapsack problem called the Exact K-item Knapsack, which is weakly NP-Hard [23]. We give an optimal 3D dynamic programming solution to the problem that runs in pseudo-polynomial time $O(M \cdot N^2)$ in Appendix C. However, such an algorithm is also too slow in our case, as the number of requests $N$ and the maximum number of tokens that can fit in memory $M$ are easily in the order of hundreds and thousands, respectively. Furthermore, we need to solve Equation 4 for each possible batch size $B \in [1, N]$, which is clearly intractable.

Preemption Overhead. When some requests that were running in the previous time quantum are not selected to run in the next, such requests are preempted. This is the core mechanism that reduces TTFT inflation from head-of-line blocking. For this, Andes supports two preemption mechanisms: swapping and recomputation. The former moves the request's KV cache entries between GPU and CPU memory, whereas the latter drops all entries on preemption and recomputes them when the request restarts. If Andes runs out of host memory for storing KV cache, the preemption mechanism will automatically switch to recomputation. Preemption is not free – in general, the latency overhead of swapping is similar to one token generation iteration (see Appendix D for detailed benchmarking). Frequent preemption may slow down token generation and delay token delivery, potentially degrading request throughput and QoE. Therefore, our scheduling algorithm must make preemption decisions that strike a good balance between reaping QoE gains and causing slowdowns.

Optimization #1: Selective Triggering. We observe that Equation 4 only needs to be solved when batch size is limited either by memory capacity or by computation time. The former case can be detected easily by monitoring the KV cache occupancy and having a high-memory watermark (e.g., 90%). For the latter case, Andes monitors token generation latency and detects when it begins to exceed the token interval required by the most stringent request's expected TDS. In all other cases, Andes does not trigger the optimization problem solver and serves every request.
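One way to realize this selective-triggering check is sketched below; the 90% watermark default and the attribute names on the `state` object are illustrative assumptions rather than Andes's actual interfaces.

```python
def should_run_solver(state, watermark: float = 0.9) -> bool:
    """Run the knapsack solver only when the batch is limited by memory
    or by per-iteration compute; otherwise simply serve every request."""
    # Case 1: KV cache occupancy has crossed the high-memory watermark.
    memory_limited = state.kv_tokens_in_use >= watermark * state.kv_capacity_tokens
    # Case 2: one generation iteration already takes longer than the token
    # interval required by the most stringent expected TDS among all requests.
    strictest_tds = max(r.expected_tds for r in state.running + state.waiting)
    compute_limited = state.recent_iteration_latency > 1.0 / strictest_tds
    return memory_limited or compute_limited
```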
Optimization #2: Batch Size Search Space Pruning. In order to reduce the number of times Equation 4 needs to be solved, we reduce the search space of batch size $B$ from $[1, N]$ to $[B_{\min}, B_{\max}]$. First, there is no point in exploring very large batch sizes that cannot be realized. Thus, $B_{\max}$ is determined by adding to the batch requests with the shortest context lengths until the total number of tokens in the batch reaches $M$, at which point the batch size is the largest that can be realized. On the other hand, very small batch sizes that can generate tokens faster than the expected TDS of any request are also suboptimal. This is because going that fast does not increase the QoE of requests that are served, but on the other hand will serve a smaller number of requests, potentially degrading the QoE of requests that are left waiting. Thus, $B_{\min}$ is set as the largest batch size that generates tokens faster than the most stringent TDS among all requests.

Optimization #3: Greedy Packing for Knapsack. A direct solution to the exact k-item knapsack problem in Equation 4 is computationally too heavy. Instead, Andes designs an efficient algorithm that computes each request's priority and greedily packs requests in that order. In designing the priority function, we have three goals: (a) reflecting merit: requests that yield high QoE gain and consume less resource should have high priority; (b) preventing starvation: requests should be automatically deprioritized as they receive service; (c) reducing preemption: selecting high-priority requests should reduce the need for preemption. In light of these goals, request $i$'s priority is defined as

$$\frac{Q_{\text{serve},i}(B) - Q_{\text{wait},i}}{l_i} \tag{5}$$

This priority function meets our goals. (a) A higher QoE gain will increase the request's priority, but it is simultaneously discounted by the amount of GPU memory the request will use. (b) As a request receives service, its context length ($l_i$) will increase, automatically deprioritizing itself. In contrast, requests will have higher QoE gain the more they wait, automatically boosting their priorities. (c) Finally, a request with long context length ($l_i$) will be preempted first, freeing enough GPU memory to potentially bring in more than one waiting request. (The overhead of preemption depends on how much memory was freed, not the number of requests; therefore, for the same amount of memory freed by preemption, it is better to free a smaller number of requests.) This reduces the number of preemptions required to alleviate head-of-line blocking. The whole procedure is given in Algorithm 1.

Algorithm 1 Greedy packing algorithm for Equation 4
Inputs: Number of requests N and KV cache capacity M; request context length array l[N]; request QoE gain array q[N]; target batch size B
Output: Solution array x[N]
1: Initialize priority array p[N] with all zeros
2: for i = 0 to N − 1 do
3:     p[i] = q[i] / l[i]    ⊲ Priority of request i
4: M_current = 0
5: N_current = 0
6: Initialize solution array x[N] with all zeros
7: for all i ∈ [0, N − 1] in descending order of p[i] do
8:     if M_current + l[i] ≤ M and N_current + 1 ≤ B then
9:         x[i] = 1    ⊲ Serve request i
10:        M_current = M_current + l[i]
11:        N_current = N_current + 1
12:    else
13:        break
14: return x

The greedy packing algorithm offers time complexity $O(N \log N)$. We empirically show in Section 6.5 that this greedy solution can achieve performance comparable to the 3D DP algorithm while greatly reducing scheduling overhead.
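For concreteness, a direct Python rendering of Algorithm 1 follows; it mirrors the pseudocode (with the Equation 5 priorities precomputed from per-request QoE gains and context lengths) rather than any particular serving system's internal API.

```python
def greedy_pack(lengths, qoe_gains, target_batch_size, kv_capacity):
    """Algorithm 1: greedily pack requests in descending priority q[i]/l[i]
    until either the target batch size B or the KV cache capacity M is hit."""
    n = len(lengths)
    priority = [qoe_gains[i] / lengths[i] for i in range(n)]     # Equation 5
    order = sorted(range(n), key=lambda i: priority[i], reverse=True)
    x = [0] * n            # solution array: x[i] = 1 means "serve request i"
    used_tokens = 0        # M_current
    used_slots = 0         # N_current
    for i in order:
        if used_tokens + lengths[i] <= kv_capacity and used_slots + 1 <= target_batch_size:
            x[i] = 1
            used_tokens += lengths[i]
            used_slots += 1
        else:
            break          # as in line 13 of the pseudocode: stop at the first misfit
    return x
```

As in the pseudocode, packing stops at the first request that does not fit, so the cost is dominated by the sort, matching the O(N log N) complexity noted above.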
Optimization #4: Preemption Cap. We have discussed that preemption is not free and can potentially degrade QoE. However, we can empirically and theoretically show that Andes commonly does not result in excessive preemptions or thrashing that may cause average QoE to degrade. Empirically, Andes consistently maintains an average preemption frequency below 1 per request, even under a high server load (§6.2.3). Theoretically, the number of preemptions needed to optimize the QoE of requests is contingent upon the excess request load. Assume the serving system can handle $r_0$ requests per second and the actual request rate is $k \cdot r_0$ requests per second, where $k \ge 1$. Thus, there would be $(k-1) \cdot r_0$ requests whose QoE might be degraded due to the queuing delay. To mitigate this, we need roughly one preemption to accommodate each of these requests. Sometimes, a single preemption of a long request can allow multiple new requests to be served, which further reduces the number of preemptions needed. Therefore, the average preemption frequency needed is bounded by $k - 1$, which is small as long as the load is not excessively high. Nevertheless, in order to safeguard against thrashing that may happen in the worst-case request pattern, Andes supports setting a cap $P$ on the average number of preemptions a request can experience throughout its lifetime. Too high a $P$ will not be able to act as a safeguard, whereas too small a $P$ will prevent even absolutely necessary preemptions from happening. We find that setting $P = 1$, i.e., a request on average experiences at most one preemption during its lifetime, is a good default (Section 6.5).

5 Implementation

The two core elements of Andes are its QoE-aware scheduler and a client-side token buffer.

Server-Side QoE-Aware Scheduler. Andes's scheduling algorithm can work with any LLM serving system that supports continuous batching and at least one preemption mechanism (swapping or recomputation). We note that an LLM serving system that implements Paged Attention [25] is likely to also support at least one preemption mechanism to prevent the system from running out of memory. As a reference, we implemented Andes's scheduling algorithm on top of vLLM [25]. The scheduler only manages requests coming into the vLLM instance it is integrated with, assuming that cluster-level load balancing and fault tolerance are done separately.

Client-Side Token Buffer. The server sends tokens to the buffer as soon as they are generated, even if they were generated at a pace that exceeds the user's expected TDS. Then, the token buffer smooths out the token delivery timeline to pace tokens at the user's expected TDS. The token buffer can also naturally smooth out some fluctuations in network latency, for instance in crowded mobile networks. The buffer should be implemented appropriately depending on the destination of streaming – e.g., TypeScript for web frontend, Python for API use.

Figure 8. The client-side token buffer holds excess tokens sent from the server to absorb token generation fluctuations and paces token delivery based on the user's expected TDS.

Figure 8 visualizes the token buffer in action. With an initial burst of generation faster than the user's expected TDS, the buffer withholds excess tokens and paces token delivery, thus growing in size. The server is fully aware of the token buffer, and preempts the request to serve other requests. During this time, the buffer drains at a rate that matches the user's expected TDS. Finally, the server brings back the request and starts generating tokens again, and together with the token buffer, perfect QoE is achieved.
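A minimal pacing loop in the spirit of the client-side token buffer is sketched below; the asyncio queue interface, the end-of-stream sentinel, and the polling interval are illustrative assumptions, not the reference implementation described above.

```python
import asyncio

async def paced_display(token_stream: asyncio.Queue, expected_tds: float, display) -> None:
    """Hold excess tokens in a client-side buffer and hand them to `display`
    at the user's expected token delivery speed (tokens/s). A `None` item on
    the queue marks the end of the stream."""
    interval = 1.0 / expected_tds
    buffer = []
    done = False
    while not done or buffer:
        # Pull whatever the server has streamed so far (it may burst ahead).
        try:
            while True:
                item = token_stream.get_nowait()
                if item is None:
                    done = True
                    break
                buffer.append(item)
        except asyncio.QueueEmpty:
            pass
        if buffer:
            display(buffer.pop(0))          # release exactly one token...
            await asyncio.sleep(interval)   # ...then wait out the pacing interval
        else:
            await asyncio.sleep(0.01)       # buffer ran dry; poll for the next token
```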
6 Evaluation

We evaluate the performance of Andes under different workloads. We demonstrate that:
1. Andes improves the average QoE up to 3.2× when the system experiences high/bursty load (§6.2.1).
2. Andes can handle up to 1.6× higher request rates while preserving high QoE without additional resources, significantly reducing the serving cost (§6.2.2).
3. Andes maintains similar token generation throughput as the baseline, with a minor drop (≤10%) in throughput as the request rate increases (§6.2.3).
4. Andes significantly improves TTFT, while maintaining TDS above the user expected speed (§6.3).
5. Andes outperforms the baselines across different workloads (§6.4) and setups (§6.5).

6.1 Experiment Setup

Model and Server Configurations. Following state-of-the-art LLM serving systems [25], we evaluate Andes using the OPT [51] series with 13B, 30B, 66B, and 175B parameters, with the 175B model employing INT8 quantization. We run all experiments on NVIDIA A100 GPUs in Chameleon Cloud [22], and use tensor parallelism to deploy the models, using the default configuration in vLLM [25]. We use swap as the preemption mechanism and set the CPU swap space to 240 GB in total. Detailed hardware specifications are provided in Table 3.

Table 3. OPT model family and GPU specifications used.
Model size:    13B     30B      66B      175B
GPUs:          A100    4×A100   4×A100   4×A100
GPU Memory:    80 GB   320 GB   320 GB   320 GB
Precision:     FP16    FP16     FP16     8-bit [14]
Model Memory:  26 GB   60 GB    132 GB   180 GB

Workloads. We experiment on ShareGPT [45], a dataset that gathers conversations shared by users with ChatGPT [35], including multiple rounds of input prompt and output response. By concatenating multiple rounds of conversations into one input while limiting its length to 1k tokens to fit the model's maximum context length, and setting the final response as the output, we create the Multi-Round ShareGPT dataset for longer conversations. As shown in Figure 9, Multi-Round ShareGPT has about 3× longer input than ShareGPT, while both datasets have similar output length distributions.

Figure 9. Input and output length distributions of datasets: (a) ShareGPT (input mean 174.55, output mean 314.22 tokens); (b) Multi-Round ShareGPT (input mean 624.22, output mean 365.52 tokens).

We generate request arrival traces using a Poisson distribution with different arrival rates. The request's QoE requirement trace is created with different expected TTFT and TDS. TTFT is set to 1 second for all, while TDS is based on user reading speeds (Table 1) and is translated from words to tokens using the average word-to-token ratio for ChatGPT [38]. In real applications, QoE requirements should be set depending on the application's specific use case. For instance, reading speed (and thus expected TDS) may be measured using screen scrolling [18] or eye-tracking [3, 34]. Another potential use case is to introduce API price tiering, where a higher per-token price provides faster TDS, and API users can select the tier suitable for downstream digestion.
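The arrival and QoE traces described above can be reproduced with a small generator along the following lines; the 0.75 words-per-token constant and the sampled reading-speed list stand in for the exact conversion ratio and Table 1 values, which are not reproduced here.

```python
import numpy as np

def make_trace(num_requests, rate_rps, reading_speeds_wpm, words_per_token=0.75, seed=0):
    """Poisson arrivals at `rate_rps` req/s plus a per-request QoE spec:
    TTFT fixed at 1 s, expected TDS converted from a sampled reading speed."""
    rng = np.random.default_rng(seed)
    interarrivals = rng.exponential(1.0 / rate_rps, size=num_requests)
    arrival_times = np.cumsum(interarrivals)
    trace = []
    for t in arrival_times:
        wpm = rng.choice(reading_speeds_wpm)            # words per minute
        expected_tds = (wpm / 60.0) / words_per_token   # tokens per second
        trace.append({"arrival": float(t), "ttft": 1.0, "tds": float(expected_tds)})
    return trace
```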
Baselines. We compare Andes with vLLM (version 0.2.7). vLLM uses a first-come-first-serve (FCFS) scheduling policy by default. We implement another scheduling policy, Round-Robin (RR), atop vLLM for a more informed comparison, which is designed to guarantee equal service to requests through cyclic request preemption. For RR, we set the service interval to 50 inference iterations, maximizing its QoE performance.

Metrics. We focus on the following metrics in evaluations:
• Average QoE: We set the threshold to 0.9 as the minimum acceptable average QoE. A QoE of 0.9 corresponds to a 5% delay in TTFT, a 10% slowdown in TDS, or something in the middle.
• System capacity: It measures the maximum request rate that the system can handle while maintaining an average QoE above the threshold.
• System throughput: It measures how many tokens the system generates per second.
We also report normalized latency, which is used by vLLM [25] and Orca [50], in Appendix E.

6.2 End-to-End Experiments

In this section, we report the performance of Andes in terms of average QoE (§6.2.1), system capacity (§6.2.2), and system throughput (§6.2.3) under different setups.

6.2.1 Improvement on Average QoE. We evaluate the performance of Andes on all four models and two datasets. Figure 10 and Figure 11 show the results on the ShareGPT dataset and the Multi-Round ShareGPT dataset, respectively. As the request rate increases, Andes maintains a high average QoE, outperforming the baseline whose average QoE sharply decreases. In other words, Andes can serve more concurrent requests without compromising user experience. For the ShareGPT dataset, Andes increases average QoE up to 3.1× at the same request rate, while maintaining an average QoE of 0.9, all with the same resources. For the Multi-Round ShareGPT dataset, Andes improves average QoE up to 3.2×. For the OPT-30B model, the improvement is less significant, as the model is less resource-constrained when compared to the OPT-66B model.

Figure 10. Average QoE for different request rates using the ShareGPT dataset ((a) OPT-13B, (b) OPT-30B, (c) OPT-66B, (d) OPT-175B).

Figure 11. Average QoE for different request rates using the Multi-Round ShareGPT dataset ((a) OPT-13B, (b) OPT-30B, (c) OPT-66B, (d) OPT-175B).

These improvements can be attributed to Andes's QoE-aware scheduling policy, which dynamically prioritizes resources for urgent requests that risk falling below their expected QoE, preempting those that have been sufficiently served. In contrast, under higher load, the traditional FCFS scheduling policy suffers from head-of-line blocking, leading to significant queuing delay. Although the RR policy mitigates head-of-line blocking by preemptions, frequent preemptions introduce significant overhead and degrade the average QoE.

6.2.2 Improvement on Server Capacity. As shown in Figures 10 and 11, the horizontal dotted lines represent the average QoE threshold of 0.9. For the ShareGPT dataset, Andes can manage a 1.2×–1.6× higher request rate than vLLM while maintaining an average QoE above the threshold.
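System capacity here is measured exactly as defined in the metrics above: sweep the offered request rate and keep the largest rate whose average QoE stays at or above 0.9. A small harness sketch, with `run_experiment` as an assumed stand-in for an actual serving run that returns per-request QoE values:

```python
def system_capacity(run_experiment, rates, qoe_threshold=0.9):
    """Return the highest request rate whose resulting average QoE stays at or
    above the threshold. `run_experiment(rate)` is assumed to run the serving
    system at that rate and return a list of per-request QoE values."""
    best = None
    for rate in sorted(rates):
        qoes = run_experiment(rate)
        if sum(qoes) / len(qoes) >= qoe_threshold:
            best = rate
        else:
            break   # average QoE decreases with load, so stop at the first failure
    return best
```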
Specifically, for the OPT-66B model, Andes can handle a 1.25× higher request rate than vLLM, nearing the 1.38× theoretical improvement suggested in Section 2.3, showcasing Andes's ability to optimize resource allocation and average QoE effectively. For the Multi-Round ShareGPT dataset, Andes can serve a 1.1×–1.3× higher request rate. Additionally, by serving higher request rates with the same resources, Andes effectively reduces the resource cost per request.

6.2.3 Impact of Andes on System Throughput. We report the token generation throughput and the preemption frequency of Andes on OPT-66B with both datasets, as shown in Figure 12 and Figure 13. In both datasets, Andes maintains the same token throughput as vLLM when the request rate is moderate, and experiences a minor drop (≤10%) in throughput as the request rate increases. This demonstrates that Andes marginally impacts system throughput.

Figure 12. Token generation throughput with OPT-66B under different request arrival rates ((a) ShareGPT, (b) Multi-Round ShareGPT).

The throughput decrease can be attributed to the overheads introduced by request preemption. Despite the active request scheduling, the frequency of preemptions per request remains low (≤0.5) under reasonable average QoE, as shown in Figure 13, minimizing the impact of overheads on throughput. Despite the minor decrease in throughput, the up to 60% improvement in server capacity offered by Andes can compensate for this, effectively reducing the resource cost per request while maintaining a satisfactory user experience.

Figure 13. Preemption frequency with OPT-66B under different request arrival rates ((a) ShareGPT, (b) Multi-Round ShareGPT).

6.3 Breakdown Analysis

To understand Andes's performance in detail, we conducted a breakdown analysis focusing on QoE, time to first token (TTFT), and token delivery speed (TDS), as shown in Table 4. We report Andes's performance on OPT-66B and the ShareGPT dataset with a request rate of 3.3, where Andes achieved an average QoE of 0.92. With these breakdown analyses, we can provide granular insights into individual user satisfaction under this level of QoE.

Table 4. Andes significantly improves QoE and TTFT, while maintaining TDS above the user expected speed.
Metric           Percentile   vLLM     Andes
QoE              10th         0.05     0.77
                 50th         0.39     1.00
                 90th         1.00     1.00
TTFT (s)         10th         0.33     0.35
                 50th         56.73    0.47
                 90th         144.95   0.66
TDS (tokens/s)   10th         6.05     5.32
                 50th         6.45     5.44
                 90th         7.84     7.02

QoE distribution. Andes significantly improves the lower and median user experiences, with the 10th percentile rising from 0.05 to 0.77 and the 50th percentile achieving a perfect score of 1, compared to 0.39 in vLLM. In order to understand how Andes handles requests with different request lengths, we present a scatter plot of QoE across different total lengths, as shown in Figure 14. We observe that Andes slightly starves a small fraction of longer requests, as they consume more resources or take a longer time to complete. In contrast, FCFS starves many shorter requests that are blocked by longer requests.

Token delivery timeline.
Andes greatly enhances initial responsiveness, reducing median TTFT from 56.73 seconds in vLLM to just 0.47 seconds, and similarly improving the 90th percentile from 144.95 seconds to 0.66 seconds. This improved performance is attributed to Andes's QoE-aware scheduling, which effectively mitigates head-of-line blocking and reduces queuing delays. Additionally, we analyze the percentile distribution of the average TDS observed by users, excluding TTFT. While Andes slightly slows the average TDS, it remains above the user's expected speed, ensuring balanced delivery that neither overwhelms nor starves users.

Figure 14. QoE distribution across different total lengths ((a) vLLM, (b) Andes).

6.4 Robustness to Diverse Workloads

We evaluate the robustness of Andes under diverse settings including different hardware, arrival patterns, and QoE traces. We observed similar trends in diverse settings; therefore, we report our results with OPT-66B and ShareGPT.

Hardware. We evaluate Andes on the NVIDIA A40 GPU with 46 GB RAM, as shown in Figure 15a. Andes improves average QoE up to 7× under a higher request rate and serves a 1.1× higher request rate while maintaining an average QoE of 0.9. The reason for the smaller improvement on server capacity is that the A40 has a lower computational capability than the A100, leading to a slower average token generation speed. Consequently, the gap between the expected TDS and actual TDS on the A40 is smaller than on the A100, providing less opportunity for request scheduling and improving average QoE. However, as newer generations of GPUs are becoming more powerful in terms of computational capability, the potential improvement of Andes will be more significant.

Bursty Arrival Process. We use a Gamma arrival process with the same request rate and a coefficient of variation of 3 to simulate the bursty arrival of user requests. Figure 15b indicates that under a bursty workload, the average QoE for the FCFS policy begins to decrease at a lower request rate compared to the Poisson arrival, due to increased queuing delays. In contrast, Andes sustains a high average QoE, achieving up to a 2.7× improvement on average QoE at the same request rate and serving a 1.3× higher request rate, showing Andes's adaptability to bursty workloads.

Different QoE Traces. Due to the unique QoE requirements of different applications, we evaluate Andes's performance under a voice chat QoE trace, with expected TTFT at 1 second and slower expected TDS adjusted according to the speaking speed outlined in Table 2. As shown in Figure 15c, both Andes and the baseline achieve better average QoE even at higher request rates, attributed to the less strict TDS requirements. Nevertheless, Andes improves average QoE up to 1.25× and manages a 2× request rate, which approaches the theoretical maximum improvement of 2× as discussed in Section 2.3.

Figure 15. Robustness analysis on OPT-66B with ShareGPT dataset ((a) NVIDIA A40, (b) bursty request arrival, (c) voice chat QoE trace).
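The bursty trace above can be generated by replacing exponential interarrival times with Gamma-distributed ones of the same mean; a coefficient of variation of 3 corresponds to a Gamma shape of 1/9. A short sketch, assuming numpy:

```python
import numpy as np

def bursty_interarrivals(num_requests, rate_rps, cv=3.0, seed=0):
    """Gamma interarrival times with mean 1/rate and coefficient of variation `cv`.
    Shape k = 1/cv^2 and scale = mean/k keep the average request rate unchanged."""
    rng = np.random.default_rng(seed)
    mean = 1.0 / rate_rps
    k = 1.0 / (cv ** 2)
    return rng.gamma(shape=k, scale=mean / k, size=num_requests)
```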
6.5 Sensitivity Analysis

All experiments in the sensitivity analysis are conducted on OPT-66B with the ShareGPT dataset and a request rate of 3.3.

Figure 16. Tuning preemption frequency cap P ((a) average QoE, (b) throughput).
Figure 17. Tuning Δt.
Figure 18. Different solvers.

Preemption Frequency Cap P. Increasing the preemption frequency cap P can lead to finer-grained scheduling, potentially enhancing average QoE, but at the cost of increased overhead and reduced throughput. Figure 16a shows the average QoE under different P. Improvements in QoE are observed as P increases up to 0.4 preemptions per request, stabilizing beyond this point. Conversely, Figure 16b illustrates a slight decrease in system throughput with increased P, stabilizing beyond 0.4 preemptions per request. These observations suggest a trade-off between average QoE and system throughput, indicating that the current setting of P nearly optimizes QoE while maintaining satisfactory throughput.

Prediction Timeframe Δt. We evaluate how different Δt influences average QoE to understand its effect on system performance. Figure 17 illustrates that the average QoE remains roughly consistent for Δt values greater than 50, and significantly outperforms the baselines, indicating that Andes is not sensitive to the setting of Δt.

Different Knapsack Solutions. We compare the performance of Andes with different knapsack solutions: greedy and dynamic programming (DP). Figure 18 shows that the greedy solution consistently surpasses the DP solution, while both solutions outperform the baselines. The lower performance of the DP is due to its substantial computational overhead, which delays the inference process and degrades the average QoE. This suggests that the greedy approach is a more practical and efficient solution for Andes.

7 Related Work

General Model Serving Systems. A variety of model serving systems have emerged, ranging from general-purpose, production-level frameworks like TensorFlow Serving [33] and NVIDIA Triton [31] to specialized systems such as Clipper [11], which sets application-level SLOs. Recent systems including Nexus [42], DeepRecSys [17], Clockwork [16], INFaaS [40], SuperServe [24], and AlpaServe [26] have introduced features like serving pipelines, hardware platform diversity, advanced scheduling, dynamic model selection, and model parallelism to boost resource efficiency. However, these general systems neglect the unique characteristics of LLM inference, leaving potential avenues for optimization.

LLM Serving Systems. Numerous model serving systems have been proposed to address the unique challenges of LLMs. Orca [50] introduced an iteration-level scheduling policy to enhance the throughput of batching inference, and vLLM [25] developed PagedAttention to reduce the memory usage of LLMs. Splitwise [37], DistServe [52], TetriInfer [19], and Sarathi-Serve [1, 2] optimize the computation of the prefill and decode phases through disaggregating or merging them. Some other systems focus on GPU kernel optimization and kernel fusion [5, 12, 32], model parallelism [5, 39], batching algorithms [13, 43, 50], KV-cache management [27, 28, 44], and parameter sharing [53].
However, these systems focus on optimizing aggregated server-side performance and simply adopt an FCFS scheduling policy, which fails to address the queuing delay problem under higher request load. Finally, shortest remaining processing time [41] is a preemptive scheduling policy, but it does not consider the QoE of individual requests and requires knowledge of the response length of requests. To the best of our knowledge, Andes is the first to define and optimize the QoE of text streaming services.

Video Streaming and QoE. The concept of text streaming draws inspiration from video streaming but encounters unique challenges and has a different QoE definition. While video streaming services are primarily limited by network bandwidth and latency [7], text streaming services are mainly constrained by computational resources [48]. Additionally, the QoE in video streaming is often measured by metrics like buffering ratio, resolution stability, and playback smoothness [7], while the QoE in text streaming primarily considers the token delivery timeline (TDT).
Jiachen Liu, Zhiyu Wu, Jae-Won Chung, Fan Lai, Myungjin Lee, Mosharaf Chowdhury
Original Paper
[ "cs.DC", "cs.LG" ]
http://arxiv.org/abs/2404.16294v1
2024-04-25 00:00:00
LLM-Based Section Identifiers Excel on Open Source but Stumble in Real World Applications
Electronic health records (EHR) even though a boon for healthcare practitioners, are growing convoluted and longer every day. Sifting around these lengthy EHRs is taxing and becomes a cumbersome part of physician-patient interaction. Several approaches have been proposed to help alleviate this prevalent issue either via summarization or sectioning, however, only a few approaches have truly been helpful in the past. With the rise of automated methods, machine learning (ML) has shown promise in solving the task of identifying relevant sections in EHR. However, most ML methods rely on labeled data which is difficult to get in healthcare. Large language models (LLMs) on the other hand, have performed impressive feats in natural language processing (NLP), that too in a zero-shot manner, i.e. without any labeled data. To that end, we propose using LLMs to identify relevant section headers. We find that GPT-4 can effectively solve the task on both zero and few-shot settings as well as segment dramatically better than state-of-the-art methods. Additionally, we also annotate a much harder real world dataset and find that GPT-4 struggles to perform well, alluding to further research and harder benchmarks.
Electronic health records (EHR) even though a boon for healthcare practitioners, are growing convoluted and longer every day. Sifting around these lengthy EHRs is taxing and becomes a cumbersome part of physician-patient interaction. Several approaches have been proposed to help alleviate this prevalent issue either via summarization or sectioning, however, only a few approaches have truly been helpful in the past. With the rise of automated methods, machine learning (ML) has shown promise in solving the task of identifying relevant sections in EHR. However, most ML methods rely on labeled data which is difficult to get in healthcare. Large language models (LLMs) on the other hand, have performed impressive feats in natural language processing (NLP), that too in a zero-shot manner, i.e. without any labeled data. To that end, we propose using LLMs to identify relevant section headers. We find that GPT-4 can effectively solve the task on both zero and few-shot settings as well as segment dramatically better than state-of-the-art methods. Additionally, we also annotate a much harder real world dataset and find that GPT-4 struggles to perform well, alluding to further research and harder benchmarks.
cs.CL
LLM Fairness
2024-04-25 00:00:00
Introduction Modern day healthcare systems are increasingly moving towards large scale adoption of maintaining electronic health records (EHR) of patients (Congress, 2009). EHRs help healthcare practitioners with relevant information about a patient such as history, medications, etc. However, in recent times this practice has led to very long and convoluted EHRs (Rule et al., 2021). Naturally, the need for better information retrieval tools emerged due to the progressively lengthy and unstructured doctor notes. One such need is the accurate identification of sections in an EHR, pertinent to a physicianโ€™s inquiry. For instance, a question like โ€œWhat Figure 1: Sample real world obscure image of an outpatient paper-based patient encounter form comprising of numerous sections (Hersh and Hoyt, 2018). treatments has the patient undergone in the past?โ€ concerning prior treatments administered to a patient necessitates the swift extraction of information from the โ€œtreatmentsโ€ and โ€œpast medical historyโ€ sections, while excluding sections related to โ€œancestral medical historyโ€. This swift extraction is vital for timely decision-making in patient care. Additionally, during critical procedures such as the evaluation of medical necessity for prior authorization requests, it is customary for experienced clinicians to locate vital data within specific sections. An illustrative case entails examining the โ€œphysical examโ€ section to identify particular findings, such as signs of neurological disorders or movementassociated pain, indicating the need for additional diagnostic tests. The timely identification of such information is of utmost importance in ensuring the provision of appropriate care and reducing the risk of potential complications. arXiv:2404.16294v1 [cs.CL] 25 Apr 2024 In general, regions found in EHR would often have a section heading preceding the body of the section, as can be seen in example Table 1. Even though these section types have limited cardinality, however, more often than not, physicians would fail to adhere to standards and use lexical variations generated on the fly. Moreover, practitioners not only will generate lexical variations of sections on the fly but also completely new sections altogether for valid reasons like imaging reports, etc. Apart from these variations, oftentimes there would be no headers at all, even though the information present could ideally be part of a pre-existing section in a document or a new section altogether. While studies like Gao et al. (2022) utilize the Subjective, Objective, Assessment and Plan heading (SOAP) framework, real-world clinical notes often contain sections beyond these categories. This limitation is further emphasized in Landes et al. (2022), warranting further investigation and analysis. The aforementioned factors have consequently contributed to the establishment of Section Identification (SI) as a distinct and enduring problem within the academic discourse (McKnight and Srinivasan, 2003), making it an indispensable component of any clinical natural language processing (NLP) pipeline. A SI task entails finding regions of text that are semantically related to an aspect of a patientโ€™s medical profile. More importantly, it helps to improve pre-existing information retrieval systems by enabling them to be more targeted and specific. 
Lastly, in light of recent findings of the negative impact of note bloat within EHRs on even the most sophisticated systems (Liu et al., 2022), using SI to shorten or create from EHR, a sub-EHR specific to a given task would prove to be a worthwhile effort for humans and machines both. Because finding sections and hence their corresponding headers involves inherent variability, machine learning (ML) methods have played an important role in this natural language processing (Pomares-Quimbaya et al., 2019). ML has increasingly been shown to be efficient in finding relevant sections within a document, however, a key drawback of traditional ML methods has been the dependence on labeled data (Tepper et al., 2012). Reliance on annotated data for training ML models to be able to predict the beginning and end of section headers has stalled the field from fully solving the task. The emergence of large language models (LLMs) in contemporary research presents a promising avenue to overcome the limitations inherent in traditional machine learning approaches, thereby expanding the scope of their applications. LLMs have emerged as the de-facto system for NLP in scenarios where data is scarce (OpenAI, 2023). The key distinction between traditional Machine Learning (ML) models and Large Language Models (LLMs) lies in their ability to understand tasks in natural language. While traditional ML models require labeled data for training, LLMs can leverage pre-training on vast amounts of unstructured text data, enabling them to perform tasks with minimal task-specific fine-tuning. This makes ML possible in an unsupervised manner (no need for labeled data) and therefore opens room for applications in domains where annotated data is hard to acquire like healthcare. While LLMs have been evaluated on a wide array of NLP tasks in healthcare (Nori et al., 2023), they are yet to be evaluated on their effectiveness in segmenting a document into semantically relevant sections. In this work, we address this gap and evaluate the efficacy of our approach on a widely-known datasets in the clinical medical domain. Findings show that GPT-4 (OpenAI, 2023) almost solved the section identification problem on the benchmark open-sourced dataset, however, on a private dataset the performance lags. Our contributions are threefold, listed as follows: 1. We show that GPT-4 can generate zero-shot headings of records with very high accuracy. 2. Contrary to the above, we find that its performance drops on internal real-world datasets. 3. An ontology of numerous section headers seen in real world EHR systems is shared which has much higher coverage. 2 Related Work Traditionally, SI task has been done using a pre-defined dictionary of plausible candidates. Pomares-Quimbaya et al. (2019) performed a comprehensive survey and found that rule-based methods still dominated the array of methods proposed while ML systems increasingly achieved better coverage when combined in a hybrid manner with rulebased methods. McKnight and Srinivasan (2003) later on extracted bag-of-words from MedLINE abstracts and used a support vector machine to train a classifier to categorize sentences into either Introduction, Method, Result, or Conclusion, demonstrating promising results. Similarly, Hirohata et al. Allergies Allergies: Patient recorded as having No Known Allergies to Drugs... 
History of Present Illness HPI: 61M w/ incidental L renal mass found during W/U for brachytherapy for low-grade [**Last Name (STitle) **], now w/ gradually worsening gross hematuria for the past several days. Labs Imaging Pertinent Results: [**2160-4-10**] 07:30AM BLOOD WBC-12.6* RBC-3.20* Hgb-8.2* Hct-24.5* MCV-77* MCH-25.6* MCHC-33.4 RDW-17.1* Plt Ct-438. Hospital Course Brief Hospital Course: 61M w/ low-grade [**Month/Day/Year **] awaiting brachytherapy and locallyadvanced L renal mass w/ collecting system invasion, renal vein thrombus, and likely metastases, presented w/gradually worsening gross hematuria. Table 1: This figure illustrates a sample data point from the MIMIC-III database, highlighting the sections annotated with MedSecID corpus. (2008) achieved very high accuracy by using conditional random fields to label scientific abstracts into Objectives, Methods, Results, and
Saranya Krishnamoorthy, Ayush Singh, Shabnam Tafreshi
Original Paper
[ "cs.CL", "cs.AI" ]
http://arxiv.org/abs/2404.16297v1
2024-04-25 00:00:00
When Fuzzing Meets LLMs: Challenges and Opportunities
Fuzzing, a widely-used technique for bug detection, has seen advancements through Large Language Models (LLMs). Despite their potential, LLMs face specific challenges in fuzzing. In this paper, we identified five major challenges of LLM-assisted fuzzing. To support our findings, we revisited the most recent papers from top-tier conferences, confirming that these challenges are widespread. As a remedy, we propose some actionable recommendations to help improve applying LLM in Fuzzing and conduct preliminary evaluations on DBMS fuzzing. The results demonstrate that our recommendations effectively address the identified challenges.
Fuzzing, a widely-used technique for bug detection, has seen advancements through Large Language Models (LLMs). Despite their potential, LLMs face specific challenges in fuzzing. In this paper, we identified five major challenges of LLM-assisted fuzzing. To support our findings, we revisited the most recent papers from top-tier conferences, confirming that these challenges are widespread. As a remedy, we propose some actionable recommendations to help improve applying LLM in Fuzzing and conduct preliminary evaluations on DBMS fuzzing. The results demonstrate that our recommendations effectively address the identified challenges.
cs.SE
LLM Fairness
2024-04-25 00:00:00
INTRODUCTION Fuzzing is a promising technique for software bug detection [8, 26]. Large Language Models (LLMs) are rapidly gaining popularity across various applications for their versatility and capability [14, 15]. From natural language processing [7, 22, 27] to code generation [19, 24], the broad utility of LLMs is making them a prominent and sought-after solution in diverse domains. This development has naturally influenced fuzzing research: to help improve fuzzing effectiveness, LLMs have now become one of the key enablers assisting the core processes of fuzzing, including driver synthesis [28, 39], input generation [9, 10], and bug detection [11, 17]. While excelling in natural language analysis, LLMs encounter some common pitfalls like limited context length [20] and hallucination problems [16, 23, 31]. Consequently, LLMs exhibit limitations in complex program analysis. These pitfalls affect the effectiveness of fuzzing, leading to testing performance degradation that manifests as high false positives, low test coverage, and limited scalability. In this paper, we identify five common challenges when using LLM-based fuzzing technology: 1) Firstly, they often produce low-quality outputs in fuzzing driver synthesis, lacking the precision required for effective bug detection. 2) Secondly, these models demonstrate a limited scope in their understanding and processing capabilities, constraining their utility in diverse fuzzing scenarios. 3) Thirdly, LLMs struggle with generating sufficiently diverse inputs during the fuzzing process, which is critical for thorough and effective bug detection. 4) Fourthly, they face challenges in maintaining the validity of generated inputs, a crucial factor for accurate and reliable fuzzing. 5) Lastly, LLMs' inaccurate understanding of bug detection mechanisms hinders their ability to identify and address complex software vulnerabilities effectively, thereby limiting their overall effectiveness in the fuzzing process. We performed a comprehensive survey and revisited the most recent fuzzing works that rely on LLMs for tackling different problems in the fuzzing process. To our surprise, the results show that each work encounters at least one of these challenges. Although LLMs are widespread, it is important to avoid their weaknesses while taking advantage of their strengths. To this end, we perform an impact analysis of the implications in three key fuzzing steps. These findings inspire us with some opportunities for better usage of LLMs in each fuzzing step, according to whether the corresponding corpus and documentation are rich.
Furthermore, we performed some preliminary evaluations according to these opportunities by applying LLM in fuzzing database management systems(DBMS). The results demonstrate that the reasonable instantiation of those recommendations can overcome the challenges in LLM-assisted DBMS fuzzing. 2 CHALLENGES AND OPPORTUNITIES Despite that LLM have achieved great success, the application of LLM in fuzzing is often prone to several problems, ranging from deduction accuracy to adapt scalability. Overlooking these issues may result in poor seed quality or omitting critical bugs, leading to a limited fuzzing performance. In this section, we summarize the five challenges that commonly occur when applying LLM in fuzzing. While these challenges might initially appear straightforward, they usually stem from small shortcomings that are typical in fuzzing. We group these challenges with respect to the states of a typical fuzzing workflow, as depicted in Figure 1. Limited Training Corpus Limited Long-text Understanding Hallucination C3.1: Inaccurate Understanding C2.1: Insufficient Diversity C1.1: Prone to Error C2.2: Limited Validity C1.2: Limited Scope Bug Detection Input Generation Driver Synthesis Target Program Prompt Bug Report Challenges Fuzzing Loop Large Language Model Figure 1: Fuzzing Workflow with LLM enhanced. 2.1 Driver Synthesis Description. Recently, several pioneer works have been proposed to utilize LLMs to enhance driver synthesis [11, 12, 28, 38, 39]. 1Remark: The purpose of this work is not to point fingers or critique. Instead, it wants to show how we can overcome the challenges of LLM-assisted fuzzing and effectively leverage the advantages of LLMs and make it truly beneficial for the fuzzing process. arXiv:2404.16297v1 [cs.SE] 25 Apr 2024 FSEโ€™24, July 2024, Porto de Galinhas, Brazil Jiang et al. Their basic idea is to use API documentation as the prompt context, and then ask LLMs to generate API invoking sequences as fuzzing drivers. For example, both TitanFuzz [11] and PromptFuzz [28] design customized prompt templates to guide LLMs in generating code that follows programming syntax and semantics. Challenges. The application of LLMs to driver synthesis can be ineffective if done directly, as LLMs have a tendency to produce hallucinations [7, 20] and perform less effectively on programs that are not included in their training corpus [20]. These limitations present two challenges for driver synthesis. The first one is that the synthesized drivers are prone to error, leading to a non-negligible number of false positives during fuzzing. For example, according to comprehensive evaluation on LLM-based driver synthesis for OSS-Fuzz projects [39], GPT-4 can correctly generate roughly 40% drivers, while the rest of the drivers contain errors. Among the erroneous drivers, 93% exhibit one or more of the following issues: type errors, mis-initialized function arguments, usage of non-existing identifiers, and imprecise control-flow dependencies. This occurrence primarily arises due to LLMs relying on pre-trained knowledge for driver synthesis, leading to the production of hallucinations [16]. The second challenge is that the application of directly using LLMs for driver synthesis has limited scope because LLMs have limited knowledge on unseen programs. For those target programs, LLMs sometimes use training knowledge to fill the gap, thus generating incorrect API invoking sequences. For example, developers from Googleโ€™s OSS-Fuzz project [35] attempted to leverage LLMs to synthesize drivers. 
Out of 31 tested OSS-Fuzz projects, 14 successfully compiled new targets and increased coverage with the synthesized drivers. The drivers unsuccessfully synthesized by LLMs typically originated from less common projects like krb5 and rtpproxy. In contrast, LLMs are more likely to generate compilable and effective drivers for more common projects, such as tinyxml2 and cjson. Recommendations. We have the following recommendations: REC 1.1 Some targets whose code or use cases have been included in the training corpus. For these cases, employing LLM for automated synthesis of fuzz drivers, complemented by error-guided corrective measures, is a practical approach. Iteratively querying the LLM based on identified errors and fixing the errors are practical measures [39], which helps to address the prone-to-error challenge. For example, libpng is a common library and has already been seen by GPT4 in its training process. Consequently, it is possible to directly ask GPT4 to generate a fuzz testing driver for libpng by giving the prompt โ€œGenerating LLVMFuzzerTestOneInput for test libpng.โ€ However, the generated driver might still contain errors in grammar or encounter issues during the process of compiling and linking. Test engineers can subsequently submit individual LLM queries containing the error messages to rectify these issues, occasionally necessitating multiple iterations. REC 1.2 For targets without a dedicated corpus in training, one can collect valuable materials such as function prototypes, example programs, or connection rules between functions. Conducting prompt engineering which involves embedding these materials, enhances the precision in generating logical sequences of function calls for the creation of drivers. The prompt engineering approach is a practical solution to tackle the challenge of limited scope. For example, typst is a new markup-based typesetting system like LaTex and claims it is more easier to learn and use. To generate a fuzz driver for it, feed the prompt โ€œGenerate LLVMFuzzerTestOneInput for typstโ€ to ChatGPT-3.5 will encounter hallucination problems and generate a completely non-existent driver. Instead, the project typst has lots of documents and unit tests. Feeding these materials that illustrate the usage of the functions is helpful for LLMs to generate effective drivers [35]. Additionally, it is also feasible to iteratively query LLMs to address any errors that may be present in the drivers. REC 1.3 Sometimes, even with adequate documentation and examples, LLMs can still encounter challenges in generating valid drivers at times, especially for extremely complex targets like Linux kernel. These systems frequently involve intricate dependencies among their APIs, or there exist implicit dependencies among lowerlevel systems that pose challenges for LLM to capture. For these targets, it is advisable to refrain from relying on LLMs. Instead, it is more practical and feasible to explore conventional methods. For example, KSG [33] uses the ebpf to dynamically infer the kernelโ€™s system call argument type and value constraints. In contrast, LLM-based approaches such as KernelGPT [38] use static inference based on kernel man pages and source code. But they may find some complex dummy operations. And itโ€™s hard for them to deduct pointer references. Therefore, KSG can generate 2,433 Syzlang, which is 17.86ร— more compared to KernelGPT [38]. 2.2 Input Generation Description. 
Recently, several pioneer works [5, 34, 36, 37] have been proposed to utilize LLM to enhance input generation. Their basic idea is to use input specifications and input examples as the prompt context and then ask LLMs to generate new inputs. For example, LLMFuzzer [5] feeds input specifications to LLMs to generate initial seeds for mutation-based fuzzers. Challenges. The application of LLMs to input generation can be ineffective if done directly, as LLMs heavily rely on training corpus and have limited long-text understanding [20, 32]. These limitations present two challenges for input generation. The first one is that the generated inputs have insufficient diversity, leading to inefficient exploration of the input space. This is because LLMs are pre-trained models and prone to responding to usersโ€™ queries in a similar manner when given the same prompt context. Therefore, it is difficult for LLMs to generate diverse inputs if they only provide limited information. For example, ChatAFL [29] demonstrates a significant limitation when directly applying LLMs to the RTPS protocol fuzzing. If only a limited amount of protocol information is provided in the prompts, LLMs can only generate inputs that cover 4 states out of 10 states that the RTPS protocol supported. This results in a substantial portion of the RTSP state remaining unexplored. The second challenge is that the generated inputs often have limited validity, leading to early termination when the target program executes these inputs. This is because LLMs cannot fully understand the long texts of input formats or examples due to limited ability on long text processing [32]. For example, Border Gateway Protocol (BGP) is a complex protocol, whose document (BGP RFC 9952) has more than 28,000 words to describe its functionalities. When generating inputs of BGP based on the RFC description, LLMs usually forget to generate the length field of the TLV substructures in the BGP message because the description of the main message structure and the TLV substructures are a little far, making LLMs hard to totally understand BGP format. Recommendations. We have the following recommendations: REC 2.1 Some of the testing inputs to the system are common and have a large number of examples on the web, and they have When Fuzzing Meets LLMs: Challenges and Opportunities FSEโ€™24, July 2024, Porto de Galinhas, Brazil been included in the LLMโ€™s training corpus. It is possible to directly employ LLM to generate test cases for them, combining methodologies focused on diversification. These methods encompass internal approaches, such as meticulously crafted prompts that demand using diverse features, as well as external methods, such as coverageguided genetic algorithms. They both contribute to address the challenge of insufficient diversity. For instance, when testing common text protocols such as HTTP and FTP, where LLM excels in its support for text-based languages, it is feasible to directly instruct LLM to generate test cases for these protocols. To increase diversity, for internal approaches, we can use prompts that encourage LLM to generate HTTP files with various methods (e.g., GET, POST, PUT), different headers, different query parameters, URL structures, various payloads, and other aspects. We can also interactively ask LLM to cover more types of messages [29]. For external approaches, we can utilize coverageguided generation used in conventional fuzzing along with more real-world examples to enhance LLM. 
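One way to combine the internal (prompt-side) and external (coverage-guided) approaches of REC 2.1 is a feedback loop around the LLM. In the sketch below, `query_llm` and `run_target` are assumed helpers (an LLM API call and a coverage-instrumented execution harness); they are not components of any of the cited tools.

```python
def llm_coverage_guided_fuzz(query_llm, run_target, seed_examples, iterations=100):
    """Ask the LLM for varied HTTP requests, keep only those that reach new
    branches, and feed the kept ones back as examples to steer diversity."""
    corpus = list(seed_examples)
    covered = set()
    for _ in range(iterations):
        prompt = (
            "Generate one HTTP/1.1 request that differs from the examples below "
            "in method, headers, query parameters, and payload.\n\n"
            + "\n---\n".join(corpus[-3:])
        )
        candidate = query_llm(prompt)
        new_branches = run_target(candidate)          # set of branch ids hit
        if new_branches - covered:                    # external, coverage-based feedback
            covered |= new_branches
            corpus.append(candidate)                  # reuse as a future example
    return corpus, covered
```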
REC 2.2 In many cases, the LLM is not trained with a dedicated training corpus specifically tailored for the test subjects. Rather than employing LLM directly for generating the final test cases, we suggest utilizing LLM to transform well-known knowledge to formulate the input specifications or build initial test cases. The input specification helps address the challenge of limited validity, and the initial test cases help address the challenge of insufficient diversity. For instance, in the case of protocol implementations lacking machine-readable grammar, generating valid test inputs automatically to adhere to the necessary structure and order becomes challenging. In such scenarios, leveraging that LLM has been trained on established protocols, allows the transfer of grammars from these protocols with the assistance of LLM and recorded message sequences. The grammar can enhance the validity of the generated test cases. With the grammar, conventional grammar-based fuzzers could be utilized to generate more test cases [29]. Another instance is transforming test cases of popular database systems to initial seeds for the tested database system. The SQL queries of popular database systems like PostgreSQL have rich diversity and they have already been trained for LLM. Therefore, leveraging the knowledge of LLM to transform them into the format of the target database system is feasible. Providing them to the fuzzer as the initial seed helps enhance the diversity of generated test cases. 2.3 Bug Detection Description. Recently, several pioneer works [21, 25] utilize LLM to enhance bug detection. Their basic idea is to use functionality descriptions of the target program as the prompt context, and then ask LLMs to generate code that implements the same functionalities with the target program. By comparing the execution results of the two functionally equivalent programs, they can detect logic bugs in the target program. For example, Differential Prompting [25] queries LLMs about the intention of a piece of provided code and then uses the obtained intention as a new prompt context for LLMs to generate code with the same intention. Challenges. The application of LLMs to bug detection can be ineffective if done directly, as LLMs have limited long-text understanding [32], posing a challenge to inaccurate understand of the semantics of the target program. For example, researchers [25] found that LLMs may misconstrue code designed to identify the longest common substring as being intended for finding the longest common subsequence. This misinterpretation can occur even though these two problems require entirely distinct code solutions. As a result, LLMs may generate code whose functionality deviates from the target program, thus leading to an inaccurate test oracle. According to the experiment results of Differential Prompting [25], it achieves 66.7% success rate when generating reference implementation for programs from the programming contest website Codeforces. While this is substantially better than its baseline, it still results in a false-positive rate of 33.3%, which is still not sufficient for practical usage. Recommendations. We have the following recommendations: REC 3.1 Defining test oracles is highly dependent on specific targets and scenarios, presenting the most formidable aspect of fuzzing. For complicated targets, we suggest to avoid analyzing results with LLM directly. 
Instead, consider employing the LLM to extract features or patterns associated with a specific bug type, leveraging domain knowledge, and then monitoring the system using these patterns; this helps address the challenge of inaccurate understanding. For example, many time-series databases like IoTDB handle exceptions implicitly, so the system will not crash or exhibit other abnormal behaviors. Nevertheless, these database systems generate extensive logs, and errors manifest as exceptions in those logs. It is therefore feasible to use the LLM to analyze the logs and discern error patterns. In such scenarios, we recommend employing the LLM to scrutinize the logs, identify error patterns, and subsequently leverage these patterns to detect logic errors.

REC 3.2 Some targets or projects have well-defined documentation in which the expected behaviors are clearly described, such as the RFCs for protocols. For these cases, we suggest leveraging the natural-language understanding ability of the LLM to extract the expected behaviors from the documentation for test oracle definition. This helps the LLM understand the intention and design of the target program, thus addressing the challenge of inaccurate understanding. For example, protocol RFCs usually contain detailed descriptions of the protocol's expected behaviors. Take RFC 854 [4] for the Telnet protocol as an example: it specifies the expected behaviors during the negotiation of disabled command options or unnegotiated commands. These can be used as test oracles and can further be used to uncover CVE-2021-40523 [30].

3 POTENTIAL SOLUTIONS

To demonstrate the practicality of our recommendations, we use the Database Management System (DBMS) as the target for LLM-assisted fuzzing. Addressing challenges in driver synthesis, input generation, and bug detection, we propose three potential solutions: state-aware driver synthesis, cross-DBMS SQL transfer, and log-based oracle definition. These solutions are implemented and compared with rudimentary uses of the LLM, where it is employed directly. Experiments are conducted under identical settings on a machine with 256 cores (AMD EPYC 7742 processor @ 2.25 GHz) and 512 GiB of main memory, demonstrating the efficacy of our recommended approaches in enhancing LLM-based fuzzing for intricate systems like DBMSs.

3.1 LLM-Enhanced Connector Synthesis

Obstacle: Database connectors, also commonly known as database drivers, serve as intermediary components facilitating communication between applications and databases. These connectors define a standard set of interfaces, encompassing functions and parameters. The driver for fuzzing a database connector consists of a sequence of these interfaces. Directly utilizing an LLM to generate drivers for a database connector encounters two challenges. The first is proneness to error: API sequences carry semantic information embedded in the state of the database connector, so directly generating sequences may introduce errors. The second is limited scope: the LLM lacks knowledge of the connectors' state transitions because the related corpus is absent from its training data. Solution: Following REC 1.2, we propose LLM-enhanced state-aware database connector synthesis. We first collect JDBC function prototypes and example programs that utilize JDBC. Then we model the connection relationships between JDBC functions as state-transition rules.
Next, we gather the function prototypes, example programs, and connection rules as input for the LLM. The prompt we give is like: "Based on the state-transition rules and the state description of functions, please generate a sequence of APIs of length at most 15. It is required to cover a different combination of state transitions than before." Result: We implement LLM-enhanced connector synthesis in Wingfuzz_conn and compare it against LLM_conn, which directly utilizes the LLM to generate drivers, for MySQL Connector/J [3], MariaDB Connector/J [2], and the AWS JDBC Driver for MySQL [1]. We perform fuzzing on ClickHouse for each tool. Table 1 shows the driver correctness ratios and branch coverage achieved by LLM_conn and Wingfuzz_conn on the three selected connectors in 12 hours. These statistics show that Wingfuzz_conn performs better than LLM_conn in both driver correctness ratio and branch coverage on all three connectors. Specifically, Wingfuzz_conn achieves a 94% higher correctness rate for driver synthesis, and the drivers generated by Wingfuzz_conn cover 56% more branches on average. The main reason is that the state-transition rules embed semantic information and help the LLM generate API sequences that account for the diverse states within the database connector.

Table 1: Driver Correctness Ratios and Branch Coverage.

| Connector | Correctness ratio (LLM_conn) | Correctness ratio (Wingfuzz_conn) | Branch coverage (LLM_conn) | Branch coverage (Wingfuzz_conn) |
|---|---|---|---|---|
| MariaDB Connector/J | 0.142 | 0.331 | 583 | 843 |
| MySQL Connector/J | 0.216 | 0.367 | 1256 | 1982 |
| AWS MySQL JDBC | 0.203 | 0.394 | 1382 | 2293 |

3.2 Cross-DBMS SQL Transfer

Obstacle: SQL queries, as the inputs to a DBMS, are vital to DBMS fuzzing. Generating SQL queries directly via an LLM faces two main challenges: ensuring semantic correctness and promoting query diversity. Semantically correct SQL queries are vital for triggering complex DBMS behaviors, as syntactic errors lead to parsing failures, and the intricate SQL grammar, encompassing various clauses, expressions, and rules, makes semantic correctness hard for an LLM to achieve. Furthermore, diversity in SQL queries is crucial for probing deep DBMS logic, but the LLM's constrained variety, influenced by the absence of DBMS feedback, limits the exploration of diverse query structures. Solution: To overcome these challenges, we introduce the cross-DBMS SQL transfer approach, aligned with REC 2.2, for SQL generation. Instead of directly generating the SQL queries, we use the LLM to transfer test cases from other DBMSs as initial seeds for fuzzing the target DBMS; these seeds are then mutated into new SQL test cases during the fuzzing loop. The process contains three key steps. First, it executes existing SQL test cases in their native DBMS to capture schema information during execution. Second, it uses the LLM, along with the captured schema information, to guide the generation of new test cases based on the LLM's responses. Finally, it temporarily comments out unparsable sections so the fuzzer can parse the seeds, and uncomments them after mutation. Result: We implement this solution as Wingfuzz_input and compare it with LLM_input, which directly uses the LLM to generate the SQL queries. We run Wingfuzz_input and LLM_input on three DBMSs: MonetDB [6], DuckDB [13], and ClickHouse [18].

Table 2: Semantic Correctness Ratios and Branch Coverage.

| DBMS | Semantic correctness ratio (LLM_input) | Semantic correctness ratio (Wingfuzz_input) | Branch coverage (LLM_input) | Branch coverage (Wingfuzz_input) |
|---|---|---|---|---|
| MonetDB | 0.1594 | 0.4134 | 26,828 | 41,840 |
| DuckDB | 0.2551 | 0.3486 | 57,937 | 70,583 |
| ClickHouse | 0.1458 | 0.3093 | 124,887 | 145,383 |

Table 2 shows the semantic correctness ratios and covered branches of LLM_input and Wingfuzz_input on the three selected DBMSs over 12 hours. The table shows that Wingfuzz_input outperforms LLM_input on DBMS fuzzing. Specifically, the test cases generated by Wingfuzz_input contain 159.35%, 36.65%, and 112.14% more semantically correct SQL statements, and cover 55.96%, 21.83%, and 16.41% more code branches than those of LLM_input on MonetDB, DuckDB, and ClickHouse, respectively. This indicates that an LLM cannot directly generate high-quality SQL queries as input for DBMS fuzzing. The main reason is that the transferred seeds improve the diversity of the mutated test cases, while the fuzzer's mutator ensures the semantic correctness of the SQL queries.

3.3 Monitor-Based DBMS Bug Detection

Obstacle: The most critical step in DBMS bug detection is constructing test oracles that identify logic or performance bugs in the DBMS. A test oracle is a mechanism in DBMS fuzzing for determining the correctness or validity of the DBMS's behavior. Directly using LLMs to construct the test oracle is challenging because LLMs lack specific knowledge about the intricate workings and behaviors of the DBMS; they cannot access its internal logic, making it difficult to accurately predict or emulate DBMS behavior. Solution: To address these challenges, we propose runtime monitor-based DBMS bug detection, following REC 3.1, which detects DBMS anomalies by analyzing the DBMS's runtime information in real time. To ensure robustness, a DBMS usually contains an implicit exception-handling mechanism that captures internal exceptions to avoid system crashes. These exceptions usually expose key internal states and behaviors of the DBMS, such as wrong execution logic. Unlike directly using the LLM to construct the test oracle by checking the execution result of each SQL query, our approach collects runtime information from the DBMS and uses the LLM to analyze it for bug detection. The process contains two main steps. First, it instruments an agent to extract the runtime information of the DBMS. Then, it collects the runtime information and uses the LLM to detect anomalies against predefined error patterns.

Table 3: Number of Reported Bugs and Real Bugs.

| DBMS | Reported (LLM_bug) | Real (LLM_bug) | Reported (Wingfuzz_bug) | Real (Wingfuzz_bug) |
|---|---|---|---|---|
| MonetDB | 61 | 0 | 6 | 3 |
| DuckDB | 54 | 0 | 5 | 3 |
| ClickHouse | 67 | 1 | 3 | 3 |

Result: To evaluate the effectiveness of our recommendation, we implement the solution as Wingfuzz_bug and compare it with LLM_bug, which directly uses the LLM to determine whether the execution of each SQL query is correct during the fuzzing loop. Table 3 shows the number of reported bugs and real bugs found by LLM_bug and Wingfuzz_bug in 12 hours on MonetDB, DuckDB, and ClickHouse. It shows that Wingfuzz_bug detects more real anomalies and has fewer false positives than LLM_bug. Specifically, LLM_bug reported 182 bugs in total, but only 1 was real.
In contrast, Wingfuzz_bug reported 14 bugs, of which 9 are real and have been confirmed. The main reason is that the collected runtime information contains the DBMS's error messages, which help the LLM analyze and detect bugs.
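As a rough illustration of the log-pattern oracle idea from REC 3.1 and Section 3.3 (not the paper's implementation), the sketch below shows only the monitoring half: scanning DBMS logs against a set of error patterns. The patterns here are invented examples; in the described approach, the LLM would be the component that distills such patterns from the target DBMS's real logs.

```python
import re

# Hypothetical error patterns; in the approach above, an LLM would distill
# the actual patterns from the target DBMS's logs (e.g., IoTDB, ClickHouse).
ERROR_PATTERNS = [
    re.compile(r"unexpected exception", re.IGNORECASE),
    re.compile(r"logical error", re.IGNORECASE),
    re.compile(r"assertion .* failed", re.IGNORECASE),
]

def scan_log_for_anomalies(log_text: str):
    """Return (line_number, line) pairs matching any known error pattern."""
    hits = []
    for lineno, line in enumerate(log_text.splitlines(), start=1):
        if any(p.search(line) for p in ERROR_PATTERNS):
            hits.append((lineno, line.strip()))
    return hits

if __name__ == "__main__":
    sample_log = (
        "2024-04-25 12:00:01 INFO query ok\n"
        "2024-04-25 12:00:02 WARN Logical error: bad cast in aggregate state"
    )
    for lineno, line in scan_log_for_anomalies(sample_log):
        print(f"potential bug signal at log line {lineno}: {line}")
```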
Yu Jiang, Jie Liang, Fuchen Ma, Yuanliang Chen, Chijin Zhou, Yuheng Shen, Zhiyong Wu, Jingzhou Fu, Mingzhe Wang, ShanShan Li, Quan Zhang
Original Paper
[ "cs.SE", "cs.AI" ]
http://arxiv.org/abs/2404.16300v1
2024-04-25 00:00:00
Reinforcement Learning with Generative Models for Compact Support Sets
"Foundation models contain a wealth of information from their vast number of\ntraining samples. Howe(...TRUNCATED)
"Foundation models contain a wealth of information from their vast number of\ntraining samples. Howe(...TRUNCATED)
cs.LG
Model AND Based AND Reinforcement AND Learning
2024-04-25 00:00:00
"Introduction Deep learning [10] is one of the most popular and successful methods for any task wher(...TRUNCATED)
Nico Schiavone, Xingyu Li
Original Paper
[ "cs.LG", "cs.CV" ]
http://arxiv.org/abs/2404.16301v1
2024-04-25 00:00:00
Style Adaptation for Domain-adaptive Semantic Segmentation
"Unsupervised Domain Adaptation (UDA) refers to the method that utilizes\nannotated source domain da(...TRUNCATED)
"Unsupervised Domain Adaptation (UDA) refers to the method that utilizes\nannotated source domain da(...TRUNCATED)
cs.CV
Semantic AND Segmentation AND Image
2024-04-25 00:00:00
"INTRODUCTION Neural Networks [1] and Transformers [2] have achieved great success in semantic segme(...TRUNCATED)
Ting Li, Jianshu Chao, Deyu An
Original Paper
[ "cs.CV" ]
http://arxiv.org/abs/2404.16302v1
2024-04-25 00:00:00
"CFMW: Cross-modality Fusion Mamba for Multispectral Object Detection under Adverse Weather Conditio(...TRUNCATED)
"Cross-modality images that integrate visible-infrared spectra cues can\nprovide richer complementar(...TRUNCATED)
"Cross-modality images that integrate visible-infrared spectra cues can\nprovide richer complementar(...TRUNCATED)
cs.CV
Mamba
2024-04-25 00:00:00
"INTRODUCTION In an open and dynamic environment, object detection faces challenging weather conditi(...TRUNCATED)
Haoyuan Li, Qi Hu, You Yao, Kailun Yang, Peng Chen
Original Paper
[ "cs.CV", "cs.MM", "cs.RO", "eess.IV" ]
http://arxiv.org/abs/2404.16306v1
2024-04-25 00:00:00
TI2V-Zero: Zero-Shot Image Conditioning for Text-to-Video Diffusion Models
"Text-conditioned image-to-video generation (TI2V) aims to synthesize a\nrealistic video starting fr(...TRUNCATED)
"Text-conditioned image-to-video generation (TI2V) aims to synthesize a\nrealistic video starting fr(...TRUNCATED)
cs.CV
Diffusion AND Model
2024-04-25 00:00:00
"Introduction Image-to-video (I2V) generation is an appealing topic with various applications, inclu(...TRUNCATED)
"Haomiao Ni, Bernhard Egger, Suhas Lohit, Anoop Cherian, Ye Wang, Toshiaki Koike-Akino, Sharon X. Hu(...TRUNCATED)
Original Paper
[ "cs.CV" ]
http://arxiv.org/abs/2404.16325v1
2024-04-25 00:00:00
Semantic Segmentation Refiner for Ultrasound Applications with Zero-Shot Foundation Models
"Despite the remarkable success of deep learning in medical imaging analysis,\nmedical image segment(...TRUNCATED)
"Despite the remarkable success of deep learning in medical imaging analysis,\nmedical image segment(...TRUNCATED)
cs.CV
Semantic AND Segmentation AND Image
2024-04-25 00:00:00
"Introduction Ultrasound is a popular medical imaging modality used to image a large variety of orga(...TRUNCATED)
"Hedda Cohen Indelman, Elay Dahan, Angeles M. Perez-Agosto, Carmit Shiran, Doron Shaked, Nati Daniel(...TRUNCATED)
Original Paper
[ "cs.CV", "cs.AI" ]
End of preview.

AcademicEval Benchmark Introduction

We propose AcademicEval, a live benchmark for evaluating LLMs on long-context generation tasks. AcademicEval adopts papers from arXiv to introduce several academic writing tasks with long-context inputs, i.e., Title, Abstract, Introduction, and Related Work writing, which cover a wide range of abstraction levels and require no manual labeling.

Compared to existing long-context LLM benchmarks, AcademicEval offers flexible length, automatic annotation, hierarchical abstraction, few-shot demonstrations, and live updates without data leakage risks.

🌟Note🌟: Currently, for ease of downloading, we have only uploaded the test set of AcademicEval (the rest of AcademicEval, i.e., the train and val sets, can be accessed via AcademicEval Full). The data viewer above shows a preview of title_10K, abs_9K, and intro_8K. For the complete test set data, please check "Files and versions" on this page.

| Benchmark | Avg Len | Automatic Annotation | Hierarchical Abstraction | Few-shot Demonstrations | Live Update |
|---|---|---|---|---|---|
| ZeroSCROLLS (Shaham et al., 2023) | ~10K | ✓ | ✘ | ✘ | ✘ |
| L-Eval (An et al., 2023) | ~8K | ✘ | ✘ | ✘ | ✘ |
| BAMBOO (Dong et al., 2023) | ~16K | ✘ | ✘ | ✘ | ✘ |
| LongBench (Bai et al., 2023) | ~8K | ✘ | ✘ | ✓ | ✘ |
| LooGLE (Li et al., 2023) | ~20K | ✘ | ✘ | ✘ | ✘ |
| ∞Bench (Zhang et al., 2024) | ~200K | ✘ | ✘ | ✘ | ✘ |
| AcademicEval (ours) | Flexible | ✓ | ✓ | ✓ | ✓ |

Dataset Structure

Data Settings

  • Title Writing

    • title_10K

    • title_30K

    • title_31K_G

  • Abstract Writing

    • abs_9K

    • abs_28K

    • abs_29K_G

  • Introduction Writing

    • intro_8K

    • intro_28K

    • intro_28K_G

  • Related Work Writing

    • related_34K

    • related_53K

    • related_53K_G

Main Data Fields

  • url: the url of the original paper on arXiv

  • title: the title of the paper

  • abstract: the abstract of the paper

  • authors: the authors of the paper

  • published: the publication timestamp of the paper

  • primary_cat: arXiv category

  • gt: the ground truth of the corresponding task

  • main_content: the main body of the paper (w/o the corresponding section content)

  • additional_info: the few-shot demonstrations from randomly selected papers (the data fields of each demonstration are the same as above)

  • additional_graph_info: the few-shot demonstrations with the co-author subgraph structure from co-author papers (the data fields of each demonstration are the same as above)
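
For reference, here is a minimal sketch of loading one test-split file with the Hugging Face datasets library and reading these fields. The parquet file name below is an assumption; use the actual paths listed under "Files and versions".

```python
from datasets import load_dataset

# Hypothetical file name -- replace it with an actual parquet file listed
# under "Files and versions" for the setting you want (local path or URL).
data_files = {"test": "intro_8K_test.parquet"}
ds = load_dataset("parquet", data_files=data_files, split="test")

sample = ds[0]                        # one paper / task instance as a dict
print(sorted(sample.keys()))          # available fields for this setting
print(sample["title"])
print(sample["gt"][:300])             # ground truth of the corresponding task
print(len(sample["main_content"]))    # paper body without the target section
```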
